# Sensitive Detection of the Natural Killer Cell-Mediated Cytotoxicity of Anti-CD20 Antibodies and Its Impairment by B-Cell Receptor Pathway Inhibitors

**Authors:** Floyd Hassenrück; Eva Knödgen; Elisa Göckeritz; Safi Hasan Midda; Verena Vondey; Lars Neumann; Sylvia Herter; Christian Klein; Michael Hallek; Günter Krause
**Journal:** BioMed Research International (2018)
**Publisher:** Hindawi
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2018/1023490

---

## Abstract

The antibody-dependent cell-mediated cytotoxicity (ADCC) of the anti-CD20 monoclonal antibodies (mAbs) rituximab and obinutuzumab against the cell line Raji and isolated CLL cells, and its potential impairment by kinase inhibitors (KI), was determined via lactate dehydrogenase release or calcein retention, respectively, using genetically modified NK92 cells expressing CD16-176V as effector cells. Compared to peripheral blood mononuclear cells, recombinant effector cell lines showed substantial alloreactivity-related cytotoxicity without addition of mAbs but afforded determination of ADCC with reduced interassay variability. The cytotoxicity owing to alloreactivity was less susceptible to interference by KI than the ADCC of anti-CD20 mAbs, which was markedly diminished by ibrutinib but not by idelalisib. Compared to rituximab, the ADCC of obinutuzumab against primary CLL cells showed approximately 30% higher efficacy and less interference by KI. Irreversible BTK inhibitors at a clinically relevant concentration of 1 μM only weakly impaired the ADCC of anti-CD20 mAbs, with less influence in combinations with obinutuzumab than with rituximab and by acalabrutinib than by ibrutinib or tirabrutinib. In summary, NK cell line-based assays permitted the sensitive detection of the ADCC of therapeutic anti-CD20 mAbs against CLL cells and of the interference of KI with this important killing mechanism.

---

## Body

## 1. Introduction

Monoclonal antibodies (mAbs) play an important role in the treatment of chronic lymphocytic leukemia (CLL) and other B-cell malignancies. Thus, the anti-CD20 mAb rituximab is part of the current standard chemoimmunotherapeutic regimen for first-line treatment of CLL patients without comorbidity [1], and the benefit for CLL patients with comorbidity under treatment with the DNA-damaging agent chlorambucil was greater in combination with the glycoengineered type II anti-CD20 mAb obinutuzumab than with rituximab [2]. The management of CLL has also undergone profound changes owing to the availability of novel agents that target B-cell receptor signaling [3]. Among these, the kinase inhibitors (KI) idelalisib and ibrutinib, which target PI3K-δ and BTK, respectively, have gained approval for the treatment of CLL [4, 5]. This clinical progress was supported by preclinical drug assessment for selecting development candidates and recognizing the underlying mechanisms. For further improvement of therapeutic options, these preclinical efforts need to be continued, for example, for designing efficacious drug combinations.

Cell killing by therapeutic mAbs proceeds via direct cell death induction and via indirect mechanisms that are mediated by the Fc (fragment crystallizable) portion of mAbs and include complement-dependent cytotoxicity (CDC) as well as antibody-dependent cell-mediated cytotoxicity and phagocytosis (ADCC and ADCP) [6].
Effector cells expressing activating Fcγ receptors (FcγRs), for example, CD16, are activated by binding the Fc portions of antibodies once these are in contact with antigen, and subsequently release cytotoxic agents such as perforin and granzymes that accomplish target cell lysis. Thus, in ADCC, the specificity of mAbs provided by the adaptive immune system is linked to powerful innate immune effector functions by binding of Fc regions to Fcγ receptors, for example, on NK cells. In vitro assays of ADCC can be performed in a variety of formats employing different effector cells and a wide range of direct and indirect detection methods [6].

As a type II anti-CD20 mAb, obinutuzumab has a substantially different binding mode to CD20 than rituximab and enhanced direct cytotoxicity and Fc-mediated functions [7]. For obinutuzumab as a single agent, we have previously shown more potent CLL cell depletion from whole blood samples and stronger direct cytotoxicity against CLL cells than by rituximab [8]. In addition, the mechanisms of obinutuzumab have been extensively compared with those of other anti-CD20 mAbs and characterized with regard to the effects of glycoengineering on ADCC and ADCP [9, 10].

Owing to independent mechanisms of action, mAbs are considered promising combination partners for KI, however, with the possible risk that kinase inhibition interferes with major mechanisms of action of mAbs, for instance, ADCC. Indeed, the irreversible BTK inhibitor ibrutinib was found to antagonize the ADCC of rituximab [11], while the ADCC of alemtuzumab was maintained in the presence of the phosphatidylinositide 3-kinase- (PI3K-) δ inhibitor idelalisib [12].

The goal of the present study was to combine the use of (1) nonradioactive ADCC detection, (2) NK92-derived recombinant effector cell lines [13, 14], and (3) primary CLL samples as target cells in nonautologous assays. With NK92 cell line-based assays, we were able to distinguish the ADCC of rituximab and obinutuzumab and to evaluate the interference of kinase inhibitors with the ADCC of these anti-CD20 mAbs.

## 2. Materials and Methods

### 2.1. Cell Lines and Patient Samples

The CLL-derived EBV-transformed lymphoblastoid lines JVM-3 and Mec1 as well as the Burkitt lymphoma cell line Raji were purchased from the German Collection of Microorganisms and Cell Cultures (DSMZ, Braunschweig, Germany) and used as target cells in ADCC assays. Primary CLL cells for use as target cells were isolated from peripheral blood samples from patients previously diagnosed with CLL according to standard criteria. Blood samples were obtained with informed consent in accordance with the World Medical Association Declaration of Helsinki, following a study protocol approved by the local ethics committee at the University of Cologne (approval number 11-319).

Recombinant NK92-derived effector cell lines had been engineered to express the high-affinity allele of FcγR IIIa, also known as CD16-158V or CD16-176V depending on exclusion or inclusion of the signal peptide in the amino acid count, and were obtained under material transfer agreements with Conkwest or Roche, respectively. The cell lines CD16.NK92.26.5 [13] and NK92-1708-CD16 clone LC3E11 [15] are referred to in this report in abbreviated form as 26.5 and 1708-LC3E11, respectively. NK92 cells as obtainable from the American Type Culture Collection do not express FcγR IIIa.
The derivative cell lines used here were not only genetically modified to express CD16 but also comprise subclones with altered expression of killer cell Ig-like receptors (KIRs), which leads to dampened alloreactivity in ADCC assays with nonautologous hematopoietic cells.

### 2.2. Therapeutic Antibodies and Small Molecule Drugs

Obinutuzumab was a kind gift from Roche Glycart. Rituximab and alemtuzumab were obtained from the hospital pharmacy. Antibodies were used at a concentration of 10 μg/ml, which is known to elicit maximal effects. The PI3K inhibitors idelalisib, duvelisib (IPI-145), and copanlisib (BAY 80-6946) as well as the irreversible BTK inhibitors ibrutinib, acalabrutinib (ACP-196), and tirabrutinib (ONO/GS-4059) were purchased from Selleck via AbSource (Munich, Germany) and used as stock solutions prepared in DMSO. The DMSO concentration in cell culture media was limited to 0.5%.

### 2.3. Cell Isolation and Culture

For isolation of CLL cells, Ficoll-Paque Plus sedimentation (GE Healthcare, Freiburg, Germany) was preceded by incubation of whole blood with the RosetteSep B-cell purification antibody cocktail (Stem Cell Technologies) to aggregate unwanted cells with erythrocytes. The purity of isolated CLL cells was determined by flow cytometry using FITC-labeled anti-CD5 and PE-labeled anti-CD19 antibodies (BD Biosciences, Heidelberg). Isolated CLL cells and cell lines used as target cells were cultured in RPMI medium supplemented with 10% heat-inactivated fetal calf serum (FCS) and antibiotics and antimycotics (Gibco, Thermo Fisher Scientific, Darmstadt, Germany) at 37°C in a humidified atmosphere containing 5% carbon dioxide.

For use as effector cells in ADCC assays, peripheral blood mononuclear cells (PBMCs) from a healthy donor were isolated from heparinized blood samples by Ficoll gradient centrifugation. NK92-derived recombinant cell lines were cultured in α-MEM without nucleosides (Life Tech) supplemented with 10% heat-inactivated FCS, 10% heat-inactivated horse serum, 0.1 mM β-mercaptoethanol, 1.5 mM L-glutamine, 0.2 mM myoinositol, 1 mM sodium pyruvate, and 2 μM folic acid. 26.5 cells were supplemented with 100 U/ml IL-2 (Immunotools) upon each splitting and controlled for GFP expression. 1708-LC3E11 cells were IL-2-independent and were selected with 5 μg/ml puromycin.

### 2.4. Determination of Antibody-Dependent Cell-Mediated Cytotoxicity (ADCC)

Cocultures of target and effector cells were performed on U-bottom 96-well plates in AIM-V medium (Thermo Fisher Scientific). Target cells, that is, Raji cells and primary CLL cells, were seeded at a density of 3 × 10⁴ cells per well, with the exception of JVM-3 and Mec1 cells, which were seeded at 1.5 × 10⁵ cells per well. The excess of effector over target cells was 5- or 15-fold with NK92-derived recombinant effector cell lines or freshly isolated PBMCs, respectively. After coincubation with effector cells, target cell lysis was measured with an LDH release cytotoxicity detection kit according to the instructions of the supplier (Roche Diagnostics, Mannheim, Germany). LDH released from lysed cells leads to reduction of iodotetrazolium chloride, and the amount of colored formazan formed was measured via absorption at 450 nm in a FluoSTAR OPTIMA plate reader. Effector cells were plated first, followed by target cells and finally 10 μg/ml of mAbs.
Low and high controls, corresponding to spontaneous and maximal LDH release, respectively, were determined from target cells alone or after complete lysis of target plus effector cells with 1% Triton X-100. Cocultures of target and effector cells were performed without or with addition of antibody, and ADCC was calculated as the antibody-dependent enhancement of cytotoxicity in these cocultures (see the sketch at the end of this section).

With primary CLL cells as target cells, cytotoxicity was detected via calcein retention instead of LDH release. For this purpose, CLL cells were stained for 30 minutes with 3.5 μM calcein-AM (Promokine) and washed three times in AIM-V medium before coculture with effector cells. For determining calcein retention in specifically labeled target cells, the cells remaining in cocultures were sedimented by centrifugation for 5 min at 400 × g and lysed in 200 μl of 5 mM sodium borate buffer containing 1% Triton X-100. After transfer of 180 μl of the lysates into black 96-well plates (Nunc), fluorescence signals were measured at excitation and emission wavelengths of 485 and 520 nm, respectively, in a FluoSTAR OPTIMA plate reader. The calcein retention in target cells without addition of effector cells, corresponding to spontaneous label release, was set to 100% and that in completely lysed cocultures to 0%. The calcein retention values were converted to the reciprocal percentages of cytotoxicity to comply with LDH release measurements and the commonly used presentation of ADCC assay results.

### 2.5. Data Presentation and Statistics

In box plot diagrams, boxes represent the middle quartiles of distributions and whiskers the maximal and minimal ranges, which are limited to 1.5 interquartile ranges. Outliers are indicated by filled diamonds, while plus signs denote arithmetic means. Significance levels were determined by two-tailed, paired Student's t-test and indicated as n.s.: not significant; ∗p < 0.05; ∗∗p < 0.01; ∗∗∗p < 0.001.
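The quantification described in Sections 2.4 and 2.5 reduces to normalizing raw signals against the low and high controls, subtracting the antibody-independent coculture background, and testing the paired enhancement. The following is a minimal sketch of that calculation in Python, not the authors' own analysis code; the function names and the example numbers are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the ADCC quantification described
# in Sections 2.4 and 2.5: normalize raw signals to low/high controls, then
# take the antibody-dependent enhancement of cytotoxicity as ADCC.
import numpy as np
from scipy.stats import ttest_rel  # paired two-tailed t-test, as in Section 2.5

def percent_cytotoxicity(signal, low_ctrl, high_ctrl):
    """Percent of maximal LDH release.

    signal: absorbance of the test well; low_ctrl: spontaneous release
    (target cells alone); high_ctrl: maximal release (Triton X-100 lysis).
    """
    return 100.0 * (signal - low_ctrl) / (high_ctrl - low_ctrl)

def cytotoxicity_from_calcein(retention_percent):
    """Convert percent calcein retention (100% = spontaneous retention,
    0% = complete lysis) to the reciprocal percent cytotoxicity used for
    presenting ADCC results, e.g., 65% retention -> 35% cytotoxicity."""
    return 100.0 - retention_percent

def adcc(cytotox_with_mab, cytotox_without_mab):
    """ADCC = antibody-dependent enhancement of cytotoxicity in cocultures,
    i.e., coculture cytotoxicity with mAb minus that without mAb
    (alloreactivity-related background), not minus target cells alone."""
    return cytotox_with_mab - cytotox_without_mab

# Hypothetical raw A450 absorbance values for paired replicates:
low, high = 0.20, 1.60                                 # controls
coculture = np.array([0.62, 0.68, 0.58, 0.70])         # target + effector
coculture_mab = np.array([1.05, 1.12, 0.98, 1.10])     # + 10 μg/ml anti-CD20 mAb

cytotox_bg = percent_cytotoxicity(coculture, low, high)
cytotox_mab = percent_cytotoxicity(coculture_mab, low, high)
print("ADCC per replicate (%):", adcc(cytotox_mab, cytotox_bg))
t_stat, p_value = ttest_rel(cytotox_mab, cytotox_bg)   # paired, two-tailed
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```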
## 3. Results

### 3.1. Measuring ADCC with Different Effector Cells

NK92-derived effector cell lines were compared to unstimulated PBMCs in an assay format that uses LDH release from target cells as a measure of cytotoxicity (Figure 1). Along with spontaneous LDH release from target cells alone, that from cocultures of target and effector cells was monitored as background for determining the enhancement of cytotoxicity by addition of mAbs, which were used at a concentration of 10 μg/ml to ensure a maximal response.

Figure 1: ADCC mediated by PBMCs and NK92-derived cell lines. Percentages of maximal LDH release from target cells were determined with either PBMCs at 15-fold excess (a) or two strains of genetically modified NK92 cells expressing CD16-176V at 5-fold excess (b, c), with or without 10 μg/ml of the indicated antibodies. (a) In six independent experiments, 1.5 × 10⁵ JVM-3 or Mec1 target cells per well were incubated with PBMCs from a healthy donor with or without alemtuzumab for 4 hours. (b, c) In four independent assays each, LDH release from 3 × 10⁴ Raji cells was determined after 2 hours of coculture with the effector cell lines 26.5 or 1708-LC3E11 in the absence or presence of three different antibodies. LDH release from mixtures of effector and target cells without or with antibodies was compared by paired Student's t-test. In addition, the efficacy of the ADCC elicited by rituximab versus obinutuzumab was compared by paired t-test. ∗p < 0.05; ∗∗p < 0.01.

Compared to spontaneous target cell lysis, the relative LDH release was significantly increased by approximately 30% in the presence of effector cell lines (Figures 1(b) and 1(c)), but only marginally, that is, by less than half of that amount, in cocultures with PBMCs (Figure 1(a)).
Although target cell lines, cell densities, and incubation times differed, the substantial antibody-independent cytotoxicity observed in cocultures with the effector cell lines, compared to that with donor-derived effector cells, appears to be connected with alloreactivity, and owing to its size it needs to be carefully separated from the antibody-dependent increase of cytotoxicity that defines ADCC in the proper sense. In this context it may be worth noting that NK92 cells that had been engineered only for forced CD16 expression, but not for expression of novel KIRs, are functional for ADCC assays with nonhematological target cells [16] but yielded high spontaneous antibody-independent cytotoxicity owing to alloreactivity in cocultures with Raji cells, which surpassed and masked ADCC (not shown).

Despite the higher cytotoxicity observed in cocultures of target cells with NK92-derived effector cell lines than with PBMCs, alemtuzumab additionally induced equal or higher ADCC in the cell line-based assays. Moreover, the ADCC of alemtuzumab observed with PBMCs as effector cells was higher against Mec1 than against JVM-3 cells. Comparing the two similar NK92-derived effector cell lines, 26.5 cells yielded higher percentages of cytotoxicity overall and especially a greater increment of cytotoxicity upon addition of mAbs than 1708-LC3E11 cells, whereas the efficacy of the ADCC of obinutuzumab appeared 13% or 36% higher than that of rituximab with 26.5 or 1708-LC3E11 cells, respectively. In summary, the variance between individual experiments was higher with PBMCs as effector cells, as evidenced by a higher frequency of outliers, than with recombinant NK92 cells. Consequently, the investigated effector cell lines afford more sensitive detection of ADCC and of its impairment by kinase inhibitors than PBMCs as effector cells.

### 3.2. Interference of Kinase Inhibitors with the ADCC of Anti-CD20 mAbs against Raji Cells

Currently, two kinase inhibitors have gained approval for the treatment of CLL and other B-cell malignancies, namely, the PI3K-δ inhibitor idelalisib and the irreversible BTK inhibitor ibrutinib. Using 26.5 cells as effector cells, we determined the ADCC against Raji cells of combinations of CD20 antibodies with these two targeted agents via LDH release (Figure 2(a)). To challenge the stability of NK cell-mediated ADCC, both kinase inhibitors were used at a concentration of 10 μM, which clearly surpasses clinically achievable concentrations.

Figure 2: Impairment of ADCC against Raji cells by kinase inhibitors. The antibody-dependent increase in the percentages of maximal LDH release was determined with 5-fold excess of 26.5 effector cells in the absence or presence of 10 μM idelalisib or ibrutinib (a) or the PI3K inhibitors duvelisib and copanlisib (b). Data were derived from nine or eight independent experiments in (a) and (b), respectively, each of which was performed with triplicate samples. Asterisks above the boxes and whiskers denote the significance of enhanced cytotoxicity compared to the control without addition of antibodies and inhibitors as determined by paired t-tests, while the p value ranges for the impairment of ADCC compared to anti-CD20 antibodies as single agents are indicated beneath. Furthermore, comparisons by paired t-test were performed among mAb and KI treatments and are indicated at the top of the diagrams in black and blue print, respectively. ∗p < 0.05; ∗∗p < 0.01; ∗∗∗p < 0.001.
The LDH release owing to alloreactivity in cocultures of 26.5 effector cells and Raji target cells was hardly influenced by these high concentrations of idelalisib and ibrutinib. The significant enhancement of cytotoxicity due to addition of anti-CD20 mAbs was largely maintained in the presence of 10 μM idelalisib, indeed more effectively in the combinations with obinutuzumab than with rituximab. In contrast, the ADCC of CD20 antibodies was virtually abolished by 10 μM ibrutinib. Consequently, the cytotoxicity against Raji cells of rituximab and obinutuzumab in combination with idelalisib was significantly higher than in combination with ibrutinib (p = 0.009 and p = 0.007, respectively; n = 9).

The widely different impairment of ADCC by idelalisib and ibrutinib prompted us to test whether further PI3K inhibitors in development as drugs against B-cell malignancies, namely, duvelisib and copanlisib, behaved similarly to idelalisib in the same assay system (Figure 2(b)). In combinations with rituximab and obinutuzumab, these inhibitors were also used at a concentration of 10 μM, which by far exceeds clinically obtainable concentrations; copanlisib was additionally used at 1 μM, owing to its previously noted higher cytotoxicity against malignant B cells compared to idelalisib [17]. Like the other investigated BCR signaling inhibitors, these PI3K inhibitors did not substantially affect the background cytotoxicity in mixtures of effector and target cells without addition of mAbs. The ADCC against Raji cells mediated by anti-CD20 mAbs was influenced by duvelisib in a manner similar to idelalisib; that is, ADCC was only marginally impaired, with a slightly stronger reduction of rituximab than of obinutuzumab effects. Copanlisib led to a concentration-dependent and considerably stronger impairment of ADCC than idelalisib and duvelisib; with 10 μM copanlisib, the enhancement of LDH release owing to addition of mAbs was almost completely eliminated. In six assay repetitions that contained all four investigated KI, the impairment of ADCC by idelalisib and duvelisib appeared approximately equal, but that by ibrutinib was stronger than that by copanlisib. Significant ADCC of anti-CD20 mAbs was maintained in the presence of 10 μM duvelisib and, in the case of obinutuzumab, also in combination with 1 μM copanlisib.

In summary, high concentrations of kinase inhibitors did not interfere with the background LDH release from mixtures of Raji target cells with 26.5 cells but impaired the ADCC mediated by anti-CD20 mAbs to different degrees. ADCC was significantly disturbed by the BTK inhibitor ibrutinib and the pan-class I PI3K inhibitor copanlisib, in contrast to the PI3K inhibitors idelalisib and duvelisib, which selectively target the PI3K-δ isoform.

### 3.3. ADCC against CLL Cells of Rituximab and Obinutuzumab in Combination with Kinase Inhibitors

For ADCC measurements in clinical samples, isolated CLL cells were used as target cells instead of the cell line Raji in the newly established ADCC assay using 26.5 or 1708-LC3E11 effector cells. Even at a target cell density three times higher than that used with Raji cells, that is, 9 × 10⁴ cells per well, total lysis of CLL cells by Triton X-100 did not yield sizable LDH release (not shown). Therefore, measurements of ADCC against primary CLL samples were performed after labeling these target cells with calcein for fluorimetric determination of label release during coculture with effector cells.
The ADCC of anti-CD20 mAbs against CLL cells was determined in combinations with idelalisib or ibrutinib at concentrations of 10 μM, as used in the determinations of LDH release from Raji cells, and of 1 μM, to approach clinically relevant concentrations (Figure 3).

Figure 3: Impairment of ADCC against primary CLL cells by kinase inhibitors. The antibody-dependent increase in the percentages of minimal calcein retention was determined in six or seven independent experiments with 3-fold excess of recombinant 26.5 (a) or 1708-LC3E11 (b) effector cells over CLL target cells. Cytotoxicity was compared to controls without antibodies by paired two-tailed t-test. ∗p < 0.05; ∗∗p < 0.01; ∗∗∗p < 0.001.

As with the ADCC against Raji cells detected via LDH release (Figures 1(b) and 1(c)), with isolated CLL cells as target cells and detection of calcein release, rituximab and obinutuzumab elicited 20–40% higher cytotoxicity against target cells than in cocultures without addition of mAbs. Ibrutinib impaired the 26.5-mediated ADCC of rituximab or obinutuzumab against CLL cells more strongly than idelalisib but also affected the antibody-independent cytotoxicity in mixtures of target and effector cells (Figure 3(a)), the former in a similar and the latter in a divergent manner compared to Raji cells (Figure 2). The fluorescence signals in target cells in the absence of effector cells, which corresponded to spontaneous calcein retention and were set as maximal values, were equal without or with treatment with KI (not shown). Unlike the ADCC of anti-CD20 mAbs against Raji cells, that against isolated CLL cells was impaired by 10 μM idelalisib, although the ADCC of obinutuzumab remained significant. Ibrutinib at 1 μM decreased the ADCC of the assessed anti-CD20 mAbs to a similar degree as 10 μM idelalisib. Overall, cytotoxicity appeared to be impaired more strongly by kinase inhibitors in cocultures with CLL than with Raji target cells.

Comparing the two NK92-derived effector cell lines, the alloreactivity-related antibody-independent cytotoxicity was less affected by 10 μM ibrutinib in cocultures with 1708-LC3E11 than with 26.5 cells. In addition, 1 μM idelalisib interfered more strongly with the ADCC of anti-CD20 mAbs mediated by 1708-LC3E11 than by 26.5 effector cells. The ADCC of rituximab and obinutuzumab was impaired by idelalisib and ibrutinib with greater significance in assays using 1708-LC3E11 than 26.5 effector cells.

For a more detailed analysis of the impact of KI on ADCC, the percentages of cytotoxicity in the absence of rituximab and obinutuzumab were subtracted from those in their presence and compared head to head (Figure 4). Both the impairment of ADCC by KI and the superior ADCC of obinutuzumab compared to rituximab were indicated more clearly by 1708-LC3E11 effector cells (Figure 4(b)) than by 26.5 cells (Figure 4(a)). Obinutuzumab consistently showed significantly higher efficacy of ADCC than rituximab. This difference in ADCC efficacy was largely maintained in the presence of idelalisib and with 1 μM ibrutinib, which means that the ADCC of obinutuzumab was less impaired in the presence of KI than that of rituximab.

Figure 4: Detailed analysis of the impact of idelalisib and ibrutinib on the ADCC of rituximab and obinutuzumab. The differences in the cytotoxicity against CLL cells in cocultures with the NK92-derived effector cells 26.5 (a) or 1708-LC3E11 (b) in the presence and absence of rituximab and obinutuzumab were calculated from the data shown in Figure 3.
Means and standard errors of the means are shown. The ADCC of rituximab and obinutuzumab was compared by paired two-tailed t-test. ∗p < 0.05; ∗∗p < 0.01.

### 3.4. Impact of BTK Inhibitors on the ADCC against CLL Cells of Rituximab and Obinutuzumab

1708-LC3E11 effector cells, which had indicated the impairment of ADCC against CLL cells by KI with greater sensitivity than 26.5 cells, were employed to compare ibrutinib with the second-generation irreversible BTK inhibitors acalabrutinib (ACP-196) and tirabrutinib (GS-4059) at a clinically relevant concentration of 1 μM (Figure 5(a)). With a different set of CLL samples than that used in Figure 4(b), rituximab and obinutuzumab significantly enhanced NK cell-mediated cytotoxicity by 31% and 45%, respectively, and the significance of ADCC, which is equivalent to this enhancement, was maintained in combination with the investigated irreversible BTK inhibitors at clinically relevant concentrations of 1 μM. Significant impairment of ADCC was observed in combinations of rituximab with ibrutinib and tirabrutinib, but not with acalabrutinib, and did not occur in the tested combinations with obinutuzumab. Of note, acalabrutinib, which has improved BTK selectivity, led to weaker impairment of ADCC than ibrutinib and tirabrutinib. Obinutuzumab showed 44% higher efficacy of ADCC than rituximab (Figure 5(b)). This difference was further augmented, to approximately 60%, in combinations of anti-CD20 mAbs with ibrutinib and acalabrutinib, since the ADCC of obinutuzumab was less affected than that of rituximab also by second-generation BTK inhibitors.

Figure 5: Comparison of the impairment of ADCC against primary CLL cells by different irreversible BTK inhibitors. The antibody-dependent increase in the percentages of minimal calcein retention was determined with 3-fold excess of 1708-LC3E11 effector cells over CLL target cells in seven independent experiments. (a) Asterisks above the boxes and whiskers denote the significance of enhanced cytotoxicity compared to the control without addition of antibodies and inhibitors as determined by paired t-tests. Furthermore, combination treatment was compared to that with anti-CD20 antibodies as single agents. (b) The mean differences ± SEM in the cytotoxicity against CLL cells in the presence and absence of rituximab and obinutuzumab were calculated from the data shown in (a). The ADCC of rituximab and obinutuzumab was compared by paired two-tailed t-test. ∗p < 0.05; ∗∗p < 0.01; ∗∗∗p < 0.001.
## 4. Discussion

Convenient and robust NK cell line-based assays permitted the sensitive detection of the ADCC of therapeutic anti-CD20 antibodies against CLL cells and of the interference of KI with this important killing mechanism of mAbs. To our knowledge, the present report is the first to combine the use of recombinant NK92-derived effector cell lines expressing CD16 with that of primary CLL samples as target cells for nonradioactive ADCC determination.

Compared to spontaneous target cell lysis, the antibody-independent cytotoxicity in cocultures with effector cell lines was substantially increased, in contrast to cocultures with PBMCs. This is in agreement with the expectation that clonal NK cell lines may elicit stronger alloreactivity than natural NK cell populations, which achieve tolerance by expressing various combinations of activating and inhibitory KIRs. Compared to PBMCs as effector cells, the interassay variability was reduced with NK92-derived effector cell lines in a similar manner as in assays using purified NK cells [18].

The ADCC against Raji cells elicited by rituximab was only approximately one-third of the background lysis with NK92 cells expressing CD16 that had not been modified for altered KIR expression [19], but slightly higher than the corresponding spontaneous cytotoxicity with 26.5 effector cells. The comparatively high antibody-independent cytotoxicity still observed in cocultures with the recombinant CD16-expressing NK92-derived cell lines used here emphasizes the need to take it into account as a contribution to the overall cytotoxicity observed in the presence of both antibodies and effector cells. For the calculation of ADCC in the literal sense, the appropriate background to be subtracted is therefore the alloreactivity-related cytotoxicity in cocultures of target and effector cells, not the spontaneous lysis of target cells alone (formalized below). Although the percentages of cytotoxicity obtained by Duong et al., 2015 [20], with NK92-26.5 cells expressing CD16 as effector cells are similar to the overall cytotoxicity in the presence of mAbs observed here, they lack the important control of target and effector cells without mAbs and do not allow KI effects on antibody-dependent and antibody-independent cytotoxicity to be distinguished. Using similar NK92-derived effector cells, we could show that all investigated inhibitors of BTK and PI3K-δ at clinically relevant concentrations exclusively affected ADCC and left the alloreactivity-related antibody-independent cytotoxicity undisturbed. Only at a 10-fold higher concentration did ibrutinib, in addition to the ADCC of anti-CD20 mAbs, also inhibit the NK cell-mediated antibody-independent cytotoxicity.

Our observation that the ADCC of anti-CD20 mAbs was inhibited more severely by the irreversible BTK inhibitor ibrutinib than by the PI3K-δ inhibitor idelalisib is in agreement with the expectation from observations with these inhibitors separately [11, 12, 17] and with other direct comparisons of the impact of these inhibitors on ADCC in different formats, namely, NK cell-mediated ADCC detected via LDH or 51Cr release, respectively [21, 22], or in an NK effector cell line-based assay with detection by calcein release [20].
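To make the background-correction point above explicit, the distinction can be written out as follows. The notation is ours, not the authors': $C(X)$ denotes percent cytotoxicity of a well with contents $X$, normalized between the low control (spontaneous lysis) and the high control (Triton X-100 lysis), with $T$ for target cells and $E$ for effector cells.

```latex
% Sketch of the background correction discussed above (our notation, not the
% authors'): subtract the antibody-independent coculture cytotoxicity, not
% the spontaneous lysis of target cells alone.
\begin{align*}
\text{ADCC} &= C(T + E + \text{mAb}) - \underbrace{C(T + E)}_{\text{alloreactivity-related background}} \\
            &\neq C(T + E + \text{mAb}) - \underbrace{C(T)}_{\text{spontaneous lysis only}}
\end{align*}
```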
Following up on the preclinical assessment of selected PI3K inhibitors [17], duvelisib, similar to idelalisib, hardly disturbed NK cell-mediated ADCC, in line with its very similar molecular structure and in spite of its targeting PI3K-γ in addition to PI3K-δ. In contrast, copanlisib, which was recently approved for the treatment of follicular lymphoma, strongly impaired the ADCC of rituximab and obinutuzumab at a concentration of 10 μM but permitted significant ADCC of these anti-CD20 mAbs when used at a concentration of 1 μM, which is cytotoxic for CLL cells, in line with our previous observations [17]. Owing to the short duration of the assays, effects of kinase inhibitors on ADCC are more likely due to specific inhibition of signaling emanating from the Fcγ receptors than to general cytotoxicity for NK effector cells.

While the impairment of the ADCC of anti-CD20 mAbs by ibrutinib may be partly due to decreased CD20 expression on the surface of CLL cells [23], we observed that the enhanced ADCC of the Fc-glycoengineered obinutuzumab was less disturbed by ibrutinib than that of rituximab, similar to ADCC-induced cell lysis as well as NK cell activation and degranulation as surrogate markers in autologous systems [24]. Also with second-generation irreversible BTK inhibitors, the ADCC of obinutuzumab against CLL cells was less impaired than that of rituximab, in agreement with reports on the interference of acalabrutinib or tirabrutinib with the ADCC of anti-CD20 mAbs against Mec1 or SU-DHL-4 cells, respectively [25]. The different capacity of ibrutinib and acalabrutinib for interfering with the ADCC of anti-CD20 mAbs became apparent at inhibitor concentrations of 1 μM (Figure 5(a)) or 10 μM [26] in combinations with rituximab or obinutuzumab, respectively.

In conclusion, NK cell line-based ADCC assays reliably indicated superior induction of ADCC against primary CLL cells by obinutuzumab compared to rituximab and less interference by BTK inhibitors. Since these heterologous assays employ stable effector cell lines and thus avoid the variability caused by differences in immune status and genetic background of donor cells, they afford robust and sensitive determinations of ADCC against individual CLL samples in 96-well format. This could be useful for preclinical comparisons of combinations of mAbs and KI.
1023490-2018-03-19_1023490-2018-03-19.md
54,274
Sensitive Detection of the Natural Killer Cell-Mediated Cytotoxicity of Anti-CD20 Antibodies and Its Impairment by B-Cell Receptor Pathway Inhibitors
Floyd Hassenrück; Eva Knödgen; Elisa Göckeritz; Safi Hasan Midda; Verena Vondey; Lars Neumann; Sylvia Herter; Christian Klein; Michael Hallek; Günter Krause
BioMed Research International (2018)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2018/1023490
1023490-2018-03-19.xml
--- ## Abstract The antibody-dependent cell-mediated cytotoxicity (ADCC) of the anti-CD20 monoclonal antibodies (mAbs) rituximab and obinutuzumab against the cell line Raji and isolated CLL cells and its potential impairment by kinase inhibitors (KI) was determined via lactate dehydrogenase release or calcein retention, respectively, using genetically modified NK92 cells expressing CD16-176V as effector cells. Compared to peripheral blood mononuclear cells, recombinant effector cell lines showed substantial alloreactivity-related cytotoxicity without addition of mAbs but afforded determination of ADCC with reduced interassay variability. The cytotoxicity owing to alloreactivity was less susceptible to interference by KI than the ADCC of anti-CD20 mAbs, which was markedly diminished by ibrutinib, but not by idelalisib. Compared to rituximab, the ADCC of obinutuzumab against primary CLL cells showed approximately 30% higher efficacy and less interference with KI. Irreversible BTK inhibitors at a clinically relevant concentration of 1μM only weakly impaired the ADCC of anti-CD20 mAbs, with less influence in combinations with obinutuzumab than with rituximab and by acalabrutinib than by ibrutinib or tirabrutinib. In summary, NK cell line-based assays permitted the sensitive detection of ADCC of therapeutic anti-CD20 mAbs against CLL cells and of the interference of KI with this important killing mechanism. --- ## Body ## 1. Introduction Monoclonal antibodies (mAbs) play an important role in the treatment of chronic lymphocytic leukemia (CLL) and other B-cell malignancies. Thus the anti-CD20 mAb rituximab is part of the current standard chemoimmunotherapeutic regimen for first-line treatment of CLL patients without comorbidity [1] and the benefit for CLL patients with comorbidity under treatment with the DNA damaging agent chlorambucil was superior in combination with the glycoengineered type II anti-CD20 mAb obinutuzumab than with rituximab [2]. The management of CLL has undergone profound changes also owing to the availability of novel agents that target B-cell receptor signaling [3]. Among these, the kinase inhibitors (KI), idelalisib and ibrutinib that target PI3K-δ and BTK, respectively, have gained approval for the treatment of CLL [4, 5]. This clinical progress was supported by preclinical drug assessment for selecting development candidates and recognizing the underlying mechanisms. For further improvement of therapeutic options these preclinical efforts need to be continued, for example, for designing efficacious drug combinations.Cell killing by therapeutic mAbs proceeds via direct cell death induction and via indirect mechanisms that are mediated by the Fc (fragment crystallizable) portion of mAbs and include complement-dependent cytotoxicity (CDC) as well as antibody-dependent cell-mediated cytotoxicity and phagocytosis (ADCC and ADCP) [6]. Effector cells expressing activating Fcγ receptors (FcγRs), for example, CD16, are activated by binding the Fc portions of antibodies once these are in contact with antigen and subsequently release cytotoxic agents such as perforin and granzymes that accomplish target cell lysis. 
Thus, in ADCC, the specificity of mAbs provided by the adaptive immune system is linked to powerful innate immune effector functions through the binding of Fc regions to Fcγ receptors, for example, on NK cells. In vitro assays of ADCC can be performed in a variety of formats employing different effector cells and a wide range of direct and indirect detection methods [6].

As a type II anti-CD20 mAb, obinutuzumab binds CD20 in a substantially different mode than rituximab and exhibits enhanced direct cytotoxicity and Fc-mediated functions [7]. For obinutuzumab as a single agent, we have previously shown more potent CLL cell depletion from whole blood samples and stronger direct cytotoxicity against CLL cells than for rituximab [8]. In addition, the mechanisms of obinutuzumab have been extensively compared with those of other anti-CD20 mAbs and characterized with regard to the effects of glycoengineering on ADCC and ADCP [9, 10].

Owing to their independent mechanisms of action, mAbs are considered promising combination partners for KI, albeit with the possible risk that kinase inhibition interferes with major mechanisms of action of mAbs, for instance, ADCC. Indeed, the irreversible BTK inhibitor ibrutinib was found to antagonize the ADCC of rituximab [11], whereas in the presence of the phosphatidylinositide 3-kinase- (PI3K-) δ inhibitor idelalisib the ADCC of alemtuzumab was maintained [12].

The goal of the present study was to combine the use of (1) nonradioactive ADCC detection, (2) NK92-derived recombinant effector cell lines [13, 14], and (3) primary CLL samples as target cells in nonautologous assays. With NK92 cell line-based assays, we were able to distinguish the ADCC of rituximab and obinutuzumab and to evaluate the interference of kinase inhibitors with the ADCC of these anti-CD20 mAbs.

## 2. Materials and Methods

### 2.1. Cell Lines and Patient Samples

The CLL-derived EBV-transformed lymphoblastoid lines JVM-3 and Mec1 as well as the Burkitt lymphoma cell line Raji were purchased from the German Collection of Microorganisms and Cell Cultures (DSMZ, Braunschweig, Germany) and used as target cells in ADCC assays. Primary CLL cells for use as target cells were isolated from peripheral blood samples of patients previously diagnosed with CLL according to standard criteria. Blood samples were obtained with informed consent in accordance with the World Medical Association Declaration of Helsinki following a study protocol approved by the local ethics committee at the University of Cologne (approval number 11-319).

Recombinant NK92-derived effector cell lines had been engineered to express the high-affinity allele of the FcγR IIIa, known as CD16-158V or CD16-176V depending on exclusion or inclusion of the signal peptide in the amino acid count, and were obtained under material transfer agreements with Conkwest or Roche, respectively. The cell lines CD16.NK92.26.5 [13] and NK92-1708-CD16 clone LC3E11 [15] are referred to in this report in abbreviated form as 26.5 and 1708-LC3E11, respectively. NK92 cells as obtainable from the American Type Culture Collection do not express FcγR IIIa. The derivative cell lines used here were not only genetically modified to express CD16 but also represent subclones with altered expression of killer cell Ig-like receptors (KIRs), which dampens alloreactivity in ADCC assays with nonautologous hematopoietic cells.

### 2.2. Therapeutic Antibodies and Small Molecule Drugs

Obinutuzumab was a kind gift from Roche Glycart.
Rituximab and alemtuzumab were obtained from the hospital pharmacy. Antibodies were used at a concentration of 10 μg/ml, which is known to elicit maximal effects. The PI3K inhibitors idelalisib, duvelisib (IPI-145), and copanlisib (BAY 80-6946) as well as the irreversible BTK inhibitors ibrutinib, acalabrutinib (ACP-196), and tirabrutinib (ONO/GS-4059) were purchased from Selleck via AbSource (Munich, Germany) and used as stock solutions prepared in DMSO. The DMSO concentration in cell culture media was limited to 0.5%.

### 2.3. Cell Isolation and Culture

For isolation of CLL cells, Ficoll-Paque Plus sedimentation (GE Healthcare, Freiburg, Germany) was preceded by incubation of whole blood with the RosetteSep B-cell purification antibody cocktail (Stem Cell Technologies) to aggregate unwanted cells with erythrocytes. The purity of isolated CLL cells was determined by flow cytometry using FITC-labeled anti-CD5 and PE-labeled anti-CD19 antibodies (BD Biosciences, Heidelberg). Isolated CLL cells and cell lines used as target cells were cultured in RPMI medium supplemented with 10% heat-inactivated fetal calf serum (FCS) and antibiotics and antimycotics (Gibco, Thermo Fisher Scientific, Darmstadt, Germany) at 37°C in a humidified atmosphere containing 5% carbon dioxide.

For use as effector cells in ADCC assays, peripheral blood mononuclear cells (PBMCs) from a healthy donor were isolated from heparinized blood samples by Ficoll gradient centrifugation. NK92-derived recombinant cell lines were cultured in α-MEM without nucleosides (Life Tech) supplemented with 10% heat-inactivated FCS, 10% heat-inactivated horse serum, 0.1 mM β-mercaptoethanol, 1.5 mM L-glutamine, 0.2 mM myoinositol, 1 mM sodium pyruvate, and 2 μM folic acid. Cultures of 26.5 cells were supplemented with 100 U/ml IL-2 (Immunotools) at each splitting and monitored for GFP expression. 1708-LC3E11 cells were IL-2-independent and were selected with 5 μg/ml puromycin.

### 2.4. Determination of Antibody-Dependent Cell-Mediated Cytotoxicity (ADCC)

Cocultures of target and effector cells were performed on U-bottom 96-well plates in AIM-V medium (Thermo Fisher Scientific). Target cells, that is, Raji cells and primary CLL cells, were seeded at a density of 3 × 10⁴ cells per well, with the exception of JVM-3 and Mec1 cells, which were seeded at 1.5 × 10⁵ cells per well. The excess of effector over target cells was 5-fold with NK92-derived recombinant effector cell lines and 15-fold with freshly isolated PBMCs. Effector cells were plated first, followed by target cells and finally 10 μg/ml of mAbs. After coincubation with effector cells, target cell lysis was measured with an LDH release cytotoxicity detection kit according to the instructions of the supplier (Roche Diagnostics, Mannheim, Germany). LDH released from lysed cells reduces iodotetrazolium chloride, and the amount of the colored formazan formed was measured via absorbance at 450 nm in a FluoSTAR OPTIMA plate reader. Low and high controls, that is, the spontaneous and maximal LDH release, respectively, were determined from target cells alone and after complete lysis of target plus effector cells with 1% Triton X-100. Cocultures of target and effector cells were performed with or without addition of antibody, and ADCC was calculated as the antibody-dependent enhancement of cytotoxicity in these cocultures.
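As a cross-check of the arithmetic described above, the normalization of raw LDH absorbances to the low and high controls and the calculation of ADCC as the antibody-dependent increment can be written in a few lines. The following Python sketch is our own illustration with hypothetical absorbance values; it is not part of the original analysis pipeline.

```python
import numpy as np

def percent_cytotoxicity(sample_od, low_od, high_od):
    """Convert raw LDH absorbance (450 nm) to percent of maximal lysis.

    low_od:  spontaneous LDH release (target cells alone)
    high_od: maximal LDH release (1% Triton X-100 lysis of target plus effector cells)
    """
    return 100.0 * (np.asarray(sample_od) - low_od) / (high_od - low_od)

# Triplicate absorbance readings (hypothetical values for illustration).
low_od, high_od = 0.20, 1.60
coculture_no_mab = percent_cytotoxicity([0.62, 0.65, 0.60], low_od, high_od)
coculture_with_mab = percent_cytotoxicity([1.05, 1.10, 1.02], low_od, high_od)

# ADCC in the strict sense: the antibody-dependent increment over the
# alloreactivity-related background lysis in target/effector cocultures,
# not over the spontaneous lysis of target cells alone.
adcc = coculture_with_mab.mean() - coculture_no_mab.mean()
print(f"background cytotoxicity: {coculture_no_mab.mean():.1f}%")
print(f"ADCC: {adcc:.1f} percentage points")
```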
With primary CLL cells as target cells, cytotoxicity was detected via calcein retention instead of LDH release. For this purpose, CLL cells were stained for 30 minutes with 3.5 μM calcein-AM (Promokine) and washed three times in AIM-V medium before coculture with effector cells. For determining calcein retention in specifically labeled target cells, the cells remaining in cocultures were sedimented by centrifugation for 5 min at 400 × g and lysed in 200 μl of 5 mM sodium borate buffer containing 1% Triton X-100. After transfer of 180 μl of the lysates into black 96-well plates (Nunc), fluorescence signals were measured at excitation and emission wavelengths of 485 and 520 nm, respectively, in a FluoSTAR OPTIMA plate reader. The calcein retention in target cells without addition of effector cells, corresponding to spontaneous label release, was set to 100% and that in completely lysed cocultures to 0%. Calcein retention values were converted to the complementary percentages of cytotoxicity to comply with the LDH release measurements and the commonly used presentation of ADCC assay results.

### 2.5. Data Presentation and Statistics

In box plot diagrams, boxes represent the middle quartiles of the distributions, and whiskers extend to the minimal and maximal values within 1.5 interquartile ranges. Outliers are indicated by filled diamonds, while plus signs denote arithmetic means. Significance levels were determined by two-tailed, paired Student's t-test and indicated as n.s. (not significant), ∗p < 0.05, ∗∗p < 0.01, and ∗∗∗p < 0.001.
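The complementary conversion of calcein retention to cytotoxicity and the paired two-tailed t-test can be sketched in the same way; scipy's `ttest_rel` stands in here for the authors' statistics software, and all numbers are hypothetical.

```python
import numpy as np
from scipy import stats

def percent_cytotoxicity_calcein(sample_f, spontaneous_f, lysed_f):
    """Convert calcein fluorescence (485/520 nm) to percent cytotoxicity.

    spontaneous_f: labeled target cells without effector cells (100% retention)
    lysed_f:       completely lysed cocultures (0% retention)
    """
    retention = 100.0 * (np.asarray(sample_f) - lysed_f) / (spontaneous_f - lysed_f)
    return 100.0 - retention  # complementary percentage of cytotoxicity

# Example conversion for one coculture well (hypothetical fluorescence units).
print(percent_cytotoxicity_calcein([4200.0], spontaneous_f=9000.0, lysed_f=1000.0))

# Paired comparison across n patient samples (hypothetical cytotoxicity values).
cytotox_mab = np.array([52.0, 61.0, 47.0, 58.0, 55.0, 49.0])   # with anti-CD20 mAb
cytotox_ctrl = np.array([28.0, 33.0, 25.0, 31.0, 30.0, 27.0])  # coculture without mAb
t_stat, p_value = stats.ttest_rel(cytotox_mab, cytotox_ctrl)   # two-tailed, paired
print(f"mean ADCC: {(cytotox_mab - cytotox_ctrl).mean():.1f} points, p = {p_value:.4f}")
```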
## 3. Results

### 3.1. Measuring ADCC with Different Effector Cells

NK92-derived effector cell lines were compared to unstimulated PBMCs in an assay format that uses LDH release from target cells as a measure of cytotoxicity (Figure 1). Along with the spontaneous LDH release from target cells alone, that from cocultures of target and effector cells was monitored as background for determining the enhancement of cytotoxicity by addition of mAbs, which were used at a concentration of 10 μg/ml to assure a maximal response.

Figure 1: ADCC mediated by PBMCs and NK92-derived cell lines. Percentages of maximal LDH release from target cells were determined with either PBMCs at 15-fold excess (a) or two strains of genetically modified NK92 cells expressing CD16-176V at 5-fold excess (b, c), with or without 10 μg/ml of the indicated antibodies. (a) In six independent experiments, 1.5 × 10⁵ JVM-3 or Mec1 target cells per well were incubated with PBMCs from a healthy donor, with or without alemtuzumab, for 4 hours. (b, c) In four independent assays each, LDH release from 3 × 10⁴ Raji cells was determined after 2 hours of coculture with the effector cell lines 26.5 or 1708-LC3E11 in the absence or presence of three different antibodies. LDH release from mixtures of effector and target cells without or with antibodies was compared by paired Student's t-test. In addition, the efficacy of the ADCC elicited by rituximab versus obinutuzumab was compared by paired t-test. ∗p < 0.05; ∗∗p < 0.01.

Compared to spontaneous target cell lysis, the relative LDH release was significantly increased by approximately 30% in the presence of effector cell lines (Figures 1(b) and 1(c)), but only marginally, that is, by less than half of that amount, in cocultures with PBMCs (Figure 1(a)). Although target cell lines, cell densities, and incubation times differed, the substantial antibody-independent cytotoxicity in cocultures with the effector cell lines, as opposed to that with donor-derived PBMCs, appears to be connected with alloreactivity; owing to its size, it needs to be carefully separated from the antibody-dependent increase in cytotoxicity that defines ADCC in the proper sense.
In this context, it may be worth noting that NK92 cells engineered only for forced CD16 expression, but not for expression of novel KIRs, are functional in ADCC assays with nonhematological target cells [16] but yielded high spontaneous antibody-independent cytotoxicity owing to alloreactivity in cocultures with Raji cells, which surpassed and masked ADCC (not shown).

Despite the higher cytotoxicity observed in cocultures of target cells with NK92-derived effector cell lines than with PBMCs, alemtuzumab additionally induced equal or higher ADCC in the cell line-based assays. Moreover, the ADCC of alemtuzumab observed with PBMCs as effector cells was higher against Mec1 than against JVM-3 cells. Comparing the two similar NK92-derived effector cell lines, 26.5 cells yielded higher percentages of cytotoxicity altogether, and especially a greater increment of cytotoxicity upon addition of mAbs, than 1708-LC3E11 cells, whereas the efficacy of the ADCC of obinutuzumab appeared 13% or 36% higher than that of rituximab with 26.5 or 1708-LC3E11 cells, respectively. In summary, the variance between individual experiments was higher with PBMCs than with recombinant NK92 cells as effector cells, as evidenced by a higher frequency of outliers. Consequently, the investigated effector cell lines afford more sensitive detection of ADCC and of its impairment by kinase inhibitors than PBMCs as effector cells.

### 3.2. Interference of Kinase Inhibitors with the ADCC of Anti-CD20 mAbs against Raji Cells

Currently, two kinase inhibitors have gained approval for the treatment of CLL and other B-cell malignancies, namely, the PI3K-δ inhibitor idelalisib and the irreversible BTK inhibitor ibrutinib. Using 26.5 cells as effector cells, we determined the ADCC against Raji cells of combinations of CD20 antibodies with these two targeted agents via LDH release (Figure 2(a)). To challenge the stability of NK cell-mediated ADCC, both kinase inhibitors were used at a concentration of 10 μM, which clearly surpasses clinically achievable concentrations.

Figure 2: Impairment of ADCC against Raji cells by kinase inhibitors. The antibody-dependent increase in the percentages of maximal LDH release was determined with a 5-fold excess of 26.5 effector cells in the absence or presence of 10 μM idelalisib or ibrutinib (a) or the PI3K inhibitors duvelisib and copanlisib (b). Data were derived from nine (a) or eight (b) independent experiments, each performed with triplicate samples. Asterisks above the boxes and whiskers denote the significance of enhanced cytotoxicity compared to the control without addition of antibodies and inhibitors as determined by paired t-tests, while the p value ranges for the impairment of ADCC compared to anti-CD20 antibodies as single agents are indicated beneath. Furthermore, comparisons by paired t-test were performed among mAb and KI treatments and are indicated at the top of the diagrams in black and blue print, respectively. ∗p < 0.05; ∗∗p < 0.01; ∗∗∗p < 0.001.

The LDH release owing to alloreactivity in cocultures of 26.5 effector cells and Raji target cells was hardly influenced by these high concentrations of idelalisib and ibrutinib. The significant enhancement of cytotoxicity due to the addition of anti-CD20 mAbs was largely maintained in the presence of 10 μM idelalisib, indeed more effectively in the combinations with obinutuzumab than with rituximab. In contrast, the ADCC of CD20 antibodies was virtually abolished by 10 μM ibrutinib.
Consequently, the cytotoxicity against Raji cells of rituximab and obinutuzumab in combination with idelalisib was significantly higher than in combination with ibrutinib (p = 0.009 and p = 0.007, respectively; n = 9).

The widely different impairment of ADCC by idelalisib and ibrutinib prompted us to test whether further PI3K inhibitors in development as drugs against B-cell malignancies, namely, duvelisib and copanlisib, behaved similarly to idelalisib in the same assay system (Figure 2(b)). In combinations with rituximab and obinutuzumab, these inhibitors were likewise used at a concentration of 10 μM, which by far exceeds clinically obtainable concentrations, and copanlisib was additionally used at 1 μM, owing to its previously noted higher cytotoxicity against malignant B cells compared to idelalisib [17]. Like the other investigated BCR signaling inhibitors, these PI3K inhibitors did not substantially affect the background cytotoxicity in mixtures of effector and target cells without addition of mAbs. The ADCC against Raji cells mediated by anti-CD20 mAbs was influenced by duvelisib in a similar manner as by idelalisib; that is, ADCC was only marginally impaired, with a slightly stronger reduction of rituximab than of obinutuzumab effects. Copanlisib led to a considerably stronger, concentration-dependent impairment of ADCC than idelalisib and duvelisib. With 10 μM copanlisib, the enhancement of LDH release owing to addition of mAbs was almost completely eliminated. In six assay repetitions that contained all four investigated KI, the impairment of ADCC by idelalisib and duvelisib appeared approximately equal, but it was stronger by ibrutinib than by copanlisib. Significant ADCC of anti-CD20 mAbs was maintained in the presence of 10 μM duvelisib and, in the case of obinutuzumab, also in combination with 1 μM copanlisib.

In summary, high concentrations of kinase inhibitors did not interfere with the background LDH release from mixtures of Raji target cells and NK92-26.5 cells but impaired the ADCC mediated by anti-CD20 mAbs to different degrees. ADCC was significantly disturbed by the BTK inhibitor ibrutinib and the pan-class I PI3K inhibitor copanlisib, in contrast to the PI3K inhibitors idelalisib and duvelisib, which selectively target the PI3K-δ isoform.

### 3.3. ADCC against CLL Cells of Rituximab and Obinutuzumab in Combination with Kinase Inhibitors

For ADCC measurements in clinical samples, isolated CLL cells were used as target cells instead of the cell line Raji in the newly established ADCC assay using 26.5 or 1708-LC3E11 effector cells. Even at a three times higher target cell density than used with Raji cells, that is, 9 × 10⁴ cells per well, total lysis of CLL cells by Triton X-100 did not yield sizable LDH release (not shown). Therefore, measurements of ADCC against primary CLL samples were performed after labeling these target cells with calcein for fluorimetric determination of label release during coculture with effector cells. The ADCC of anti-CD20 mAbs against CLL cells was determined in combinations with idelalisib or ibrutinib at concentrations of 10 μM, as used in the determinations of LDH release from Raji cells, and of 1 μM, to approach clinically relevant concentrations (Figure 3).

Figure 3: Impairment of ADCC against primary CLL cells by kinase inhibitors.
The antibody-dependent increase in the percentages of minimal calcein retention was determined in six or seven independent experiments with a 3-fold excess of recombinant 26.5 (a) or 1708-LC3E11 (b) effector cells over CLL target cells. Cytotoxicity was compared to controls without antibodies by paired two-tailed t-test. ∗p < 0.05; ∗∗p < 0.01; ∗∗∗p < 0.001.

Similar to the ADCC against Raji cells detected via LDH release (Figures 1(b) and 1(c)), rituximab and obinutuzumab also elicited 20–40% higher cytotoxicity against isolated CLL target cells, detected via calcein release, than observed in cocultures without addition of mAbs. Ibrutinib impaired the NK92-26.5-mediated ADCC of rituximab and obinutuzumab against CLL cells more strongly than idelalisib but also affected the antibody-independent cytotoxicity in mixtures of target and effector cells (Figure 3(a)); compared with the results against Raji cells (Figure 2), the former observation is similar and the latter divergent. The fluorescence signals in target cells in the absence of effector cells, which corresponded to spontaneous calcein retention and were set as maximal values, were equal without or with treatment with KI (not shown). Unlike the ADCC of anti-CD20 mAbs against Raji cells, that against isolated CLL cells was impaired by 10 μM idelalisib, although the ADCC of obinutuzumab remained significant. 1 μM ibrutinib decreased the ADCC of the assessed anti-CD20 mAbs to a similar degree as 10 μM idelalisib. Overall, cytotoxicity appeared to be impaired more strongly by kinase inhibitors in cocultures with CLL than with Raji target cells.

Comparing the two NK92-derived effector cell lines, the alloreactivity-related antibody-independent cytotoxicity was less affected by 10 μM ibrutinib in cocultures with 1708-LC3E11 than with 26.5 cells. In addition, 1 μM idelalisib interfered more strongly with the ADCC of anti-CD20 mAbs mediated by 1708-LC3E11 than by 26.5 effector cells. The ADCC of rituximab and obinutuzumab was impaired by idelalisib and ibrutinib with greater significance in assays using 1708-LC3E11 than 26.5 effector cells.

For a more detailed analysis of the impact of KI on ADCC, the percentages of cytotoxicity in the absence of rituximab and obinutuzumab were subtracted from those in their presence and compared head to head (Figure 4). Both the impairment of ADCC by KI and the superior ADCC of obinutuzumab compared to rituximab were indicated more clearly by 1708-LC3E11 effector cells (Figure 4(b)) than by 26.5 cells (Figure 4(a)). Obinutuzumab consistently showed significantly higher efficacy of ADCC than rituximab. This difference in ADCC efficacy was largely maintained in the presence of idelalisib and with 1 μM ibrutinib, which means that the ADCC of obinutuzumab was less impaired by KI than that of rituximab.

Figure 4: Detailed analysis of the impact of idelalisib and ibrutinib on the ADCC of rituximab and obinutuzumab. The differences in the cytotoxicity against CLL cells in cocultures with the NK92-derived effector cells 26.5 (a) or 1708-LC3E11 (b) in the presence and absence of rituximab and obinutuzumab were calculated from the data shown in Figure 3. Means and standard errors of the means are shown. The ADCC of rituximab and obinutuzumab was compared by paired two-tailed t-test. ∗p < 0.05; ∗∗p < 0.01.
### 3.4. Impact of BTK Inhibitors on the ADCC against CLL Cells of Rituximab and Obinutuzumab

1708-LC3E11 effector cells, which had indicated the impairment of ADCC against CLL cells by KI with greater sensitivity than 26.5 cells, were employed to compare ibrutinib with the second-generation irreversible BTK inhibitors acalabrutinib (ACP-196) and tirabrutinib (GS-4059) at a clinically relevant concentration of 1 μM (Figure 5(a)). With a different set of CLL samples than that used in Figure 4(b), rituximab and obinutuzumab significantly enhanced NK cell-mediated cytotoxicity by 31% and 45%, respectively, and the significance of ADCC, which is equivalent to this enhancement, was maintained in combination with the investigated irreversible BTK inhibitors at clinically relevant concentrations of 1 μM. Significant impairment of ADCC was observed in combinations of rituximab with ibrutinib and tirabrutinib, but not with acalabrutinib, and did not occur in the tested combinations with obinutuzumab. Of note, acalabrutinib, which has improved BTK selectivity, led to a weaker impairment of ADCC than ibrutinib and tirabrutinib. Obinutuzumab showed 44% higher efficacy of ADCC than rituximab (Figure 5(b)). This difference even increased to approximately 60% in combinations of the anti-CD20 mAbs with ibrutinib and acalabrutinib, since the ADCC of obinutuzumab was less affected than that of rituximab also by second-generation BTK inhibitors.

Figure 5: Comparison of the impairment of ADCC against primary CLL cells by different irreversible BTK inhibitors. The antibody-dependent increase in the percentages of minimal calcein retention was determined with a 3-fold excess of 1708-LC3E11 effector cells over CLL target cells in seven independent experiments. (a) Asterisks above the boxes and whiskers denote the significance of enhanced cytotoxicity compared to the control without addition of antibodies and inhibitors as determined by paired t-tests. Furthermore, combination treatment was compared to that with anti-CD20 antibodies as single agents. (b) The mean differences ± SEM in the cytotoxicity against CLL cells in the presence and absence of rituximab and obinutuzumab were calculated from the data shown in (a). The ADCC of rituximab and obinutuzumab was compared by paired two-tailed t-test. ∗p < 0.05; ∗∗p < 0.01; ∗∗∗p < 0.001.
## 4. Discussion

Convenient and robust NK cell line-based assays permitted the sensitive detection of the ADCC of therapeutic anti-CD20 antibodies against CLL cells and of the interference of KI with this important killing mechanism of mAbs. To our knowledge, the present report combines the use of recombinant NK92-derived effector cell lines expressing CD16 for the first time with that of primary CLL samples as target cells for nonradioactive ADCC determination.

Compared to spontaneous target cell lysis, the antibody-independent cytotoxicity in cocultures with effector cell lines was substantially increased, in contrast to cocultures with PBMCs. This is in agreement with the expectation that clonal NK cell lines may elicit stronger alloreactivity than natural NK cell populations, which achieve tolerance by expressing various combinations of activating and inhibitory KIRs. Compared to PBMCs as effector cells, the interassay variability was reduced with NK92-derived effector cell lines in a similar manner as in assays using purified NK cells [18].

The ADCC against Raji cells elicited by rituximab was only approximately one-third of the background lysis with CD16-expressing NK92 cells that had not been modified for altered KIR expression [19], but slightly higher than the corresponding spontaneous cytotoxicity with 26.5 effector cells. The comparatively high antibody-independent cytotoxicity still observed in cocultures with the recombinant CD16-expressing NK92-derived cell lines used here emphasizes the need to take it into account as a contribution to the overall cytotoxicity observed in the presence of both antibodies and effector cells. For the calculation of ADCC in the literal sense, the appropriate background to be subtracted is therefore the alloreactivity-related cytotoxicity in cocultures of target and effector cells and not the spontaneous lysis of target cells alone. Although the percentages of cytotoxicity obtained by Duong et al., 2015 [20], with CD16-expressing NK92-26.5 cells as effector cells are similar to the overall cytotoxicity in the presence of mAbs observed here, they lack the important control of target and effector cells without mAbs and thus do not allow KI effects on antibody-dependent and antibody-independent cytotoxicity to be distinguished. Using similar NK92-derived effector cells, we could show that all investigated inhibitors of BTK and PI3K-δ at clinically relevant concentrations exclusively affected ADCC and left the alloreactivity-related antibody-independent cytotoxicity undisturbed. Only at a 10-fold higher concentration did ibrutinib also inhibit the antibody-independent, NK cell-mediated cytotoxicity.

Our observation that the ADCC of anti-CD20 mAbs was inhibited more severely by the irreversible BTK inhibitor ibrutinib than by the PI3K-δ inhibitor idelalisib agrees with the expectation from observations with these inhibitors separately [11, 12, 17] and with other direct comparisons of the impact of these inhibitors on ADCC in different formats, namely, NK cell-mediated ADCC detected via LDH or 51Cr release, respectively [21, 22], or in an NK effector cell line-based assay with detection by calcein release [20].
Following up on the preclinical assessment of selected PI3K inhibitors [17], duvelisib, like idelalisib, hardly disturbed NK cell-mediated ADCC, in line with its very similar molecular structure and in spite of its targeting PI3K-γ in addition to PI3K-δ. In contrast, copanlisib, which was recently approved for the treatment of follicular lymphoma, strongly impaired the ADCC of rituximab and obinutuzumab at a concentration of 10 μM but permitted significant ADCC of these anti-CD20 mAbs when used at a concentration of 1 μM, which is cytotoxic for CLL cells, in line with our previous observations [17]. Owing to the short duration of the assays, effects of kinase inhibitors on ADCC are more likely due to specific inhibition of signaling emanating from the Fcγ receptors than to general cytotoxicity toward NK effector cells.

While the impairment of the ADCC of anti-CD20 mAbs by ibrutinib may be partly due to decreased CD20 expression on the surface of CLL cells [23], we observed that the enhanced ADCC of the Fc- and glycoengineered obinutuzumab was less disturbed by ibrutinib than that of rituximab, similar to ADCC-induced cell lysis as well as NK cell activation and degranulation as surrogate markers in autologous systems [24]. Also with the second-generation irreversible BTK inhibitors, the ADCC of obinutuzumab against CLL cells was less impaired than that of rituximab, in agreement with reports on the interference of acalabrutinib or tirabrutinib with the ADCC of anti-CD20 mAbs against Mec1 or SUDHL-4 cells, respectively [25]. The different capacities of ibrutinib and acalabrutinib to interfere with the ADCC of anti-CD20 mAbs became apparent at inhibitor concentrations of 1 μM (Figure 5(a)) or 10 μM [26] in combinations with rituximab or obinutuzumab, respectively.

In conclusion, NK cell line-based ADCC assays reliably indicated superior induction of ADCC against primary CLL cells by obinutuzumab compared to rituximab as well as less interference by BTK inhibitors. Since these heterologous assays employ stable effector cell lines and thus avoid variability caused by differences in the immune status and genetic background of donor cells, they afford robust and sensitive determinations of ADCC against individual CLL samples in 96-well format. This could be useful for preclinical comparisons of combinations of mAbs and KI.
# Navel Orange Maturity Classification by Multispectral Indexes Based on Hyperspectral Diffuse Transmittance Imaging

**Authors:** Xuan Wei; Jin-Cheng He; Da-Peng Ye; Deng-Fei Jie
**Journal:** Journal of Food Quality (2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1023498

---

## Abstract

Maturity grading is important for the quality of fruits, and nondestructive maturity detection can greatly benefit consumers and the fruit industry. In this paper, hyperspectral images of navel oranges were obtained using a diffuse transmittance imaging system, and multispectral indexes were built to identify maturity with the hyperspectral technique. Five indexes were proposed, combining the transmittances at the wavelengths of 640 and 760 nm (red edges) and 670 nm (for chlorophyll content), to grade the navel oranges into three maturity stages. The index (T670 + T760 − T640)/(T670 + T760 + T640) appeared to be the most appropriate for classifying maturity, especially for distinguishing immature oranges, which could be identified directly from the value of this index. Different indexes were used as the input of linear discriminant analysis (LDA) and of the k-nearest neighbor (k-NN) algorithm to identify maturity, and it was found that k-NN with (T670 + T760 − T640)/(T670 + T760 + T640) reached the highest correct classification rate of 96.0%. The results showed that the built index was feasible and accurate for the nondestructive classification of oranges based on hyperspectral diffuse transmittance imaging. It will greatly help to develop low-cost, real-time multispectral imaging systems for the nondestructive detection of fruit quality in the industry.

---

## Body

## 1. Introduction

Fruit quality plays a vital part in marketing, and the quality and harvest time of oranges generally depend on experienced farmers capable of on-tree visual inspection [1]. However, as maturity can be influenced by many factors, artificial identification may result in an inappropriate harvest time before the fruit matures commercially [2]. Moreover, the fruits must be handled and processed after harvesting. It is therefore important to promote the development of maturity grading with nondestructive analytical methods for classifying fruits for different commercial applications.

The hyperspectral imaging technique is a nondestructive technology that takes advantage of spectroscopic and imaging techniques, providing spectral and spatial information simultaneously [3]. Spatial distribution information of a chemical entity can thus be obtained based on the spectral analysis at each pixel. As a result, each hyperspectral image contains a large amount of information in a three-dimensional (3D) form called a "hypercube," and an object can be characterized more reliably by the hypercube than by traditional machine vision or spectroscopy techniques [4].

Because each hyperspectral image contains a large amount of information, it is critical to choose characteristic bands that collect effective information about the quality attribute of interest. Many algorithms have been applied to characteristic band selection for quality parameters such as soluble solids content (SSC), titratable acidity (TA), and firmness.
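To make the hypercube structure concrete, a minimal NumPy sketch (dimensions and values are invented for illustration) shows how a single-pixel spectrum or a single-band image is sliced from the three-dimensional array:

```python
import numpy as np

# A hypercube stores a full spectrum at every spatial pixel:
# shape (rows, cols, bands). All dimensions and values here are made up.
rows, cols, bands = 256, 256, 240
wavelengths = np.linspace(390.0, 1055.0, bands)  # nm, a VIS/NIR range
cube = np.random.default_rng(0).uniform(0.0, 1.0, (rows, cols, bands))

pixel_spectrum = cube[120, 80, :]                                # spectrum of one pixel
band_image = cube[:, :, np.argmin(np.abs(wavelengths - 670.0))]  # image at ~670 nm
mean_spectrum = cube.reshape(-1, bands).mean(axis=0)             # scene-average spectrum
```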
Several studies have addressed the quality of thin-skin fruits. Cen et al. [5] applied supervised classification algorithms combined with feature selection methods to identify chilling injury in cucumbers with the hyperspectral imaging technique. Li et al. [4] detected defects on peach skin using hyperspectral imaging (325–1100 nm); based on the proposed multispectral algorithm, the overall detection accuracy for the tested samples was 96.6%. Generally, hyperspectral imaging detection is conducted in reflection mode, by which means it is not easy to collect internal information from thick-skin fruits such as oranges. Meanwhile, owing to the smooth skin of oranges, light spots often appear in images obtained in reflection mode, causing difficulties in further analysis. This study therefore used diffuse transmittance mode to avoid the light spots generated in reflection mode. Diffuse transmittance also reduces the influence of the shape, size, and core of the fruit, and adjusting the lighting angle in diffuse transmittance systems reduces stray radiation [6].

In studies of fruit maturity discrimination, Sun et al. [7] identified Hami melon maturity with hyperspectral imaging technology combined with characteristic wavelength selection methods and SVM; at least 9 characteristic variables were selected. Zhang et al. [8] used the spectra and texture features at 6 characteristic wavelengths in the range of 441.1–1013.97 nm to classify strawberry maturity with an accuracy of over 85%. In those studies, algorithms were adopted to select important wavelengths, and more than 3 wavelengths were kept. Although maturity can be evaluated comprehensively from various chemical components, some researchers believe that certain wavelengths related to pigments or water content are closely related to maturity [9]. Qin and Lu [10] correctly classified tomatoes into three ripeness groups based on the ratio of the absorption coefficient at 675 nm (for chlorophyll content) to that at 535 nm (for anthocyanins). Schouten et al. [11] found correlations between the chlorophyll content in pericarp tomato tissue and the NDVI, (R780 − R660)/(R780 + R660), obtained by the remittance VIS spectroscopy method.

In industrial applications, the high cost of the equipment and the slow processing of images remain the main obstacles to the adoption of hyperspectral imaging in fruit detection [12]. A multispectral vision camera is considered a solution, as it is a cheaper system and takes less processing time because only a few (three or four) wavelengths and images are needed [13]. Hyperspectral imaging can be applied in the laboratory to compare different wavelength combinations. Qin et al. [14] developed a prototype for real-time inspection of citrus canker based on two wavelengths centered at 730 and 830 nm. The two-band spectral imaging module was integrated with the machine's sorting capacity to carry out online inspection of canker with samples moving at a speed of 5 fruits/s, and the system presented an overall classification accuracy of 95.3%. The wavelengths used in the prototype had been identified previously from hyperspectral reflectance images [15, 16].

So, with a view to selecting multispectral indexes that could be applied to develop low-cost and real-time imaging systems, this study aimed to find indexes appropriate for the maturity classification of navel oranges based on hyperspectral diffuse transmittance imaging. The present paper proposes indexes based on specific wavelengths, combined with pattern recognition methods, for maturity identification.
The proposed multispectral indexes will effectively reduce calculation time in high-dimensional data processing and will be helpful for online detection system development.

## 2. Materials and Methods

### 2.1. Fruit Material

One hundred and fifty samples at different maturity stages (assessed in the field by an artificial labeling method based on the period of growth from blossom) were harvested from local farms in Jiangxia District, Wuhan, China (30°32′N, 114°32′E). Navel oranges (Citrus sinensis Osbeck cv. Robertson) at different maturity stages were picked as follows: (I) immature, 180 days from blossom; (II) midmature, 200 days from blossom; (III) mature, 220 days from blossom [17]. All intact fruits were first cleaned and numbered and then stored at 25°C and 60% relative humidity for up to 24 h before processing. Fruit parameters such as weight and appearance were measured prior to the acquisition of transmittance hyperspectral images.

### 2.2. Hyperspectral Image Acquisition

For hyperspectral diffuse transmittance imaging, a laboratory-type spectrum measurement device was designed (Figure 1). The system mainly consisted of a high-performance back-illuminated CCD camera (Andor Clara DR-328G, UK), an imaging spectrograph (SPECIM V10E-CL, Finland) covering the spectral range of 390–1,055 nm, and an assembled light unit containing four 50 W quartz tungsten halogen lamps (Oriel Instruments 6332, USA) with eight fans to dissipate heat. Samples were carried on a mobile platform whose speed was controlled by a computer; a spectral image system (Spectral SECN-V17E) was used to set and adjust the parameters of the device, including exposure time, motor speed, image acquisition, and wavelength range. The spectral resolution is 2.8 nm, and the resolution of the CCD camera is 672 × 512 (spatial × spectral) pixels. In this work, the moving speed of the mobile platform was 2.0 mm/s and the exposure time of the CCD camera was 0.1 s. The whole system was assembled in a dark chamber. The acquired images were corrected with white and dark reference images using (1):

T = (Isample − Idark)/(Iwhite − Idark), (1)

where T is the corrected hyperspectral image, Isample is the original uncorrected hyperspectral image, Iwhite is the image of the white reference (obtained from a Teflon ball), and Idark is the image acquired by the system in the absence of lighting.

Figure 1 Schematic diagram of the diffuse transmittance hyperspectral imaging system used for navel orange maturity classification.
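To make the correction in (1) concrete, the following minimal Python sketch (our own illustration, not code from the paper) applies the white/dark reference correction to hypercubes stored as numpy arrays of identical shape:

```python
import numpy as np

def correct_hypercube(i_sample, i_white, i_dark):
    """Flat-field correction of Eq. (1): T = (Isample - Idark) / (Iwhite - Idark).

    All arguments are hypercubes (rows x cols x bands) acquired with the same
    settings; a small epsilon guards against division by zero."""
    num = i_sample.astype(np.float64) - i_dark
    den = i_white.astype(np.float64) - i_dark
    return num / np.where(np.abs(den) < 1e-9, 1e-9, den)
```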
### 2.3. Data Analysis

Spectra were extracted from the images with the ENVI software (Version 4.7, ITT Visual Information Solutions, Boulder, USA). A circular region of interest (ROI) was used to collect the average spectra, which were preprocessed by Savitzky-Golay smoothing. The classification models were developed with linear discriminant analysis (LDA) and the k-nearest neighbor (k-NN) algorithm. LDA is a common supervised identification method used in statistics, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events [18]. k-NN is a simple algorithm that stores all available cases and classifies new cases based on a similarity measure (e.g., distance functions) [19]. Calculations were carried out in Matlab 7.11.0 R2010b (MathWorks, Natick, USA). The classification results were evaluated by the correct classification rate (CCR).

### 2.4. Multispectral Indexes Establishment

Typically, multispectral indexes consist of differences or ratios of reflectance or transmittance at selected wavelengths. For the determination of fruit ripeness, several studies have adopted chlorophyll and water absorption bands, such as 680, 800, 900, and 950 nm, as the wavelengths most related to maturity [20]. Qin and Lu [10] observed large differences in the absorption spectra of tomatoes at three ripeness stages and correctly identified their ripeness using the ratio of the absorption coefficient at 675 nm (for chlorophyll content) to that at 535 nm (for anthocyanins). Lleó et al. [13] compared multispectral indexes extracted from hyperspectral images for assessing fruit maturity and proposed four indexes:

(a) Ind1 = R730 + R640 − 2 × R680
(b) Ind2 = R680/(R640 + R730)
(c) Ind3 = R675/R800
(d) IAD = log10(R720/R670)

Among the many applications of spectral data, Ye et al. [21] used the two-band vegetation index (TBVI) to identify the best two-band predictor of citrus yield. TBVI is calculated as (Rλ1 − Rλ2)/(Rλ1 + Rλ2). The TBVI based on the 823 nm (NIR) and 728 nm (red edge) wavelengths was found to provide optimal citrus yield information (R2 = 0.5795, RRMSE = 0.6636). In accordance with these references, multispectral indexes were established by analyzing the characteristics of the spectra.
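As a brief illustration of such two-band indexes, the sketch below computes TBVI from a measured spectrum; only the formula itself comes from the text, while the helper names and nearest-band lookup are our own assumptions:

```python
import numpy as np

def band_value(spectrum, wavelengths, target_nm):
    """Return the spectrum value at the acquired band closest to target_nm."""
    idx = int(np.argmin(np.abs(np.asarray(wavelengths) - target_nm)))
    return spectrum[idx]

def tbvi(spectrum, wavelengths, nm1, nm2):
    """Two-band vegetation index (R_lambda1 - R_lambda2)/(R_lambda1 + R_lambda2),
    as used by Ye et al. [21], e.g. tbvi(s, w, 823, 728)."""
    r1 = band_value(spectrum, wavelengths, nm1)
    r2 = band_value(spectrum, wavelengths, nm2)
    return (r1 - r2) / (r1 + r2)
```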
## 3. Results and Discussion

### 3.1. Spectra and Multispectral Indexes Considerations

The average spectra of oranges at the three maturity stages are shown in Figure 2. Because noise and irrelevant information existed at the beginning and end of the spectra, only the range from 550 to 1,000 nm was used for analysis. As Figure 2 shows, the main differences appear in the wavebands of 550–700 nm and 750–780 nm. The most prominent feature is the chlorophyll absorption valley (around 700 nm), which disappears as the fruit ripens, accompanied by a shift in peak position.
Meanwhile, a feature at 760 nm was also observed, which could be attributed to the third overtone of O–H and the fourth overtone of C–H; these functional groups are related to the concentration of internal constituents such as soluble solids content [22]. Therefore, 640 and 760 nm (red edges) and 670 nm (for chlorophyll content) were taken as the characteristic wavelengths in this study. Referring to Section 2.4, five multispectral indexes were established as follows:

(a) I1 = T760 + T640 − 2 × T670
(b) I2 = T670/(T640 + T760)
(c) I3 = (T760 − T670)/(T760 + T670)
(d) I4 = T760/T670
(e) I5 = (T670 + T760 − T640)/(T670 + T760 + T640)

Figure 2 Average spectra of navel oranges at three different maturity stages.
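A minimal Python sketch of these five indexes, assuming the transmittance values at the three characteristic bands have already been extracted from the corrected ROI spectra (the function name is ours):

```python
def maturity_indexes(t640, t670, t760):
    """Compute indexes I1-I5 of Section 3.1 from transmittance values
    at 640, 670, and 760 nm (scalars or numpy arrays)."""
    return {
        "I1": t760 + t640 - 2.0 * t670,
        "I2": t670 / (t640 + t760),
        "I3": (t760 - t670) / (t760 + t670),
        "I4": t760 / t670,
        "I5": (t670 + t760 - t640) / (t670 + t760 + t640),
    }
```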
### 3.2. Sample Measurements

Table 1 shows parameters of the calibration and prediction sets. Correlation analysis showed no obvious correlation between these parameters of the 150 samples and maturity. The mass and size of the samples at different maturity stages were closely matched; minimizing differences in mass and size reduces the influence of individual fruit on spectral image acquisition. The variation of SSC among maturity stages was significant (P < 0.05), with SSC increasing from immature to mature fruit.

Table 1 Some physical parameters of navel oranges at three different maturity stages (mean ± SD).

| Set | Class | Number | Weight (g) | Height (mm) | Perimeter (mm) | SSC (°Brix) |
| --- | --- | --- | --- | --- | --- | --- |
| Calibration | Immature | 30 | 120.7 ± 16.80 | 55.6 ± 3.37 | 63.0 ± 3.65 | 12.8 ± 1.89 |
| Calibration | Midmature | 35 | 115.2 ± 14.95 | 53.9 ± 3.17 | 62.7 ± 3.49 | 14.4 ± 1.42 |
| Calibration | Mature | 35 | 115.6 ± 14.07 | 53.3 ± 3.88 | 63.6 ± 3.13 | 14.9 ± 1.26 |
| Prediction | Immature | 16 | 121.9 ± 17.2 | 56.2 ± 3.28 | 63.8 ± 3.26 | 12.7 ± 1.86 |
| Prediction | Midmature | 17 | 116.5 ± 13.6 | 53.8 ± 2.28 | 63.6 ± 3.47 | 14.0 ± 1.51 |
| Prediction | Mature | 17 | 120.4 ± 17.4 | 54.3 ± 2.98 | 64.4 ± 3.42 | 15.1 ± 0.85 |

### 3.3. Analysis of Multispectral Indexes

The statistical analysis of the multispectral indexes is shown in Table 2. ANOVA showed that all proposed indexes varied significantly among maturity stages (P < 0.05). Figure 3 shows the distribution of the five indexes in the calibration set. The distributions of I1, I2, I3, and I4 overlap considerably among the immature, midmature, and mature groups; for these four indexes the mature group is the most widely spread and the immature group relatively the most concentrated. Index I5, by contrast, showed good classification ability, especially for the immature group: its value ranges from 1.16 to 40.7 for immature fruit but stays below 0.16 for the midmature and mature groups. I5 therefore clearly separates immature oranges from midmature and mature ones, and from Figure 3 the CCR for immature samples could reach 100%. For the other indexes, the maturity stages are hard to discriminate directly from the figure.

Table 2 Statistical analysis of the five indexes (mean ± SD).

| Index | Cal. immature | Cal. midmature | Cal. mature | P (cal.) | Pred. immature | Pred. midmature | Pred. mature | P (pred.) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| I1 | 0.139 ± 0.242 | 0.144 ± 0.023 | 0.084 ± 0.097 | 0.000 | 0.139 ± 0.020 | 0.124 ± 0.051 | 0.097 ± 0.081 | 0.040 |
| I2 | 0.054 ± 0.054 | 0.116 ± 0.093 | 0.256 ± 0.225 | 0.000 | 0.039 ± 0.035 | 0.196 ± 0.142 | 0.218 ± 0.201 | 0.003 |
| I3 | 0.884 ± 0.115 | 0.731 ± 0.217 | 0.446 ± 0.422 | 0.000 | 0.912 ± 0.092 | 0.540 ± 0.329 | 0.539 ± 0.384 | 0.002 |
| I4 | 0.066 ± 0.074 | 0.175 ± 0.161 | 0.536 ± 0.613 | 0.000 | 0.049 ± 0.057 | 0.364 ± 0.325 | 0.420 ± 0.518 | 0.012 |
| I5 | 11.04 ± 10.32 | 0.115 ± 0.024 | 0.058 ± 0.118 | 0.000 | 11.79 ± 9.34 | 0.100 ± 0.038 | 0.056 ± 0.011 | 0.000 |

Figure 3 Distribution of the five multispectral indexes ((a)–(e)) of navel oranges at three different maturity stages.

### 3.4. Classification Models

The different indexes were used as model inputs; the results for each model are shown in Table 3. From the index distributions, I5 was expected to give the best predictive performance, and the classification results confirmed this: I5 with the k-NN algorithm reached CCRs of 100% and 96.0% for the calibration and prediction sets, respectively.

Table 3 Classification results (correctly classified/total) of the different models for the prediction set.

| Index | LDA immature | LDA midmature | LDA mature | LDA CCR (%) | k-NN immature | k-NN midmature | k-NN mature | k-NN CCR (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| I1 | 7/16 | 7/17 | 5/17 | 38.0 | 5/16 | 8/17 | 6/17 | 38.0 |
| I2 | 15/16 | 4/17 | 6/17 | 50.0 | 9/16 | 9/17 | 8/17 | 52.0 |
| I3 | 15/16 | 3/17 | 7/17 | 50.0 | 9/16 | 3/17 | 9/17 | 42.0 |
| I4 | 15/16 | 5/17 | 4/17 | 48.0 | 9/16 | 3/17 | 9/17 | 42.0 |
| I5 | 9/16 | 13/17 | 17/17 | 78.0 | 16/16 | 17/17 | 15/17 | 96.0 |

In the I5-based LDA method, the CCRs of the calibration and prediction sets were 83% and 78%, respectively. The main errors appeared in discriminating the immature group, yet immature samples can evidently be separated by the value of I5 alone, without modeling. Therefore, the immature samples were first screened out, and LDA was then applied to classify the midmature and mature samples, which improved the classification of these two groups. The main misclassified samples are shown in Figure 4. The CCR for the two-group discrimination is 88.2% (four midmature samples were misidentified). Counting all immature samples as correctly classified, the overall predictive CCR improves to 92.0%, which is still below the predictive CCR of k-NN. Compared with previous work on nondestructive maturity detection of oranges or citrus, the 96% CCR exceeds the 91.67% identification accuracy based on machine vision reported by Ying et al. [23] and the 82% accuracy based on multifractal spectra reported by Cao [24].

Figure 4 Classification of midmature and mature navel oranges in the calibration set by LDA.
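The two-stage strategy of Section 3.4 can be sketched as follows, assuming scikit-learn is available; the cut-off value is illustrative (any value between the observed I5 ranges, below 0.16 versus above 1.16, would serve), and the function is our own reconstruction rather than the authors' Matlab code:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

I5_IMMATURE_CUT = 0.5  # illustrative threshold between ~0.16 and ~1.16

def two_stage_classify(i5_train, y_train, i5_test):
    """Label fruit with large I5 as immature, then separate the remaining
    midmature/mature fruit with LDA trained on I5 alone.

    i5_train, i5_test: 1-D numpy arrays of I5 values;
    y_train: labels ('midmature'/'mature') for the non-immature training fruit."""
    pred = np.full(i5_test.shape, "immature", dtype=object)
    ripe_tr = i5_train < I5_IMMATURE_CUT  # keep only non-immature training fruit
    lda = LinearDiscriminantAnalysis()
    lda.fit(i5_train[ripe_tr].reshape(-1, 1), y_train[ripe_tr])
    ripe_te = i5_test < I5_IMMATURE_CUT
    if ripe_te.any():
        pred[ripe_te] = lda.predict(i5_test[ripe_te].reshape(-1, 1))
    return pred
```

Swapping LinearDiscriminantAnalysis for sklearn's KNeighborsClassifier would give the k-NN variant reported in Table 3.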
## 4. Conclusion

In this study, multispectral indexes were established for the maturity classification of navel oranges based on diffuse transmittance hyperspectral imaging. The index (T670 + T760 − T640)/(T670 + T760 + T640) showed good performance for maturity detection, with a CCR of 96.0% using the k-NN method, and it is particularly capable of distinguishing immature oranges. In practice, this makes it feasible to develop a portable instrument that can quickly identify immature oranges with high precision.
Since only three wavelengths are needed, this will benefit the development of low-cost, real-time multispectral imaging systems for industrial applications. In in-field applications, however, the angle and layout of the light source can strongly affect the quality of the acquired hyperspectral images, so further work will focus on developing the light-source device for in-field use.

---
*Source: 1023498-2017-12-27.xml*
# From Citizen Science to Policy Development on the Coral Reefs of Jamaica

**Authors:** M. James C. Crabbe
**Journal:** International Journal of Zoology (2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/102350

---

## Abstract

This paper explores how citizen science can help generate scientific data and build capacity, and so underpin scientific ideas and policy development, in the area of coral reef management on the coral reefs of Jamaica. From 2000 to 2008, ninety Earthwatch volunteers were trained in coral reef data acquisition and analysis and made over 6,000 measurements on fringing reef sites along the north coast of Jamaica. Their work showed that while recruitment of small corals is returning after the major bleaching event of 2005, larger corals are not necessarily so resilient and need careful management if the reefs are to survive such major extreme events. These findings were used in developing an action plan for Jamaican coral reefs, presented to the Jamaican National Environmental Protection Agency. It was agreed that a number of themes and tactics need to be implemented in order to facilitate coral reef conservation in the Caribbean. The use of volunteers and citizen scientists from both developed and developing countries can help forge links that assist in data collection and analysis and, ultimately, in ecosystem management and policy development.

---

## Body

## 1. Introduction

Coral reefs throughout the world are under severe challenge from a variety of anthropogenic and environmental factors, including overfishing, destructive fishing practices, coral bleaching, ocean acidification, sea-level rise, algal blooms, agricultural run-off, coastal and resort development, marine pollution, increasing coral diseases, invasive species, and hurricane/cyclone damage [1–3]. It is the application of citizen science to help generate scientific data and build capacity, and so underpin scientific ideas and policy development in the area of coral reef management, that is explored in this paper, concentrating on Jamaican coral reefs.

The “compulsive” appetite for increasing mobility [4], allied to a social desire for extraordinary “peak experiences” [5], has produced the modern “ethical consumer” of tourism services [4, 6], derived from the “experiential” and “existential” tourist of the 1970s [7]. Several organisations have taken the concept of ecotourism further by combining tourism with citizen science, whereby tourists work on research projects under the supervision of recognised researchers. The drivers behind such programmes vary significantly, from scientific studies to education and raising public engagement with, and awareness of, the natural environment. The overall driver determines not only the type, quality, and quantity of data required but also the level of volunteer expertise needed. Three organisations that have developed citizen science with tourism are the Earthwatch Institute (http://www.earthwatch.org/), Operation Wallacea (http://www.opwall.com/), and Coral Cay Conservation (http://www.coralcay.org/).
All are international environmental charities, working with a wide range of partners, from individuals who work as conservation volunteers on research teams through to corporate partners (such as HSBC with Earthwatch), governments, and institutions. Research volunteers work with scientists and social scientists around the world to help gather data needed to address environmental and social issues. It is the long-term strategy of these organisations, combined with their citizen science funding models, that underpins their successes; they are “in for the long haul” and can effect conservation in a different way to a standard 3-year research grant. Key elements are developing projects that can be carried out by volunteers and verifying the scientific information in a statistically rigorous way. This paper shows how an Earthwatch programme using volunteers on coral reefs generated scientific information which was used to inform management strategies in Jamaica. ## 2. Materials and Methods ### 2.1. Training of Volunteers All volunteers were SCUBA divers of at least PADI Open Water standard. Training took place at the Discovery Bay Marine Laboratory, Jamaica, and consisted of lectures and interactive discussions covering scleractinian coral biology and taxonomy, coral recognition, data measurement and analysis, and health and safety. All volunteers had to pass open water diving tests and coral recognition tests in the field, after studying coral taxonomy books and passing land-based tests. ### 2.2. Reef Sites Four randomly located transects, each 15 m long and separated by at least 5 m, were laid at 5–8.5 m depth at each of five sites on the north coast of Jamaica near Discovery Bay: Rio Bueno (18° 28.805′ N; 77° 21.625′ W), M1 (18° 28.337′ N; 77° 24.525′ W), Dancing Ladies (18° 28.369′ N; 77° 24.802′ W), Dairy Bull (18° 28.083′ N; 77° 23.302′ W), and Pear Tree Bottom (18° 27.829′ N; 77° 21.403′ W). These sites were chosen as being workable by volunteers, as they were within a 20 min boat ride of the Discovery Bay Marine Laboratory, where all volunteers stayed. The sites had been studied over a number of years by marine scientists from many countries. GPS coordinates were determined using a hand-held GPS receiver (Garmin Ltd., UK).
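As a rough check on the geography, the following minimal sketch (our illustration, not part of the study; the `SITES` table and `haversine_km` helper are ours) converts the degree-and-decimal-minute coordinates listed above to signed decimal degrees and estimates great-circle separations with the haversine formula.

```python
# Illustrative only: estimate straight-line separations of the five reef
# sites from the coordinates given in Section 2.2 (degrees + decimal minutes).
from math import radians, sin, cos, asin, sqrt

SITES = {  # (latitude N, longitude W) converted to signed decimal degrees
    "Rio Bueno":        (18 + 28.805 / 60, -(77 + 21.625 / 60)),
    "M1":               (18 + 28.337 / 60, -(77 + 24.525 / 60)),
    "Dancing Ladies":   (18 + 28.369 / 60, -(77 + 24.802 / 60)),
    "Dairy Bull":       (18 + 28.083 / 60, -(77 + 23.302 / 60)),
    "Pear Tree Bottom": (18 + 27.829 / 60, -(77 + 21.403 / 60)),
}

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))  # 6371 km = mean Earth radius

for name, coords in SITES.items():
    print(f"{name:>16}: {haversine_km(SITES['Rio Bueno'], coords):4.1f} km from Rio Bueno")
```

The separations come out at a few kilometres, consistent with every site being reachable within a short boat ride of the Discovery Bay Marine Laboratory.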
### 2.3. Citizen Science Data Collection Corals 2 m either side of the transect lines were photographed for archive information, and surface areas were measured with flexible tape using SCUBA, as described previously [8–10]. Depth of samples was between 5 and 8.5 m, to minimise variation in growth rates due to depth [11]. To increase accuracy, surface areas rather than diameters of live nonbranching corals were measured [8, 9]. Sampling covered as wide a range of sizes as possible. Colonies that were close together (<50 mm) or touching were avoided, to minimise age discontinuities through fission and altered growth rates [12–14]. In this study Montastrea annularis colonies were ignored, because their surface area does not reflect their age [12] and because hurricanes can increase their asexual reproduction through physical damage [13]. Overall, over 6,000 measurements were made on over 1,000 coral colonies, equally distributed between the sites for species and numbers of colonies. This work was conducted at Discovery Bay during July 15–31 and December 19–30 in 2000, March 26–April 19 in 2002, March 18–April 10 in 2003, July 23–August 21 in 2004, July 18–August 13 in 2005, April 11–18 in 2006, December 30, 2006–January 6, 2007, and July 30–August 16 in 2008. Surveys were made at the same locations at the same sites each year. Data from ninety volunteers were used over this period. ### 2.4. Storm Severity Data on storm severity as it impacted the island were obtained from UNISYS (http://weather.unisys.com/hurricane/atlantic/) and from the NOAA hurricane site (http://www.nhc.noaa.gov/pastall.shtml). Information on bleaching was obtained from the NOAA coral reef watch site (http://coralreefwatch.noaa.gov/satellite/current/sst_series_24reefs.html). ### 2.5. Data Analysis Data analysis on coral measurements was performed using ANOVA. Skewness (sk, [15]) was used to estimate the distribution of small and large colonies in the coral populations around Discovery Bay in Jamaica. In a normal distribution, approximately 68% of the values lie within one standard deviation of the mean. If there are extreme values towards the positive end of a distribution, the distribution is positively skewed and the mean is greater than the mode (the value that occurs most frequently in a data set); the right tail is longer. The opposite is true for a negatively skewed distribution, where the mean is less than the mode and the left tail is longer. With regard to coral populations, negative skewness implies more large colonies than small colonies, while positive skewness implies more small colonies than large colonies.
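To make the sign convention concrete, here is a minimal sketch (our illustration, not code from the study; the colony surface areas are invented) that computes the moment coefficient of skewness for a sample and interprets its sign as described above.

```python
# Illustrative only: Fisher-Pearson moment coefficient of skewness,
# g1 = m3 / m2**1.5, applied to hypothetical colony surface areas (cm^2).
def skewness(values):
    n = len(values)
    mean = sum(values) / n
    m2 = sum((x - mean) ** 2 for x in values) / n  # second central moment
    m3 = sum((x - mean) ** 3 for x in values) / n  # third central moment
    return m3 / m2 ** 1.5

# A long right tail (a few very large colonies among many small ones)
# gives positive skewness, i.e., an excess of small colonies.
areas = [120, 150, 160, 180, 210, 240, 300, 420, 800, 1500]
sk = skewness(areas)
print(f"sk = {sk:.2f}:", "more small colonies" if sk > 0 else "more large colonies")
```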
## 3. Results ### 3.1. Coral Sizes and Growth All the Jamaican sites showed some similarities in the distribution of the size classes for the species studied between 2002 and 2008. However, there were differences between the sites and between the species studied at each site. Skewness values (sk) were used to compare the distributions of the data between 2002 and 2008. For S. siderea, all sk values were positive, with more small colonies than in a normal distribution for both 2002 and 2008 and little change between the dates (all sk values between 0.5 and 1.6). With D. labyrinthiformis colonies, there was a change from negative skewness in 2002 at Dairy Bull and Pear Tree Bottom, with more large colonies than in a normal distribution (sk values −0.25 and −0.006, respectively), to more small colonies than in a normal distribution in 2008 (sk values of 0.20 and 0.97, respectively). There were no significant changes from 2002 to 2008 at the other sites, with positive sk values from 0.1 to 0.89. M. meandrites colonies at Rio Bueno and Dairy Bull showed a relative decrease in the distribution of larger colonies from 2002 to 2008, with changes in sk values from −0.03 in 2002 to 0.78 in 2008, and from −0.05 to 0.03, respectively; the other sites all exhibited slightly positive sk values in both years, from 0.1 to 0.5. For Agaricia species, there was very little change between the years at all the sites, with sk values from 0.4 to 1.6. For P. astreoides, all values were positive for both years, with an increase in skewness at Rio Bueno from 0.2 to 2.6, showing a marked change in distribution towards the smaller colony sizes. At the other sites there were only small increases in sk values from 2002 to 2008, with Pear Tree Bottom showing a decrease in skewness from 0.9 to 0.6.
D. strigosa colonies showed similar results to P. astreoides, all sk values being positive for 2002 and 2008, with an increase at Rio Bueno from 0.2 to 2.2 and at Pear Tree Bottom from 0.4 to 2.4; other sites showed similar sk values for 2002 and 2008, from 0.6 to 1.6. C. natans skewness changed from −0.07 to 0.68 at Rio Bueno from 2002 to 2008 (a decrease in larger colonies relative to a normal distribution) and from −0.31 to 0.38 at Dancing Ladies. Other sites showed similar skewness in 2002 and 2008 (sk values between 0.5 and 0.6), except Pear Tree Bottom, which exhibited a near normal distribution of colonies about the mean in both 2002 and 2008 (sk values <0.01). Interestingly, in 2005, the year after hurricane Ivan, the most severe storm to impact the reef sites over the study period, there was a slight reduction in the numbers of the smallest size classes, particularly notable at Dairy Bull. In addition, our volunteer studies showed that radial growth rates (mm/yr) of nonbranching corals calculated on an annual basis from 2000 to 2008 showed few significant differences either spatially or temporally along the north coast, although growth rates tended to be higher on reefs of higher rugosity and lower macroalgal cover [16]. ### 3.2. Extreme Climate Events The only extreme climate event that significantly impacted the Jamaican reef sites during the study period was the mass Caribbean bleaching event of 2005 [17]. Analysis of satellite data showed that there were 6 degree heating weeks (DHW) of accumulated sea surface temperature stress in September and October 2005 near Discovery Bay, a finding mirrored by data loggers on the reefs.
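For readers unfamiliar with the metric, degree heating weeks accumulate sea surface temperature (SST) exceedances over a rolling 12-week window. The sketch below is a simplified illustration of that idea (our assumption of the NOAA-style definition applied to weekly means; the SST series and the 29.3 °C climatological maximum are invented, not data from the study).

```python
# Illustrative only: simplified degree-heating-weeks (DHW) accumulation.
# HotSpots are SST exceedances above the maximum monthly mean (MMM);
# only HotSpots >= 1 degC count, summed over a rolling 12-week window.
def degree_heating_weeks(weekly_sst, mmm):
    """Return the DHW value (degC-weeks) at each week of the series."""
    hotspots = [max(0.0, sst - mmm) for sst in weekly_sst]
    stress = [h if h >= 1.0 else 0.0 for h in hotspots]
    return [sum(stress[max(0, i - 11): i + 1]) for i in range(len(stress))]

# Invented weekly SSTs (degC) around a warm anomaly, with an assumed MMM
# of 29.3 degC; the peak accumulates to roughly 6 DHW, comparable to the
# 2005 value reported above.
sst = [29.0, 29.2, 29.8, 30.4, 30.5, 30.6, 30.5, 30.4, 29.9, 29.4, 29.1, 29.0]
print(f"peak DHW = {max(degree_heating_weeks(sst, mmm=29.3)):.1f}")
```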
### 3.3. Development of Coral Reef Action Plan The coral size and growth data collected by the citizen scientists show that corals of above average size for their species at the sites studied lack resilience, particularly after the major bleaching event of 2005. Because of this, there is a need for different zones to have different levels of protection. To this end, the data were used in the development of an action plan for Jamaican coral reefs, presented to the Jamaican National Environmental Protection Agency and described in Table 1.

Table 1: Seven-point action plan for Jamaican coral reefs.
(1) The reefs around Jamaica could be designated as the Jamaican Coral Reef Marine Park. This could include all the fringing reefs, seagrass beds, and mangroves from Negril all along the north coast to the eastern tip of the island. On the south coast it could include Port Royal and Portland Bight. The advantage of this is that one can then consider protection of the Jamaican reefs as a whole. Another advantage is that climate change effects can be considered in a more holistic way.
(2) There could be a single body, possibly the National Environment Protection Agency (NEPA), or a subset of NEPA, given authority to manage the Park.
(3) There could be a statement drawn up on “protection and wise use” of the Park. Drawing up that statement should include all stakeholders, from fishermen through industry and tourism to policy makers.
(4) The Park could be managed using a “zoning” system. This has been valuable in a number of areas, not least the Great Barrier Reef. It would allow some areas to have greater restrictions (e.g., on fishing, resort pollution, and ship pollution) than others. Such zoning should help avoid the “tragedy of the commons”. Zoning plans define what activities can occur in which locations, both to protect the marine environment and to separate potentially conflicting activities.
(5) Divisions into zones could be General Use, Conservation Park, Habitat Protection, and Marine National Park. A further zone might be a Buffer Zone, next to a Marine National Park.
(6) Each zone should have at least one of the following: (i) Community Partnerships, (ii) Local Marine Advisory Committees, and (iii) Reef Advisory Committees. These bodies should be responsible for regulating their own area and should answer to the overall Marine Park management body. They would also be responsible for community involvement and information.
(7) Permissions within the zones (e.g., for tourism and fishing) would be given by the Jamaican Government, through NEPA.
## 4. Discussion ### 4.1. Citizen Science and Use of Volunteer Data Citizen science and the use of data measured by volunteers have been very helpful in a number of zoological areas, including amphibian population and biodiversity studies [18, 19], reporting invasive species [20], environmental monitoring [21], evolutionary change [22], marine species abundance and monitoring [23–25], dryland mapping [26], and conservation planning [27, 28]. This study used self-selected “Earthwatch volunteers”, all of whom were SCUBA divers. Motivation was high in all the volunteers, as was the validity of the data they presented. A key element in citizen science is good training of volunteers. In the area of coral reef research described in this study, training was given in species recognition, quantitative measurement techniques and validation, and data analysis. Independent validation of volunteer data, once training had been given, was consistent with previous findings by other groups [29]. The validation of the data produced by the volunteers indicated that, with appropriate training, data collection by citizen scientists is appropriate for scientific applications in marine biology. ### 4.2. Coral Health and Resilience What is apparent from our studies is that, despite the chronic and acute disturbances between 2002 and 2008, demographic studies indicate good levels of coral resilience on the fringing reefs around Discovery Bay in Jamaica (see also [30]). The bleaching event of 2005 resulted in mass bleaching but relatively low levels of mortality, unlike corals in the US Virgin Islands and Tobago, where there was extensive mortality [17, 31], probably because of their greater degree heating week values. These data show that while recruitment of small corals is returning after the major bleaching event of 2005 [32], larger corals are not necessarily so resilient and so need careful management if the reefs are to survive such major extreme events. ### 4.3. From Information to Policy Development: Themes and Tactics Marine reserves are an important tool in the sustainable management of many coral reefs [33]. However, it is important that reef ecosystems share regulatory guidelines, enforcement practices and resources, and conservation initiatives and management, underpinned by scientific research. An example of a single marine reserve is the Great Barrier Reef in Australia, operated and managed solely by the Great Barrier Reef Marine Park Authority (GBRMPA). In contrast, the second largest barrier reef in the world, the MesoAmerican Barrier Reef, is bounded by four countries (Mexico, Belize, Guatemala, and Honduras), each with its own laws and policies. Here, a number of single and separated marine reserves exist along the barrier reef. In Belize we have successfully transferred scientific expertise to local volunteers to generate scientific evidence to underpin future management and conservation decisions, as judged, for example, by scientific findings on the impact of hurricanes on reefs in Belize, which showed that hurricanes and severe storms limited the recruitment and survival of nonbranching corals of the Mesoamerican barrier reef [10]. For Jamaica, the Action Plan developed (Table 1) was well received by managers of the National Environment Protection Agency (NEPA).
It was felt by managers that this approach could link the environment together with tourism and business, so that environmental issues are seen as part of the way forward, not part of the problem, as has been all too evident in the past. Even if smaller Marine Protected Areas (MPAs) were developed around the island, the adoption of shared ownership of reef ecosystems was felt to be a useful way to proceed. In order to take this forward, it was felt necessary to develop a number of themes and tactics. In a separate capacity-building exercise [34] for the MesoAmerican Barrier Reef in Southern Belize, one officer from the Belize Fisheries Department, three senior officers from NGOs involved in managing Belize MPAs (TIDE, the Toledo Institute for Development and Environment; TASTE, the Toledo Association for Sustainable Tourism and Empowerment; and Friends of Nature), and a Facilitator (the author) from the UK developed six-month Personal/Professional Action Plans which involved (a) tactics for leading, educating, and supporting issues regarding sustainable development of coral reefs; and (b) tactics for collaboration with other stakeholders to collectively influence policy decisions for coral reef conservation. Discussion among the participants and facilitator resulted in the generation of a series of generic tactics to be adopted around a number of themes. These are enumerated in Table 2. Such themes and tactics may be useful in the development of coral reef policies in the Caribbean and elsewhere.

Table 2: Themes and tactics to facilitate conservation of coral reefs.

Organisation and Management
Tactic number 1: establish a key leader in the organization/department to effectively manage the marine reserves on a day-to-day basis.
Tactic number 2: have the selected key leader provide general Terms of Reference on what is expected of staff and immediate/major stakeholders, in order to facilitate the process of decision making.

Education
Tactic number 1: financial resources need to be allocated for an education program. The program should focus on both broad and specific issues that may create friction among stakeholders in the process.
Tactic number 2: a group consisting of community leaders and key/immediate stakeholders should be established to create ways and methods of educating different levels of stakeholders in the effectiveness of sustainable development in the marine parks.
Tactic number 3: surveys need to be conducted to evaluate levels of success and failure. Too often programmes have been formed and implemented but end results have not been evaluated. Survey results should be carried back to stakeholders for a presentation to establish further steps.

Support
Tactic number 1: a well-put-together presentation needs to be developed and presented to the key authority that will have overall say in the marine park(s). This will stress the support needed to accomplish both the mission and vision statements and will have positive effects on sustainable development.
Tactic number 2: nonmonetary incentives need to be established in order to have the full support of stakeholders who would otherwise deter progress in sustainable development.

Policies
Tactic number 1: establish a set of policies that is considered necessary for proper management of the marine reserves. Such policies will be established by all stakeholders involved.
Tactic number 2: create an influencing program for stakeholders to adhere to such policies through an education/retreat program.
Tactic number 3: establish exchanges with other organizations in capacity building in policy creation and effective implementation.

## 5. Conclusion The use of volunteers and citizen scientists from both developed and developing countries can help in forging links which can assist in data collection and analysis and, ultimately, in ecosystem management and policy development. There is much progress internationally in involving organisations to utilize citizen science effectively and efficiently (e.g., [35]). A number of questions remain for the future, for example, how citizen science could be used to better effect, such as in identifying its potential to fill known data gaps in marine and terrestrial taxonomies. In addition, we need greater understanding of where and how technology (software, statistics) can transform the quality and quantity of data from nonexperts, and of how scientists can make best use of technology, for example, in using smartphone apps to identify and/or record species and measurements. --- *Source: 102350-2012-02-16.xml*
102350-2012-02-16_102350-2012-02-16.md
39,352
From Citizen Science to Policy Development on the Coral Reefs of Jamaica
M. James C. Crabbe
International Journal of Zoology (2012)
Biological Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2012/102350
102350-2012-02-16.xml
--- ## Abstract This paper explores the application of citizen science to help generation of scientific data and capacity-building, and so underpin scientific ideas and policy development in the area of coral reef management, on the coral reefs of Jamaica. From 2000 to 2008, ninety Earthwatch volunteers were trained in coral reef data acquisition and analysis and made over 6,000 measurements on fringing reef sites along the north coast of Jamaica. Their work showed that while recruitment of small corals is returning after the major bleaching event of 2005, larger corals are not necessarily so resilient and so need careful management if the reefs are to survive such major extreme events. These findings were used in the development of an action plan for Jamaican coral reefs, presented to the Jamaican National Environmental Protection Agency. It was agreed that a number of themes and tactics need to be implemented in order to facilitate coral reef conservation in the Caribbean. The use of volunteers and citizen scientists from both developed and developing countries can help in forging links which can assist in data collection and analysis and, ultimately, in ecosystem management and policy development. --- ## Body ## 1. Introduction Coral reefs throughout the world are under severe challenges from a variety of anthropogenic and environmental factors including overfishing, destructive fishing practices, coral bleaching, ocean acidification, sea-level rise, algal blooms, agricultural run-off, coastal and resort development, marine pollution, increasing coral diseases, invasive species, and hurricane/cyclone damage [1–3]. It is the application of citizen science to help generation of scientific data and capacity-building, and so underpin scientific ideas and policy development in the area of coral reef management, that are explored in this paper, concentrating on Jamaican coral reefs.The “compulsive” appetite for increasing mobility [4] allied to a social desire for extraordinary “peak experiences” [5] has led to the modern “ethical consumer” for tourism services [4, 6] derived from the “experiential” and “existential” tourist of the 1970s [7]. Several organisations have taken the concept of ecotourism further to embracing tourism with citizen science, whereby the tourist gets to work on research projects under the supervision of recognised researchers. Several organisations worldwide have developed citizen science programmes. The drivers behind these activities vary significantly between scientific studies, education, and/or getting the public more engaged and raising awareness of the natural environment. The overall driver cannot only determine the type, quality, and quantity of data required but also the level of volunteer expertise needed. Three organisations that have developed citizen science with tourism are the Earthwatch Institute, (http://www.earthwatch.org/), Operation Wallacea (http://www.opwall.com/), and Coral Cay Conservation (http://www.coralcay.org/). All are international environmental charities, working with a wide range of partners, from individuals who work as conservation volunteers on research teams through to corporate partners (such as HSBC with Earthwatch), governments, and institutions. Research volunteers work with scientists and social scientists around the world to help gather data needed to address environmental and social issues. 
It is the long-term strategy of these organisations combined with their citizen science funding models that underpins their successes; they are “in for the long haul” and can effect conservation in a different way to a standard 3-year research grant. Key elements are developing projects that can be used by volunteers and verifying the scientific information in a statistically significant way. This paper shows how an Earthwatch programme using volunteers on coral reefs generated scientific information which was used to inform management strategies in Jamaica. ## 2. Materials and Methods ### 2.1. Training of Volunteers All volunteers were SCUBA divers of at least PADI Open Water standard. Training took place at the Discovery Bay Marine Laboratory, Jamaica, and consisted of lectures and interactive discussions covering scleractinian coral biology and taxonomy, coral recognition, data measurements and analysis, and health and safety. Volunteers all had to accomplish open water diving tests, and coral recognition tests in the field, after studying coral taxonomy books and passing land-based tests. ### 2.2. Reef Sites Four randomly located transects, each 15 m long and separated by at least 5 m, were laid at 5–8.5 m depth at each of five sites on the North coast of Jamaica near Discovery Bay: Rio Bueno (18° 28.805′ N; 77° 21.625′ W), M1 (18° 28.337′ N; 77° 24.525′ W), Dancing Ladies (18° 28.369′ N; 77° 24.802′ W), Dairy Bull (18° 28.083′ N; 77° 23.302′ W), and Pear Tree Bottom (18° 27.829′ N; 77° 21.403′ W). These sites were chosen as being workable by volunteers, as they were with 20 min boat ride from the Discovery Bay Marine Laboratory, where all volunteers stayed. Sites had been studied before over a number of years by marine scientists from many countries. GPS coordinates were determined using a hand-held GPS receiver (Garmin Ltd., UK). ### 2.3. Citizen Science Data Collection Corals 2 m either side of the transect lines were photographed for archive information, and surface areas were measured with flexible tape as described previously using SCUBA [8–10]. Depth of samples was between 5 and 8.5 m, to minimise variation in growth rates due to depth [11]. To increase accuracy, surface areas rather than diameters of live nonbranching corals were measured [8, 9]. Sampling was over as wide a range of sizes as possible. Colonies that were close together (<50 mm) or touching were avoided to minimise age discontinuities through fission and altered growth rates [12–14]. In this study Montastrea annularis colonies were ignored, because their surface area does not reflect their age [12], and because hurricanes can increase their asexual reproduction through physical damage [13]. Overall, over 6,000 measurements were made on over 1,000 coral colonies, equally distributed between the sites for species and numbers of colonies.This work was conducted at Discovery Bay during July 15–31 and December 19–30 in 2000, March 26–April 19 in 2002, March 18–April 10 in 2003, July 23–August 21 in 2004, July 18–August 13 in 2005, April 11–18 in 2006, December 30 in 2006–January 6 in 2007, and July 30–August 16 in 2008. Surveys were made at the same locations at the same sites each year. Data from ninety volunteers was used over this period. ### 2.4. Storm Severity Data on storm severity as it impacted the island was obtained from UNISYS (http://weather.unisys.com/hurricane/atlantic/), the NOAA hurricane site (http://www.nhc.noaa.gov/pastall.shtml). 
Information on bleaching was obtained from the NOAA coral reef watch site (http://coralreefwatch.noaa.gov/satellite/current/sst_series_24reefs.html). ### 2.5. Data Analysis Data analysis on corals was using ANOVA. Skewness (sk, [15]) was used to estimate the distribution of small and large colonies in the coral populations around Discovery Bay in Jamaica. In a normal distribution, approximately 68% of the values lie within one standard deviation of the mean. If there are extreme values towards the positive end of a distribution, the distribution is positively skewed, where the mean is greater than the mode (the mode is the value that occurs the most frequently in a data set) (right tail is longer). The opposite is true for a negatively skewed distribution, where the mean is less than the mode (left tail is longer). With regard to coral populations, negative skewness implies more large colonies than small colonies, while positive skewness implies more small colonies than large colonies. ## 2.1. Training of Volunteers All volunteers were SCUBA divers of at least PADI Open Water standard. Training took place at the Discovery Bay Marine Laboratory, Jamaica, and consisted of lectures and interactive discussions covering scleractinian coral biology and taxonomy, coral recognition, data measurements and analysis, and health and safety. Volunteers all had to accomplish open water diving tests, and coral recognition tests in the field, after studying coral taxonomy books and passing land-based tests. ## 2.2. Reef Sites Four randomly located transects, each 15 m long and separated by at least 5 m, were laid at 5–8.5 m depth at each of five sites on the North coast of Jamaica near Discovery Bay: Rio Bueno (18° 28.805′ N; 77° 21.625′ W), M1 (18° 28.337′ N; 77° 24.525′ W), Dancing Ladies (18° 28.369′ N; 77° 24.802′ W), Dairy Bull (18° 28.083′ N; 77° 23.302′ W), and Pear Tree Bottom (18° 27.829′ N; 77° 21.403′ W). These sites were chosen as being workable by volunteers, as they were with 20 min boat ride from the Discovery Bay Marine Laboratory, where all volunteers stayed. Sites had been studied before over a number of years by marine scientists from many countries. GPS coordinates were determined using a hand-held GPS receiver (Garmin Ltd., UK). ## 2.3. Citizen Science Data Collection Corals 2 m either side of the transect lines were photographed for archive information, and surface areas were measured with flexible tape as described previously using SCUBA [8–10]. Depth of samples was between 5 and 8.5 m, to minimise variation in growth rates due to depth [11]. To increase accuracy, surface areas rather than diameters of live nonbranching corals were measured [8, 9]. Sampling was over as wide a range of sizes as possible. Colonies that were close together (<50 mm) or touching were avoided to minimise age discontinuities through fission and altered growth rates [12–14]. In this study Montastrea annularis colonies were ignored, because their surface area does not reflect their age [12], and because hurricanes can increase their asexual reproduction through physical damage [13]. Overall, over 6,000 measurements were made on over 1,000 coral colonies, equally distributed between the sites for species and numbers of colonies.This work was conducted at Discovery Bay during July 15–31 and December 19–30 in 2000, March 26–April 19 in 2002, March 18–April 10 in 2003, July 23–August 21 in 2004, July 18–August 13 in 2005, April 11–18 in 2006, December 30 in 2006–January 6 in 2007, and July 30–August 16 in 2008. 
Surveys were made at the same locations at the same sites each year. Data from ninety volunteers was used over this period. ## 2.4. Storm Severity Data on storm severity as it impacted the island was obtained from UNISYS (http://weather.unisys.com/hurricane/atlantic/), the NOAA hurricane site (http://www.nhc.noaa.gov/pastall.shtml). Information on bleaching was obtained from the NOAA coral reef watch site (http://coralreefwatch.noaa.gov/satellite/current/sst_series_24reefs.html). ## 2.5. Data Analysis Data analysis on corals was using ANOVA. Skewness (sk, [15]) was used to estimate the distribution of small and large colonies in the coral populations around Discovery Bay in Jamaica. In a normal distribution, approximately 68% of the values lie within one standard deviation of the mean. If there are extreme values towards the positive end of a distribution, the distribution is positively skewed, where the mean is greater than the mode (the mode is the value that occurs the most frequently in a data set) (right tail is longer). The opposite is true for a negatively skewed distribution, where the mean is less than the mode (left tail is longer). With regard to coral populations, negative skewness implies more large colonies than small colonies, while positive skewness implies more small colonies than large colonies. ## 3. Results ### 3.1. Coral Sizes and Growth All the Jamaican sites showed some similarities in distribution of the size classes for the species studied between 2002 and 2008. However, there were differences between the different sites and between the different species studied at the sites. Skewness values (sk) were used to compare the distribution of the data between 2002 and 2008. ForS. siderea, all sk values were positive, with more small colonies than in a normal distribution for 2002 and 2008, with little change between the dates (all sk values between 0.5 and 1.6). With D. labyrinthiformis colonies, there was a change from negative skewness in 2002 at Dairy Bull and Pear Tree Bottom, with more large colonies than in a normal distribution (sk values −0.25 and −0.006, resp.) to smaller colonies than in a normal distribution in 2008 (sk values of 0.20 and 0.97, resp.). There were no significant changes from 2002 to 2008 at the other sites, with positive sk values from 0.1 to 0.89. M. meandrites colonies at Rio Bueno and Dairy Bull showed a relative decrease in the distribution of larger colonies from 2002 to 2008, with changes in sk values from −0.03 in 02 to 0.78 in 08, and from −0.05 to 0.03, respectively; the other sites all exhibited slightly positive sk values in both years from 0.1 to 0.5. For Agaricia species, there was very little change between the years at all the sites, with sk values from 0.4 to 1.6. For P. astreoides, all values were positive for both years, with an increase in skewness at Rio Bueno from 0.2 to 2.6, showing a marked change in distribution towards the smaller colony sizes. At the other sites there were only small increases in sk values from 2002 to 2008, with Pear Tree Bottom showing a decrease in skewness from 0.9 to 0.6. D. strigosa colonies showed similar results to P. astreoides, all sk values being positive for 2002 and 2008, with an increase at Rio Bueno from 0.2 to 2.2 and at Pear Tree Bottom from 0.4 to 2.4; other sites showed similar sk values for 2002 and 2008 from 0.6 to 1.6. C. 
natans skewness changed from −0.07 to 0.68 at Rio Bueno from 2002 to 2008 (a decrease in larger colonies relative to a normal distribution) and at Dancing Ladies from −0.31 to 0.38. Other sites showed similar skewness in 2002 and 2008 (sk values between 0.5 and 0.6), except Pear Tree Bottom, which exhibited near normal distribution of colonies about the mean for both 2002 and 2008 (sk values <0.01). Interestingly, in 2005, the year after hurricane Ivan, the most severe storm to impact the reef sites over the study period, there was a slight reduction in the numbers of the smallest size classes, particularly notable at Dairy Bull.In addition, our volunteer studies showed that radial growth rates (mm/yr) of non-branching corals calculated on an annual basis from 2000 to 2008 showed few significant differences either spatially or temporally along the North coast, although growth rates tended to be higher on reefs of higher rugosity and lower macroalgal cover [16]. ### 3.2. Extreme Climate Events The only extreme climate event that significantly impacted the Jamaican reef sites during the study period was the mass Caribbean bleaching event of 2005 [17]. Analysis of satellite data showed that there were 6 degree heating weeks (dhw) for sea surface temperatures in September and October 2005 near Discovery Bay, data which was mirrored by data loggers on the reefs. ### 3.3. Development of Coral Reef Action Plan The coral size and growth data collected by the citizen scientists show that corals of above average size for their species at the sites studied lack resilience, particularly after the major bleaching event of 2005. Because of this, there is a need for different zones to have different levels of protection. To this end, the data was used in the development of an action plan for Jamaican coral reefs, presented to the Jamaican National Environmental Protection Agency, and described in Table1.Table 1 Seven-point action plan for Jamaican coral reefs. (1) The reefs around Jamaica could be designated as the Jamaican Coral Reef Marine Park. This could include all the fringing reefs, seagrass beds, and mangroves from Negril to all along the north coast to the eastern tip of the island. On the south coast it could include Port Royal and Portland Bight. The advantage of this is that one can then consider protection of the Jamaican reefs as a whole. Another advantage is that climate change effects can be considered in a more holistic way(2) There could be a single body, possibly the National Environment Protection Agency (NEPA), or a subset of NEPA, given authority to manage the Park(3) There could be a statement drawn up on “protection and wise use” of the Park. Drawing up that statement should include all stakeholders, from fishermen through Industry and tourism to policy makers(4) The Park could be managed using a “zoning” system. This has been valuable in a number of areas, not least the Great Barrier Reef. This will allow some areas to have greater restrictions (e.g., fishing, resort pollution, ship pollution) than others. Such zoning should help avoid the “tragedy of the commons”. 
Zoning Plans define what activities can occur in which locations, both to protect the marine environment and to separate potentially conflicting activities(5) Divisions into zones could beGeneral Use,Conservation Park,Habitat Protection,Marine National Park,Another zone might be a Buffer Zone, next to a Marine  National Park(6) Each zone should have at least one of the following: (i) Community Partnerships, (ii) Local Marine Advisory Committees, and (iii) Reef Advisory Committees. These bodies should be responsible for regulating their own area and should be responsible to the overall Marine Park Management body. They would also be responsible for community involvement and information(7) Permissions within the zones (e.g., for tourism, fishing, etc.) would be given by the Jamaican Government, through NEPA ## 3.1. Coral Sizes and Growth All the Jamaican sites showed some similarities in distribution of the size classes for the species studied between 2002 and 2008. However, there were differences between the different sites and between the different species studied at the sites. Skewness values (sk) were used to compare the distribution of the data between 2002 and 2008. ForS. siderea, all sk values were positive, with more small colonies than in a normal distribution for 2002 and 2008, with little change between the dates (all sk values between 0.5 and 1.6). With D. labyrinthiformis colonies, there was a change from negative skewness in 2002 at Dairy Bull and Pear Tree Bottom, with more large colonies than in a normal distribution (sk values −0.25 and −0.006, resp.) to smaller colonies than in a normal distribution in 2008 (sk values of 0.20 and 0.97, resp.). There were no significant changes from 2002 to 2008 at the other sites, with positive sk values from 0.1 to 0.89. M. meandrites colonies at Rio Bueno and Dairy Bull showed a relative decrease in the distribution of larger colonies from 2002 to 2008, with changes in sk values from −0.03 in 02 to 0.78 in 08, and from −0.05 to 0.03, respectively; the other sites all exhibited slightly positive sk values in both years from 0.1 to 0.5. For Agaricia species, there was very little change between the years at all the sites, with sk values from 0.4 to 1.6. For P. astreoides, all values were positive for both years, with an increase in skewness at Rio Bueno from 0.2 to 2.6, showing a marked change in distribution towards the smaller colony sizes. At the other sites there were only small increases in sk values from 2002 to 2008, with Pear Tree Bottom showing a decrease in skewness from 0.9 to 0.6. D. strigosa colonies showed similar results to P. astreoides, all sk values being positive for 2002 and 2008, with an increase at Rio Bueno from 0.2 to 2.2 and at Pear Tree Bottom from 0.4 to 2.4; other sites showed similar sk values for 2002 and 2008 from 0.6 to 1.6. C. natans skewness changed from −0.07 to 0.68 at Rio Bueno from 2002 to 2008 (a decrease in larger colonies relative to a normal distribution) and at Dancing Ladies from −0.31 to 0.38. Other sites showed similar skewness in 2002 and 2008 (sk values between 0.5 and 0.6), except Pear Tree Bottom, which exhibited near normal distribution of colonies about the mean for both 2002 and 2008 (sk values <0.01). 
Interestingly, in 2005, the year after hurricane Ivan, the most severe storm to impact the reef sites over the study period, there was a slight reduction in the numbers of the smallest size classes, particularly notable at Dairy Bull.In addition, our volunteer studies showed that radial growth rates (mm/yr) of non-branching corals calculated on an annual basis from 2000 to 2008 showed few significant differences either spatially or temporally along the North coast, although growth rates tended to be higher on reefs of higher rugosity and lower macroalgal cover [16]. ## 3.2. Extreme Climate Events The only extreme climate event that significantly impacted the Jamaican reef sites during the study period was the mass Caribbean bleaching event of 2005 [17]. Analysis of satellite data showed that there were 6 degree heating weeks (dhw) for sea surface temperatures in September and October 2005 near Discovery Bay, data which was mirrored by data loggers on the reefs. ## 3.3. Development of Coral Reef Action Plan The coral size and growth data collected by the citizen scientists show that corals of above average size for their species at the sites studied lack resilience, particularly after the major bleaching event of 2005. Because of this, there is a need for different zones to have different levels of protection. To this end, the data was used in the development of an action plan for Jamaican coral reefs, presented to the Jamaican National Environmental Protection Agency, and described in Table1.Table 1 Seven-point action plan for Jamaican coral reefs. (1) The reefs around Jamaica could be designated as the Jamaican Coral Reef Marine Park. This could include all the fringing reefs, seagrass beds, and mangroves from Negril to all along the north coast to the eastern tip of the island. On the south coast it could include Port Royal and Portland Bight. The advantage of this is that one can then consider protection of the Jamaican reefs as a whole. Another advantage is that climate change effects can be considered in a more holistic way(2) There could be a single body, possibly the National Environment Protection Agency (NEPA), or a subset of NEPA, given authority to manage the Park(3) There could be a statement drawn up on “protection and wise use” of the Park. Drawing up that statement should include all stakeholders, from fishermen through Industry and tourism to policy makers(4) The Park could be managed using a “zoning” system. This has been valuable in a number of areas, not least the Great Barrier Reef. This will allow some areas to have greater restrictions (e.g., fishing, resort pollution, ship pollution) than others. Such zoning should help avoid the “tragedy of the commons”. Zoning Plans define what activities can occur in which locations, both to protect the marine environment and to separate potentially conflicting activities(5) Divisions into zones could beGeneral Use,Conservation Park,Habitat Protection,Marine National Park,Another zone might be a Buffer Zone, next to a Marine  National Park(6) Each zone should have at least one of the following: (i) Community Partnerships, (ii) Local Marine Advisory Committees, and (iii) Reef Advisory Committees. These bodies should be responsible for regulating their own area and should be responsible to the overall Marine Park Management body. They would also be responsible for community involvement and information(7) Permissions within the zones (e.g., for tourism, fishing, etc.) would be given by the Jamaican Government, through NEPA ## 4. 
### 3.3. Development of Coral Reef Action Plan The coral size and growth data collected by the citizen scientists show that corals of above average size for their species at the sites studied lack resilience, particularly after the major bleaching event of 2005. Because of this, different zones need different levels of protection. To this end, the data were used in the development of an action plan for Jamaican coral reefs, presented to the Jamaican National Environment Protection Agency and described in Table 1.

Table 1 Seven-point action plan for Jamaican coral reefs.
(1) The reefs around Jamaica could be designated as the Jamaican Coral Reef Marine Park. This could include all the fringing reefs, seagrass beds, and mangroves from Negril all along the north coast to the eastern tip of the island. On the south coast it could include Port Royal and Portland Bight. The advantage of this is that one can then consider protection of the Jamaican reefs as a whole. Another advantage is that climate change effects can be considered in a more holistic way.
(2) There could be a single body, possibly the National Environment Protection Agency (NEPA), or a subset of NEPA, given authority to manage the Park.
(3) There could be a statement drawn up on “protection and wise use” of the Park. Drawing up that statement should include all stakeholders, from fishermen through industry and tourism to policymakers.
(4) The Park could be managed using a “zoning” system. This has been valuable in a number of areas, not least the Great Barrier Reef. It would allow some areas to have greater restrictions (e.g., on fishing, resort pollution, and ship pollution) than others. Such zoning should help avoid the “tragedy of the commons”. Zoning plans define what activities can occur in which locations, both to protect the marine environment and to separate potentially conflicting activities.
(5) Divisions into zones could be General Use, Conservation Park, Habitat Protection, and Marine National Park. Another zone might be a Buffer Zone, next to a Marine National Park.
(6) Each zone should have at least one of the following: (i) Community Partnerships, (ii) Local Marine Advisory Committees, and (iii) Reef Advisory Committees. These bodies should be responsible for regulating their own area and should be responsible to the overall Marine Park management body. They would also be responsible for community involvement and information.
(7) Permissions within the zones (e.g., for tourism, fishing, etc.) would be given by the Jamaican Government, through NEPA.

## 4. Discussion ### 4.1. Citizen Science and Use of Volunteer Data Citizen science and the use of data measured by volunteers have been very helpful in a number of zoological areas, including amphibian population and biodiversity studies [18, 19], reporting invasive species [20], environmental monitoring [21], evolutionary change [22], marine species abundance and monitoring [23–25], dryland mapping [26], and conservation planning [27, 28]. This study used self-selected “Earthwatch volunteers”, all of whom were SCUBA divers. Motivation was high among the volunteers, as was the validity of the data they presented. A key element in citizen science is good training of volunteers. In the area of coral reef research described in this study, training was given in species recognition, quantitative measurement techniques and validation, and data analysis. Independent validation of volunteer data, once training had been given, was consistent with previous findings by other groups [29]. The validation of the data produced by the volunteers indicated that, with appropriate training, data collection by citizen scientists is appropriate for scientific applications in marine biology. ### 4.2. Coral Health and Resilience What is apparent from our studies is that, despite the chronic and acute disturbances between 2002 and 2008, demographic studies indicate good levels of coral resilience on the fringing reefs around Discovery Bay in Jamaica (see also [30]). The bleaching event of 2005 resulted in mass bleaching but relatively low levels of mortality, unlike corals in the US Virgin Islands and Tobago, where there was extensive mortality [17, 31], probably because of the greater degree heating week values at those locations. These data show that while recruitment of small corals is returning after the major bleaching event of 2005 [32], larger corals are not necessarily so resilient and so need careful management if the reefs are to survive such major extreme events. ### 4.3. From Information to Policy Development: Themes and Tactics Marine reserves are an important tool in the sustainable management of many coral reefs [33]. However, it is important that reef ecosystems share regulatory guidelines, enforcement practices and resources, and conservation initiatives and management, underpinned by scientific research. An example of a single marine reserve is the Great Barrier Reef in Australia, operated and managed solely by the Great Barrier Reef Marine Park Authority (GBRMPA). In contrast, the second largest barrier reef in the world, the MesoAmerican Barrier Reef, is bounded by four countries (Mexico, Belize, Guatemala, and Honduras), each with its own laws and policies. Here, a number of single and separated marine reserves exist along the barrier reef. In Belize, we have successfully transferred scientific expertise to local volunteers to generate scientific evidence to underpin future management and conservation decisions, as judged, for example, by findings on the impact of hurricanes on reefs in Belize, which showed that hurricanes and severe storms limited the recruitment and survival of nonbranching corals of the MesoAmerican Barrier Reef [10]. For Jamaica, the Action Plan developed (Table 1) was well received by managers of the National Environment Protection Agency (NEPA).
It was felt by managers that this approach could link the environment with tourism and business, so that environmental issues are seen as part of the way forward, not part of the problem, as has been all too evident in the past. Even if smaller Marine Protected Areas (MPAs) were developed around the island, the adoption of shared ownership of reef ecosystems was felt to be a useful way to proceed. In order to take this forward, it was felt necessary to develop a number of themes and tactics. In a separate capacity-building exercise [34] for the MesoAmerican Barrier Reef in Southern Belize, one officer from the Belize Fisheries Department, three senior officers from NGOs involved in managing Belize MPAs (TIDE, the Toledo Institute for Development and Environment; TASTE, the Toledo Association for Sustainable Tourism and Empowerment; and Friends of Nature), and a facilitator (the author) from the UK developed six-month Personal/Professional Action Plans which involved (a) tactics for leading, educating, and supporting issues regarding sustainable development of coral reefs and (b) tactics for collaborating with other stakeholders to collectively influence policy decisions for coral reef conservation. Discussion among the participants and the facilitator resulted in the generation of a series of generic tactics to be adopted around a number of themes. These are enumerated in Table 2. Such themes and tactics may be useful in the development of coral reef policies in the Caribbean and elsewhere.

Table 2 Themes and tactics to facilitate conservation of coral reefs.
Organisation and Management.
Tactic number 1: establish a key leader in the organization/department to effectively manage the marine reserves on a day-to-day basis.
Tactic number 2: have the selected key leader provide general Terms of Reference setting out what is expected of staff and immediate/major stakeholders, in order to facilitate the process of decision making.
Education.
Tactic number 1: financial resources need to be allocated for an education program. The program should focus on both broad and specific issues that may create friction among stakeholders in the process.
Tactic number 2: a group consisting of community leaders and key/immediate stakeholders should be established to create ways and methods of educating different levels of stakeholders in the effectiveness of sustainable development in the marine parks.
Tactic number 3: surveys need to be conducted to evaluate levels of success and failure. Too often programmes have been formed and implemented but the end results have not been evaluated. Survey results should be carried back to stakeholders for a presentation to establish further steps.
Support.
Tactic number 1: a well-put-together presentation needs to be developed and presented to the key authority that will have overall say in the marine park(s). This presentation will stress the support needed to accomplish both the mission and vision statements and will have positive effects on sustainable development.
Tactic number 2: nonmonetary incentives need to be established in order to gain the full support of stakeholders who would otherwise deter progress in sustainable development.
Policies.
Tactic number 1: establish a set of policies considered necessary for proper management of the marine reserves.
Such policies will be established by all stakeholders involved.
Tactic number 2: create an influencing program for stakeholders to adhere to such policies through an education/retreat program.
Tactic number 3: establish exchanges with other organizations for capacity building in policy creation and effective implementation.
## 5. Conclusion The use of volunteers and citizen scientists from both developed and developing countries can help forge links that assist in data collection and analysis and, ultimately, in ecosystem management and policy development. There is much progress internationally in helping organisations utilize citizen science effectively and efficiently (e.g., [35]). A number of questions remain for the future, for example, assessing how citizen science could be used to better effect, such as identifying its potential to fill known data gaps (e.g., gaps in marine and terrestrial taxonomies). In addition, we need a greater understanding of where and how technology (software, statistics) can transform the quality and quantity of data from nonexperts, and of how scientists can make best use of technology, for example, in using smartphone apps to identify and/or record species and measurements. --- *Source: 102350-2012-02-16.xml*
# Computed Tomography (CT) Imaging Features of Patients with COVID-19: Systematic Review and Meta-Analysis

**Authors:** Ephrem Awulachew; Kuma Diriba; Asrat Anja; Eyob Getu; Firehiwot Belayneh
**Journal:** Radiology Research and Practice (2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1023506

--- ## Abstract Introduction. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is highly contagious, and its first outbreak was reported in Wuhan, China. Coronavirus disease (COVID-19) can cause acute respiratory distress syndrome (ARDS). Due to the primary involvement of the respiratory system, chest CT is strongly recommended in suspected COVID-19 cases, for both initial evaluation and follow-up. Objective. The aim of this review was to systematically analyze the existing literature on CT imaging features of patients with COVID-19 pneumonia. Methods. A systematic search was conducted on PubMed, Embase, Cochrane Library, Open Access Journals (OAJ), and Google Scholar databases until April 15, 2020. All articles with a report of CT findings in COVID-19 patients published in English from the onset of the COVID-19 outbreak to April 20, 2020, were included in the study. Result. From a total of 5041 COVID-19-infected patients, about 98% (4940/5041) had abnormalities on chest CT, while about 2% had normal chest CT findings. Among COVID-19 patients with abnormal chest CT findings, 80% (3952/4940) had bilateral lung involvement. Ground-glass opacity (GGO) and mixed GGO with consolidation were observed in 2482 (65%) and 768 (18%) patients, respectively. Consolidations were detected in 1259 (22%) patients with COVID-19 pneumonia. CT images also showed interlobular septal thickening in about 691 (27%) patients. Conclusion. Frequent involvement of bilateral lung infection, ground-glass opacities, consolidation, a crazy paving pattern, air bronchogram signs, and intralobular septal thickening were common CT imaging features of patients with COVID-19 pneumonia. --- ## Body ## 1. Introduction Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is highly contagious, and its first outbreak was reported in Wuhan, China [1]. On January 30, 2020, the World Health Organization (WHO) declared the outbreak a public health emergency of international concern [2]. Currently, the disease has been reported in more than 212 countries worldwide [3]. As of May 01, a total of 3,325,620 cases and 234,496 deaths due to COVID-19 had been reported worldwide [4]. The most common diagnostic tool for coronavirus disease 2019 (COVID-19) infection is real-time reverse transcription polymerase chain reaction (RT-PCR), which is regarded as the reference standard [5, 6]. COVID-19 can cause acute respiratory distress syndrome (ARDS). Due to the primary involvement of the respiratory system, chest computed tomography (CT) is strongly recommended in suspected COVID-19 cases, for both initial evaluation and follow-up [7]. Recent studies addressed the importance of chest CT examination in COVID-19 patients with false-negative RT-PCR results [8] and reported the CT sensitivity as 98% [9]. Additionally, CT examinations are also of great significance in monitoring disease progression and evaluating therapeutic efficacy. SARS-CoV-2 has four major structural proteins: the spike surface glycoprotein, small envelope protein, matrix protein, and nucleocapsid protein [10]. The spike protein binds to the host receptor angiotensin-converting enzyme-2 (ACE2) via its receptor-binding domain (RBD) [11].
The ACE2 protein has been identified in various human organs, including the respiratory system, GI tract, lymph nodes, thymus, bone marrow, spleen, liver, kidney, and brain. SARS-CoV-2 was reported to utilize angiotensin-converting enzyme-2 (ACE2) as its cell receptor in humans [12], first causing pulmonary interstitial damage and subsequently parenchymal changes. Reportedly, chest CT images can manifest different imaging features or patterns in COVID-19 patients with different time courses and disease severities [13, 14]. Studies suggest that routine chest CT is a useful tool in the early diagnosis of COVID-19 infection, especially in settings of limited availability of reverse transcription polymerase chain reaction (RT-PCR) [15]. Imaging is critical in assessing severity and disease progression in COVID-19 infection. Radiologists should be aware of the features and patterns of the imaging manifestations of the novel COVID-19 infection. A variety of imaging features have been described in similar coronavirus-associated syndromes. Given the alarming spread of the COVID-19 outbreak throughout the world, a comprehensive understanding of chest CT imaging findings is essential for effective patient management and treatment, and the individually published literature needs to be summarized in a comprehensive systematic review. Therefore, this study systematically reviewed the CT imaging features of patients with COVID-19 pneumonia. ## 2. Methods ### 2.1. Search Strategy A systematic search was conducted on PubMed, Embase, Cochrane Library, Open Access Journals (OAJ), and Google Scholar databases until April 15, 2020. Keywords used to search for eligible studies were “severe acute respiratory syndrome 2,” “SARS CoV 2,” “2019-nCoV,” “COVID-19,” “computed tomography,” “CT,” and “radiology.” All identified keywords and MeSH terms were combined using the “OR” and “AND” operators when searching the literature. We also searched the reference lists of all retrieved articles for potentially eligible studies. ### 2.2. Eligibility Criteria All articles with a report of CT findings in COVID-19 patients published in English from the onset of the COVID-19 outbreak to April 20, 2020, were included in the study. Case studies and case series reporting on chest CT imaging findings of patients with COVID-19 were included with caution. Studies pertaining to other coronavirus-related illnesses were excluded. ### 2.3. Assessment of Study Quality Studies selected for inclusion were assessed for methodological quality by two teams of independent reviewers using the standard critical appraisal instruments of the Joanna Briggs Institute Meta-Analysis of Statistics Assessment and Review Instrument (JBI-MAStARI) [16]. Disagreements were resolved by consensus. ### 2.4. Data Extraction and Synthesis Data were extracted by two teams of investigators using a standardized data extraction form. In addition, eligible case studies and case series were extracted carefully. The extracted data were then merged for systematic analysis. The main outcomes extracted from each study were study design, country, patient demographics, and chest CT findings. The patients’ clinical characteristics were extracted as additional findings. Disagreements were discussed with the other reviewers and subsequently resolved via consensus.
### 2.5. Data Analysis and Data Synthesis Systematic reviews and meta-analyses were carried out using R software version 3.6.1 with user-contributed commands for meta-analyses: metaprop, metan, metainf, metabias, and metareg. The effect sizes and SEs of the studies were pooled using a random-effects model to calculate the pooled estimates of CT findings among COVID-19 patients. A meta-analysis was also planned to assess the association of various imaging findings with demographic data.
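The pooling described above combines study-level proportions under a random-effects model. The authors used R's meta-analysis commands (e.g., metaprop); purely as an illustration of the underlying computation, and of the Q and I² heterogeneity statistics used in Section 2.6 below, here is a sketch of DerSimonian-Laird pooling of logit-transformed proportions; the event counts and study sizes are invented.

```python
# Illustrative sketch: random-effects (DerSimonian-Laird) pooling of
# proportions on the logit scale, with Cochran's Q and I^2 heterogeneity.
# Event counts and study sizes are invented; the paper itself used R.
import math

events = [40, 55, 18, 90]   # hypothetical counts of a CT finding per study
ns     = [50, 70, 25, 110]  # hypothetical study sizes

# Logit-transformed proportions and their within-study variances
y = [math.log(e / (n - e)) for e, n in zip(events, ns)]
v = [1 / e + 1 / (n - e) for e, n in zip(events, ns)]

# Fixed-effect weights, Cochran's Q, and I^2
w = [1 / vi for vi in v]
y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
Q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))
df = len(y) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# DerSimonian-Laird between-study variance, then random-effects pooling
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)
w_re = [1 / (vi + tau2) for vi in v]
y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))

inv_logit = lambda x: 1 / (1 + math.exp(-x))  # back-transform to a proportion
print(f"pooled proportion: {inv_logit(y_re):.2f} "
      f"(95% CI {inv_logit(y_re - 1.96 * se):.2f}-{inv_logit(y_re + 1.96 * se):.2f}); "
      f"Q = {Q:.2f}, I^2 = {I2:.0f}%")
```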
### 2.6. Risk of Bias and Sensitivity Analysis Evidence for statistical heterogeneity of the results was assessed using Cochran's Q (χ²) test and the I² statistic. A significance level of P<0.10 and I² > 50% was interpreted as evidence of heterogeneity [17]. A potential source of heterogeneity was investigated by subgroup analysis and meta-regression analysis [18]. Where statistical pooling was not possible, the findings were presented in narrative form, including tables and figures to aid data presentation where appropriate. Sensitivity analyses were conducted to weigh the relative influence of each individual study on the pooled effect size using a user-written function, metainf. The presence of publication bias was assessed informally by visual inspection of funnel plots [19]. ## 3. Results ### 3.1. Study Selection Following the initial search, 241 studies were identified from the electronic databases (Figure 1). After the removal of 67 duplicates, 17 noneligible studies, and six studies with an unclear population, 78 studies were retrieved for full-text review. Of these, 18 were of poor quality and did not meet the eligibility criteria. Following methodological quality assessment, 60 articles were included in the meta-analysis. In the included studies, a total of 5041 patients with COVID-19 were assessed for CT imaging features. All papers were published in English. Figure 1 Flow chart of the search and study inclusion. ### 3.2. Study Characteristics In this review, 60 papers were eligible and a total of 5041 patients had CT imaging findings [8, 12–14, 19–72]. Of the 5041 participants, about half were male (2710; 50.2%), while 2693 (49.8%) were female. The mean age of the participants was 49 years, with a standard deviation of 11.6 years. #### 3.2.1. Clinical Features of Patients with COVID-19 Of the 60 included studies, only 48 reported the clinical features of COVID-19-infected patients. According to these 48 studies, the main clinical features were fever and dry cough, which accounted for 80% (2954/3800) and 56.2% (2137/3800), respectively, while about 2.3% (86/3800) of patients with COVID-19 infection were asymptomatic. The total white blood cell count was decreased in about 24% (410) of patients with COVID-19, while about 2% had an increased white blood cell count. A decreased lymphocyte count was detected in about 442 (43%) patients with COVID-19, and only 1% had an increased lymphocyte count. Markers such as C-reactive protein were increased in about 87% (458) of patients, while only 13% (254) had normal C-reactive protein (Table 1).

Table 1 Review of clinical features of patients with COVID-19.

| Clinical features | Number of studies | Number of patients (%) |
| --- | --- | --- |
| **Signs and symptoms** | | |
| Fever | 48 | 2954 (80%) |
| Dry cough | 48 | 2137 (56.2%) |
| Respiratory distress | 29 | 462 (15%) |
| Pharyngeal pain | 48 | 381 (13%) |
| Fatigue | 32 | 481 (27%) |
| **Total WBC count** | | |
| Normal | 22 | 1195 (68%) |
| Decreased | 22 | 410 (24%) |
| Increased | 22 | 45 (2%) |
| **Lymphocyte count** | | |
| Normal | 22 | 566 (56%) |
| Decreased | 22 | 442 (43%) |
| Increased | 22 | 2 (1%) |
| **C-reactive protein** | | |
| Normal | 15 | 254 (13%) |
| Decreased | 15 | 0 (0%) |
| Increased | 15 | 458 (87%) |

#### 3.2.2. Chest CT Imaging Features of COVID-19-Infected Patients We performed meta-analyses of primary outcome data and of secondary outcomes with available data. From a total of 5041 COVID-19-infected patients, about 98% (4940/5041) had abnormalities on chest CT, while about 2% had normal chest CT findings (Figure 2).
Among COVID-19 patients with abnormal chest CT findings, 80% (3952/4940) had bilateral lung involvement, while about 20% (641/3206) had exclusively unilateral lung involvement. When only one lung was involved, the right lung was most often affected (62%), with 38% of these patients having only left lung involvement. In the right lung, the lower lobe was frequently involved (74% (784)), while the middle lobe was involved less frequently (38% (326)). In the left lung, the upper lobe was frequently involved (74% (731)). Figure 2 Magnitude of abnormal chest CT findings of patients with COVID-19. Regarding the patterns of lung lesions in patients with COVID-19, ground-glass opacity (GGO) and mixed GGO with consolidation were observed in 2482 (65%) and 768 (18%) patients, respectively. Consolidation, defined as denser opacities with blurred margins of pulmonary blood vessels and bronchial tubes, was detected in 1259 (22%) patients with COVID-19 pneumonia. CT images also showed interlobular septal thickening in about 691 (27%) patients. CT showed that 11 (21.6%) patients had discrete pulmonary nodules. Crazy paving patterns of lesions were observed in 575 (12%) patients. Five hundred thirty-one (18%) patients had an air bronchogram sign. Pleural effusion and lymphadenopathy were observed in 94 (1.6%) and 21 (0.7%) patients with COVID-19 pneumonia, respectively. Pulmonary nodules were reported in 262 (9%) patients (Table 2).

Table 2 CT imaging features of patients with COVID-19.

| CT findings | Number of studies | Number of patients (%) |
| --- | --- | --- |
| **Patterns of the lesion** | | |
| Ground-glass opacity with consolidation | 60 | 768 (18%) |
| Ground-glass opacity | 60 | 2482 (65%) |
| Consolidation | 60 | 1259 (22%) |
| Crazy paving pattern | 24 | 575 (12%) |
| Reversed halo sign | 24 | 146 (1%) |
| **Other signs in the lesion** | | |
| Interlobular septal thickening | 23 | 691 (27%) |
| Air bronchogram sign | 23 | 531 (18%) |
| **Distribution** | | |
| Bilateral | 48 | 3952 (80%) |
| Unilateral | 48 | 641 (20%) |
| Right lung | 8 | 48 (62%) |
| Left lung | 8 | 29 (38%) |
| **Number of lobes involved** | | |
| One lobe | 13 | 278 (14%) |
| Two lobes | 13 | 299 (11%) |
| Three lobes | 13 | 250 (13%) |
| Four lobes | 13 | 212 (15%) |
| Five lobes | 14 | 384 (34%) |
| More than one lobe | 14 | 1145 (76%) |
| **Lobe of lesion distribution** | | |
| Left upper lobe | 14 | 731 (74%) |
| Left lower lobe | 20 | 504 (46%) |
| Right upper lobe | 19 | 455 (40%) |
| Right middle lobe | 15 | 326 (38%) |
| Right lower lobe | 17 | 784 (74%) |
| **Other findings** | | |
| Pleural effusion | 60 | 94 (1.6%) |
| Lymphadenopathy | 60 | 21 (0.7%) |
| Pulmonary nodules | 22 | 262 (9%) |

### 3.3. Additional Analysis We also gathered data to determine whether CT findings differ with the time course of infection. In the included studies, the mean time between the initial chest CT and follow-up was about 6.5 days (range, 0–21 days). According to the reports of 9 included studies, 43% (95% CI: 26%–61%) of patients had improved follow-up chest CT findings, while 30% (95% CI: 18%–46%) had advanced chest CT findings [40, 43, 44, 50, 52, 57, 64–66]. Our findings demonstrated pure ground-glass opacity in early disease, followed by the development of crazy paving, and finally increasing consolidation later in the disease course. GGO density increased and transformed into consolidation; consolidation edges were flat or contracted; and fibrous cord shadows appeared. GGO with consolidation was more frequent in the advanced stage than in the early stage of the disease (OR 3.2, 95% CI: 2.2–4.7, P=0.013). A crazy paving pattern and a reversed halo sign were rare in the early stage but present in the late stage of the disease. In terms of disease distribution, bilateral involvement was more prominent in the later stage than in the early stage (P<0.001).
According to the reports of two included studies, pure GGO was the common imaging feature of the mild and moderate stages of the disease course, while consolidation and GGO with consolidation increased by 64% and 79%, respectively, in the critical stage of the disease [57, 58].
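The stage comparison above reports an odds ratio with a 95% confidence interval. As an illustration of how such an interval is obtained from a 2×2 table (the standard log/Woolf method), here is a short sketch; the counts are invented and are not the data behind the OR of 3.2 reported above.

```python
# Sketch: odds ratio with 95% CI (log/Woolf method) from a 2x2 table.
# The counts are hypothetical: CT finding present/absent by disease stage.
import math

a, b = 80, 40  # advanced stage: finding present / absent (invented)
c, d = 30, 60  # early stage:    finding present / absent (invented)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```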
### 3.4. Heterogeneity and Risk of Bias Subgroup analysis was conducted to identify the cause of heterogeneity; it showed that a possible cause was sample size (P value < 0.01). Funnel plots did not suggest publication bias for the majority of the parameters, and no publication bias was demonstrated (P value = 0.1947). ## 4. Discussion ### 4.1. Summary of Evidence In this review, we demonstrated that the main signs and symptoms at hospital admission of patients with COVID-19 pneumonia were fever, dry cough, fatigue, pharyngeal pain, and respiratory distress. About 24% of the patients had a reduced total leukocyte count and 43% had a reduced lymphocyte count. Although these laboratory findings are not specific for viral pneumonia, leukopenia and lymphocytopenia may help to distinguish COVID-19 from common bacterial infections. The majority of the patients had increased C-reactive protein. In this study, common CT imaging features in patients with COVID-19 pneumonia included bilateral involvement, ground-glass opacities, consolidation, a crazy paving pattern, air bronchogram signs, and intralobular septal thickening. At a later stage of the disease, mixed GGO with consolidation was the more frequent finding. Pulmonary consolidation was mainly found in the severe and progressive late stages of the disease and can coexist with ground-glass and fibrotic changes. The pathological basis of these changes could be inflammatory cell infiltration and interstitial thickening, cell exudation, and hyaline membrane formation. In the present study, the increased frequency of GGO, consolidation, bilateral disease, intralobular septal thickening, and a crazy paving pattern, together with the appearance of reversed halo sign lesions over more than one lobe of the lung, could reflect the pathophysiology of the disease process as it organizes, and it could also help explain the chest CT hallmarks of COVID-19 infection. CT imaging findings can be correlated with disease severity and with disease progression following treatment. The majority of the patients (76%) had multilobar involvement, and lesions were more frequent in the right lung. Pleural effusion, lymphadenopathy, and pulmonary nodules were less common imaging findings in these patients. About 2% of the patients had normal initial CT findings. When only one lung was involved, the right lung was most often affected, and more than half of the patients with COVID-19 had multiple lobe infections. In the right lung, the lower lobe was frequently involved, while the middle lobe was affected less frequently. In the left lung, the upper lobe was frequently involved. This might indicate that the virus tends to disseminate across all lobes of both lungs as the disease progresses.
Chest CT imaging features may reduce the need for repeated laboratory testing and may be helpful in resource-limited countries. According to the present study, chest CT imaging showed similar characteristics in the majority of patients, including predominantly bilateral and multilobar involvement. The pattern of ground-glass and consolidative pulmonary opacities, often with a bilateral lung distribution, is somewhat similar to that described in earlier coronavirus outbreaks such as SARS and MERS, which were known to cause ground-glass opacities that may coalesce into dense consolidative lesions [73, 74]. According to the reports of 9 included studies, 43% (95% CI: 26%–61%) of patients had improved follow-up chest CT findings, while 30% (95% CI: 18%–46%) had advanced chest CT findings. Our findings demonstrated pure ground-glass opacity in early disease, followed by the development of crazy paving, and finally increasing consolidation later in the disease course. According to the reports of two included studies, pure GGO was the common imaging feature of the mild and moderate stages of the disease course, while consolidation and GGO with consolidation increased by 64% and 79%, respectively, in the critical stage of the disease. ### 4.2. Limitation This systematic review and meta-analysis characterized the CT imaging features of patients with COVID-19; nevertheless, we acknowledge a few limitations that may affect the results. First, two relevant studies identified through our literature search were excluded because they were unavailable for full-text review. Second, the review was limited to articles published in English.
## 5. Conclusions The present study showed that the common CT imaging features of COVID-19 pneumonia included frequent bilateral lung involvement, ground-glass opacities, consolidation, a crazy paving pattern, air bronchogram signs, and intralobular septal thickening. Bilateral involvement was common, while single-lobe involvement was rare. These CT imaging signs might be an important tool for diagnosis and for monitoring disease progression in patients with COVID-19 infection. --- *Source: 1023506-2020-07-23.xml*
--- ## Abstract Introduction. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a highly contagious disease, and its first outbreak was reported in Wuhan, China. A coronavirus disease (COVID-19) causes severe respiratory distress (ARDS). Due to the primary involvement of the respiratory system, chest CT is strongly recommended in suspected COVID-19 cases, for both initial evaluation and follow-up. Objective. The aim of this review was to systematically analyze the existing literature on CT imaging features of patients with COVID-19 pneumonia. Methods. A systematic search was conducted on PubMed, Embase, Cochrane Library, Open Access Journals (OAJ), and Google Scholar databases until April 15, 2020. All articles with a report of CT findings in COVID-19 patients published in English from the onset of COVID-19 outbreak to April 20, 2020, were included in the study. Result. From a total of 5041 COVID-19-infected patients, about 98% (4940/5041) had abnormalities in chest CT, while about 2% have normal chest CT findings. Among COVID-19 patients with abnormal chest CT findings, 80% (3952/4940) had bilateral lung involvement. Ground-glass opacity (GGO) and mixed GGO with consolidation were observed in 2482 (65%) and 768 (18%) patients, respectively. Consolidations were detected in 1259 (22%) patients with COVID-19 pneumonia. CT images also showed interlobular septal thickening in about 691 (27%) patients. Conclusion. Frequent involvement of bilateral lung infections, ground-glass opacities, consolidation, crazy paving pattern, air bronchogram signs, and intralobular septal thickening were common CT imaging features of patients with COVID-19 pneumonia. --- ## Body ## 1. Introduction Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a highly contagious disease, and its first outbreak was reported in Wuhan, China [1]. On January 30, 2020, the World Health Organization (WHO) declared it a pandemic disease [2]. Currently, the disease has been reported in more than 212 countries worldwide [3]. As of May 01, a total of 3,325,620 cases and 234,496 deaths due to COVID-19 were reported worldwide [4]. The most common diagnostic tool for coronavirus disease 2019 (COVID-19) infection is real-time polymerase chain reaction (RT-PCR), which is regarded as the reference standard [5, 6].COVID-19 causes severe respiratory distress (ARDS). Due to the primary involvement of the respiratory system, chest computed tomography (CT) is strongly recommended in suspected COVID-19 cases, for both initial evaluation and follow-up [7]. Recent studies addressed the importance of chest CT examination in COVID-19 patients with false-negative RT-PCR results [8] and reported the CT sensitivity as 98% [9]. Additionally, CT examinations also have great significance in monitoring disease progression and evaluating therapeutic efficacy.The SARS-CoV-2 has four major structural proteins: the spike surface glycoprotein, small envelope protein, matrix protein, and nucleocapsid protein [10]. The spike protein binds to host receptors via the receptor-binding domains (RBDs) of angiotensin-converting enzyme-2 (ACE2) [11]. The ACE2 protein has been identified in various human organs, including the respiratory system, GI tract, lymph nodes, thymus, bone marrow, spleen, liver, kidney, and brain. SARS-CoV-2 was reported to utilize angiotensin-converting enzyme-2 (ACE2) as the cell receptor in humans [12], firstly causing pulmonary interstitial damage and subsequently with parenchymal changes. 
Reportedly, chest CT images could manifest different imaging features or patterns in COVID-19 patients with different time course and disease severity [13, 14].Studies suggest that routine chest CT is a useful tool in the early diagnosis of COVID-19 infection, especially in settings of limited availability of reverse-transcriptase polymerase chain reaction (RT-PCR [15]. Imaging is critical in assessing severity and disease progression in COVID-19 infection. Radiologists should be aware of the features and patterns of imaging manifestations of the novel COVID-19 infection. A variety of imaging features have been described in similar coronavirus-associated syndromes. Due to an alarming spread of COVID-19 outbreak throughout the world, a comprehensive understanding of the importance of evaluating chest CT imaging findings is essential for effective patient management and treatment. Individual literature published is required to be summarized. Thus, a comprehensive systematic review has to be performed. Therefore, this study systematically reviewed CT imaging features of patients with COVID-19 pneumonia. ## 2. Methods ### 2.1. Search Strategy A systematic search was conducted on PubMed, Embase, Cochrane Library, Open Access Journals (OAJ), and Google Scholar databases until April 15, 2020. Keywords used to search eligible studies were “severe acute respiratory syndrome 2,” “SARS CoV 2,” “2019-nCoV,” “COVID-19,” “computed tomography,” “CT,” and “radiology.” All identified keywords and mesh terms were combined using the “OR” operator and “AND” operator for searching literatures. We simultaneously searched the reference lists of all recovered articles for potentially eligible studies. ### 2.2. Eligibility Criteria All articles with a report of CT findings in COVID-19 patients published in English from the onset of COVID-19 outbreak to April 20, 2020, were included in the study. Case studies and case series reported on chest CT imaging findings of patients with COVID-19 were included with caution. Studies pertaining to other coronavirus-related illnesses were excluded. ### 2.3. Assessment of Study Quality Studies selected for inclusion were assessed for methodological quality by two teams of independent reviewers using the standard critical appraisal instruments of the Joanna Briggs Institute Meta-Analysis of Statistics Assessment for the Review Instrument (JBI-MAStARI) [16]. Disagreements were resolved by consensus. ### 2.4. Data Extraction and Synthesis Data were extracted by two teams of the investigators using a standardized data extraction form. In addition, eligible case studies and case series studies were extracted carefully. Then the extracted data were merged for systematic analysis. The main outcomes extracted from each study were study design, country, patient demographics, and chest CT findings. Additional findings extracted were the patient’s clinical characteristics. Disagreements were discussed with other reviewers and subsequently resolved via consensus. ### 2.5. Data Analysis and Data Synthesis Systematic reviews and meta-analyses were carried out using R software version 3.6.1 with user-contributed commands for meta-analyses: metaprop, metan, metainf, metabias, and metareg. The effect sizes and SEs of the studies were pooled using a random-effects model to calculate the pooled estimates of CT findings among COVID-19 patients. A meta-analysis was also planned to assess the association of various imaging findings with demographic data. ### 2.6. 
### 2.6. Risk of Bias and Sensitivity Analysis

Evidence for statistical heterogeneity of the results was assessed using the Cochrane Q (χ²) test and the I² statistic. A significance level of P < 0.10 and I² > 50% was interpreted as evidence of heterogeneity [17]. Potential sources of heterogeneity were investigated by subgroup analysis and meta-regression analysis [18]. Where statistical pooling was not possible, the findings were presented in narrative form, including tables and figures to aid in data presentation where appropriate. Sensitivity analyses were conducted to weigh the relative influence of each individual study on the pooled effect size using a user-written function, metainf. The presence of publication bias was assessed informally by visual inspection of funnel plots [19].
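For reference, I² is derived from Cochran's Q as I² = max{0, (Q − (k − 1))/Q} × 100%, where k is the number of studies. A hedged sketch of the sensitivity and publication-bias checks, continuing the hypothetical meta object `m` from the previous example:

```r
# Leave-one-out sensitivity analysis: re-estimates the pooled effect after
# omitting each study in turn (the metainf command mentioned above).
metainf(m, pooled = "random")

# Funnel plot for informal, visual inspection of small-study effects.
funnel(m)

# Egger-type regression test for funnel-plot asymmetry; by convention it is
# only meaningful with roughly ten or more studies.
metabias(m, method.bias = "linreg")
```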
## 3. Result

### 3.1. Study Selection

Following the initial search, 241 studies were identified from the electronic databases (Figure 1). After the removal of 67 duplicates, 17 noneligible studies, and six studies with unclear populations, 78 studies were retrieved for full-text review. Of these, 18 studies were of poor quality and did not meet the eligibility criteria. Following methodological quality assessment, 60 articles were included in the meta-analysis. In the included studies, a total of 5041 patients with COVID-19 were assessed for CT imaging features. All papers were published in English.

Figure 1 Flow chart of the search and study inclusion.

### 3.2. Study Characteristics

In this review, 60 papers were eligible and a total of 5041 patients had CT imaging findings [8, 12–14, 19–72]. Of the 5041 participants, about half were male (2710; 50.2%), while 2693 (49.8%) were female. The mean age of the participants was 49 years with a standard deviation of 11.6 years.

#### 3.2.1. Clinical Features of Patients with COVID-19

Of the 60 included studies, only 48 reported the clinical features of COVID-19-infected patients. According to these 48 studies, the main clinical features of COVID-19-infected patients were fever and dry cough, which accounted for 80% (2954/3800) and 56.2% (2137/3800), respectively, while about 2.3% (86/3800) of patients with COVID-19 infection were asymptomatic. The total white blood cell count was decreased in about 24% (410) of patients with COVID-19, while about 2% (45) had increased white blood cell counts. Decreased lymphocyte counts were detected in about 442 (43%) patients with COVID-19, and only 1% had increased lymphocyte counts. C-reactive protein was increased in about 87% (458) of patients, while only 13% (254) had normal C-reactive protein (Table 1).

Table 1 Review of clinical features of patients with COVID-19.

| Clinical features | Number of studies | Number of patients (%) |
| --- | --- | --- |
| **Signs and symptoms** | | |
| Fever | 48 | 2954 (80%) |
| Dry cough | 48 | 2137 (56.2%) |
| Respiratory distress | 29 | 462 (15%) |
| Pharyngeal pain | 48 | 381 (13%) |
| Fatigue | 32 | 481 (27%) |
| **Total WBC count** | | |
| Normal | 22 | 1195 (68%) |
| Decreased | 22 | 410 (24%) |
| Increased | 22 | 45 (2%) |
| **Lymphocyte count** | | |
| Normal | 22 | 566 (56%) |
| Decreased | 22 | 442 (43%) |
| Increased | 22 | 2 (1%) |
| **C-reactive protein** | | |
| Normal | 15 | 254 (13%) |
| Decreased | 15 | 0 (0%) |
| Increased | 15 | 458 (87%) |

#### 3.2.2. Chest CT Imaging Features of COVID-19-Infected Patients

We performed meta-analyses of primary outcome data and of secondary outcomes with available data. From a total of 5041 COVID-19-infected patients, about 98% (4940/5041) had abnormalities on chest CT, while about 2% had normal chest CT findings (Figure 2). Among COVID-19 patients with abnormal chest CT findings, 80% (3952/4940) had bilateral lung involvement. About 20% (641/3206) of patients had exclusively unilateral lung involvement. When a single lobe was involved, the right lung was most often affected (62%), while 38% had only left lung involvement. In the right lung, the lower lobe was frequently involved (74% (784)), while the middle lobe was less often affected (38% (326)).
In the left lung, the upper lobe was frequently involved (74% (731)).

Figure 2 Magnitude of abnormal chest CT findings of patients with COVID-19.

Regarding the patterns of lung lesions in patients with COVID-19, ground-glass opacity (GGO) and mixed GGO with consolidation were observed in 2482 (65%) and 768 (18%) patients, respectively. Consolidation, defined as denser opacities with blurred margins of pulmonary blood vessels and bronchial tubes, was detected in 1259 (22%) patients with COVID-19 pneumonia. CT images also showed interlobular septal thickening in about 691 (27%) patients. CT showed that 11 (21.6%) patients had discrete pulmonary nodules. Crazy paving patterns of lesions were observed in 575 (12%) patients. Five hundred thirty-one (18%) patients had an air bronchogram sign. Pleural effusion and lymphadenopathy were observed in 94 (1.6%) and 21 (0.7%) patients with COVID-19 pneumonia, respectively. Pulmonary nodules were reported in 262 (9%) patients (Table 2).

Table 2 CT imaging features of patients with COVID-19.

| CT findings | Number of studies | Number of patients (%) |
| --- | --- | --- |
| **Patterns of the lesion** | | |
| Ground-glass opacity with consolidation | 60 | 768 (18%) |
| Ground-glass opacity | 60 | 2482 (65%) |
| Consolidation | 60 | 1259 (22%) |
| Crazy paving pattern | 24 | 575 (12%) |
| Reversed halo sign | 24 | 146 (1%) |
| **Other signs in the lesion** | | |
| Interlobular septal thickening | 23 | 691 (27%) |
| Air bronchogram sign | 23 | 531 (18%) |
| **Distribution** | | |
| Bilateral | 48 | 3952 (80%) |
| Unilateral | 48 | 641 (20%) |
| Right lung | 8 | 48 (62%) |
| Left lung | 8 | 29 (38%) |
| **Number of lobes involved** | | |
| One lobe | 13 | 278 (14%) |
| Two lobes | 13 | 299 (11%) |
| Three lobes | 13 | 250 (13%) |
| Four lobes | 13 | 212 (15%) |
| Five lobes | 14 | 384 (34%) |
| More than one lobe | 14 | 1145 (76%) |
| **Lobe of lesion distribution** | | |
| Left upper lobe | 14 | 731 (74%) |
| Left lower lobe | 20 | 504 (46%) |
| Right upper lobe | 19 | 455 (40%) |
| Right middle lobe | 15 | 326 (38%) |
| Right lower lobe | 17 | 784 (74%) |
| **Other findings** | | |
| Pleural effusion | 60 | 94 (1.6%) |
| Lymphadenopathy | 60 | 21 (0.7%) |
| Pulmonary nodules | 22 | 262 (9%) |

### 3.3. Additional Analysis

We also gathered data to determine whether CT findings differ with the time course of infection. In the included studies, the mean time between initial chest CT and follow-up was about 6.5 days (range, 0–21 days). According to the reports of 9 included studies, 43% (95% CI: 26%–61%) of patients had improved follow-up chest CT findings, while 30% (95% CI: 18%–46%) had advanced chest CT findings [40, 43, 44, 50, 52, 57, 64–66]. Our findings demonstrated pure ground-glass opacity in early disease, followed by the development of crazy paving, and finally increasing consolidation later in the disease course. GGO density increased and transformed into consolidation; consolidation edges were flat or contracted; and fibrous cord shadows appeared. GGO with consolidation was more frequent in the advanced stage than in the early stage of the disease (OR 3.2, 95% CI: 2.2–4.7, P=0.013). A crazy paving pattern and a reversed halo sign were rare in the early stage but present in the late stage of the disease. In terms of disease distribution, bilateral involvement was more prominent in the later stage than in the early stage (P<0.001). According to the reports of two included studies, pure GGO was the common imaging feature of the mild and moderate stages of the disease course, while consolidation and GGO with consolidation increased by 64% and 79% in the critical stage of the disease, respectively [57, 58].

### 3.4. Heterogeneity and Risk of Bias
Subgroup analysis was conducted to explore the cause of heterogeneity. It showed that the most likely cause of heterogeneity was sample size (P value < 0.01). Funnel plots did not suggest publication bias for the majority of the parameters, and formal testing demonstrated no publication bias (P value = 0.1947).
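A sketch of how such a check might look with the metareg command named in the Methods, again continuing the hypothetical object `m`; the covariate `total` (study sample size) is a placeholder column of the hypothetical data frame.

```r
# Hypothetical meta-regression of the pooled estimate on study sample size;
# a significant coefficient would flag sample size as a source of
# heterogeneity, as reported above.
metareg(m, ~ total)
```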
## 4. Discussion

### 4.1. Summary of Evidence

In this review, we have demonstrated that the main signs and symptoms at hospital admission of patients with COVID-19 pneumonia were fever, dry cough, fatigue, pharyngeal pain, and respiratory distress. About 24% of the patients had a reduced total leukocyte count and 43% had a reduced lymphocyte count. Although these laboratory findings are not specific for viral pneumonia, leukopenia and lymphocytopenia may help distinguish COVID-19 from common bacterial infections. The majority of the patients had increased C-reactive protein.

In this study, common CT imaging features in patients with COVID-19 pneumonia included bilateral involvement, ground-glass opacities, consolidation, crazy paving pattern, air bronchogram signs, and interlobular septal thickening. At a later stage of the disease, mixed GGO with consolidation was the more frequent finding. Pulmonary consolidation was mainly found in the severe and progressive late stages of the disease and can coexist with ground-glass and fibrotic changes. The pathological basis of these changes could be inflammatory cell infiltration and interstitial thickening, cell exudation, and hyaline membrane formation.

In the present study, the increased frequency of GGO, consolidation, bilateral disease, interlobular septal thickening, and a crazy paving pattern, together with the appearance of reversed halo sign lesions over more than one lobe of the lung, could reflect the pathophysiology of the disease process as it organizes and could explain the chest CT hallmarks of COVID-19 infection. CT imaging findings can be correlated with disease severity and with disease progression after treatment.

The majority of the patients (76%) had multilobar involvement, and lesions were more frequent in the right lung. Pleural effusion, lymphadenopathy, and pulmonary nodules were less common imaging findings in these patients. About 2% of the patients had normal initial CT findings. When a single lobe was involved, the right lung was most often affected, and more than half of the patients with COVID-19 had multiple lobe infections. In the right lung, the lower lobe was frequently involved while the middle lobe was less often affected. In the left lung, the upper lobe was frequently involved. This might indicate that the virus tends to disseminate to all lobes of both lungs as the disease progresses. Chest CT imaging features may obviate repeated laboratory testing and may be especially helpful in resource-limited countries.

According to the present study, chest CT imaging showed similar characteristics in the majority of patients, including predominantly bilateral and multilobe involvement.
The pattern of ground-glass and consolidative pulmonary opacities, often with a bilateral lung distribution, is somewhat similar to that described in earlier coronavirus outbreaks such as SARS and MERS, which were known to cause ground-glass opacities that may coalesce into dense consolidative lesions [73, 74]. According to the reports of 9 included studies, 43% (95% CI: 26%–61%) of patients had improved follow-up chest CT findings, while 30% (95% CI: 18%–46%) had advanced chest CT findings. Our findings demonstrated pure ground-glass opacity in early disease, followed by the development of crazy paving and finally increasing consolidation later in the disease course. According to the reports of two included studies, pure GGO was the common imaging feature of the mild and moderate stages of the disease course, while consolidation and GGO with consolidation increased by 64% and 79% in the critical stage of the disease, respectively.

### 4.2. Limitations

We acknowledge a few limitations of the present systematic review and meta-analysis that may affect the results. First, two relevant studies identified through our literature search were excluded because their full texts were unavailable. Second, the present study was limited to articles published in English.
## 5. Conclusions

The present study showed that common CT imaging features of COVID-19 pneumonia included frequent bilateral lung involvement, ground-glass opacities, consolidation, crazy paving pattern, air bronchogram signs, and interlobular septal thickening. Bilateral involvement was common, while single lobe involvement was rare. These CT imaging signs might be an important tool for diagnosis and for monitoring disease progression in patients with COVID-19 infection.

---

*Source: 1023506-2020-07-23.xml*
# Prediction of 90-Day Mortality among Sepsis Patients Based on a Nomogram Integrating Diverse Clinical Indices

**Authors:** Qingbo Zeng; Longping He; Nianqing Zhang; Qingwei Lin; Lincui Zhong; Jingchun Song
**Journal:** BioMed Research International (2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1023513

---

## Abstract

Background. Sepsis is prevalent in intensive care units and is a frequent cause of death. Several studies have identified individual risk factors or potential predictors of sepsis-associated mortality without defining an integrated predictive model. The present work was aimed at defining a nomogram for reliably predicting mortality. Methods. We carried out a retrospective, single-center study based on 231 patients with sepsis who were admitted to our intensive care unit between May 2018 and October 2020. Patients were randomly split into training and validation cohorts. In the training cohort, multivariate logistic regression and a stepwise algorithm were performed to identify risk factors, which were then integrated into a predictive nomogram. Nomogram performance was assessed against the training and validation cohorts based on the area under the receiver operating characteristic curve (AUC), calibration plots, and decision curve analysis. Results. Among the 161 patients in the training cohort and 70 patients in the validation cohort, 90-day mortality was 31.6%. Older age and higher values for the international normalized ratio, lactate level, and thrombomodulin level were associated with greater risk of 90-day mortality. The nomogram showed an AUC of 0.810 (95% CI 0.739 to 0.881) in the training cohort and 0.813 (95% CI 0.708 to 0.917) in the validation cohort. The nomogram also performed well based on the calibration curve and decision curve analysis. Conclusion. This nomogram may help identify sepsis patients at elevated risk of 90-day mortality, which may help clinicians allocate resources appropriately to improve patient outcomes.

---

## Body

## 1. Introduction

Sepsis is life-threatening organ dysfunction initiated by the body’s overwhelming response to infection [1]. Although significant advances have been made in intensive care and supportive technology to treat sepsis, it remains associated with high morbidity and mortality. The global incidence rate is around 437 per 100,000 person-years, and approximately 17% of sepsis cases die in hospital [2]. These figures are even higher in China, where up to 20% of patients in intensive care units have sepsis [3].

The pathogenesis of sepsis is complex and involves coagulation disorder, inflammation imbalance, immune dysfunction, and mitochondrial and endothelial damage [4]. Better understanding of the disease’s pathophysiology and identification of reliable predictors of short-term mortality are critical for guiding interventions and improving prognosis.

Several studies have analyzed risk factors for mortality in patients with sepsis [5–9], but most have focused on biomarkers related to inflammation or the function of certain organs. For such a complex disease, prediction algorithms may need to take a range of biomarkers into account. Therefore, the main objective of the present study was to consider a diversity of potential clinicodemographic factors for constructing a nomogram to predict 90-day mortality in sepsis.

## 2. Materials and Methods
### 2.1. Patient Selection and Data Collection

This retrospective study examined electronic medical records from a consecutive sample of 231 patients who had been diagnosed with sepsis and admitted to the intensive care unit at the 908th People’s Liberation Army Hospital (Nanchang, China) between May 2018 and October 2020. A flowchart of patients excluded by each criterion is shown in Figure 1. To be enrolled in the study, patients had to be older than 17 years and diagnosed with sepsis according to the Third International Consensus Definitions for Sepsis (“Sepsis-3”) [10]: infection had to be confirmed through culture tests and the Sequential Organ Failure Assessment (SOFA) score had to be at least 2 [4]. Patients were excluded if they were pregnant or had a history of hemorrhagic shock, cancer, acute coronary syndrome, or cardiopulmonary arrest. This study was approved by the Ethics Committee of the 908th People’s Liberation Army Hospital with a waiver of informed consent. Baseline demographic data (age, sex) were collected, as were data on the site of infection, comorbidities, 90-day mortality, and severity of illness, based on the Acute Physiology and Chronic Health Evaluation II (APACHE II) score [11] and the SOFA score [12] on the first day of admission to the intensive care unit, as well as numerous laboratory and clinical variables obtained four hours after admission (see Table 1).

Figure 1 Flowchart of patients excluded by each criterion.

Table 1 Patient characteristics upon admission to the intensive care unit.

| Characteristic | Training: Survivors (n=112) | Training: Died at 90 days (n=49) | P value | Validation: Survivors (n=46) | Validation: Died at 90 days (n=24) | P value |
| --- | --- | --- | --- | --- | --- | --- |
| Men | 69 (61.6) | 30 (61.2) | 0.963 | 29 (63.0) | 15 (62.5) | 0.964 |
| Age ≥ 57 yr | 77 (68.8) | 41 (83.7) | 0.055 | 30 (65.2) | 22 (91.7) | 0.016 |
| **Comorbidity** | | | | | | |
| Diabetes | 17 (15.2) | 10 (20.4) | 0.414 | 17 (37.0) | 9 (37.5) | 0.964 |
| Hypertension | 42 (37.5) | 26 (53.1) | 0.066 | 21 (45.7) | 12 (50.0) | 0.729 |
| COPD | 6 (5.4) | 6 (12.2) | 0.126 | 8 (17.4) | 6 (25.0) | 0.450 |
| CKD | 10 (8.9) | 6 (12.2) | 0.518 | 2 (4.3) | 3 (12.5) | 0.209 |
| **Source of infection** | | | | | | |
| Pulmonary | 71 (63.4) | 33 (67.3) | 0.629 | 27 (58.7) | 19 (71.2) | 0.087 |
| Urinary tract | 8 (7.1) | 1 (2.0) | 0.195 | 5 (10.9) | 0 (0) | 0.094 |
| Abdominal | 27 (24.1) | 14 (28.6) | 0.550 | 13 (28.3) | 3 (12.5) | 0.136 |
| Skin | 6 (5.4) | 2 (4.1) | 0.732 | 2 (4.3) | 1 (4.2) | 0.972 |
| TM ≥ 13.1 TU/mL | 48 (42.9) | 32 (65.3) | 0.010 | 17 (37.0) | 17 (70.8) | 0.007 |
| TAT (ng/mL) | 8.2 (4.6–18.0) | 17.2 (5.7–46.8) | 0.002 | 8.7 (5.6–17.0) | 13.4 (6.2–30.9) | 0.162 |
| PIC (μg/mL) | 1.16 (0.62–2.16) | 1.04 (0.57–2.28) | 0.742 | 1.10 (0.75–1.48) | 1.43 (0.69–2.83) | 0.421 |
| t-PAIC (ng/mL) | 12.2 (7.6–24.1) | 21.7 (11.3–41.7) | 0.003 | 14.2 (9.4–23.9) | 21.3 (13.8–47.1) | 0.020 |
| PT (s) | 14.2 (12.7–16.2) | 16.4 (14.0–21.4) | 0.000 | 13.7 (13–15.3) | 15 (13.6–19.6) | 0.008 |
| INR | 1.2 (1.1–1.3) | 1.4 (1.2–1.8) | 0.000 | 1.14 (1.08–1.27) | 1.25 (1.13–1.60) | 0.008 |
| APTT (s) | 31.6 (26.6–38.4) | 37.4 (32.0–47.7) | 0.000 | 31.4 (26.7–40.5) | 33.9 (29.2–48.7) | 0.087 |
| FIB (g/L) | 2.9 ± 1.09 | 2.6 ± 1.2 | 0.143 | 2.9 ± 0.9 | 2.7 ± 1.3 | 0.308 |
| TT (s) | 15.8 (14.5–17.3) | 17.2 (14.8–18.7) | 0.003 | 15.5 (14.0–17.4) | 16.8 (14.9–19.4) | 0.070 |
| FDP (μg/L) | 8.69 (3.67–18.92) | 14.45 (4.53–38.00) | 0.030 | 7.56 (4.51–13.12) | 11.37 (6.99–27.95) | 0.033 |
| D-dimer (μg/L) | 2.59 (1.03–5.97) | 4.91 (1.65–11.00) | 0.016 | 2.19 (0.87–4.53) | 3.19 (2.54–7.76) | 0.015 |
| Platelets (×10⁹/L) | 179 ± 90 | 138 ± 94 | 0.010 | 182 ± 108 | 209 ± 128 | 0.358 |
| Hemoglobin (g/L) | 111 ± 29 | 100 ± 31 | 0.038 | 109 ± 31 | 104 ± 31 | 0.525 |
| ALT (U/L) | 31.9 (12.9–73.5) | 21.5 (13.3–116.8) | 0.597 | 27.3 (13.3–58.7) | 29.6 (11.1–64.2) | 0.921 |
| AST (U/L) | 43.0 (23.3–84.3) | 42.1 (26.4–131.2) | 0.483 | 33.2 (19.8–72.5) | 30.3 (19.1–76.7) | 0.843 |
| TBil (μmol/L) | 13.5 (7.9–22.5) | 13.9 (7.4–32.5) | 0.514 | 14.5 (6.8–23.4) | 17.6 (10.9–28.1) | 0.192 |
| Cr (μmol/L) | 92.6 (62.3–163.8) | 136 (76.5–241.4) | 0.017 | 70.3 (54.5–132.5) | 113.4 (78.3–150.0) | 0.056 |
| RBG (mmol/L) | 7.3 (6.2–9.3) | 6.8 (5.5–9.1) | 0.142 | 7.6 (6.7–9.6) | 8.8 (7.1–10.5) | 0.239 |
| Body temp (°C) | 36.7 (36.5–37.5) | 36.6 (36.3–37.3) | 0.350 | 36.7 (36.2–37.3) | 36.4 (36.0–36.8) | 0.176 |
| Heart rate (min⁻¹) | 96 ± 20 | 106 ± 25 | 0.013 | 98 ± 26 | 107 ± 26 | 0.179 |
| MAP (mmHg) | 90 ± 17 | 88 ± 22 | 0.547 | 91 ± 17 | 87 ± 18 | 0.352 |
| SOFA score | 7 (5–10) | 9 (7–15) | 0.000 | 7 (5–10) | 9 (6–13) | 0.065 |
| APACHE II score | 21 ± 6 | 24 ± 6 | 0.008 | 22 ± 7 | 27 ± 7 | 0.004 |
| pH | 7.41 (7.35–7.45) | 7.38 (7.29–7.50) | 0.133 | 7.42 (7.34–7.49) | 7.29 (7.19–7.43) | 0.006 |
| PaCO2 (mmHg) | 36 (31–42) | 34 (29–40) | 0.149 | 34 (28–41) | 39 (32–46) | 0.040 |
| PaO2 (mmHg) | 110 (81–157) | 93 (64–140) | 0.017 | 112 (80–167) | 97.1 (64–152) | 0.366 |
| Lac (mmol/L) | 1.7 (1–3.2) | 3.2 (1.5–6.6) | 0.000 | 2.0 (1.2–3.3) | 3.8 (1.8–9.5) | 0.010 |

Values are n (%), mean ± SD, or median (interquartile range). Abbreviations: COPD: chronic obstructive pulmonary disease; CKD: chronic kidney disease; TM: thrombomodulin; TAT: thrombin-antithrombin complex; PIC: α2-plasmin inhibitor-plasmin complex; t-PAIC: tissue plasminogen activator-inhibitor complex; PLT: platelet; HB: hemoglobin; PT: prothrombin time; APTT: activated partial thromboplastin time; FIB: fibrinogen; INR: international normalized ratio; TT: thrombin time; FDP: fibrinogen degradation product; ALT: alanine transaminase; AST: aspartate transaminase; TBil: total bilirubin; Cr: creatinine; MAP: mean arterial pressure; SOFA: Sequential Organ Failure Assessment; APACHE II: Acute Physiology and Chronic Health Evaluation II; pH: potential of hydrogen; PaO2: arterial partial pressure of oxygen; PaCO2: arterial partial pressure of carbon dioxide; Lac: lactate; RBG: random blood glucose.

### 2.2. Statistical Analysis and Nomogram Construction

All statistical analyses were performed using R 4.0.1 (R Core Team, Vienna, Austria) and SPSS 25.0 (IBM, Chicago, IL, USA). Differences associated with a two-sided P < 0.05 were considered statistically significant. Data for continuous variables were presented as mean ± standard deviation or as median (interquartile range, IQR). Differences between groups were assessed for significance using Student’s t-test in the case of normally distributed data or the Mann-Whitney test in the case of a skewed distribution. Data for categorical variables were expressed as counts and percentages, and differences were assessed using the χ² or Fisher’s exact test. The variance inflation factor (VIF) was used to test collinearity between continuous variables, and a square root of the VIF ≤ 10 was regarded as indicating no collinearity. Patients were randomized into training and validation cohorts in a ratio of 2:1. Clinical variables in the training cohort were entered into multivariate logistic regression, and backward stepwise selection was applied using the likelihood ratio test and Akaike’s information criterion as the stopping rule [13]. The regression results from the training cohort were used to define a nomogram to predict 90-day mortality. The same regression equations were also applied to the data for the validation cohort in order to verify the nomogram. Calibration curves, accompanied by the Hosmer-Lemeshow test, were used to evaluate the predictive model. Its discriminative ability was assessed based on the area under the receiver operating characteristic curve (AUC). For clinical usefulness, net benefit was examined against the training and validation cohorts using decision curve analysis (DCA).
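The pipeline described above can be sketched as follows; this is an illustration under stated assumptions, not the authors' code. The data frame `train`, the binary outcome `died90`, and the predictor names are hypothetical, and the MASS, rms, and car packages are assumed to be available.

```r
# Minimal sketch of the modelling pipeline: collinearity screen, backward
# stepwise logistic regression with AIC, and a nomogram of the final model.
library(MASS)  # stepAIC
library(rms)   # lrm, datadist, nomogram
library(car)   # vif

# Hypothetical full model on the training cohort (`died90` coded 0/1).
full_glm <- glm(died90 ~ age57 + tm131 + inr + lac + pt + sofa,
                data = train, family = binomial)

sqrt(vif(full_glm))  # sqrt(VIF) <= 10 taken as no collinearity, as above

# Backward stepwise selection with AIC as the stopping rule.
step_glm <- stepAIC(full_glm, direction = "backward")

# Refit the selected model with rms so it can be drawn as a nomogram.
dd <- datadist(train); options(datadist = "dd")
fit <- lrm(formula(step_glm), data = train)
plot(nomogram(fit, fun = plogis, funlabel = "Risk of 90-day mortality"))
```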
## 3. Results

### 3.1. Baseline Characteristics of Patients with Sepsis

Among the 231 patients in the study, 61.9% were men, the median age was 70 years (range, 18 to 96 years), and 73 (31.6%) died within 90 days of follow-up.
In both the training and validation cohorts, patients who survived for 90 days had significantly lower values than those who died for many clinical variables (Table 1), including tissue plasminogen activator-inhibitor complex, thrombin-antithrombin complex, prothrombin time, international normalized ratio, activated partial thromboplastin time, thrombin time, fibrinogen degradation product, D-dimer, creatinine, lactate, heart rate, and the SOFA and APACHE II scores. Conversely, survivors showed significantly higher platelet counts, hemoglobin levels, and arterial partial pressure of oxygen.

### 3.2. Nomogram Construction

Multivariate logistic regression identified age, international normalized ratio, lactate, and thrombomodulin as independent predictors of 90-day mortality (Table 2), which were then integrated into a predictive nomogram (Figure 2). The nomogram visualizes the regression results: by summing the points assigned to each predictor, clinicians can obtain an individualized estimate of the risk of 90-day mortality for a patient with sepsis. This would facilitate precise risk assessment and better identification of 90-day mortality in the septic population.

Table 2 Multivariate logistic regression of data from the training cohort to identify factors independently associated with 90-day mortality.

| Variable | Odds ratio | 95% confidence interval | P value |
| --- | --- | --- | --- |
| Age (≥57 vs. <57 y) | 1.20 | 0.36–2.04 | 0.005 |
| TM (≥13.1 vs. <13.1 TU/mL) | 1.30 | 0.39–2.21 | 0.005 |
| INR | 1.52 | 0.23–2.80 | 0.021 |
| Lac (mmol/L) | 0.17 | 0.04–0.29 | 0.008 |

INR: international normalized ratio; TM: thrombomodulin; Lac: lactate.

Figure 2 Nomogram for predicting 90-day mortality in patients with sepsis, based on data in the training cohort.

### 3.3. Nomogram Validation

The nomogram based on data in the training cohort gave an AUC of 0.810 (95% CI 0.739 to 0.881) for predicting 90-day mortality in that cohort (Figure 3(a)). Similarly, it gave an AUC of 0.813 (95% CI 0.708 to 0.917) for predicting 90-day mortality in the validation cohort (Figure 3(b)).

Figure 3 Receiver operating characteristic curves assessing the ability of the nomogram to predict 90-day mortality in (a) training and (b) validation cohorts.

For both cohorts, the nomogram showed good agreement with actual 90-day mortality based on calibration curves (Figure 4), although the logistic calibration curve and nonparametric curve deviated slightly from the ideal line. The Hosmer-Lemeshow test gave P=0.866 in the training cohort and P=0.801 in the validation cohort, suggesting no significant deviation from a perfect fit.

Figure 4 Calibration plot of predicted and observed probabilities of 90-day mortality in (a) training and (b) validation cohorts.

### 3.4. Potential Clinical Usefulness of the Nomogram

DCA showed good clinical potential for the nomogram based on the training cohort (Figure 5(a)) and validation cohort (Figure 5(b)). When the threshold probability is greater than 15%, using the nomogram yields greater net benefit than treating either all or none of the patients.

Figure 5 Decision curve analysis to assess the benefit of clinical intervention based on the predictive nomogram in (a) training and (b) validation cohorts.
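A hedged sketch of these validation steps (AUC with confidence interval, and the net-benefit quantity plotted in DCA), continuing the hypothetical `fit` from the previous example with a held-out data frame `valid`; the pROC package is an assumption, and the net-benefit function is written out manually rather than taken from any particular DCA package.

```r
library(pROC)

# Predicted probabilities of 90-day mortality on the validation cohort.
p_valid <- predict(fit, newdata = valid, type = "fitted")

roc_v <- roc(valid$died90, p_valid)
auc(roc_v)     # area under the ROC curve
ci.auc(roc_v)  # 95% confidence interval for the AUC

# Net benefit at threshold probability pt, the quantity plotted in decision
# curve analysis: (TP - FP * pt / (1 - pt)) / N.
net_benefit <- function(y, p, pt) {
  treated <- p >= pt
  tp <- sum(treated & y == 1)
  fp <- sum(treated & y == 0)
  (tp - fp * pt / (1 - pt)) / length(y)
}

sapply(c(0.10, 0.15, 0.30), function(pt) net_benefit(valid$died90, p_valid, pt))
```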
## 4. Discussion

In this study, we defined a nomogram based on routinely measured clinical variables that may reliably predict 90-day mortality among patients with sepsis.
While our nomogram should be verified in other patient populations, it establishes the feasibility of accurate mortality prediction using relatively simple clinical tests. While several studies have identified risk factors associated with 90-day mortality in sepsis, our work suggests that certain risk factors may be particularly relevant for screening patients for mortality risk.

The 90-day mortality in our retrospective cohort of Chinese patients was 31.6%, which was higher than in previous studies [2, 3, 5]. The sepsis patients included in the present study had much higher APACHE II scores and a longer follow-up (90-day mortality) than those in previous reports, which could explain these differences [14].

We found that the international normalized ratio was significantly higher among sepsis patients who died within 90 days of follow-up than among those who survived, and it emerged as an independent predictor of 90-day mortality in multivariate analysis. Coagulopathy is frequently observed in sepsis [15], and it contributes to multiple organ dysfunction syndrome [16]. More severe coagulopathy has been linked to higher risk of mortality among patients with sepsis [17], and clinical parameters reflecting hemostasis can predict sepsis-related mortality [18–20]. Our results are consistent with this literature. Nevertheless, the international normalized ratio alone cannot accurately predict sepsis outcomes [5], which may reflect the need to take other independent predictors of mortality into account.

One of those predictors is lactate level, which was significantly higher among our patients who died within 90 days than among those who survived. Critically ill patients, particularly those with sepsis or septic shock, show elevated lactate [21], and the magnitude of the elevation correlates strongly and positively with sepsis severity and associated mortality [22–24]. Serum lactate levels are considered a marker of tissue hypoxia [19], and they have proven useful for guiding clinical treatment and predicting prognosis in various clinical contexts [25]. Our study supports the “Sepsis-3” recommendation that septic shock should be defined by a persistent serum lactate > 2 mmol/L [10].

Another risk factor for 90-day mortality that emerged as particularly important for prediction was an elevated thrombomodulin level. Thrombomodulin, an integral endothelial cell membrane protein, is cleaved and released into the bloodstream during sepsis and septic shock [26, 27], leading to elevated levels of serum thrombomodulin in pediatric and adult sepsis patients [28, 29]. The endothelium is a primary site of damage in sepsis due to massive production of proinflammatory cytokines [6]. An elevated serum thrombomodulin level is associated with sepsis severity and risk of death [30]. Our findings suggest that endothelial cell injury, evidenced by elevated thrombomodulin, activates the coagulation system and depletes coagulation factors (reflected in a prolonged prothrombin time), promoting microthrombosis, tissue hypoperfusion, and increased lactate, particularly in elderly patients with sepsis.

Our nomogram showed AUC values above 0.8 for the training and validation cohorts, suggesting good predictive ability. In addition, DCA suggested that treating our cohorts according to the nomogram’s predictions could be superior to treating all or none of them. The calibration curve also suggested good fit.
Nevertheless, our model was generated from a retrospective analysis of a relatively small sample at a single medical center, so it should be validated in other patient populations. It may be possible to improve the model further through a multicenter study with external validation. ## 5. Conclusions We have developed a nomogram that may reliably predict 90-day mortality in patients with sepsis, based on age, international normalized ratio, lactate, and thrombomodulin. This may help clinicians identify patients at higher risk and modify clinical management and resource allocation accordingly. --- *Source: 1023513-2021-10-20.xml*
1023513-2021-10-20_1023513-2021-10-20.md
28,548
Prediction of 90-Day Mortality among Sepsis Patients Based on a Nomogram Integrating Diverse Clinical Indices
Qingbo Zeng; Longping He; Nianqing Zhang; Qingwei Lin; Lincui Zhong; Jingchun Song
BioMed Research International (2021)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2021/1023513
1023513-2021-10-20.xml
2021
# Subacute Disseminated Intravascular Coagulation in a Patient with Liver Metastases of a Renal Cell Carcinoma **Authors:** L. C. van der Wekken; R. J. L. F. Loffeld **Journal:** Case Reports in Oncological Medicine (2017) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2017/1023538 --- ## Abstract Disseminated intravascular coagulation (DIC) is a syndrome characterised by simultaneous bleeding and thrombus formation. Its acute form is associated with severe bacterial infections and hematological malignancies and has a fulminant presentation with prolonged bleeding times and diffuse thrombosis. In contrast, chronic DIC can be asymptomatic for long periods of time and can be seen in patients with disseminated malignancies. This case report describes a patient who developed DIC within one week and bled profusely from venipuncture wounds. An underlying renal cell carcinoma metastasised to the liver appeared to be the cause. This is an uncommon and diagnostically challenging presentation. --- ## Body ## 1. Introduction Renal cell carcinoma (RCC) accounts for 2% of all newly discovered malignancies and is the most common tumour of the kidney. Its incidence differs worldwide, with an incidence in Europe of 115 per 1000 inhabitants [1]. In the year 2000, the United States had 31,200 new cases causing approximately 11,900 deaths. Males are at higher risk for RCC, especially in their 5th to 8th decades of life, with a median age at diagnosis of 66 years. The incidence is rising, probably owing to improved imaging techniques, since mainly early-stage tumours are being diagnosed more often: in 1970, 10% of RCCs were diagnosed incidentally, compared with 61% in 1998 [2]. Mortality depends on stage, with a median 5-year survival rate of 62%. RCC predominantly metastasises to the lungs, retroperitoneal space, bones, and brain, and sometimes to the liver. Treatment options consist of cryoablation or radiofrequency ablation, surgery, immunotherapy, or angiogenesis inhibitors [2]. In the present case, a patient with a very rare and unusual complication of a renal cell carcinoma is described. ## 2. Case Report A 70-year-old man was admitted to the Department of Internal Medicine and Gastroenterology because of abnormalities seen on an abdominal ultrasound. The ultrasound had been performed as follow-up for a renal cell carcinoma diagnosed two years earlier, for which the patient had undergone a nephrectomy; histological investigation at that time showed a pT2a tumour with angioinvasion. In the past, he had undergone a laparoscopic cholecystectomy because of symptomatic gallstones. He also had atrial fibrillation and a DDDR pacemaker because of a high-grade AV block. The ultrasound showed multiple echogenic areas in the liver, probably due to metastases. The patient complained of nausea, vomiting, and diminished tolerance of larger portions of food. He was tired and had lost five kilograms in two weeks. He reported a few episodes of nosebleeds and had a tiny wound on his ankle that continued to bleed. On physical examination, slightly icteric sclerae and signs indicative of bilateral pleural effusion and ascites were found. Laboratory findings are summarised in Table 1. There were no signs of active inflammation, given that the erythrocyte sedimentation rate and C-reactive protein were low. Blood cultures remained negative.
The DIC score calculated according to the International Society on Thrombosis and Haemostasis [3] was 6, which makes DIC probable (see the scoring sketch following Section 3.3).

Table 1 Laboratory findings in blood drawn at admission.

| Parameter | Value | Reference |
| --- | --- | --- |
| Hemoglobin | 8.0 mmol/l | 8.5–11 mmol/l |
| Thrombocytes | 80 × 10⁹/l* | 150–400 × 10⁹/l |
| Leukocytes | 10.8 × 10⁹/l | 4–10 × 10⁹/l |
| Schizocytes | Present | Not present |
| ALAT | 138 IU/l | <40 IU/l |
| Bilirubin | 35 μmol/l | <21 μmol/l |
| GGT | 1421 U/l | <55 U/l |
| AF | 438 U/l | <120 U/l |
| LDH | 421 U/l | <250 U/l |
| PTT | Unmeasurable | |
| APTT | Unmeasurable | |
| Fibrinogen | 0.3 g/l | 2.0–4.0 g/l |
| D-dimer | 0.98 FU/l | <0.5 FU/l |
| Albumin | 39 g/l | 35–52 g/l |
| ATIII | 112% | >70% |
| ESR | 2 mm after 1 hour | <15 mm after 1 hour |
| CRP | 11 mg/l | <5 mg/l |

*Non-EDTA sensitive.

There were no signs of liver failure; albumin and ATIII were within normal ranges. The patient normally used a coumarin derivative because of his atrial fibrillation. His INR had been within the therapeutic range one week before admission, while he was using normal dosages of acenocoumarol. He had stopped the anticoagulation in preparation for a liver biopsy. The course of the coagulation abnormalities is shown in Figure 1 [4].

Figure 1 The course of coagulation abnormalities.

CT scans of the thorax and abdomen confirmed the findings of the ultrasound. An FDG-PET was performed, but no increased uptake suggestive of infection or tumour could be identified. Owing to the existing disseminated intravascular coagulation with a high risk of bleeding, the diagnostic liver biopsy could not be performed. The clinical situation rapidly worsened, and the patient died four weeks after admission. Postmortem examination showed multiple petechiae and purpura on the thoracic skin, signs of a recent myocardial infarction, and an enlarged liver weighing 5100 g, without splenomegaly or signs of infection. The majority of the liver consisted of tumorous areas without distinct borders; the largest tumour measured 19 cm. A tubulopapillary growth pattern was identified within the tumour. The tumour cells had hyperchromatic nuclei and abundant eosinophilic cytoplasm. Immunohistochemical tests revealed positivity of the tumour cells for pankeratin, CK19, vimentin, CD10, and PAX8; the tumour cells were negative for CK7 and CK20. Histologically, this pattern is suggestive of renal cell carcinoma, and the tumour cells in the liver closely resembled those of the renal cell carcinoma excised earlier in the patient’s life. It could be concluded that this patient had disseminated intravascular coagulation with severe hypofibrinogenemia of very short duration, due to a renal cell carcinoma metastasised to the liver. ## 3. Discussion ### 3.1. Clinical Characteristics Disseminated intravascular coagulation (DIC) is a syndrome characterised by bleeding, thrombosis, or both. Laboratory investigation of patients with DIC shows signs of activation of both the clotting and the fibrinolytic systems [3, 4]. The clinical presentation depends on the underlying disease triggering DIC. Acute DIC is associated with (bacterial) infections and sepsis, crush injuries, hematologic malignancies (especially promyelocytic leukemia), and obstetric complications. Patients with acute DIC may present with petechiae, purpura, and bleeding from wounds and venipuncture sites. Although less obvious, microvascular thrombosis (and sometimes large vessel thrombosis as well) may also occur.
Organ systems most often affected are the skin, lungs, kidneys, pituitary, liver, and adrenals [3, 5, 6]. In contrast, chronic DIC, most often seen in patients with an underlying malignancy, has a less fulminant presentation. It is characterised by a more gradual, chronic, and systemic activation of the coagulation system. Exhaustion of platelets and coagulation factors may lead to bleeding, and thrombocytopenia and elevated levels of fibrin-related markers can be found. Chronic DIC commonly presents with gingival bleeding, easy and spontaneous bruising, and bleeding from the gastrointestinal and urogenital tracts. Diffuse or isolated thrombosis is also seen. Malignancies of the gastrointestinal tract, pancreas, prostate, and lung are associated with chronic DIC [6]. In a cohort study conducted by Levi, 5.2% of the patients with chronic DIC turned out to have a renal cell carcinoma [7]. In a cohort of 1117 patients with solid tumours, 76 (6.8%) turned out to have chronic DIC, and almost half of them (46%) had disseminated cancer to the liver [7]. Another study showed that approximately 75% of patients with disseminated malignancies have laboratory evidence of DIC, in 25% of whom it eventually becomes clinically manifest [6]. Patients with cancer-related DIC have significantly lower survival than patients with malignancies but without DIC [7]. ### 3.2. Pathogenesis Tissue factor plays a crucial part in clotting. It binds factor VII(a) to activate factors IX and X, thereby promoting coagulation [5]. Solid tumours have been found to express tissue factor, which is needed for angiogenesis and metastasis [8]. Chemotherapeutics and angiogenesis inhibitors are also associated with an increased risk of venous and arterial thrombosis, probably owing to their harmful effect on the endothelial lining of the vasculature [9]. Activated endothelial cells also produce PAI-1, an inhibitor of plasminogen activation; endothelial dysfunction thus diminishes fibrinolysis and further tips the balance toward coagulation [5]. In cancer patients, endothelial cells are activated by cytokines, after which they show enhanced TF expression and TF-dependent procoagulant activity [10]. IL-6 is a cytokine associated with this response, whereas IL-10 has the ability to inhibit DIC [5]. DIC in cancer patients is also promoted by a protein called cancer procoagulant, a cysteine protease with factor X-activating properties that is found in patients with solid tumours [11]. Some mucinous adenocarcinomas can activate factor X to factor Xa nonenzymatically, contributing further to coagulation [4]. Besides coagulation, fibrinolysis also plays an important part in DIC: endothelial cells produce, store, and release tissue-type plasminogen activator and urokinase-type plasminogen activator, which are fibrinolytic activators. ### 3.3. Treatment The mainstay of DIC treatment is to treat the underlying disease. In patients with cancer-related DIC, the disorder resolves when the cancer is brought into remission [5].
An experimental study showed a beneficial effect of unfractionated heparin in lipopolysaccharide-induced hypercoagulability [12], but it is not generally used to prevent thrombus formation in patients with DIC. Although the benefit of low molecular weight heparin (LMWH) in patients with DIC is uncertain, acutely ill patients, patients with cancer, and older patients are at risk for venous thromboembolism, so prophylactic treatment with LMWH is indicated when they are admitted to the hospital [13]. In patients with a platelet count <50 × 10⁹/l and bleeding complications, or in those who have to undergo an invasive procedure, platelet transfusions are indicated. When no such urgent circumstances are present, a platelet count of <10–20 × 10⁹/l is generally used as the threshold for platelet transfusion. Because DIC in cancer is partly caused by a lack of fibrinolysis, antifibrinolytic agents (e.g., tranexamic acid) should not be used [5, 14].
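To make the scoring explicit, the following minimal sketch implements the ISTH overt-DIC algorithm referred to in the case report [3]. It is an illustration rather than the authors' code, and the grading of the fibrin-related-marker increase as "moderate" versus "strong" is a laboratory-dependent assumption.

```r
# Sketch of the ISTH overt-DIC score: platelet count, fibrin-related marker
# increase (e.g., D-dimer/FDP), prothrombin time prolongation, and fibrinogen.
isth_dic_score <- function(platelets,       # x10^9/l
                           marker_increase, # "none", "moderate", or "strong"
                           pt_prolong_s,    # PT prolongation in seconds
                           fibrinogen) {    # g/l
  score <- 0
  score <- score + (if (platelets < 50) 2 else if (platelets < 100) 1 else 0)
  score <- score + switch(marker_increase, none = 0, moderate = 2, strong = 3)
  score <- score + (if (pt_prolong_s > 6) 2 else if (pt_prolong_s >= 3) 1 else 0)
  score <- score + (if (fibrinogen < 1.0) 1 else 0)
  score  # a total of >= 5 is compatible with overt DIC
}

# Admission values from Table 1: platelets 80 x 10^9/l (1 point), moderately
# raised D-dimer (2 points), an unmeasurable prothrombin time counted as > 6 s
# prolongation (2 points; 99 is a stand-in value), and fibrinogen 0.3 g/l
# (1 point):
isth_dic_score(80, "moderate", 99, 0.3)  # 1 + 2 + 2 + 1 = 6
```

With a total of 6, the sketch reproduces the score reported for this patient and crosses the ≥5 threshold for overt DIC.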
## 4. Conclusion In this case, a 70-year-old man with a bleeding diathesis due to a hyperfibrinolytic state caused by DIC related to a renal cell carcinoma was presented. Cancer-related DIC is characterised by chronic and gradual activation of the coagulation system, with thrombosis and mild bleeding as the key symptoms. In this patient, a more rapid decline in clotting factors and profuse bleeding were seen, an uncommon presentation that led to diagnostic and therapeutic dilemmas. --- *Source: 1023538-2017-04-05.xml*
1023538-2017-04-05_1023538-2017-04-05.md
16,207
Subacute Disseminated Intravascular Coagulation in a Patient with Liver Metastases of a Renal Cell Carcinoma
L. C. van der Wekken; R. J. L. F. Loffeld
Case Reports in Oncological Medicine (2017)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2017/1023538
1023538-2017-04-05.xml
--- ## Abstract Disseminated intravascular coagulation (DIC) is a syndrome characterised by simultaneous bleeding and thromboembolic formation. Its acute form is associated with severe bacterial infections and hematological malignancies. It has a fulminant presentation with prolonged bleeding times and diffuse thrombosis. On the other hand, chronic DIC can be asymptomatic for long periods of time and can be seen in patients with disseminated malignancies. This case report describes a patient who developed DIC within one week and bled profusely from venipuncture wounds. An underlying hepatogenic metastasised renal cell carcinoma appeared to be the cause. This is an uncommon and diagnostically challenging presentation. --- ## Body ## 1. Introduction Renal cell carcinoma (RCC) makes up for 2% of all new discovered malignancies and is the most common tumour of the kidney. Its incidence rate differs worldwide, with an incidence in Europe of 115 per 1000 European inhabitants [1]. In the year 2000, the United States had 31.200 new cases causing approximately 11.900 deaths. Males are at higher risk for RCC, especially in their 5th to 8th decade, with a median age of diagnosis of 66 years. The incidence is rising, probably due to improved imaging techniques, because mainly early stages are more often diagnosed. In 1970, 10% of the RCCs were diagnosed incidentally, compared to 61% in 1998 [2]. Its mortality rate depends on its stage, with a median 5-year survival rate of 62 percent. RCC predominantly metastasises to the lungs, retroperitoneal space, bones, and brain and sometimes to the liver. Treatment options consist of cryo- or radio frequent ablation, surgery, immunotherapy, or angiogenesis blockers [2].In the present case, a patient with a very rare and strange complication of a renal cell carcinoma is described. ## 2. Case Report A 70-year-old man was admitted to the Department of Internal Medicine and Gastroenterology, because of abnormalities seen on an abdominal ultrasound. The ultrasound was performed for follow-up, because of a renal cell carcinoma two years earlier, for which the patient underwent a nephrectomy. Histological investigation showed a pT2a tumour with angioinvasion. In the past, he had undergone a laparoscopic cholecystectomy because of symptomatic gallstones. He also had atrial fibrillation and a DDDR-pacemaker because of a high grade AV-block.The ultrasound showed multiple echogenic areas in the liver, probably due to metastases. The patient complained of nausea, vomiting, and diminished toleration for larger portions of food. He was tired and experienced a weight loss of five kilograms in two weeks. He noted a few episodes of nose bleeding and had a tiny wound on his ankle that continued bleeding.On physical examination, slight icteric sclerae, signs indicative of bilateral pleural effusion and ascites, were found. Laboratory findings are summarised in Table1. There were no signs of active inflammation, given the fact that erythrocyte sedimentation rate and C-reactive protein were low. Blood cultures remained negative. The calculated DIC score by the International Society of Thrombosis and Haemostasis [3] was 6, which makes DIC probable.Table 1 Laboratory findings and blood drawn at admission. 
Value Reference Hemoglobin 8,0 mmol/l 8,5–11 mmol/l Thrombocytes 80 ∗ 10 9/l∗ 150–400∗109/l Leukocytes 10 , 8 ∗ 10 9/l 4–10∗109/l Schizocytes Present Not present ALAT 138 IU/l <40 IU/l Bilirubin 35μl/l <21μl/l GGT 1421 U/l <55 U/l AF 438 U/l <120 U/l LDH 421 U/l <250 U/l PTT Unmeasurable APTT Unmeasurable Fibrinogen 0,3 g/l 2,0–4,0 g/l D-dimer 0,98 FU/l <0,5 FU/l Albumin 39 g/l 35–52 g/l ATIII 112% >70% ESR 2 mm after 1 hour <15 mm after 1 hour CRP 11 mg/l <5 mg/l ∗Non-EDTA sensitive.There were no signs of liver failure. Albumin and ATIII were within normal ranges. Patient normally used a coumarin derivate because of his atrial fibrillation. His INR was within the therapeutic range one week before admission, while using normal dosages of acenocoumarol. He had stopped the anticoagulation in preparation for a liver biopsy. The course of the coagulation abnormalities is shown in Figure1 [4].Figure 1 The course of coagulation abnormalities.CT scan of thorax and abdomen confirmed the findings of the ultrasound. A FDG-PET was performed, but no high uptake as sign of infection or tumours could be identified. Due to the existing diffuse intravascular coagulation with a high risk of bleeding, the diagnostic liver biopsy could not be performed.The clinical situation rapidly worsened and the patient died four weeks after admission.Postmortem examination showed multiple petechia and purpura on the thoracal skin, signs of a recent myocardial infarction and an enlarged liver, weighing 5100 g, no splenomegaly, and no signs of infection. The majority of the liver consisted of tumorous areas, without distinct borders. The largest tumour was measured at 19 cm. A tubulopapillary growing pattern was identified within the tumour. The tumour cells had hyperchromatic nuclei and a large quantity of eosinophilic cytoplasm. Immunohistochemical tests revealed positivity of the tumour cells for pankeratin, CK19, vimentin, CD10, and PAX8. The tumour cells were negative for CK7 and CK20. Histologically, this pattern is suggestive for renal cell carcinoma. When compared to the renal cell carcinoma which has been excised earlier in the patient’s life, the tumorous cells in his liver showed a great resemblance.It could be concluded that this patient had diffuse intravascular coagulation with severe hypofibrinogenemia of very short duration, due to a hepatogenic metastasised renal cell carcinoma. ## 3. Discussion ### 3.1. Clinical Characteristics Disseminated intravascular coagulation (DIC) is a syndrome characterised by bleeding, thrombosis, or both. Laboratory investigation of patients with DIC showed signs of activation of both the clotting and the fibrinolytic system [3, 4].The clinical presentation depends on the underlying disease triggering DIC. Acute DIC is associated with (bacterial) infections and sepsis, crush injuries, hematologic malignancies (especially promyelocytic leukemia), and obstetric complications. Patients with acute DIC may present with petechiae, purpura, and bleeding from wounds and venipuncture sites. Although less obvious, microvascular thrombosis (and sometimes large vessel thrombosis as well) may also occur. Organ systems most often affected are the skin, lungs, kidneys, pituitary, liver, and adrenals [3, 5, 6].In contrast, chronic DIC, most often seen in patients with an underlying malignancy, has a less fulminant presentation. It is characterised by a more gradual, chronic, and systemic activation of the coagulation system. 
Exhaustion of platelets and coagulation factors may lead to bleeding. Thrombocytopenia and elevated levels of fibrin-related markers can be found. Chronic DIC commonly presents with gingival bleeding, easy and spontaneous bruising, and bleeding from gastrointestinal and urogenital tract. Also diffuse or singular thrombosis is seen. Malignancies of the gastrointestinal tract, pancreas, prostate, and lung are associated with chronic DIC [6]. In a cohort study conducted by Levi, 5,2% of the patients with chronic DIC turned out to have a renal cell carcinoma [7].In a cohort of 1117 patients with solid tumours, 76 (6,8%) turned out to have chronic DIC, and almost half of them (46%) had disseminated cancer to the liver [7]. Another study showed that approximately 75% of patients with disseminated malignancies have laboratory evidence for DIC, of which 25% will eventually become clinically manifested [6]. Patients with cancer-related DIC have a significantly lower survival when compared to patients with malignancies but without DIC [7]. ### 3.2. Pathogenesis Tissue factor plays a crucial part in clotting. It binds factor VII(a) to activate factors IX and X and therefore promote coagulation [5]. Solid tumours are found to express tissue factor, which is needed in angioneogenesis and metastasising [8]. Also chemotherapeutics and angiogenesis blockers are associated with an increased risk of venous and arterial thrombosis, probably due to the harmful effect on the endothelial lining of the vasculature [9]. Activated endothelial cells also produce PAI-1, which is an inhibitor of plasminogen activation. Therefore, endothelial dysfunction leads to diminished activation of fibrinolysis therefore contributing to the favouring of coagulation [5]. Endothelial cells in cancer patients get activated by cytokines after which they show enhanced TF expression and TF-dependent procoagulant activity [10]. IL-6 is a cytokine associated with this response, whereas IL-10 has the ability to inhibit DIC [5].DIC in cancer patients is also promoted by a protein called cancer procoagulant, which is a cysteine protease with factor X activating properties and is found in patients with solid tumours [11]. Some mucinous adenocarcinomas have the ability to activate factor X to factor Xa in a nonenzymatic way to contribute to coagulation [4].Besides coagulation, fibrinolysis also plays an important part in DIC. Endothelial cells produce, storage, and release tissue-type plasminogen activator and urokinase-type plasminogen activator, which are fibrinolytic activators. ### 3.3. Treatment The treatment of DIC is to treat the underlying disease. In patients with cancer-related DIC, the disorder resolves when the cancer is brought into remission [5]. An experimental study showed a beneficial effect of unfractioned heparin in lipopolysaccharide-induced hypercoagulability [12], but it is not generally used to prevent thrombus formation in patients with DIC.Although the benefit of low molecular weight heparin (LMWH) usage in patients with DIC is uncertain, acutely ill patients, patients with cancer, and older patients are at risk for venous thromboembolism, so when admitted to the hospital, prophylactic treatment with LMWH is indicated [13].In patients with a platelet count <50×109/l and bleeding complications or who have to undergo an invasive procedure, platelet transfusions are indicated. When no such urgent circumstances are present, a platelet count <10–20×109/l is generally used as threshold for platelet transfusions. 
Due to the fact that DIC in cancer is partly caused by a lack of fibrinolysis, one should not use antifibrinolytic agents (e.g., tranexamic acid) [5, 14]. ## 3.1. Clinical Characteristics Disseminated intravascular coagulation (DIC) is a syndrome characterised by bleeding, thrombosis, or both. Laboratory investigation of patients with DIC showed signs of activation of both the clotting and the fibrinolytic system [3, 4].The clinical presentation depends on the underlying disease triggering DIC. Acute DIC is associated with (bacterial) infections and sepsis, crush injuries, hematologic malignancies (especially promyelocytic leukemia), and obstetric complications. Patients with acute DIC may present with petechiae, purpura, and bleeding from wounds and venipuncture sites. Although less obvious, microvascular thrombosis (and sometimes large vessel thrombosis as well) may also occur. Organ systems most often affected are the skin, lungs, kidneys, pituitary, liver, and adrenals [3, 5, 6].In contrast, chronic DIC, most often seen in patients with an underlying malignancy, has a less fulminant presentation. It is characterised by a more gradual, chronic, and systemic activation of the coagulation system. Exhaustion of platelets and coagulation factors may lead to bleeding. Thrombocytopenia and elevated levels of fibrin-related markers can be found. Chronic DIC commonly presents with gingival bleeding, easy and spontaneous bruising, and bleeding from gastrointestinal and urogenital tract. Also diffuse or singular thrombosis is seen. Malignancies of the gastrointestinal tract, pancreas, prostate, and lung are associated with chronic DIC [6]. In a cohort study conducted by Levi, 5,2% of the patients with chronic DIC turned out to have a renal cell carcinoma [7].In a cohort of 1117 patients with solid tumours, 76 (6,8%) turned out to have chronic DIC, and almost half of them (46%) had disseminated cancer to the liver [7]. Another study showed that approximately 75% of patients with disseminated malignancies have laboratory evidence for DIC, of which 25% will eventually become clinically manifested [6]. Patients with cancer-related DIC have a significantly lower survival when compared to patients with malignancies but without DIC [7]. ## 3.2. Pathogenesis Tissue factor plays a crucial part in clotting. It binds factor VII(a) to activate factors IX and X and therefore promote coagulation [5]. Solid tumours are found to express tissue factor, which is needed in angioneogenesis and metastasising [8]. Also chemotherapeutics and angiogenesis blockers are associated with an increased risk of venous and arterial thrombosis, probably due to the harmful effect on the endothelial lining of the vasculature [9]. Activated endothelial cells also produce PAI-1, which is an inhibitor of plasminogen activation. Therefore, endothelial dysfunction leads to diminished activation of fibrinolysis therefore contributing to the favouring of coagulation [5]. Endothelial cells in cancer patients get activated by cytokines after which they show enhanced TF expression and TF-dependent procoagulant activity [10]. IL-6 is a cytokine associated with this response, whereas IL-10 has the ability to inhibit DIC [5].DIC in cancer patients is also promoted by a protein called cancer procoagulant, which is a cysteine protease with factor X activating properties and is found in patients with solid tumours [11]. 
## 4. Conclusion

In this case report, a 70-year-old man with a bleeding diathesis due to a hyperfibrinolytic state caused by DIC related to a renal cell carcinoma was presented. Cancer-related DIC is characterised by a chronic and gradual activation of the coagulation system, with thrombosis and mild bleeding as the key symptoms. In this patient, a more rapid decline in clotting factors and profuse bleeding were seen, an uncommon presentation that poses diagnostic and therapeutic dilemmas. --- *Source: 1023538-2017-04-05.xml*
# Tunable Photoluminescence of Polyvinyl Alcohol Electrospun Nanofibers by Doping of NaYF4: Eu+3 Nanophosphor

**Authors:** Sanjeev Kumar; Garima Jain; B. P. Singh; S. R. Dhakate

**Journal:** Journal of Nanomaterials (2020)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2020/1023589

---

## Abstract

NaYF4: Eu+3 nanophosphor/polyvinyl alcohol (PVA) composite nanofibers have been successfully fabricated using the electrospinning technique. The electrospun polymeric nanofibers were characterized by scanning electron microscopy (SEM), high-resolution transmission electron microscopy (HRTEM), X-ray diffraction (XRD), photoluminescence (PL), and Raman spectroscopy. The flexible polymeric mats exhibited strong red emission at 724 nm at an excitation wavelength of 239 nm. At a concentration of 5%, the NaYF4: Eu+3 nanophosphor is embedded homogeneously inside the PVA matrix, and the strong red emission peak is attributed to the presence of Eu+3 ions. The characterization of the mats confirmed the uniform dispersion and tunable photoluminescence properties. These photoluminescent nanofibers (PLNs) can be fabricated easily and are potentially useful in solid-state lighting applications.

---

## Body

## 1. Introduction

Fascinating one-dimensional (1D) nanostructures have captured the attention of the scientific community because of their outstanding properties, such as a high surface-area-to-volume ratio and flexible, tunable surface morphologies. 1D nanofibers have been prepared by catalytic synthesis, interfacial polymerization, vapor deposition, vapor-phase transport, gel spinning, electrospinning, self-assembly, template synthesis, melt spinning, electrostatic spinning and drawing, etc. [1–4]. Among these, electrospinning is often the method of choice for nanofiber fabrication because of its versatility. This cost-effective, simple, and convenient technique utilizes electrostatic forces to fabricate exceptionally long and uniform polymeric 1D nanofibers with large surface area and a high length-to-diameter ratio [5–8]. It can produce continuous ultrathin fibers from polymers, from composites of inorganic and organic luminescent nanoparticles with polymers, and from metals and semiconductors, with diameters ranging from micrometers (μm) to nanometers (nm). In most studies, electrospun nanofibers for solid-state lighting have been prepared from various polymers such as polyvinyl alcohol (PVA), polyacrylonitrile (PAN), poly(methyl methacrylate) (PMMA), polystyrene (PS), poly(ethylene oxide) (PEO), polyvinylpyrrolidone (PVP), polyvinylidene difluoride (PVdF), polyvinylcarbazole (PVK), poly[2-methoxy-5-(2′-ethylhexyloxy)-p-phenylene vinylene] (MEH-PPV), and polydiallyldimethylammonium chloride (PDAC) by using different additives. These polymers are being used to fabricate light-emitting nanofibers via the electrospinning technique. Cadmium sulfide (CdS), cerium-doped yttrium aluminum garnet (YAG: Ce3+; Y3−xAl5O12: Cex3+), silica nanoparticles (SNP), fullerene (C60), europium-doped lutetium oxide (Lu2O3:Eu3+), europium-doped zirconium dioxide (ZrO2:Eu3+), germanium nanocrystals (Ge-NCs), terbium-doped silicon dioxide (SiO2:Tb3+), europium-, ytterbium-, and erbium-doped sodium yttrium fluoride luminescent composite nanophosphor (NaYF4: Eu3+ @ NaYF4: Yb3+, Er3+), and a cyclopentadiene-derivative AIE-active luminogen have been incorporated into different polymer matrices to obtain luminescent nanofibers.
Carbon nanofibers have also been fabricated from polyacrylonitrile (PAN) using the electrospinning technique for optoelectronic devices, biological imaging, and photochemical reaction applications [9–33]. Among the many polymers applied in solid-state lighting, we focus our attention on polyvinyl alcohol (PVA), as it is a water-soluble and biodegradable material. Its excellent properties, such as thermal stability, chemical resistance, biocompatibility, hydrophilicity, emulsifying ability, adhesivity, and low cost, make it the material of choice for luminescent nanofiber fabrication. PVA is a semicrystalline polymer, and its aqueous solution appears transparent and colorless. It also exhibits a unique film-forming capability and a nontoxic nature. PVA shows potential applications in various fields such as biomedicine and drug delivery [34–37]. Recently, researchers have shown keen interest in exploring the photoluminescence properties of PVA and its suitability for electrospun nanofiber fabrication.

The development of electrospun photoluminescent nanofibers (PLNs) has gained much attention due to their potential applications in many fields such as solid-state lighting, nonlinear optical devices, and biological labels [38–42]. The incorporation of functional additives such as nanophosphors, nanoparticles, quantum dots, nanocrystals, and carbon quantum dots into a polymeric nanofiber matrix yields distinctive luminescent, optical, magnetic, and electrical properties [43–45]. In particular, PLNs are of considerable significance when rare earth ions such as Eu3+, Er3+, Tb3+, and Tm3+ are doped into the polymeric matrix. These PLNs are widely used in solid-state lighting applications including solid-state lasers, scintillators, and planar waveguides. Moreover, such polymeric nanofiber mats are exceptionally interesting structures because of their unique properties such as mechanical flexibility and very light weight [46–48]. To our knowledge, no studies have been reported on the fabrication of photoluminescent electrospun nanofibers of polyvinyl alcohol (PVA) doped with NaYF4: Eu+3 nanophosphor. Therefore, in the present paper, NaYF4: Eu+3 nanophosphor/polyvinyl alcohol (PVA) composite nanofibers were prepared using the electrospinning technique with different concentrations of NaYF4: Eu+3 nanophosphor. Herein, we also focus on the morphology and photoluminescence properties of the composite nanofibers at room temperature. The produced photoluminescent nanofibers (PLNs) would have potential usage in solid-state lighting applications.

## 2. Experimental Methods

### 2.1. Materials

The rare earth oxides yttrium oxide (Y2O3) and europium oxide (Eu2O3) of 99.99% purity were used for this study. All other chemicals, including sodium fluoride (NaF) (99.9%), sodium hydroxide (NaOH), hydrochloric acid (HCl), ethanol, distilled water, and polyvinyl alcohol (PVA), were of analytical grade and used without further purification.

### 2.2. Synthesis of NaYF4: Eu+3 Nanophosphor

Eu2O3 and Y2O3 were separately dissolved in dilute HCl at 60°C under constant magnetic stirring to prepare the stock solutions. 2 ml of 0.5 M sodium fluoride (NaF) solution was prepared in deionized water in a three-neck flask, and 2 ml of the aqueous chloride salt (YCl3/EuCl3) solution was also introduced into the same flask.
In a typical synthesis, NaOH (1.5 g) was dissolved in ethanol (40 ml) and added dropwise into the three-neck flask solution from a burette under constant magnetic stirring at 40°C. The reaction was kept under vigorous stirring for 1 h. At the end of the reaction, the white colloidal precipitates were transferred to a 50 ml autoclave and heated at 180°C for 24 h. The autoclave was allowed to cool to room temperature, and the precipitates were collected by centrifugation at 5000 rpm, washed with distilled water, and dried in an incubator at 100°C for 12 h. The NaYF4: Eu+3 nanophosphor was then used to fabricate the polymeric nanofiber mat of polyvinyl alcohol (PVA).

### 2.3. Electrospinning

The polymeric photoluminescent nanofibers (PLNs) were fabricated via the electrospinning technique by using NaYF4: Eu+3 nanophosphor and PVA. Figure 1 shows the schematic diagram of the PLN fabrication process. As-prepared NaYF4: Eu+3 nanophosphor (20 mg) was dispersed in 4.6 g distilled water by ultrasonication for 1 h. Then, 400 mg PVA was introduced into the dispersion, which was kept under continuous stirring with a magnetic bead at room temperature for 16 h. The concentration of nanophosphor in solution was kept at 5% with respect to PVA. The resulting homogeneous solution was loaded into a 5 ml disposable syringe with a needle of nozzle size 24 G. A 13 × 14 cm aluminum sheet was wrapped on the collector to obtain well-aligned nanofibers. The syringe was placed on the stand of the electrospinning machine (ESPIN-NANO PICO, Chennai) to fabricate the electrospun nanofiber mat with the following parameters: distance between needle and collector, 20 cm; flow rate, 0.3 ml/h; collector speed, 2000 rpm; and voltage, 15 kV. Consequently, well-aligned nanofibers were obtained on the aluminum sheet wrapped on the collector. Varied concentrations of nanophosphor (3, 5, and 8%) were used to fabricate the nanofiber mats.

Figure 1: Schematic diagram for fabrication of polymeric photoluminescent nanofibers (PLNs) using the electrospinning technique.
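As a quick arithmetic check of the recipe above, the 5% nanophosphor loading follows directly from the stated masses. The short Python sketch below verifies this and records the stated spinning parameters in one place; the variable names are ours, not from the paper.

```python
# Sanity check of the spinning-solution composition and a record of the
# electrospinning parameters reported above (values from the text; the
# names are illustrative, not from the original work).

nanophosphor_mg = 20.0   # NaYF4:Eu+3 dispersed in 4.6 g distilled water
pva_mg = 400.0           # PVA added to the dispersion

wt_percent = 100.0 * nanophosphor_mg / pva_mg
print(f"nanophosphor relative to PVA: {wt_percent:.0f}%")  # -> 5%

electrospinning_params = {
    "needle_to_collector_cm": 20,
    "flow_rate_ml_per_h": 0.3,
    "collector_speed_rpm": 2000,
    "voltage_kV": 15,
    "needle_gauge": 24,
}
```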
## 3. Results and Discussion

### 3.1. Morphology of NaYF4: Eu+3/Polyvinyl Alcohol Nanofibers

The morphology of the electrospun fibers was characterized by scanning electron microscopy (SEM) using a ZEISS EVO 18 instrument. The well-aligned morphology of the electrospun fibers is easily seen in the SEM micrographs and depends directly on the experimental setup of the electrospinning machine. A JEOL 2100F high-resolution transmission electron microscope (HRTEM) was used for further characterization of the nanofibers. Certain parameters can affect the morphology of the fibers during the experiment, such as the viscosity, conductivity, and concentration of the solutions as well as the applied voltage, flow rate, collector speed, and distance between the needle of the syringe and the collector. Figures 2(a)–2(c) show SEM images of the NaYF4: Eu+3/polyvinyl alcohol nanofibers at 5% loading. Figure 2(d) is an HRTEM image of the nanofibers, which reveals that the nanophosphors were embedded homogeneously inside the PVA mat. The nanofibers collected from the collector had diameters between 166 nm and 487 nm. Since the nanophosphor was synthesized separately beforehand, the size of the nanofibers does not affect the particle size (~35 nm) of the nanophosphor. These nanoparticles were successfully embedded in the PVA shell via the electrospinning technique. It was also observed that the nanophosphor was incorporated into the polymer matrix without any change in its photoluminescence properties during nanofiber fabrication via electrospinning. Photoluminescence properties (excitation/emission) can be affected by the material that is doped into the host lattice of the nanophosphor; in the present NaYF4: Eu+3 nanophosphor, Eu is used as the dopant in the NaYF4 host lattice.

Figure 2: (a–c) SEM images of NaYF4: Eu+3/polyvinyl alcohol nanofibers at 5%; (d) HRTEM image of the nanofibers, in which the dotted circle marks nanophosphor particles inside the fibers.
### 3.2. Photoluminescence (PL) of NaYF4: Eu+3/Polyvinyl Alcohol Nanofibers

Photoluminescence (PL) spectra were recorded with a Perkin-Elmer LS 55 spectrofluorometer. The emission spectrum of the NaYF4: Eu+3/polyvinyl alcohol nanofibers shows characteristic sharp peaks at 538 nm and 724 nm associated with the 5D0→7FJ transitions of the Eu+3 ion. 5% NaYF4: Eu+3 was doped into the polyvinyl alcohol to fabricate the nanofibers via electrospinning. These nanofibers display typical Eu+3 emission transitions in the 500–725 nm region [11]. Figure 3(a) shows the downshifted photoluminescence spectrum of the nanofibers upon excitation at 239 nm at room temperature. A hypersensitive red emission peak at 724 nm was observed in the red spectral region of the emission spectrum; this sharp red peak is ascribed to the electric dipole 5D0→7F4 transition. The emission peaks at 538 nm and 554 nm are due to the magnetic dipole 5D0→7F0 transition [49]. The magnetic dipole transition is weaker than the electric dipole transition. The PL emission spectrum shows that the Eu+3 ions are located at sites without inversion symmetry [50–52]. Pure PVA photoluminescence emission peaks have been observed at 420 nm and 434 nm [34], so the NaYF4: Eu+3 nanophosphor extends the photoluminescence of the nanofibers up to 724 nm. The International Commission on Illumination (CIE) 1931 color space was used to draw the chromaticity diagram for the polymeric nanofibers at the excitation wavelength of 239 nm, with the coordinates X=0.716 and Y=0.282. Figure 3(b) represents the CIE diagram, which suggests good color quality of the luminescence of the nanofibers containing Eu3+ ions.

Figure 3: (a) Photoluminescence spectra of NaYF4: Eu+3/polyvinyl alcohol nanofibers upon excitation at 239 nm; (b) CIE color coordinate diagram of the nanofibers corresponding to excitation at 239 nm (X=0.716, Y=0.282).
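The reported chromaticity can be sanity-checked numerically. The sketch below assumes the quoted (X=0.716, Y=0.282) are CIE 1931 chromaticity coordinates (x, y) and pushes them through the standard xyY→XYZ→linear sRGB transform; the point falls outside the sRGB gamut (negative green and blue), so it is clipped for display, confirming a deeply saturated red consistent with Figure 3(b).

```python
import numpy as np

# Illustrative conversion of the reported CIE 1931 chromaticity
# (x, y) = (0.716, 0.282) to an sRGB color, assuming the paper's (X, Y)
# values are chromaticity coordinates. Luminance Y is set to 1 arbitrarily.
x, y, Y = 0.716, 0.282, 1.0

X = x * Y / y                      # xyY -> XYZ
Z = (1.0 - x - y) * Y / y

M = np.array([[ 3.2406, -1.5372, -0.4986],   # XYZ -> linear sRGB (D65)
              [-0.9689,  1.8758,  0.0415],
              [ 0.0557, -0.2040,  1.0570]])
rgb = M @ np.array([X, Y, Z])

# The point lies outside the sRGB gamut (negative green/blue components),
# i.e. a highly saturated red; clip and normalize for display.
rgb = np.clip(rgb, 0.0, None)
rgb /= rgb.max()
print(rgb)  # ~[1. 0. 0.] -> deep red, consistent with Figure 3(b)
```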
### 3.3. X-Ray Diffraction (XRD) and Raman Spectra of NaYF4: Eu+3/Polyvinyl Alcohol Nanofibers

X-ray diffraction characterization of the nanofibers was performed on a Rigaku (Japan) diffractometer with a Cu Kα X-ray source (λ = 0.15418 nm). The NaYF4: Eu+3/polyvinyl alcohol nanofibers were collected from the aluminium (Al) foil that was used as the substrate for fiber deposition during electrospinning. The cubic structure of the NaYF4: Eu+3 nanophosphor was identified with the help of JCPDS card no. 77-2042. The cubic crystal structure of NaYF4 exhibits peaks at angles 2θ = 28.85° (111), 33.17° (200), 47.6° (220), 53.88° (311), 56.76° (222), 69.85° (400), 76.10° (331), and 79° (420) [11]. The XRD pattern of the nanofibers is shown in Figure 4(a). According to JCPDS card no. 53-1857, the two diffraction peaks observed at angles 2θ = 19.46° and 22.32° are attributed to PVA. The XRD pattern also showed broad and sharp peaks at angles 28.96° (111), 33.50° (200), 48° (220), 56° (222), and 69.94° (400). The peaks observed at angles 38.30°, 44.52°, 65.04°, and 78.18° are attributed to the Al foil [53, 54]. The XRD result shows that the nanophosphor peaks are slightly shifted to higher angles owing to stress in the PVA shell.

Figure 4: (a) XRD pattern and (b) Raman spectra of NaYF4: Eu+3/polyvinyl alcohol nanofibers.

A Renishaw micro-Raman spectrometer (inVia Reflex) with λ = 514 nm laser excitation was used to record the Raman spectra of the polymeric nanofibers, shown in Figure 4(b). The electrospun nanofibers reveal broad scattering peaks at 2917 cm-1, 2745 cm-1, and 1428 cm-1, which confirm the existence of polyvinyl alcohol (PVA) and are assigned to the stretching vibrations of CH2, CH, and OH, respectively [55, 56]. The nanophosphor has a discrete Raman spectrum in the 2565 to 2202 cm-1 region, which exhibits the stretching modes of CH2 [57]. The Raman spectra show that the scattering peaks of PVA are slightly shifted owing to the uniform incorporation of the nanophosphor into the PVA shell.
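As a quick consistency check of the cubic indexing quoted above, Bragg's law together with the cubic d-spacing relation recovers the lattice constant from the listed peak positions. This is our illustrative calculation, not part of the original analysis; the wavelength and peak positions are taken from the text.

```python
import math

# Cross-check of the cubic indexing: Bragg's law gives d = lam/(2 sin(theta)),
# and for a cubic lattice a = d * sqrt(h^2 + k^2 + l^2).
lam = 0.15418  # nm, Cu K-alpha

def cubic_lattice_constant(two_theta_deg, hkl):
    theta = math.radians(two_theta_deg / 2.0)
    d = lam / (2.0 * math.sin(theta))          # Bragg's law
    h, k, l = hkl
    return d * math.sqrt(h*h + k*k + l*l)      # cubic d-spacing relation

for two_theta, hkl in [(28.85, (1, 1, 1)), (33.17, (2, 0, 0)), (47.6, (2, 2, 0))]:
    print(hkl, f"a = {cubic_lattice_constant(two_theta, hkl):.3f} nm")
# All three reflections give a ~ 0.54 nm, consistent with cubic NaYF4.
```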
## 4. Conclusion

NaYF4: Eu+3/polyvinyl alcohol nanofibers were successfully prepared using the electrospinning technique. The well-aligned photoluminescent nanofibers (PLNs) have diameters from 166 to 487 nm. SEM and HRTEM micrographs showed that the NaYF4: Eu+3 nanophosphor is mixed homogeneously in the PVA matrix. The PL properties of the nanofibers are considerably affected by the strong coordination interaction between the nanophosphor and PVA. The enhanced intensity ratio of the 5D0⟶7F0 to 5D0⟶7F4 transitions of the nanofibers indicates a more polarized chemical environment of the Eu+3 ions. The PL spectra of the NaYF4: Eu+3/PVA nanofibers displayed strong red emission. These nanofibers are promising candidates for solid-state lighting applications. --- *Source: 1023589-2020-03-04.xml*
# Characterization of Titanium Oxide Nanoparticles Obtained by Hydrolysis Reaction of Ethylene Glycol Solution of Alkoxide

**Authors:** Naofumi Uekawa; Naoya Endo; Keisuke Ishii; Takashi Kojima; Kazuyuki Kakegawa

**Journal:** Journal of Nanotechnology (2012)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2012/102361

---

## Abstract

Transparent and stable sols of titanium oxide nanoparticles were obtained by heating a mixture of an ethylene glycol solution of titanium tetraisopropoxide (TIP) and an NH3 aqueous solution at 368 K for 24 h. The concentration of the NH3 aqueous solution affected the structure of the obtained titanium oxide nanoparticles. For NH3 aqueous solution concentrations higher than 0.2 mol/L, a mixture of anatase TiO2 nanoparticles and layered titanic acid nanoparticles was obtained. The obtained sol was very stable, without formation of aggregated precipitates and gels. Coordination of ethylene glycol to Ti4+ ions inhibited the rapid hydrolysis reaction and aggregation of the obtained nanoparticles. The obtained titanium oxide nanoparticles had a large specific surface area, larger than 350 m²/g, and showed enhanced adsorption of cationic dye molecules. This selective adsorption corresponded to the presence of layered titanic acid on the obtained anatase TiO2 nanoparticles.

---

## Body

## 1. Introduction

Titanium dioxide (TiO2, titania) is an n-type oxide semiconductor that shows photocatalytic activity and photoconductivity [1, 2]. Various applications of TiO2 particles have been studied in recent years, including photocatalysts, solar cells, UV-shielding materials, and electronic devices [3–6]. For these applications, a simple synthesis method to obtain TiO2 nanoparticles with highly homogeneous dispersion is required [7, 8]. TiO2 nanoparticles are also useful in many important applications for addressing environmental problems [9, 10]. These nanoparticles can be used for the formation of TiO2 thin films with optical transparency and photocatalytic activity. Furthermore, the surface characteristics of TiO2 nanoparticles strongly affect their application area. For example, Grätzel developed photoelectrochemical systems with dye-sensitized anatase TiO2 semiconductor electrodes [11, 12]. The characteristics of these solar cells depend on the adsorption interaction between dye molecules and the TiO2 nanoparticle surface. Therefore, it is also important to control the molecular adsorption properties of TiO2 nanoparticles.

Rath et al. prepared size-controlled TiO2 nanoparticles in reverse micelles using the surfactant Aerosol-OT (AOT) [13]. Zaki et al. obtained anatase TiO2 nanoparticles through hydrolysis of an ethanol solution of titanium tetraisopropoxide by adding nitric acid [14]; they examined the adsorption of amine molecules to investigate the surface adsorption sites. Nakayama and Hyashi prepared stable sols of TiO2 with surface modification by carboxylic acid and amine [15]. Some reports have also described methods for the preparation of TiO2 nanoparticles and stable sols using peroxotitanic acid as a precursor [16].

This study examines a novel preparation method for titanium oxide nanoparticles and their stable sols through the hydrolysis reaction of an ethylene glycol solution of titanium alkoxide with NH3 aqueous solution. Ethylene glycol easily coordinates to Ti4+ ions and controls the hydrolysis reaction [17, 18].
Furthermore, NH3 molecules also strongly coordinate to the Ti4+ ions. It is expected that restricting the rapid hydrolysis reaction enables the production of titanium oxide nanoparticles without aggregation. In addition, the surface characteristics of the titanium oxide nanoparticles were examined by measuring adsorption isotherms of cationic and anionic dye molecules.

## 2. Experiments

### 2.1. Preparation of Titanium Oxide Nanoparticles and Their Sols

The titanium oxide nanoparticles were prepared as follows. NH3 aqueous solution (1 mol/L) was added to 50 mL of an ethylene glycol solution (0.1 mol/L) of titanium tetraisopropoxide (TIP), and the total volume was adjusted to 100 mL. Under these circumstances, no precipitate was observed, although an opaque solution was obtained. This solution was heated at 368 K for 24 h in a closed glass vessel, and a stable sol was obtained without precipitation. All chemicals used in this preparation were of reagent grade (Wako Pure Chemical Industries Ltd.). To control the particle size and surface properties of the obtained titanium oxide nanoparticles, the same synthetic process was also conducted using NH3 (aq) at other concentrations in the range of 0.1 mol/L–1 mol/L. Hereinafter, this concentration is designated [NH3].

To separate the obtained particles from the sol, 50 mL of the obtained sol was poured into a cellulose tube for dialysis, and the cellulose tube was soaked in 500 mL of H2O for 3 h at room temperature; the 500 mL of H2O was exchanged five times. Finally, the sol in the cellulose tube was dried at 348 K for 12 h.

Anatase TiO2 particles were also prepared by a simple hydrolysis reaction between the Ti alkoxide and H2O. H2O was added to 0.01 mol of TIP, and the total volume was adjusted to 100 mL. The mixed solution of TIP and H2O was kept in a closed glass beaker and heated at 368 K for 24 h. A white precipitate was obtained by the hydrolysis reaction, separated by centrifugation at 3000 rpm for 5 min, and dried at 348 K for 24 h.

### 2.2. Characterization

The structure of the obtained particles was characterized using X-ray diffraction (XRD) (Cu Kα, 40 kV, 100 mA, MXP-18; Bruker AXS Co., Ltd.). The particle shape was observed using field emission scanning electron microscopy (FE-SEM: JSM-6330; JEOL) after osmium coating, a conductive-film formation method for electron microscope observation in which an osmium metal thin film is deposited on the sample surface by DC glow discharge in osmium oxide gas. The ultraviolet-visible (UV-VIS) spectra of the sols and solutions were measured using a quartz cell (UV2000; Hitachi Ltd.) at wavelengths of 300–800 nm, with an optical path length of 1 cm. Thermogravimetric and differential thermal analysis (TG-DTA) was performed in air at a flow rate of 100 mL/min, with a sample weight of 10 mg and a heating rate of 10 K/min; the upper limit of the measuring temperature was 1073 K. The N2 adsorption isotherms of the obtained powders were measured at 77 K by the volumetric method (BELSORP-max, BEL Japan, Inc.) after pretreatment at 383 K in 1 mPa for 1 h. The sample weight used was ca. 0.1 g.
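N2 isotherms measured this way are commonly reduced to a specific surface area with the BET equation. The sketch below shows that reduction on hypothetical isotherm points, chosen only so the result lands near the >350 m²/g reported for these samples; none of the numbers in the arrays come from the paper.

```python
import numpy as np

# BET analysis of a (hypothetical) N2 isotherm measured volumetrically at
# 77 K, as described above. Linear BET form, fitted in the usual
# 0.05-0.30 relative-pressure window:
#   x / (v (1 - x)) = 1/(vm c) + (c - 1)/(vm c) * x,   x = p/p0
p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])    # p/p0 (hypothetical)
v_ads = np.array([70.8, 81.6, 89.1, 96.2, 103.6, 111.7])  # cm^3(STP)/g (hypothetical)

y = p_rel / (v_ads * (1.0 - p_rel))
slope, intercept = np.polyfit(p_rel, y, 1)
v_m = 1.0 / (slope + intercept)                 # monolayer volume, cm^3(STP)/g

# Surface area from the monolayer capacity: N2 cross-section 0.162 nm^2,
# molar volume 22414 cm^3(STP)/mol.
N_A, sigma_N2, V_molar = 6.022e23, 0.162e-18, 22414.0
S_BET = v_m / V_molar * N_A * sigma_N2          # m^2/g
print(f"v_m = {v_m:.1f} cm^3(STP)/g, S_BET = {S_BET:.0f} m^2/g")  # ~350 m^2/g
```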
First, 50 mL portions of methylene blue aqueous solution were prepared at the following concentrations: 1 × 10⁻⁴ mol/L, 2 × 10⁻⁴ mol/L, 3 × 10⁻⁴ mol/L, 4 × 10⁻⁴ mol/L, 6 × 10⁻⁴ mol/L, and 8 × 10⁻⁴ mol/L. Then, 0.05 g of the titanium oxide powder was added to 50 mL of each methylene blue solution. The pH values of the dye solutions with the dispersed titanium oxide powder were in the range 6.5–7.5 both before and after adsorption. Each dye solution with the titanium oxide powder was stirred at 500 rpm at 298 K for 24 h to reach adsorption equilibrium. From each solution, 3 mL of the methylene blue solution was separated by filtration with a 0.45 μm membrane filter. The optical absorbance of the filtrate at 655 nm was measured using a UV-VIS spectrometer (UV2000; Hitachi Ltd.), and the methylene blue concentrations were calculated from the absorbance using a working curve. The dye adsorption isotherm measurements described above were also conducted using other dyes, namely crystal violet, Evans blue, and eosin Y. The respective wavelengths used for estimating the aqueous concentrations of crystal violet, Evans blue, and eosin Y were 590 nm, 608 nm, and 517 nm.
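The equilibrium adsorbed amount follows from a simple mass balance on each batch. The Python sketch below illustrates this calculation under stated assumptions: the working-curve slope and the filtrate absorbance are hypothetical values chosen only for illustration, not data from this study.

```python
# Sketch of the batch-adsorption mass balance behind Section 2.3.
# The working-curve slope k and the absorbance A_eq are hypothetical
# illustration values, not measured data from this study.

V = 0.050        # solution volume, L (50 mL)
m = 0.05         # adsorbent mass, g
c0 = 4.0e-4      # initial methylene blue concentration, mol/L

k = 7.4e4        # assumed working-curve slope at 655 nm, L/mol
A_eq = 0.37      # hypothetical filtrate absorbance after 24 h
c_eq = A_eq / k  # equilibrium dye concentration from the working curve

# Amount adsorbed per gram of titanium oxide powder.
q_eq = (c0 - c_eq) * V / m
print(f"c_eq = {c_eq:.2e} mol/L, q_eq = {q_eq:.2e} mol/g")
```

For these inputs the sketch yields roughly 4 × 10⁻⁴ mol/g, the same order of magnitude as the saturated adsorbed amounts reported in Section 3.3.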
## 3. Results and Discussion

### 3.1. Characterization of the Obtained Titanium Oxide Nanoparticles

Figure 1 portrays XRD patterns of the particles obtained by heating the mixture of the ethylene glycol solution of TIP and NH3 (aq) at 368 K for 24 h. The XRD peaks in Figures 1(a)–1(e) marked with open circles can be assigned to anatase TiO2. When the concentration of the NH3 (aq) mixed with the ethylene glycol solution of TIP was 0.1 mol/L, the obtained titanium oxide was anatase TiO2, as shown in Figure 1(a). When the mixture of TIP and H2O was heated at 368 K for 24 h, anatase TiO2 was also obtained, as shown in Figure 1(e). In Figure 1(b), a very weak and broad peak around 2θ < 10° can be assigned to a layered titanic acid structure [19]. When the NH3 (aq) concentration was 0.2 mol/L or higher, the XRD peaks shown in Figures 1(b)–1(d) can be assigned to the layered titanic acid lattice and anatase TiO2. Accordingly, the NH3 (aq) concentration affected the crystal structure of the obtained particles. Furthermore, crystallite sizes were determined by applying Scherrer's equation to the anatase TiO2 (101) peak at 2θ = 25.8°. The crystallite sizes determined from the XRD patterns of Figures 1(a)–1(e) were 4.68 nm, 4.60 nm, 2.88 nm, 2.27 nm, and 10.7 nm, respectively. Accordingly, the crystallite size decreased with increasing NH3 (aq) concentration.
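As an illustration of this estimate, a minimal sketch of the Scherrer calculation is given below; the peak width (FWHM) is a hypothetical value, while K = 0.9 and the Cu Kα wavelength are standard assumptions rather than parameters reported in the paper.

```python
import math

# Scherrer estimate D = K * lambda / (beta * cos(theta)) applied to the
# anatase (101) reflection at 2-theta = 25.8 deg. The FWHM is hypothetical.

wavelength_nm = 0.15406   # Cu K-alpha wavelength
K = 0.9                   # shape factor (assumed)
two_theta_deg = 25.8      # anatase TiO2 (101) peak position
fwhm_deg = 1.8            # hypothetical full width at half maximum

theta = math.radians(two_theta_deg / 2.0)
beta = math.radians(fwhm_deg)   # FWHM converted to radians

D_nm = K * wavelength_nm / (beta * math.cos(theta))
print(f"crystallite size ~ {D_nm:.1f} nm")
```

With these inputs the estimate is about 4.5 nm, of the same order as the 4.68 nm reported for Figure 1(a); broader peaks yield proportionally smaller crystallites.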
Furthermore, the crystallite size of the anatase TiO2 in Figure 1(a) was smaller than that in Figure 1(e). This result means that growth of the anatase TiO2 crystallites was restricted by coordination of the coexisting NH3 and ethylene glycol molecules in the solution.

Figure 1: XRD patterns of the particles obtained by heating a mixture of the ethylene glycol solution of TIP and NH3 (aq) at 368 K for 24 h. The NH3 (aq) concentrations were (a) 0.1 mol/L, (b) 0.2 mol/L, (c) 0.5 mol/L, and (d) 1 mol/L; (e) shows the particles obtained by heating the mixture of TIP and H2O at 368 K for 24 h. ◯: anatase TiO2; ⚫: layered titanic acid.

Figure 2 shows FE-SEM images of the obtained titanium oxide particles. Figures 2(a), 2(b), and 2(c), respectively, show images of particles obtained using NH3 (aq) at concentrations of 0.1 mol/L, 0.5 mol/L, and 1 mol/L. For [NH3] values of 0.1 mol/L, 0.5 mol/L, and 1 mol/L, the average particle sizes were 32.1 nm, 31.5 nm, and 28.0 nm, respectively; that is, the particle sizes of the obtained titanium oxide particles were close to one another. The NH3 aqueous solution concentration thus did not affect the particle size of the titanium oxide nanoparticles, although, as discussed for Figure 1, it did affect the crystal structure. The average particle sizes determined from the FE-SEM images were larger than the crystallite sizes determined using Scherrer's equation, which means that the particles observed in the FE-SEM images were aggregates of crystallites. Furthermore, when the NH3 (aq) concentration was 0.5 mol/L or more, the particle shape changed from spherical to plate-like. This is likely one of the reasons for the difference between the crystallite size and the average particle size.

Figure 2: FE-SEM images of the titanium oxide particles obtained by heating the mixture of the ethylene glycol solution of TIP and NH3 (aq) at 368 K for 24 h. The [NH3] values were (a) 0.1 mol/L, (b) 0.5 mol/L, and (c) 1 mol/L.

To examine the residual organic molecules and the degree of crystallization of the obtained nanoparticles, TG-DTA curves were measured. Figure 3(a) shows the TG-DTA curve of the nanoparticles obtained using NH3 (aq) with [NH3] = 0.1 mol/L. The TG curve showed a 15.1% weight loss below 400 K, corresponding to desorption of H2O adsorbed on the nanoparticles. Furthermore, a steep weight decrease was observed around 523 K, and the DTA curve showed a sharp exothermic peak at the same temperature. Therefore, the weight loss around 523 K corresponded to the oxidation of TIP molecules that had not reacted in the hydrolysis reaction; the weight loss around this temperature was 8.1%. Furthermore, a slight weight decrease (4.82%) occurred between 530 K and 700 K, corresponding to dehydration of surface OH groups and desorption of adsorbed ammonium ions. Figure 3(b) shows the TG-DTA curve of the nanoparticles obtained using NH3 (aq) with [NH3] = 1 mol/L. In this case, a large weight loss of 24%, caused by desorption of adsorbed H2O, was likewise observed below 400 K. The DTA curve had an exothermic peak around 535 K, a temperature close to that in Figure 3(a); therefore, this DTA peak corresponds to the oxidation of organic molecules. The weight loss around this temperature was 2.4%, which was less than the corresponding weight loss in Figure 3(a).
This result means that TIP molecules in the 1 mol/L NH3 (aq) were hydrolyzed more effectively than those in the 0.1 mol/L NH3 (aq). Furthermore, the slight decrease in mass corresponded to desorption of ammonium ions and surface OH groups, as discussed for Figure 3(a). Accordingly, the hydrolysis reaction of the titanium alkoxide proceeded more effectively at higher concentrations of NH3 (aq).

Figure 3: TG-DTA curves of the titanium oxide nanoparticles obtained by heating the mixture of the ethylene glycol solution of TIP and NH3 (aq). The [NH3] values were (a) 0.1 mol/L and (b) 1 mol/L.

Figure 4: UV-VIS absorption spectra of the sols obtained by heating the mixture of the ethylene glycol solution of TIP and NH3 (aq) at 368 K for 24 h. The solid lines show the spectra of the obtained sols; the NH3 (aq) concentrations were (a) 0.05 mol/L, (b) 0.1 mol/L, (c) 0.2 mol/L, (d) 0.5 mol/L, and (e) 1 mol/L. The broken lines show spectra of isopropanol solutions of TIP with concentrations of (f) 1 mol/L, (g) 0.1 mol/L, and (h) 0.01 mol/L.

### 3.2. Investigation of Dispersion Stability and Formation Process of Sols by UV-VIS Absorption Spectra Measurements

To examine the dispersed state of the particles in the obtained sols, UV-VIS absorption spectra were measured, as presented in Figure 4. When [NH3] was 0.05 mol/L, the optical absorbance at 400 nm was 0.35. In this case, the obtained solution was a slightly opaque sol, and the observed absorbance corresponded to scattering of light. Furthermore, as the wavelength decreased from 400 nm to 320 nm, the optical absorbance increased from 0.35 to 3.00. This increase corresponds to the band gap electron transition of TiO2. In general, a charge transfer transition between O2- and Ti4+ ions also causes optical absorption in this wavelength range [20]. Figures 4(f)–4(h) show optical absorption spectra of isopropanol solutions of TIP with concentrations of 1 mol/L, 0.1 mol/L, and 0.01 mol/L, respectively. According to Figure 4(f), the absorption edge of the charge transfer transition was around 355 nm when the TIP concentration was 1 mol/L. As shown in Figures 4(f)–4(h), as the TIP concentration decreased from 1 mol/L to 0.01 mol/L, the absorption edge shifted from 355 nm to shorter wavelengths. Because the maximum concentration of TIP in the obtained sol was 0.05 mol/L, the absorption edge corresponding to charge transfer was expected to lie below 355 nm. The strong absorption around 375 nm in Figure 4(a) accordingly corresponded to the band gap transition of electrons in TiO2. When [NH3] was greater than 0.1 mol/L, the optical absorbance at 400 nm was less than 0.05. The obtained sols were therefore almost transparent to visible light, and the scattering of light by the particles in the sols was slight. Thus, transparent and homogeneous sols without precipitate are obtainable merely by heating the mixture of the ethylene glycol solution of TIP and NH3 (aq). Furthermore, the wavelength of the absorption edge decreased with increasing [NH3], while remaining longer than 360 nm.
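For orientation, an absorption-edge wavelength translates into a photon energy via E = hc/λ, i.e., about 1239.84 eV·nm divided by λ in nm. A minimal sketch, using the edge wavelengths quoted in this section plus, as an assumed reference point, the commonly cited band gap of bulk anatase (about 3.2 eV, near 387 nm):

```python
# Photon energy corresponding to an absorption-edge wavelength,
# E[eV] ~ 1239.84 / lambda[nm] (hc expressed in eV*nm).

def edge_energy_ev(wavelength_nm: float) -> float:
    return 1239.84 / wavelength_nm

# 344 nm and 360 nm appear in the text; 387 nm corresponds to the
# commonly cited ~3.2 eV band gap of bulk anatase TiO2 (reference value).
for lam in (344, 360, 387):
    print(f"{lam} nm -> {edge_energy_ev(lam):.2f} eV")
```

Edges at or below 360 nm thus correspond to energies above about 3.4 eV, blue-shifted relative to bulk anatase, which is consistent with the quantum size effect discussed below.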
The strong optical absorption corresponded to the band gap electron transition of TiO2 and indicated the formation of TiO2 particles in the solution during the heating process.

To examine the formation process of titanium oxide nanoparticles in the TIP solution, we followed the change of the UV-VIS absorption spectra of the solution during the heating process. Figure 5 portrays the UV-VIS absorption spectra of the sols obtained by heating the mixture of the ethylene glycol solution of TIP and NH3 (aq) at 368 K. As presented in Figure 5(a), when the heating time was 0 h, that is, before the heat treatment, the optical absorbance at 400 nm was 0.52. The obtained solution was an opaque sol, and the observed absorbance corresponded to scattering of light. The optical absorbance increased from 0.52 to 3.00 over the wavelength range 400–320 nm. This spectrum resembled that portrayed in Figure 4(g); therefore, this increase of absorbance corresponds to the charge transfer transition between O2- and Ti4+ ions. As presented in Figure 5(b), the optical absorbance at 400 nm of the solution heated for 1 h was 0.01. Accordingly, the obtained solution was almost transparent in the visible wavelength range, and the absorption edge was 344 nm. For heating times longer than 1 h, the optical absorption at 400 nm remained almost 0, which indicates that the solution stayed transparent and a clear sol was obtained. The absorption edge of the steep increase of absorbance shifted from 344 nm to 359 nm as the heating time increased from 1 h to 24 h. This shift of the absorption edge indicated the formation of the anatase TiO2 lattice. If the shift of the absorption edge had been caused by a decrease of the TIP concentration in the solution, the wavelength of the charge transfer absorption would have shifted to shorter wavelengths with increasing heating time. The band gap energy, which corresponds to the wavelength of the absorption edge, depends on the size of the nanoparticles through the quantum size effect. Therefore, the shift of the absorption edge signifies the formation of titanium oxide and a change in its particle size. Accordingly, heating the mixture of the ethylene glycol solution of TIP and NH3 (aq) at 368 K for more than 1 h was sufficient to obtain a transparent and homogeneous sol of titanium oxide nanoparticles.

Figure 5: UV-VIS absorption spectra of the mixture of the ethylene glycol solution of TIP and NH3 (aq) ([NH3] = 1 mol/L). The heating temperature was 368 K. The heating times were (a) 0 h (before heating), (b) 1 h, (c) 3 h, (d) 6 h, (e) 12 h, and (f) 24 h.

### 3.3. Dye Adsorption Characteristics of the Titanium Oxide Nanoparticles Obtained by Hydrolysis Reaction of the Ethylene Glycol Solution of TIP

To examine the surface properties of the obtained titanium oxide nanoparticles, N2 adsorption isotherms were measured at 77 K, and the BET specific surface area was estimated. Figure 6 shows N2 adsorption isotherms of the titanium oxide nanoparticles obtained with 0.1 mol/L and 1 mol/L NH3 (aq). Additionally, the N2 adsorption isotherm of the conventional anatase TiO2 particles obtained by heating the mixture of TIP and H2O at 368 K for 24 h is also shown in Figure 6. Both isotherms obtained with the NH3 aqueous solutions belong to type I as defined by IUPAC, whereas the isotherm of the conventional anatase TiO2 particles belongs to type IV, with a hysteresis loop that can be classified as type H2 [21].
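For reference, the IUPAC pore-size regimes invoked in the following paragraph are micropores (< 2 nm), mesopores (2–50 nm), and macropores (> 50 nm). A trivial sketch applying this convention to the average pore diameters of Table 1:

```python
# IUPAC pore-size regimes: micro < 2 nm, meso 2-50 nm, macro > 50 nm.

def pore_regime(diameter_nm: float) -> str:
    if diameter_nm < 2.0:
        return "microporous"
    if diameter_nm <= 50.0:
        return "mesoporous"
    return "macroporous"

# Average pore diameters reported in Table 1.
for label, d_nm in (("[NH3] = 0.1 M", 1.34), ("[NH3] = 1 M", 1.17), ("H2O", 4.67)):
    print(f"{label}: {d_nm} nm -> {pore_regime(d_nm)}")
```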
The N2 adsorption isotherms of the titanium oxide nanoparticles were analyzed using the BJH method to estimate the average pore size and the total pore volume, and using the Brunauer-Emmett-Teller (BET) equation to obtain the specific surface area (SBET). The results are given in Table 1. For [NH3] values of 0.1 mol/L and 1 mol/L, the SBET values were 358 m2/g and 445 m2/g, respectively. At the higher NH3 (aq) concentration, coordination of NH3 molecules to Ti4+ ions inhibited rapid growth and aggregation of the nanoparticles, so that a higher specific surface area was achieved. In contrast, the SBET value of the anatase TiO2 particles obtained by heating the mixture of TIP and H2O was 214 m2/g. Accordingly, the titanium oxide nanoparticles obtained by the reaction between the ethylene glycol solution of TIP and NH3 (aq) had considerably larger SBET values than the particles obtained by the conventional hydrolysis reaction of TIP. As shown in Table 1, the total pore volumes of the titanium oxide nanoparticles obtained with 0.1 mol/L and 1 mol/L NH3 aqueous solutions were 0.12 cm3/g and 0.13 cm3/g, respectively, which are very close to each other. The average pore size of the nanoparticles obtained with 0.1 mol/L NH3 (aq) was larger than that obtained with 1 mol/L NH3 (aq). Both average pore sizes were smaller than 2 nm, so the nanoparticles obtained with NH3 (aq) were microporous materials. On the other hand, the average pore size of the anatase TiO2 particles obtained by heating the mixture of TIP and H2O was 4.67 nm; accordingly, this anatase TiO2 was a mesoporous material. The dependence of the porosity and pore size on the NH3 (aq) concentration reflected the difference in the hydrolysis reaction in the solutions.

Table 1: BET specific surface areas of the nanoparticles obtained by heating the mixture of the ethylene glycol solution of TIP and the NH3 aqueous solution. Average pore diameter and total pore volume were estimated by applying the BJH method to the N2 adsorption isotherms shown in Figure 6.

| Solution added to the ethylene glycol solution of TIP | [NH3] = 0.1 M | [NH3] = 1 M | H2O |
|---|---|---|---|
| SBET | 358 m2/g | 445 m2/g | 214 m2/g |
| Average pore diameter | 1.34 nm | 1.17 nm | 4.67 nm |
| Total pore volume | 0.12 cm3/g | 0.13 cm3/g | 0.25 cm3/g |

Figure 6: N2 adsorption isotherms, measured at 77 K, of the titanium oxide nanoparticles obtained by heating the mixture of the ethylene glycol solution of TIP and NH3 (aq). ◯, ⚫: adsorption and desorption isotherms for [NH3] = 1 mol/L. □, ■: adsorption and desorption isotherms for [NH3] = 0.1 mol/L. ∆, ▲: adsorption and desorption isotherms of the particles obtained by heating the mixture of TIP and H2O at 368 K for 24 h.

Adsorption isotherms of dye molecules were measured to investigate the interaction between the dye molecules and the titanium oxide nanoparticle surfaces. Figure 7 presents adsorption isotherms of dye molecules on the titanium oxide nanoparticles obtained at [NH3] = 1 mol/L. The adsorption isotherms of the cationic dye molecules methylene blue and crystal violet are shown as open circles and open squares, respectively. The black lines show the results of least-squares curve fitting based on the Langmuir equation, which fit the experimental data very well. The saturated adsorbed amounts of methylene blue and crystal violet were 3.59 × 10⁻⁴ mol/g and 2.02 × 10⁻⁴ mol/g, respectively.
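A minimal sketch of such a least-squares Langmuir fit is given below; the data points are hypothetical, chosen only to be Langmuir-shaped, and are not the measured isotherms of this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Langmuir isotherm: q = q_max * K * c / (1 + K * c).
def langmuir(c_eq, q_max, K):
    return q_max * K * c_eq / (1.0 + K * c_eq)

# Hypothetical equilibrium data (mol/L, mol/g), for illustration only.
c_eq = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 6.0]) * 1e-4
q_eq = np.array([1.5, 2.4, 2.9, 3.2, 3.45, 3.5]) * 1e-4

(q_max, K), _ = curve_fit(langmuir, c_eq, q_eq, p0=(3.5e-4, 1e4))
print(f"q_max ~ {q_max:.2e} mol/g, K ~ {K:.2e} L/mol")
```

The fitted q_max plays the role of the saturated adsorbed amounts quoted in the text.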
The adsorption isotherms of the anionic dye molecules eosin Y and Evans blue on the titanium oxide nanoparticles are shown as closed squares and closed circles, respectively. The saturated adsorbed amounts of eosin Y and Evans blue, as calculated from the Langmuir plots, were 8.25 × 10⁻⁶ mol/g and 6.91 × 10⁻⁶ mol/g, respectively. Accordingly, the saturated adsorbed amounts of the anionic dye molecules were less than 1/50 of those of the cationic dye molecules; highly selective adsorption of the cationic dye molecules was observed.

Figure 7: Adsorption isotherms of dye molecules on the titanium oxide nanoparticles in aqueous dye solutions: ◯, methylene blue; □, crystal violet; ⚫, Evans blue; ■, eosin Y. The titanium oxide nanoparticles were prepared using the aqueous solution with [NH3] = 1 mol/L.

To examine the selective adsorption of the cationic dye molecules in more detail, the adsorption isotherms of methylene blue, a cationic dye, were measured, as presented in Figure 8. The adsorbents were titanium oxide nanoparticles prepared using NH3 (aq) at concentrations of 0.1 mol/L, 0.2 mol/L, and 1 mol/L. The saturated adsorbed amounts estimated from the Langmuir plots were 2.37 × 10⁻⁴ mol/g, 2.49 × 10⁻⁴ mol/g, and 3.59 × 10⁻⁴ mol/g, respectively. The saturated adsorbed amounts corresponded to the SBET values of the titanium oxide nanoparticles.

Figure 8: Adsorption isotherms of methylene blue on the titanium oxide nanoparticles in aqueous dye solution. The NH3 aqueous solution concentrations used to prepare the titanium oxide nanoparticles were: ◯, 1 mol/L; □, 0.2 mol/L; ∆, 0.1 mol/L.

Figure 9 shows the adsorption isotherms of Evans blue, an anionic dye molecule. The saturated adsorbed amounts of Evans blue on the nanoparticles were less than 2.5 × 10⁻⁵ mol/g. According to the results presented in Figures 8 and 9, the cationic dye molecules interacted strongly with the surface of the titanium oxide nanoparticles obtained by heating the mixtures of the ethylene glycol solution of TIP and NH3 (aq). As presented in Figure 1(d), when the [NH3] value was 1 mol/L, the obtained nanoparticles were a mixture of anatase TiO2 and layered titanic acid. In general, a layered titanic acid structure shows cation exchange properties. Therefore, the enhanced adsorption of cationic dye molecules on the nanoparticles corresponded to the presence of the layered titanic acid structure in the obtained nanoparticles. On the other hand, although the XRD peaks of the nanoparticles prepared at [NH3] = 0.1 mol/L, presented in Figure 1(a), can be assigned only to anatase TiO2, these nanoparticles also showed enhanced adsorption of the cationic dye molecules. In this case, layered titanic acid particles with a very low degree of crystallization were presumably included in the obtained nanoparticles, so that the layered titanic acid phase could not be detected by XRD. To examine this consideration, the dye adsorption isotherms of the conventional anatase TiO2 particles are presented in Figure 10. The saturated adsorbed amounts of methylene blue and Evans blue were 1.50 × 10⁻⁵ mol/g and 1.66 × 10⁻⁵ mol/g, respectively. In this case, the adsorbed amount of Evans blue, an anionic dye, was larger than that of methylene blue, a cationic dye.
This adsorption behavior was opposite to that of the titanium oxide nanoparticles obtained from the ethylene glycol solution of TIP and NH3 aqueous solution. Accordingly, positively charged sites on the anatase particles, such as Ti4+ ions, played an important role in the adsorption of Evans blue molecules; the adsorption behavior of the dye molecules therefore depends strongly on the surface characteristics. These results show that the selective adsorption of the cationic dye molecules did not correspond to the anatase TiO2 structure; rather, the layered titanic acid structure is expected to play an important role in the selective adsorption of the cationic dye molecules. The hydrolysis reaction with a sufficient amount of NH3 aqueous solution enabled the formation of a composite of layered titanic acid and anatase TiO2, and this structure is considered to have contributed to the selective dye adsorption characteristics.

Figure 9: Adsorption isotherms of Evans blue on the titanium oxide nanoparticles in aqueous dye solution. The concentrations of NH3 (aq) used to prepare the titanium oxide nanoparticles were: ◯, 1 mol/L; □, 0.2 mol/L; ∆, 0.1 mol/L.

Figure 10: Adsorption isotherms of dye molecules on titanium oxide nanoparticles in aqueous dye solution: ◯, methylene blue; ⚫, Evans blue. The titanium oxide nanoparticles were prepared by heating the mixture of TIP and H2O at 368 K for 24 h.
## 4. Conclusion

Transparent and stable sols of titanium oxide nanoparticles were obtained by heating a mixture of an ethylene glycol solution of TIP and NH3 aqueous solution at 368 K for 24 h. The concentration of the NH3 aqueous solution affected the structure of the obtained titanium oxide nanoparticles. When the concentration of the NH3 aqueous solution was 0.1 mol/L, the obtained nanoparticles were assigned to anatase TiO2 according to the XRD pattern. When the concentration was 0.2 mol/L or higher, a mixture of anatase TiO2 nanoparticles and layered titanic acid nanoparticles was obtained. The coordination of ethylene glycol and NH3 molecules to Ti4+ ions played an important role in the formation of the titanium oxide nanoparticles and their homogeneous dispersion in the sol. The obtained titanium oxide nanoparticles had a large specific surface area, larger than 350 m2/g, because aggregation of the nanoparticles was prevented by the coordination of NH3 and ethylene glycol molecules. The obtained titanium oxide nanoparticles showed enhanced adsorption towards cationic dye molecules. This selective adsorption corresponded to the presence of layered titanic acid on the nanoparticles, and the high specific surface area also played an important role in the selective adsorption of the cationic dye molecules. Accordingly, the hydrolysis reaction of TIP with ethylene glycol and NH3 enabled us to obtain stable sols of anatase TiO2 and layered titanic acid nanoparticles with highly homogeneous dispersion, and the obtained nanoparticles showed unique adsorption characteristics.
102361-2012-05-28_102361-2012-05-28.md
55,586
Characterization of Titanium Oxide Nanoparticles Obtained by Hydrolysis Reaction of Ethylene Glycol Solution of Alkoxide
Naofumi Uekawa; Naoya Endo; Keisuke Ishii; Takashi Kojima; Kazuyuki Kakegawa
Journal of Nanotechnology (2012)
Engineering & Technology
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2012/102361
102361-2012-05-28.xml
--- ## Abstract Transparent and stable sols of titanium oxide nanoparticles were obtained by heating a mixture of ethylene glycol solution of titanium tetraisopropoxide (TIP) and a NH3 aqueous solution at 368 K for 24 h. The concentration of NH3 aqueous solution affected the structure of the obtained titanium oxide nanoparticles. For NH3 aqueous solution concentrations higher than 0.2 mol/L, a mixture of anatase TiO2 nanoparticles and layered titanic acid nanoparticles was obtained. The obtained sol was very stable without formation of aggregated precipitates and gels. Coordination of ethylene glycol to Ti4+ ions inhibited the rapid hydrolysis reaction and aggregation of the obtained nanoparticles. The obtained titanium oxide nanoparticles had a large specific surface area: larger than 350 m2/g. The obtained titanium oxide nanoparticles showed an enhanced adsorption towards the cationic dye molecules. The selective adsorption corresponded to presence of layered titanic acid on the obtained anatase TiO2 nanoparticles. --- ## Body ## 1. Introduction Titanium dioxide (TiO2 titania) is an n-type oxide semiconductor that shows photocatalytic activity and photoconductivity [1, 2]. Several various applications of TiO2 particles have been studied in recent years, as photocatalysts and to solar cells, UV-shielding materials and electric devices [3–6]. For these applications, development of a simple synthesis method to obtain TiO2 nanoparticles with highly homogeneous dispersion has been required [7, 8]. TiO2 nanoparticles are also useful in many important applications for improving environmental problems [9, 10]. These nanoparticles can be used for formation of TiO2 thin films with optical transparency and photocatalyst activity. Furthermore, surface characteristics of TiO2 nanoparticles strongly affect the application area of TiO2 nanoparticles. For example, Grätzel developed photoelectrochemical systems with dye-sensitized anatase TiO2 semiconductor electrodes [11, 12]. The characteristics of these solar cells depend on adsorption interaction between dye molecules and TiO2 nanoparticle surface. Therefore, it is also important to control molecular adsorption properties of TiO2 nanoparticles.Rath et al. prepared size-controlled TiO2 nanoparticles in reverse micelles using a surfactant Aerosol-OT (AOT) [13]. Zaki et al. obtained anatase TiO2 nanoparticles through hydrolysis of ethanol solution of titanium tetraisopropoxide by adding nitric acid [14]. They examined an adsorption of amine molecules to investigate the surface adsorption sites. Nakayama and Hyashi prepared stable sols of TiO2 with surface modification of carboxylic acid and amine [15]. Some reports have also described methods for preparation of TiO2 nanoparticles and stable sols using peroxotitanic acid as a precursor [16].This study describes examination of a novel preparation method of titanium oxide nanoparticles and their stable sols through hydrolysis reaction of an ethylene glycol solution of titanium alkoxide with NH3 aqueous solution. Ethylene glycol easily coordinates to Ti4+ ions and controls the hydrolysis reaction [17, 18]. Furthermore, NH3 molecules also strongly coordinate to the Ti4+ ions. It is expected that restricting rapid hydrolysis reaction enables the production of titanium oxide nanoparticles without aggregation. Furthermore, surface characteristics of the titanium oxide nanoparticles were examined by measuring adsorption isotherms of cationic and anionic dye molecules. ## 2. Experiments ### 2.1. 
Preparation of Titanium Oxide Nanoparticles and Their Sols The titanium oxide nanoparticles were prepared as follows. The 1 mol/L of NH3 aqueous solution was added to the 50 mL of ethylene glycol solution (0.1 mol/L) of titanium tetraisopropoxide (TIP). The total volume was adjusted to 100 mL. Under these circumstances no precipitate was observed, although an opaque solution was obtained. This solution was heated at 368 K for 24 h in a closed glass vessel, and a stable sol was obtained without precipitation. All chemicals used in this preparation were of reagent grade (Wako Pure Chemical Industries Ltd.). To control the particle size and surface properties of the obtained titanium oxide nanoparticles, the same synthetic process was also conducted using the NH3 (aq) with other concentrations. The concentrations were in the range of 0.1 mol/L–1 mol/L. Hereinafter, this concentration will be designated as [NH3].To separate the obtained particles from the sol, 50 mL of the obtained sol was poured into a cellulose tube for dialysis, and the cellulose tube was soaked in 500 mL of H2O for 3 h at room temperature. The 500 mL of H2O was exchanged five times. Finally, the sol in the cellulose tube was dried at 348 K for 12 h.Anatase TiO2 particles were also prepared using a simple hydrolysis reaction between Ti alkoxide and H2O. The H2O was added to 0.01 mol of TIP. Then the total volume was adjusted to 100 mL. The mixed solution of TIP and H2O was kept in a closed glass beaker and heated at 368 K for 24 h. White precipitate was obtained using the hydrolysis reaction. The precipitate was separated by centrifugation at 3000 rpm for 5 min. The obtained precipitate was dried at 348 K for 24 h. ### 2.2. Characterization The structure of the obtained particles was characterized using X-ray diffraction (XRD) (Cu Kα 40 kV, 100 mA, MXP-18; Bruker AXS Co., Ltd.). The particle shape was observed using field emission scanning electron microscopy (FE-SEM: JSM-6330; JEOL) after osmium coating, which is one of the electroconductive film formation methods for electron microscope observation. It is the method for depositing osmium metal thin film on the sample surface by DC glow discharge in osmium oxide gas. The ultraviolet-visible (UV-VIS) spectra of the sols and the solutions were measured using quartz cell (UV2000; Hitachi Ltd.) with wavelengths of 300–800 nm, and the optical path length of the cell was 1 cm. Thermogravimetric analysis and differential thermal analysis (TG-DTA) were measured in the air with the flow rate = 100 mL/min. Weight of the used samples was 10 mg, and the rate of elevating temperature was 10 K/min. The upper limit of the measuring temperature was 1073 K. The N2 adsorption isotherms of the obtained powders were measured at 77 K by using the volumetric method (BELSORP-max, BEL Japan, Inc.) after pretreatment at 383 K in 1 mPa for 1 h. The used sample weight was ca. 0.1 g. ### 2.3. Dye Adsorption Measurement Dye adsorption isotherms of the titanium oxide nanoparticles were measured as described below. First, 50 mL of methylene blue aqueous solutions was prepared and adjusted to the following concentrations: 1 × 10-4mol/L, 2 × 10-4mol/L, 3 × 10-4mol/L, 4 × 10-4mol/L, 6 × 10-4mol/L, and 8 × 10-4mol/L. 0.05 g of the titanium oxide powder was added to 50 mL of each of the methylene blue aqueous solution. The pH values of the dye aqueous solution with dispersion of the titanium oxide powder were in the range from 6.5 to 7.5 before and after the adsorption. 
Each dye aqueous solution with the titanium oxide powder was stirred at 500 rpm at 298 K for 24 h to reach the equilibrium of dye adsorption. From each solution, 3 mL of the methylene blue aqueous solution was separated using filtration with a 0.45 μm membrane filter. The optical absorbance of the corrected methylene blue aqueous solution at 655 nm was measured using a UV-VIS spectrometer (UV2000; Hitachi Ltd.). The methylene blue concentrations were calculated from the absorbance using a working curve. Measurements of the dye adsorption isotherm described above were also conducted using other dyes such as crystal violet, Evans blue, and eosin Y. The respective wavelengths used for estimating the dye aqueous solution concentrations of crystal violet, Evans blue, and eosin Y were 590 nm, 608 nm, and 517 nm. ## 2.1. Preparation of Titanium Oxide Nanoparticles and Their Sols The titanium oxide nanoparticles were prepared as follows. The 1 mol/L of NH3 aqueous solution was added to the 50 mL of ethylene glycol solution (0.1 mol/L) of titanium tetraisopropoxide (TIP). The total volume was adjusted to 100 mL. Under these circumstances no precipitate was observed, although an opaque solution was obtained. This solution was heated at 368 K for 24 h in a closed glass vessel, and a stable sol was obtained without precipitation. All chemicals used in this preparation were of reagent grade (Wako Pure Chemical Industries Ltd.). To control the particle size and surface properties of the obtained titanium oxide nanoparticles, the same synthetic process was also conducted using the NH3 (aq) with other concentrations. The concentrations were in the range of 0.1 mol/L–1 mol/L. Hereinafter, this concentration will be designated as [NH3].To separate the obtained particles from the sol, 50 mL of the obtained sol was poured into a cellulose tube for dialysis, and the cellulose tube was soaked in 500 mL of H2O for 3 h at room temperature. The 500 mL of H2O was exchanged five times. Finally, the sol in the cellulose tube was dried at 348 K for 12 h.Anatase TiO2 particles were also prepared using a simple hydrolysis reaction between Ti alkoxide and H2O. The H2O was added to 0.01 mol of TIP. Then the total volume was adjusted to 100 mL. The mixed solution of TIP and H2O was kept in a closed glass beaker and heated at 368 K for 24 h. White precipitate was obtained using the hydrolysis reaction. The precipitate was separated by centrifugation at 3000 rpm for 5 min. The obtained precipitate was dried at 348 K for 24 h. ## 2.2. Characterization The structure of the obtained particles was characterized using X-ray diffraction (XRD) (Cu Kα 40 kV, 100 mA, MXP-18; Bruker AXS Co., Ltd.). The particle shape was observed using field emission scanning electron microscopy (FE-SEM: JSM-6330; JEOL) after osmium coating, which is one of the electroconductive film formation methods for electron microscope observation. It is the method for depositing osmium metal thin film on the sample surface by DC glow discharge in osmium oxide gas. The ultraviolet-visible (UV-VIS) spectra of the sols and the solutions were measured using quartz cell (UV2000; Hitachi Ltd.) with wavelengths of 300–800 nm, and the optical path length of the cell was 1 cm. Thermogravimetric analysis and differential thermal analysis (TG-DTA) were measured in the air with the flow rate = 100 mL/min. Weight of the used samples was 10 mg, and the rate of elevating temperature was 10 K/min. The upper limit of the measuring temperature was 1073 K. 
## 3. Results and Discussion

### 3.1. Characterization of the Obtained Titanium Oxide Nanoparticles

Figure 1 portrays XRD patterns of the particles obtained by heating the mixture of ethylene glycol solution of TIP and NH3 (aq) at 368 K for 24 h. The XRD peaks in Figures 1(a)–1(e) marked with open circles can be assigned to anatase TiO2. When the concentration of the NH3 (aq) mixed with the ethylene glycol solution of TIP was 0.1 mol/L, the obtained titanium oxide was anatase TiO2, as shown in Figure 1(a). When the mixture of TIP and H2O was heated at 368 K for 24 h, anatase TiO2 was also obtained, as shown in Figure 1(e). In Figure 1(b), a very weak and broad peak at 2θ < 10° can be assigned to a layered titanic acid structure [19]. When the NH3 (aq) concentration was 0.2 mol/L or higher, the XRD peaks shown in Figures 1(b)–1(d) can be assigned to both the layered titanic acid lattice and anatase TiO2. Accordingly, the NH3 (aq) concentration affected the crystal structure of the obtained particles. Furthermore, crystallite sizes were determined using Scherrer's equation at the anatase TiO2 (101) peak at 2θ = 25.8°. The crystallite sizes determined from the XRD patterns of Figures 1(a)–1(e) were 4.68 nm, 4.60 nm, 2.88 nm, 2.27 nm, and 10.7 nm, respectively. Accordingly, the crystallite size decreased with increasing NH3 (aq) concentration. Furthermore, the crystallite size of the anatase TiO2 of Figure 1(a) was smaller than that of Figure 1(e). This result means that growth of the anatase TiO2 crystallites was restricted by coordination of the coexisting NH3 and ethylene glycol molecules in the solution.

Figure 1 XRD patterns of the particles obtained by heating a mixture of ethylene glycol solution of TIP and NH3 (aq) at 368 K for 24 h. The NH3 (aq) concentrations were (a) 0.1 mol/L, (b) 0.2 mol/L, (c) 0.5 mol/L, and (d) 1 mol/L; (e) particles obtained by heating the mixture of TIP and H2O at 368 K for 24 h. ◯: anatase TiO2; ⚫: layered titanic acid.
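The crystallite sizes quoted above follow from Scherrer's equation, D = Kλ/(β cos θ), evaluated at the anatase (101) reflection. A minimal Python sketch is shown below; the FWHM input is a hypothetical example value chosen only to reproduce the scale of the reported sizes, since the measured peak widths are not listed in the text.

```python
import math

# Scherrer's equation D = K * lambda / (beta * cos(theta)) for the anatase
# TiO2 (101) reflection at 2-theta = 25.8 deg with Cu K-alpha radiation.
# The FWHM below is a hypothetical example value, not a measured one.

K = 0.9                  # shape factor (a common choice)
WAVELENGTH_NM = 0.15406  # Cu K-alpha wavelength
TWO_THETA_DEG = 25.8

def scherrer_size(fwhm_deg):
    """Crystallite size in nm from the peak FWHM given in degrees 2-theta."""
    beta = math.radians(fwhm_deg)            # FWHM converted to radians
    theta = math.radians(TWO_THETA_DEG / 2)  # Bragg angle
    return K * WAVELENGTH_NM / (beta * math.cos(theta))

print(f"{scherrer_size(1.8):.2f} nm")  # an FWHM of 1.8 deg gives about 4.5 nm
```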
Figure 2 shows FE-SEM images of the obtained titanium oxide particles. Figures 2(a), 2(b), and 2(c), respectively, show FE-SEM images of particles obtained using NH3 (aq) at concentrations of 0.1 mol/L, 0.5 mol/L, and 1 mol/L. When the [NH3] values were 0.1 mol/L, 0.5 mol/L, and 1 mol/L, the average particle sizes were 32.1 nm, 31.5 nm, and 28.0 nm, respectively; the particle sizes were thus very similar. The NH3 (aq) concentration therefore did not affect the particle size of the titanium oxide nanoparticles, although, as discussed for Figure 1, it affected the crystal structure. The average particle sizes determined from the FE-SEM images were larger than the crystallite sizes determined using Scherrer's equation, which means that the particles observed in the FE-SEM images were aggregates of crystallites. Furthermore, when the NH3 (aq) concentration was 0.5 mol/L or more, the particle shape changed from spherical to plate-like. This is likely one reason for the difference between the crystallite size and the average particle size.

Figure 2 FE-SEM images of the titanium oxide particles obtained by heating the mixture of ethylene glycol solution of TIP and NH3 (aq) at 368 K for 24 h. The [NH3] values were (a) 0.1 mol/L, (b) 0.5 mol/L, and (c) 1 mol/L. (a)(b)(c)

To examine the residual organic molecules and the degree of crystallization of the obtained nanoparticles, TG-DTA curves were measured. Figure 3(a) shows the TG-DTA curve of the nanoparticles obtained using NH3 (aq) with [NH3] = 0.1 mol/L. The TG curve showed a 15.1% weight loss below 400 K, corresponding to desorption of H2O adsorbed on the nanoparticles. Furthermore, a steep weight decrease was observed around 523 K, and according to the DTA curve, a sharp exothermic peak also appeared around 523 K. Therefore, the weight loss around 523 K, amounting to 8.1%, corresponded to the oxidation of TIP molecules that had not reacted in the hydrolysis reaction. Furthermore, a slight weight decrease (4.82%) occurred at temperatures of 530 K to 700 K, corresponding to dehydration of surface OH groups and desorption of adsorbed ammonium ions. Figure 3(b) shows the TG-DTA curve of the nanoparticles obtained using NH3 (aq) with [NH3] = 1 mol/L. In this case, a large weight loss of 24% caused by desorption of adsorbed H2O was also observed below 400 K. The DTA curve had an exothermic peak around 535 K, a temperature close to that in Figure 3(a); therefore, this DTA peak corresponds to the oxidation of organic molecules. The weight loss around this temperature was 2.4%, which is less than the corresponding value in Figure 3(a). This result means that TIP molecules in the 1 mol/L NH3 (aq) were more effectively hydrolyzed than those in the 0.1 mol/L NH3 (aq). Furthermore, the slight decrease in mass corresponded to desorption of ammonium ions and surface OH groups, as discussed for Figure 3(a).
Accordingly, the hydrolysis reaction of titanium alkoxides proceeded more effectively at higher concentrations of NH3 (aq).

Figure 3 TG-DTA curves of the titanium oxide nanoparticles obtained by heating the mixture of ethylene glycol solution of TIP and NH3 (aq). The [NH3] values were (a) 0.1 mol/L and (b) 1 mol/L. (a)(b)

Figure 4 UV-VIS absorption spectra of the sols obtained by heating the mixture of ethylene glycol solution of TIP and NH3 (aq) at 368 K for 24 h. The solid lines show the spectra of the obtained sols; the NH3 (aq) concentrations were (a) 0.05 mol/L, (b) 0.1 mol/L, (c) 0.2 mol/L, (d) 0.5 mol/L, and (e) 1 mol/L. The broken lines show spectra of isopropanol solutions of TIP; the TIP concentrations were (f) 1 mol/L, (g) 0.1 mol/L, and (h) 0.01 mol/L.

### 3.2. Investigation of Dispersion Stability and Formation Process of Sols by UV-VIS Absorption Spectra Measurements

To examine the dispersed state of particles in the obtained sols, UV-VIS absorption spectra were measured, as presented in Figure 4. When [NH3] was 0.05 mol/L, the optical absorbance at 400 nm was 0.35. In this case, the obtained solution was a slightly opaque sol, and the observed optical absorbance corresponded to scattering of light. Furthermore, when the wavelength decreased from 400 nm to 320 nm, the optical absorbance increased from 0.35 to 3.00. This increase of the optical absorbance corresponds to the electron transition across the band gap of TiO2. In general, a charge transfer transition between O2- and Ti4+ ions also causes optical absorption in this wavelength range [20]. Figures 4(f)–4(h) show optical absorption spectra of isopropanol solutions of TIP with concentrations of, respectively, 1 mol/L, 0.1 mol/L, and 0.01 mol/L. According to Figure 4(f), the absorption edge of the charge transfer transition was around 355 nm when the concentration of the isopropanol solution of TIP was 1 mol/L. As shown in Figures 4(f)–4(h), as the TIP concentration decreased from 1 mol/L to 0.01 mol/L, the absorption edge shifted from 355 nm to shorter wavelengths. Because the maximum concentration of TIP in the obtained sol was 0.05 mol/L, the absorption edge corresponding to the charge transfer was expected to lie below 355 nm. The strong absorption around 375 nm in Figure 4(a) accordingly corresponded to the band gap transition of electrons in TiO2. When [NH3] was 0.1 mol/L or greater, the optical absorbance at 400 nm was less than 0.05; the obtained sols were almost transparent to visible light, and the scattering of light by the particles in the sols was slight. Therefore, transparent and homogeneous sols without precipitate are obtainable merely by heating the mixture of ethylene glycol solution of TIP and NH3 (aq). Furthermore, the wavelength of the absorption edge decreased with increasing [NH3] while remaining longer than 360 nm. This strong optical absorption corresponded to the band gap electron transition of TiO2 and indicated the formation of TiO2 particles in the solution during the heating process.
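Since the absorption edge is interpreted as a band gap here, the edge wavelength can be converted to an energy via E(eV) ≈ 1239.8/λ(nm). The short Python sketch below applies this conversion to the edge wavelengths discussed above; the comparison value for bulk anatase (about 3.2 eV) is general literature knowledge rather than a result of this work.

```python
# Convert an absorption-edge wavelength to a band gap energy, E = h*c/lambda,
# i.e., E(eV) ~= 1239.8 / lambda(nm). Bulk anatase TiO2 has a band gap of
# about 3.2 eV (~387 nm), so edges near 355-360 nm represent a blue shift
# consistent with very small crystallites (quantum size effect).

def edge_to_bandgap_ev(wavelength_nm: float) -> float:
    """Band gap energy in eV from an absorption-edge wavelength in nm."""
    return 1239.8 / wavelength_nm

for edge in (355.0, 360.0, 387.0):
    print(f"{edge:.0f} nm -> {edge_to_bandgap_ev(edge):.2f} eV")
```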
To examine the formation process of titanium oxide nanoparticles in the TIP solution, we followed the change of the UV-VIS absorption spectra of the solution during the heating process. Figure 5 portrays the UV-VIS absorption spectra of the sols obtained by heating the mixture of ethylene glycol solution of TIP and NH3 (aq) at 368 K. As presented in Figure 5(a), when the heating time was 0 h, that is, before the heat treatment, the optical absorbance at 400 nm was 0.52. The obtained solution was an opaque sol, and the observed optical absorbance corresponded to scattering of light. The optical absorbance increased up to 3.00 as the wavelength decreased from 400 nm to 320 nm. This spectrum resembled that portrayed in Figure 4(g); therefore, this increase of the optical absorbance corresponds to the charge transfer transition between O2- and Ti4+ ions. As presented in Figure 5(b), the optical absorbance at 400 nm of the solution that had been heated for 1 h was 0.01. Accordingly, the obtained solution was almost transparent in the visible wavelength range, and the absorption edge was 344 nm. When the heating times were longer than 1 h, the optical absorption at 400 nm remained almost 0, which indicates that the solution was transparent and a clear sol was obtained. The absorption edge of the steep increase of absorbance shifted from 344 nm to 359 nm as the heating time increased from 0 h to 24 h. This shift of the absorption edge indicated the formation of the anatase TiO2 lattice. If the shift of the absorption edge had been caused by a decrease of the TIP concentration in the solution, the charge transfer absorption would have shifted to shorter wavelengths with increasing heating time; instead, the edge shifted to longer wavelengths. The band gap energy, which corresponds to the wavelength of the absorption edge, depends on the size of the nanoparticles through the quantum size effect. Therefore, the shift of the absorption edge signifies the formation of titanium oxide and the change of its particle size. Accordingly, heating the mixture of ethylene glycol solution of TIP and NH3 (aq) at 368 K for 1 h or more was sufficient to obtain a transparent and homogeneous sol of titanium oxide nanoparticles.

Figure 5 UV-VIS absorption spectra of the mixture of ethylene glycol solution of TIP and NH3 (aq) ([NH3] = 1 mol/L). The heating temperature was 368 K. The heating times were the following: (a) 0 h (before heating), (b) 1 h, (c) 3 h, (d) 6 h, (e) 12 h, and (f) 24 h.

### 3.3. Dye Adsorption Characteristics of the Titanium Oxide Nanoparticles Obtained by Hydrolysis Reaction of Ethylene Glycol Solution of TIP

To examine the surface properties of the obtained titanium oxide nanoparticles, N2 adsorption isotherms were measured at 77 K, and the BET specific surface area was estimated. Figure 6 shows N2 adsorption isotherms of the titanium oxide nanoparticles obtained with 0.1 mol/L and 1 mol/L NH3 (aq). Additionally, the N2 adsorption isotherm of the conventional anatase TiO2 particles obtained by heating the mixture of TIP and H2O at 368 K for 24 h is also shown in Figure 6. Both N2 adsorption isotherms obtained with the NH3 aqueous solutions belong to type I as defined by IUPAC. The N2 adsorption isotherm of the conventional anatase TiO2 particles belongs to type IV, and its hysteresis loop can be classified as type H2 [21]. The N2 adsorption isotherms of the titanium oxide nanoparticles were analyzed using the BJH method to estimate the average pore sizes and the total pore volumes, and using the Brunauer-Emmett-Teller (BET) equation to obtain the specific surface area of the titanium oxide nanoparticles (SBET). The results of these calculations are given in Table 1.
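The BET analysis mentioned above can be sketched numerically. In the snippet below, the isotherm points are hypothetical illustrative data on the scale of these materials; only the linearized BET form and the conversion from monolayer capacity to SBET follow the standard method.

```python
# Minimal sketch of a BET analysis: fit the linear BET form
#   (p/p0) / (v * (1 - p/p0)) = (c - 1)/(vm*c) * (p/p0) + 1/(vm*c)
# over the usual 0.05-0.30 relative-pressure window, then convert the
# monolayer capacity vm to a specific surface area. The isotherm points
# below are hypothetical and serve only to illustrate the procedure.

N_A = 6.022e23        # Avogadro's number, 1/mol
SIGMA_N2 = 0.162e-18  # cross-sectional area of an adsorbed N2 molecule, m^2

def bet_surface_area(rel_pressures, volumes_cm3_stp_per_g):
    xs = rel_pressures
    ys = [x / (v * (1.0 - x)) for x, v in zip(xs, volumes_cm3_stp_per_g)]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    vm = 1.0 / (slope + intercept)       # monolayer capacity, cm^3(STP)/g
    moles_per_g = vm / 22414.0           # 22414 cm^3(STP) per mole of gas
    return moles_per_g * N_A * SIGMA_N2  # specific surface area, m^2/g

p_rel = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30]      # hypothetical p/p0 values
v_ads = [78.0, 92.0, 100.0, 107.0, 113.0, 119.0]  # hypothetical cm^3(STP)/g
print(f"S_BET ~ {bet_surface_area(p_rel, v_ads):.0f} m^2/g")
```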
When the [NH3] values were 0.1 mol/L and 1 mol/L, the SBET values were, respectively, 358 m2/g and 445 m2/g. At the higher concentration of NH3 (aq), coordination of NH3 molecules to Ti4+ ions inhibited rapid growth and aggregation of the nanoparticles, so that a higher specific surface area was achieved. In contrast, the SBET value of the anatase TiO2 particles obtained by heating the mixture of TIP and H2O was 214 m2/g. Accordingly, the titanium oxide nanoparticles obtained using the reaction between the ethylene glycol solution of TIP and NH3 (aq) had considerably larger SBET values than the particles obtained using the conventional hydrolysis reaction of TIP. As shown in Table 1, the total pore volumes of the titanium oxide nanoparticles obtained with 0.1 mol/L and 1 mol/L NH3 aqueous solutions were 0.12 cm3/g and 0.13 cm3/g, respectively; these values are very close. The average pore size of the nanoparticles obtained with 0.1 mol/L NH3 (aq) was larger than that obtained with 1 mol/L NH3 (aq). Both average pore sizes were smaller than 2 nm, so the nanoparticles obtained with NH3 (aq) were microporous materials. On the other hand, the average pore size of the anatase TiO2 particles obtained by heating the mixture of TIP and H2O was 4.67 nm; the obtained anatase TiO2 was accordingly a mesoporous material. The dependence of the porosity and the pore size on the NH3 (aq) concentration reflected the difference of the hydrolysis reaction in the solutions.

Table 1 BET specific surface areas of the nanoparticles obtained by heating the mixture of the ethylene glycol solution of TIP and the NH3 aqueous solution. The average pore diameter and total pore volume were estimated by applying the BJH method to the N2 adsorption isotherms shown in Figure 6.

| Solution added to the ethylene glycol solution of TIP | [NH3] = 0.1 M | [NH3] = 1 M | H2O |
|---|---|---|---|
| SBET | 358 m2/g | 445 m2/g | 214 m2/g |
| Average pore diameter | 1.34 nm | 1.17 nm | 4.67 nm |
| Total pore volume | 0.12 cm3/g | 0.13 cm3/g | 0.25 cm3/g |

Figure 6 N2 adsorption isotherms, measured at 77 K, of the titanium oxide nanoparticles obtained by heating the mixture of ethylene glycol solution of TIP and NH3 (aq). ◯, ⚫: adsorption and desorption isotherms for [NH3] = 1 mol/L. □, ■: adsorption and desorption isotherms for [NH3] = 0.1 mol/L. ∆, ▲: adsorption and desorption isotherms of particles obtained by heating the mixture of TIP and H2O at 368 K for 24 h.

Adsorption isotherms of dye molecules were measured to investigate the interaction between the dye molecules and the titanium oxide nanoparticle surfaces. Figure 7 presents adsorption isotherms of dye molecules on the titanium oxide nanoparticles obtained at [NH3] = 1 mol/L. The adsorption isotherms of the cationic dyes methylene blue and crystal violet on the obtained titanium oxide nanoparticles are shown, respectively, as open circles and open squares. The black lines show the results of least-squares curve fitting based on the Langmuir equation, which fits the experimental data very well. The saturated adsorbed amounts of methylene blue and crystal violet were, respectively, 3.59 × 10⁻⁴ mol/g and 2.02 × 10⁻⁴ mol/g. The adsorption isotherms of the anionic dyes eosin Y and Evans blue on the titanium oxide nanoparticles are shown, respectively, as closed squares and closed circles. The saturated adsorbed amounts of eosin Y and Evans blue, as calculated from the Langmuir plots, were, respectively, 8.25 × 10⁻⁶ mol/g and 6.91 × 10⁻⁶ mol/g.
Accordingly, the saturated adsorbed amounts of the anionic dye molecules were less than 1/50 of those of the cationic dye molecules; highly selective adsorption of the cationic dye molecules was observed.

Figure 7 Adsorption isotherms of dye molecules on the titanium oxide nanoparticles in aqueous dye solution. The dye molecules were the following: ◯, methylene blue; □, crystal violet; ⚫, Evans blue; ■, eosin Y. The titanium oxide nanoparticles were prepared using NH3 (aq) with [NH3] = 1 mol/L.

To examine the selective adsorption of the cationic dye molecules in more detail, adsorption isotherms of methylene blue, a cationic dye, were measured, as presented in Figure 8. The adsorbents were titanium oxide nanoparticles prepared using NH3 (aq) at concentrations of 0.1 mol/L, 0.2 mol/L, and 1 mol/L. The saturated adsorbed amounts estimated from the Langmuir plots were, respectively, 2.37 × 10⁻⁴ mol/g, 2.49 × 10⁻⁴ mol/g, and 3.59 × 10⁻⁴ mol/g. These saturated adsorbed amounts corresponded to the SBET values of the titanium oxide nanoparticles.

Figure 8 Adsorption isotherms of methylene blue on the titanium oxide nanoparticles in aqueous dye solution. The NH3 (aq) concentrations used to prepare the titanium oxide nanoparticles were the following: ◯, 1 mol/L; □, 0.2 mol/L; ∆, 0.1 mol/L.

Figure 9 shows adsorption isotherms of Evans blue, an anionic dye. The saturated adsorbed amounts of Evans blue on the nanoparticles were less than 2.5 × 10⁻⁵ mol/g. According to the results presented in Figures 8 and 9, the cationic dye molecules interacted strongly with the surface of the titanium oxide nanoparticles obtained by heating the mixtures of ethylene glycol solution of TIP and NH3 (aq). As presented in Figure 1(d), when the [NH3] value was 1 mol/L, the obtained nanoparticles were a mixture of anatase TiO2 and layered titanic acid. In general, a layered titanic acid structure shows cation exchange properties. Therefore, the enhanced adsorption of the cationic dye molecules on the nanoparticles corresponded to the presence of the layered titanic acid structure in the obtained nanoparticles. On the other hand, although the XRD peaks of the nanoparticles prepared at [NH3] = 0.1 mol/L, presented in Figure 1(a), can be assigned only to anatase TiO2, these nanoparticles also showed enhanced adsorption of the cationic dye molecules. In this case, layered titanic acid with a very low degree of crystallization was presumably included in the obtained nanoparticles, so that this phase could not be detected by XRD. To examine this interpretation, the adsorption characteristics of conventional anatase TiO2 particles were investigated; the dye adsorption isotherms of these particles are presented in Figure 10. The saturated adsorbed amounts of methylene blue and Evans blue were, respectively, 1.50 × 10⁻⁵ mol/g and 1.66 × 10⁻⁵ mol/g. In this case, the adsorbed amount of Evans blue, an anionic dye, was larger than that of methylene blue, a cationic dye. This adsorption behavior was opposite to that of the titanium oxide nanoparticles obtained from the ethylene glycol solution of TIP and NH3 aqueous solution. Accordingly, positively charged sites on the anatase particles, such as Ti4+ ions, played an important role in the adsorption of Evans blue molecules. Therefore, the adsorption behavior of the dye molecules depends strongly on the surface characteristics.
These results show that the selective adsorption of the cationic dye molecules does not originate from the anatase TiO2 structure; rather, the layered titanic acid structure is expected to play an important role in this selectivity. The hydrolysis reaction with a sufficient amount of NH3 aqueous solution enabled formation of a composite of the layered titanic acid structure and anatase TiO2, and this composite structure is considered to contribute to the selective dye adsorption characteristics.

Figure 9 Adsorption isotherms of Evans blue on the titanium oxide nanoparticles in aqueous dye solution. The concentrations of NH3 (aq) used to prepare the titanium oxide nanoparticles were the following: ◯, 1 mol/L; □, 0.2 mol/L; and ∆, 0.1 mol/L.

Figure 10 Adsorption isotherms of dye molecules on titanium oxide nanoparticles in aqueous dye solution. The dye molecules were the following: ◯, methylene blue and ⚫, Evans blue. The titanium oxide nanoparticles were prepared by heating the mixture of TIP and H2O at 368 K for 24 h.
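The saturated adsorbed amounts quoted in this section come from Langmuir analysis, q = qmKCe/(1 + KCe), typically fitted through the linearized plot Ce/q = Ce/qm + 1/(qmK). The Python sketch below demonstrates this procedure on hypothetical data points on the scale of the methylene blue results; the fitted numbers are illustrative only and are not the measured isotherms.

```python
# Sketch of a Langmuir-plot analysis: linearize q = qm*K*Ce / (1 + K*Ce) as
#   Ce/q = Ce/qm + 1/(qm*K)
# and fit Ce/q against Ce by least squares. The (Ce, q) pairs below are
# hypothetical illustration data, not the measured isotherms.

def langmuir_fit(ce, q):
    xs, ys = ce, [c / v for c, v in zip(ce, q)]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    qm = 1.0 / slope       # saturated adsorbed amount, mol/g
    k = slope / intercept  # Langmuir constant, L/mol
    return qm, k

ce = [2e-5, 5e-5, 1e-4, 2e-4, 4e-4]            # equilibrium conc., mol/L
q = [1.5e-4, 2.2e-4, 2.8e-4, 3.2e-4, 3.45e-4]  # adsorbed amount, mol/g
qm, k = langmuir_fit(ce, q)
print(f"qm = {qm:.2e} mol/g, K = {k:.2e} L/mol")
```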
## 4. Conclusion

Transparent and stable sols of titanium oxide nanoparticles were obtained by heating a mixture of ethylene glycol solution of TIP and NH3 aqueous solution at 368 K for 24 h. The concentration of the NH3 aqueous solution affected the structure of the obtained titanium oxide nanoparticles. When the concentration of NH3 aqueous solution was 0.1 mol/L, the obtained nanoparticles were assigned to anatase TiO2 according to the XRD pattern. When the concentration was 0.2 mol/L or higher, a mixture of anatase TiO2 nanoparticles and layered titanic acid nanoparticles was obtained. The coordination of ethylene glycol and NH3 molecules to Ti4+ ions played an important role in the formation of the titanium oxide nanoparticles and their homogeneous dispersion in the sol. The obtained titanium oxide nanoparticles had a large specific surface area, above 350 m2/g, because aggregation of the nanoparticles was prevented by the coordination of NH3 and ethylene glycol molecules. The obtained titanium oxide nanoparticles showed enhanced adsorption of cationic dye molecules. This selective adsorption corresponded to the presence of the layered titanic acid phase in the nanoparticles; the high specific surface area also played an important role in the selective adsorption of the cationic dye molecules. Accordingly, the hydrolysis reaction of TIP with ethylene glycol and NH3 enabled us to obtain stable sols of anatase TiO2 and layered titanic acid nanoparticles with highly homogeneous dispersion. Furthermore, the obtained nanoparticles showed unique adsorption characteristics.

---

*Source: 102361-2012-05-28.xml*
2012
# Yagi Array of Microstrip Quarter-Wave Patch Antennas with Microstrip Lines Coupling

**Authors:** Juhua Liu; Yue Kang; Jie Chen; Yunliang Long
**Journal:** International Journal of Antennas and Propagation (2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/102362

---

## Abstract

A new kind of Yagi array of quarter-wave patch antennas is presented. The Yagi array has a low profile, a wide bandwidth, and a high gain. A main beam close to endfire is produced, with vertical polarization in the horizontal plane. A set of microstrip lines is introduced between the driven element and the first director element to enhance the coupling between them, thereby increasing the bandwidth and suppressing the back lobes. Measured results show that the Yagi array with 4 elements generates a peak gain of about 9.7 dBi, a front-to-back ratio higher than 10 dB, and a 10 dB return loss band from 4.68 GHz to 5.24 GHz, with a profile of 1.5 mm and an overall size of 80 × 100 mm2. An increase of the number of director elements enhances the gain and steers the main beam closer to endfire.

---

## Body

## 1. Introduction

Yagi-Uda arrays of classical electric dipole antennas are well known and widely used [1–3] because they provide a high gain and have a simple structure with only one driven element. However, the classical Yagi array of dipole antennas has a high profile (about half a wavelength) if it is set up to generate vertical polarization. A Yagi array of monopole antennas [4] has also been developed that produces a beam close to endfire with vertical polarization in the horizontal plane. Nonetheless, the Yagi array of monopole antennas has a high profile of 0.25λ0 (where λ0 is the wavelength in free space). In mobile communications, vertical polarization is usually preferred, since the transmitter and receiver can keep the same vertical polarization for a good connection no matter how either rotates on a horizontal platform.

Recently, Yagi arrays of printed antennas [5–12] have been studied, since printed antennas have a low profile, light weight, and easy fabrication. The microstrip Yagi array of half-wave patch antennas [5–9] provides a high gain and has its main beam tilted away from broadside; this type of antenna can generate vertical polarization in the horizontal plane. The quasi-Yagi array based on classical dipole antennas [10–12] has a high gain, a wide bandwidth, and a main beam pointing exactly at endfire, but this type of Yagi array generates only horizontal polarization.

In this paper, we propose a new type of microstrip Yagi array based on quarter-wave patch antennas. The microstrip Yagi array has the advantages of a low profile and a simple structure and can easily be fabricated on a PCB with shorting vias. A set of microstrip lines is introduced between the driven element and the first director element to enhance the coupling between them, thereby increasing the bandwidth and suppressing the back lobes. An increase of the number of director elements enhances the gain and steers the main beam closer to endfire. The front-to-back ratio of the presented Yagi array is higher than 10 dB.
Compared with a half-wave patch antenna, a quarter-wave patch antenna is half as long, and therefore the Yagi array of quarter-wave patch antennas is shorter than the conventional Yagi array of half-wave patch antennas. The comparison between the Yagi array of half-wave patch antennas and the Yagi array of quarter-wave patch antennas is discussed in this paper.

## 2. Quarter-Wave Patch Antenna Yagi Array

### 2.1. Operation Principles

The structure of the Yagi array based on four quarter-wave patch antennas is shown in Figure 1. Four quarter-wave patch antennas [13–15] and a set of coupling microstrip lines are fabricated on a substrate backed with a ground plane. The substrate can be air or a dielectric substrate. The four quarter-wave patch antennas comprise a driven element (D), two director elements (D1 and D2), and a reflector element (R). Only the driven element is excited, with a 50 Ω coaxial probe; the other patches are parasitic radiators.

Figure 1 Top view and cross-section of the microstrip Yagi array of quarter-wave patch antennas.

Radiation is mainly generated from the open apertures of the four quarter-wave patches (the open apertures opposite to the shorting vias). Since the tangential electric field at each open aperture can be considered as a magnetic current, the Yagi array can be considered as a Yagi array of magnetic elements. By duality with the classical electric dipole Yagi array, a parasitic magnetic element acts as a reflector when it has an additional capacitive component and as a director when it has an additional inductive component. The reactive component of each magnetic element can be controlled by adjusting the length of the quarter-wave patch (the distance from the open edge to the shorting vias). Therefore, a quarter-wave patch must have a smaller length in order to have an additional inductive component (as seen from the open aperture) when it acts as a director; conversely, a quarter-wave patch acts as a reflector when it has a larger length. The lengths of the directors and the reflector need to be tuned with simulation tools (such as HFSS) to reach optimum values and to have the array radiating in the forward direction.

In order to couple more power into the first director from the driven element, three parallel microstrip lines with equal widths (W − 2sm)/3 are introduced between them. The three coupling microstrip lines can be considered as a wide microstrip line of width W that is cut with two slits of width sm, as shown in Figure 1. The two slits cutting the wide microstrip line prevent it from resonating as a radiating patch. The coupling between the driven element and the first director element is therefore mainly through a guided wave under the microstrip transmission lines. On the other hand, the coupling between the driven and reflector elements, and that between the first and second director elements, is through space waves.

The width sd of the gaps between the coupling microstrip lines and the quarter-wave patches controls the coupling strength; sd is usually less than the thickness h of the substrate.

The length Lm of the coupling microstrip lines affects the directivity and the front-to-back (F/B) ratio. The length Lm is usually between 0.1λ0 and 0.15λ0.
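To make these guidelines concrete, the short Python sketch below evaluates them at an assumed 5 GHz design frequency (chosen for illustration because the measured band is centered near 5 GHz); the frequency itself is an assumption, not a value stated by the authors.

```python
# Rough sizing helper for the coupling microstrip lines, following the
# guidelines above: line length Lm between 0.1 and 0.15 free-space
# wavelengths, and gap width sd below the substrate thickness h.
# The 5 GHz design frequency is an assumed example value.

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def coupling_line_window_mm(freq_hz):
    """Return the (0.1*lambda0, 0.15*lambda0) window for Lm in millimeters."""
    lam0_mm = C0 / freq_hz * 1e3
    return 0.1 * lam0_mm, 0.15 * lam0_mm

lo, hi = coupling_line_window_mm(5.0e9)    # lambda0 is about 60 mm at 5 GHz
print(f"Lm window: {lo:.1f}-{hi:.1f} mm")  # ~6.0-9.0 mm (Table 1 uses 8.4 mm)
h_mm, sd_mm = 1.57, 0.8                    # substrate thickness and gap width
print(f"sd = {sd_mm} mm < h = {h_mm} mm: {sd_mm < h_mm}")
```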
We let all these elements have the same width W and then tune the length of each element to control its resonant frequency. Optimized values for the parameters of the Yagi array are given in Table 1.

Table 1 Parameters for the Yagi arrays.

| Variable (quarter-wave array, Figure 1) | Value | Variable (half-wave array, Figure 9) | Value |
|---|---|---|---|
| εr | 2.33 | εr | 2.33 |
| h | 1.57 mm | h | 1.57 mm |
| Ld′ | 8.926 mm | Ld | 17.6 mm |
| Lr′ | 9.026 mm | Lr | 19 mm |
| Ld1′ | 7.826 mm | Ld1 | 15 mm |
| Ld2′ | 7.626 mm | Ld2 | 15 mm |
| W | 40 mm | W | 25 mm |
| b | 1.4 mm | b | 1.3 mm |
| sd | 0.8 mm | s | 0.8 mm |
| sm | 0.8 mm | Lg | 115 mm |
| Lm | 8.4 mm | Wg | 80 mm |
| sr′ | 3.37 mm | | |
| sd2′ | 3.37 mm | | |
| sg | 23.67 mm | | |
| Lg | 100 mm | | |
| Wg | 80 mm | | |
| d | 0.6 mm | | |
| p | 1.5 mm | | |

### 2.2. 4-Element Yagi Array

Simulated and measured results for the reflection coefficient are shown in Figure 3. The array has an overall size of 100 × 80 mm2 and a profile of 1.5 mm. The simulated results agree very well with the measured ones. Measured results show that the array works in the band from 4.68 GHz to 5.24 GHz with the reflection coefficient below −10 dB, corresponding to a fractional bandwidth of 11.3%. The profile of the antenna is about 0.026λ0 (where λ0 is the wavelength in free space).

Also shown in Figure 3 is the reflection coefficient for the Yagi array without microstrip lines coupling between the driven element and the first director element. It shows that the bandwidth is greatly improved when the coupling microstrip lines are introduced.

Figure 4 shows radiation patterns in the elevation plane and in the azimuth plane for the Yagi array working at 5.1 GHz. The simulated results agree very well with the measured ones. While not shown here, the main beam would point exactly at endfire if the array were on an infinite ground plane.
Due to finite ground plane diffraction effects, the main beam does not point exactly at endfire but at an angle of about 45–50° from the normal direction when the array is on a finite ground plane. Measured results show that the cross polarization is less than −17 dB in the main elevation plane and less than −10 dB in the horizontal (azimuth) plane. The front-to-back (F/B) ratio, defined as the ratio of the main lobe in the front quadrant to the reflected lobe in the back quadrant, is higher than 10 dB.

Figure 5 shows radiation patterns at other frequencies for the Yagi array. The beam directions are stable at an angle of about 45–50° from the normal direction. While not shown here, the cross polarization is also less than −10 dB at these frequencies. The back lobes at these frequencies are less than −10 dB, as shown in the horizontal plane in Figure 5.

Figure 6 shows the gains for the Yagi array on the finite ground plane. Measured gains (dashed line) agree very well with simulated ones (dotted line). Measured results show that the array has a gain of about 9.5 dBi in the band from 4.68 GHz to 5.24 GHz.

### 2.3. 12-Element Yagi Array

A photo of the designed Yagi array is shown in Figure 7. The substrate, the first four patches, and the coupling microstrip lines are the same as in the Yagi array shown in Figure 2. The eight added director elements are identical to the second director element, which has a length of Ld2′ = 7.626 mm, and their apertures have the same size as that of the second element. The ground plane has a length of Lg = 180 mm and a width of Wg = 80 mm.

Figure 2 Photo of the 4-element Yagi array of quarter-wave patch antennas with the geometry shown in Figure 1.

Figure 3 Reflection coefficients for the 4-element Yagi array of quarter-wave patch antennas without microstrip lines coupling, the 4-element Yagi array with microstrip lines coupling (Figure 2), and the 12-element Yagi array with microstrip lines coupling (Figure 7).

Figure 4 Elevation (a) and azimuth (b) radiation patterns for the Yagi array (Figure 2) working at 5.1 GHz. (a) (b)

Figure 5 Measured elevation (a) and azimuth (b) radiation patterns for the Yagi array (Figure 2). (a) (b)

Figure 6 Gains for the 4-element Yagi array (Figure 2) and the 12-element Yagi array (Figure 7).

Figure 7 Photo of the 12-element Yagi array of quarter-wave patch antennas.

The reflection coefficient for the Yagi array with 12 radiators is shown as a black dashed line in Figure 3. The 12-element Yagi array works in the band from 4.64 GHz to 5.22 GHz with the reflection coefficient below −10 dB, corresponding to a fractional bandwidth of 11.8%. The bandwidth of the Yagi array with 12 radiators is very close to that of the Yagi array with 4 radiators. As a matter of fact, the driven and first director elements play a much more important role in the bandwidth than the other elements, since the coupling through a guided wave under the microstrip lines between the driven and first director elements is much stronger than the coupling through space waves between the other elements. Therefore, adding more director elements does not help to enhance the bandwidth in this type of Yagi array.

The gain of the antenna with 12 elements is shown as a red line in Figure 6. Measured results show that the peak gain of the 12-element Yagi array is about 10.5 dBi in the band from 4.64 GHz to 5.22 GHz, which is about 1 dB higher than that of the 4-element Yagi array. Measured radiation patterns for the 12-element Yagi array are shown in Figure 8. The main beam of the 12-element Yagi array points closer to endfire than that of the 4-element Yagi array; the beam direction is tilted 65–75° from the normal direction.

Figure 8 Measured radiation patterns in the elevation (a) and azimuth (b) planes for the 12-element Yagi array (Figure 7). (a) (b)
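As a quick check on the quoted bandwidths, the fractional bandwidth follows from (f2 − f1)/fc with fc = (f1 + f2)/2. The minimal Python sketch below reproduces the 11.3% and 11.8% figures from the measured −10 dB band edges:

```python
# Fractional bandwidth from the measured -10 dB return-loss band edges:
# FBW = (f2 - f1) / fc, with the center frequency fc = (f1 + f2) / 2.

def fractional_bandwidth(f1_ghz, f2_ghz):
    fc = 0.5 * (f1_ghz + f2_ghz)
    return (f2_ghz - f1_ghz) / fc

print(f"4-element:  {fractional_bandwidth(4.68, 5.24) * 100:.1f}%")  # ~11.3%
print(f"12-element: {fractional_bandwidth(4.64, 5.22) * 100:.1f}%")  # ~11.8%
```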
## 3. Comparisons with Yagi Array of Half-Wave Patch Antennas

In this section, the Yagi array of quarter-wave patch antennas is compared with the conventional Yagi array of half-wave patch antennas, whose geometry is shown in Figure 9. For a fair comparison, the two arrays are fabricated on the same substrate. The length of the driven half-wave patch is twice the effective length [16] of the quarter-wave patch. To prevent higher-order modes of the half-wave patch from being excited, the width W of the half-wave patches is 25 mm, smaller than the width of the quarter-wave patches. The parameters of the Yagi array of half-wave patch antennas, alongside those of the quarter-wave array, are given in Table 1.

Figure 9: Geometry of the Yagi array of half-wave patch antennas.

Simulated results for the reflection coefficient and gain of the 4-element Yagi array of half-wave patch antennas are shown in Figure 10, and simulated radiation patterns in the elevation and azimuth planes in Figure 11. Table 2 summarizes the comparison between the two arrays.

Table 2: Comparison between the Yagi array of quarter-wave patch antennas and the Yagi array of half-wave patch antennas.

| Characteristic | Quarter-wave array (Figure 1) | Half-wave array (Figure 9) |
|---|---|---|
| Band | 4.68–5.24 GHz | 5.0–5.43 GHz |
| Fractional bandwidth | 11.3% | 8.25% |
| Beamwidth (elevation plane) | 56–58° | 42–46° |
| Beamwidth (azimuth plane) | 44–60° | 80–87° |
| Gain | 8.65–9.9 dBi | 8.28–10.19 dBi |
| Front-to-back ratio | >10 dB | >7.4 dB |
| Radiation angle (from broadside) | 45–50° | 35–40° |
| Radiation efficiency | 91–98% | 95–99% |
| Size of driven patch | 8.926 × 40 mm² | 17.6 × 25 mm² |
| Size of ground plane | 100 × 80 mm² | 115 × 80 mm² |

Figure 10: Reflection coefficient and gain for the 4-element Yagi array of half-wave patch antennas.

Figure 11: Elevation radiation patterns (a) in the xz plane and azimuth radiation patterns (b) in the xy plane for the 4-element Yagi array of half-wave patch antennas.

From Table 2, the bandwidth of the Yagi array of quarter-wave patch antennas is somewhat wider than that of the half-wave array. The ground plane of the quarter-wave array is smaller, because each quarter-wave patch is only half the length of the corresponding half-wave patch. The main beam of the quarter-wave array points closer to endfire, because a single quarter-wave patch has an almost omnidirectional pattern in the upper half-space (on an infinite ground plane), whereas a single half-wave patch has its main beam at broadside [6]. The F/B ratio of the quarter-wave array is also higher. The gains of the two arrays are almost the same. The radiation efficiency of the quarter-wave array, however, is not as high as that of the half-wave array, owing to the conduction loss of the shorting vias in the quarter-wave patches. The shorting vias can also complicate fabrication, so their period should not be too small (the period of the shorting vias is about 2–2.5 times their diameter [16]).
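A rough cavity-model estimate (ours, deliberately ignoring fringing fields and the loading of the shorting vias) relates the element sizes in Table 1 to the guided half- and quarter-wavelengths:

```python
import math

# Idealized resonant-length estimates, to be compared with Table 1;
# fringing fields and via loading are ignored.
C0 = 299_792_458.0
f0, eps_r = 5.1e9, 2.33                 # design frequency and permittivity

lam_g = C0 / (f0 * math.sqrt(eps_r))    # wavelength in the dielectric
print(f"half-wave estimate:    {1e3 * lam_g / 2:.1f} mm  (Table 1: Ld = 17.6 mm)")
print(f"quarter-wave estimate: {1e3 * lam_g / 4:.2f} mm  (Table 1: Ld' = 8.926 mm)")
```

The physical lengths in Table 1 are shorter than these estimates, as expected, since the fringing field at the open edge adds an effective length to each patch.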
## 4. Conclusion

A new type of Yagi array of quarter-wave patch antennas has been presented and studied. The Yagi array has a wide bandwidth and a high gain and provides vertical polarization in the horizontal plane. A Yagi array with 4 microstrip quarter-wave patch antennas was designed and measured. Measured results show that it generates a gain of about 9.5 dBi and a bandwidth of 11.3%, with an overall size of 100 × 80 mm² and a profile of 1.5 mm. Increasing the number of director radiators enhances the gain and moves the main beam closer to endfire.

Compared with the classical Yagi array of half-wave patch antennas, the presented Yagi array of quarter-wave patch antennas has a smaller length, a slightly wider bandwidth, a beam closer to endfire, and a higher F/B ratio. However, its efficiency is not as high as that of the half-wave array, while the gains of the two types are almost the same.

---

*Source: 102362-2014-07-22.xml*
# Corrosion Potential Profile Simulation in a Tube under Cathodic Protection

**Authors:** Mauricio Ohanian; Víctor Martínez-Luaces
**Journal:** International Journal of Corrosion (2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/102363

---

## Abstract

The potential distribution in the tubes of a heat exchanger is simulated when cathodic polarization is applied at their extremes. Two methods are compared: a numerical boundary-element solution carried out with the commercial software Beasy-GID and a semianalytical method developed by the authors. The mathematical model, the simplifications adopted, and the solution procedure are presented. Since both approaches use polarization curves as a boundary condition, experimental polarization curves (voltage versus current density) were determined in the laboratory under flow conditions in a cylindrical cell geometry. The results suggest that the protection cannot be extended along the whole tube length; other protection methods are therefore considered.

---

## Body

## 1. Introduction

Corrosion in the heat exchangers of the steam cycle of an electric generating plant represents an important maintenance problem. The morphology of attack is both generalized and localized; the latter causes shutdowns to plug perforated exchange surfaces and contamination of the circulating fluids. The problem is addressed by stopping operation and sealing the punctured tube, which decreases the heat-exchange efficiency.

Condensers are multimetal systems with particular fluid-dynamic conditions and geometric complexity [1]. It is usual to find carbon steel boxes together with titanium, stainless steel, or various copper-alloy tubes and plates [2]. All these alloys have different electrochemical corrosion characteristics, and their electrical contact can induce galvanic corrosion [3]. When cathodic protection is applied in the box, the plate is cathodically polarized, but its effect may not reach the entire tube, and part of the tube will act as an anode. The electrochemical potential along the tube is therefore not constant, and it is important to determine its longitudinal profile, taking the potentials applied at the edges as parameters. The current and potential distribution in the tubes has been analyzed theoretically by authors such as Alkire and Mirarefi [4]. Astley [5], Verbrugge [6], and Song [7] address the problem by taking the one-dimensional Laplace equation as the governing relationship and assuming linear kinetics; under these conditions, the problem reduces to an ordinary differential equation (ODE) with various boundary conditions. These authors validate their models against results obtained with experimental prototypes. Scully and Hack [3] worked along the same lines and performed a numerical finite-element analysis, as well as a qualitative study based on the dimensionless Wagner number, to determine the influence of the plate-tube pair.
The data are validated using pilot-scale prototypes. This issue has been addressed in previous works [8, 9] using a one-dimensional approximation; as a consequence, the problem reduces to a system consisting of an ODE and several boundary conditions.

In this paper, the equations governing the current and potential distribution in heat-exchanger tubes are solved when a given potential is applied at the edges. We compare the results obtained with the commercial software Beasy-GID [10] against a method developed by the authors, which solves analytically a simplified model based on the one-dimensional electrochemical approximation proposed by Frumkin [11].

We describe the mathematical model, the simplifications considered, and the resolution of the corresponding equations. The polarization curve, which serves as the boundary condition of the proposed model, is obtained experimentally under flow conditions in a cylindrical cell, to reproduce the fluid-dynamic conditions inside the heat exchanger.

## 2. Modeling

To analyze the current and potential distribution in a tubular system with electrolyte flow, a mass balance is established in an infinitesimal control volume dV [12]. For a generic species i, the accumulation equals the input minus the output of the species over a period, plus the corresponding generation R_i:

$$\frac{\partial c_i}{\partial t}\,dV = -\,\nabla\cdot\mathbf{J}_i\,dA + R_i \tag{1}$$

In (1), J_i is the flux of the species through the area dA and c_i its molar concentration. The global balance, over all species, can be written as

$$\frac{\partial}{\partial t}\Big(\sum_i c_i\Big)\,dV = -\,\nabla\cdot\Big(\sum_i \mathbf{J}_i\Big)\,dA + \sum_i R_i \tag{2}$$

Multiplying these terms by the species charges z_i and the Faraday constant F converts the mass balance into a charge balance:

$$F\,\frac{\partial}{\partial t}\Big(\sum_i z_i c_i\Big)\,dV = -\,\nabla\cdot\Big(F\sum_i z_i \mathbf{J}_i\Big)\,dA + F\sum_i z_i R_i \tag{3}$$

For an electrochemical system with reaction at the interface, it can be assumed that there is no generation inside the control volume (within the electrolyte), so R_i is null. Moreover, in steady state the left-hand side of (3) is also zero, giving

$$\nabla\cdot\Big(F\sum_i z_i \mathbf{J}_i\Big) = \nabla\cdot\mathbf{j} = 0 \tag{4}$$

where j is the current density vector. Expanding j into its migration, diffusion, and convection components (Nernst-Planck development [13]), (4) becomes

$$\nabla\cdot\mathbf{j} = -\,\nabla\cdot(\chi\nabla E) - \nabla\cdot\Big(F\sum_i D_i z_i \nabla c_i\Big) + \nabla\cdot\Big(F\mathbf{v}\sum_i z_i c_i\Big) = 0 \tag{5}$$

Here χ is the ionic conductivity, D_i the diffusivity of species i, E the electric potential, and v the advection velocity. For an isotropic medium, the ionic conductivity and the diffusivities are homogeneous properties, so their gradients are zero. Furthermore, for a developed profile in the pipe [14, 15] and an incompressible flow, the divergence of the velocity is zero, so (5) becomes

$$\nabla\cdot\mathbf{j} = -\,\chi\nabla^2 E - F\sum_i D_i z_i \nabla^2 c_i + F\sum_i z_i\,\nabla c_i\cdot\mathbf{v} = 0 \tag{6}$$

Owing to electroneutrality in the solution bulk, the third (convective) term is zero:

$$-\,\chi\nabla^2 E - F\sum_i D_i z_i \nabla^2 c_i = 0 \tag{7}$$

For a primary or secondary current distribution [16], the concentration gradients within the electrolyte can be neglected, and (7) simplifies to

$$\nabla^2 E = 0 \tag{8}$$

This equation is formally the same as the steady-state mathematical models of heat transfer or mass diffusion, so the solutions obtained for those phenomena can be applied to the electric field in an electrochemical system, with the appropriate boundary conditions. The procedures used for its solution may be analytic, analogue, or digital.

Since the system considered is a tube, it is convenient to express (8) in cylindrical coordinates [17]:

$$\frac{\partial^2 E}{\partial r^2} + \frac{1}{r}\frac{\partial E}{\partial r} + \frac{1}{r^2}\frac{\partial^2 E}{\partial\theta^2} + \frac{\partial^2 E}{\partial z^2} = 0 \tag{9}$$

By symmetry, the derivatives with respect to the polar angle θ cancel, and the elliptic partial differential equation (PDE) to solve becomes

$$\frac{\partial^2 E}{\partial r^2} + \frac{1}{r}\frac{\partial E}{\partial r} + \frac{\partial^2 E}{\partial z^2} = 0 \tag{10}$$

Frumkin [11] determined experimentally that the potential in the electrolyte varies in one dimension only (along the flow) when the following condition holds for the cylindrical geometry:

$$r < \frac{2}{\rho K} \tag{11}$$

where r is the radius of the tube, ρ the resistivity of the electrolyte, and K the proportionality constant derived from the polarization curve (the inverse of the linear polarization resistance). When condition (11) is satisfied, (10) reduces to

$$\frac{\partial^2 E}{\partial z^2} = 0 \tag{12}$$
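As a quick symbolic sanity check (ours, not from the paper), substituting a potential that depends on z alone into the cylindrical Laplacian (9) leaves only the axial term, which is equation (12):

```python
import sympy as sp

r, theta, z = sp.symbols("r theta z", positive=True)
E = sp.Function("E")(z)          # potential varying along the tube axis only

# Cylindrical Laplacian of equation (9); the r- and theta-terms vanish for E(z).
laplacian = (sp.diff(E, r, 2) + sp.diff(E, r) / r
             + sp.diff(E, theta, 2) / r**2 + sp.diff(E, z, 2))
print(laplacian)                 # -> Derivative(E(z), (z, 2)), i.e. eq. (12)
```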
Under these conditions, the electric potential gradient is parallel to the tube walls and is related to the current density flowing in the electrolyte (also parallel to the pipe walls, i_L). Applying Ohm's law,

$$\frac{\partial E}{\partial z} = \pm\,\rho\, i_L \tag{13}$$

Applying Kirchhoff's law, the current density normal to the tube wall (i_S) can be related to the longitudinal current density:

$$\frac{d i_L}{dz}\,\pi r^2 = -\,2\pi r\, i_S \tag{14}$$

Combining (13) and (14) gives

$$\frac{\partial^2 \bar{\mu}}{\partial z^2} = \pm\,\frac{2\rho\, i_S}{r} \tag{15}$$

The electrostatic potential E is linearly related to the electrochemical potential μ̄:

$$\bar{\mu} = \mu + nFE \tag{16}$$

where μ is the chemical potential and n the number of exchanged electrons. The boundary conditions at the wall are usually expressed through linear or semilogarithmic approximations:

$$\frac{\partial \bar{\mu}}{\partial r} = K\bar{\mu} - A \tag{17}$$

or

$$\frac{\partial \bar{\mu}}{\partial r} = B'\exp\!\left(K'\bar{\mu} - A'\right) \tag{18}$$

The behaviors expressed by (17) and (18) are observed in the experimental curves only over bounded intervals; the linear or semilogarithmic conditions cannot be extended to the entire range considered.

Finally, for the boundary condition at the pipe edges, either Dirichlet (constant potential at the edges) or Neumann (no potential variation at the edges) conditions [18] can be assumed. In this paper only the first, the Dirichlet condition, is considered.

## 3. Resolution

The ODE for the potential is (15), and the boundary condition at the tube wall is obtained from the polarization curve. Assuming a constant potential at the tube edges,

$$\bar{\mu}(z=0,\,r=R) = \bar{\mu}_1, \qquad \bar{\mu}(z=L,\,r=R) = \bar{\mu}_2 \tag{19}$$

It is worth mentioning that, under the conditions considered in this paper, the inequality (11) proposed by Frumkin is verified. To solve the ODE (15) analytically, the boundary condition given by the experimental curve is linearized by the following procedure. For each experimental point, a moving window containing the ten points before and the ten points after is considered. A simple linear model is fitted to this set by least squares, and from this model, obtained point by point, the slope of the tangent and its intersection with the vertical axis are extracted. At each point, a linear dependency of the form

$$\frac{\partial \bar{\mu}}{\partial r} = i = a + b\,\bar{\mu} \tag{20}$$

is thus obtained.
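The moving-window linearization just described translates directly into code. The sketch below is ours; the synthetic Butler-Volmer-like curve merely stands in for the measured polarization data. For each sample, i ≈ a + b·μ̄ is fitted by least squares over a 21-point window, ten points on each side:

```python
import numpy as np

def local_linearization(mu, i, half_window=10):
    """Moving least-squares fit i ~ a + b*mu (equation (20)): each sample uses
    the ten points before and after it, clipped at the ends of the curve."""
    a = np.empty_like(mu)
    b = np.empty_like(mu)
    for k in range(len(mu)):
        lo = max(0, k - half_window)
        hi = min(len(mu), k + half_window + 1)
        b[k], a[k] = np.polyfit(mu[lo:hi], i[lo:hi], 1)  # slope, intercept
    return a, b

# Synthetic stand-in for a measured polarization curve (not the paper's data).
mu = np.linspace(-0.6, 0.6, 241)        # potential, V
i = 2.0 * np.sinh(mu / 0.12)            # A/m^2, Butler-Volmer-like shape
a, b = local_linearization(mu, i)       # per-point coefficients for eq. (20)
```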
To solve the problem, a discretization is performed: the tube is divided into sections whose lengths are chosen from a preliminary observation, with shorter sections near the extremes (where the greatest potential change takes place) and longer ones in the central part of the tube. An edge potential is assigned to each pipe section, and the electrochemical properties of each section are taken as constant, evaluated as the average of the values at its ends.

If the coefficient b of (20) is positive, the solution for the potential takes the form

$$\bar{\mu} = c_1 e^{\psi z} + c_2 e^{-\psi z} - \frac{a}{b} \tag{21}$$

with

$$\psi = \sqrt{\frac{2\rho b}{r}} \tag{22}$$

Imposing the boundary conditions (19), the coefficients c_1 and c_2 are obtained as

$$c_1 = \frac{\left(\bar{\mu}_2 + a/b\right) - \left(\bar{\mu}_1 + a/b\right)e^{-\psi L}}{2\sinh(\psi L)}, \qquad c_2 = \frac{\left(\bar{\mu}_1 + a/b\right)e^{\psi L} - \left(\bar{\mu}_2 + a/b\right)}{2\sinh(\psi L)} \tag{23}$$

Formula (21) is used after obtaining a and b from (20) and the values of c_1 and c_2 from (22) and (23). The final result is computed iteratively, using Microsoft Excel for this purpose. In the first iteration, the corrosion potential is imposed at the interior points and the potentials of (19) at the extreme points. As a closure criterion, the supremum norm of the difference between successive steps must be smaller than an arbitrary epsilon.
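Equations (21)-(23) translate directly into a short routine. The numbers below are illustrative only (ρ follows from the 0.2 S/m electrolyte; a, b, and the edge potentials are assumed); the assert confirms that the boundary conditions (19) are recovered:

```python
import numpy as np

def section_profile(mu1, mu2, a, b, rho, r, L, z):
    """Potential profile in one tube section, equations (21)-(23) (b > 0)."""
    psi = np.sqrt(2.0 * rho * b / r)                                  # eq. (22)
    denom = 2.0 * np.sinh(psi * L)
    c1 = ((mu2 + a / b) - (mu1 + a / b) * np.exp(-psi * L)) / denom   # eq. (23)
    c2 = ((mu1 + a / b) * np.exp(psi * L) - (mu2 + a / b)) / denom
    return c1 * np.exp(psi * z) + c2 * np.exp(-psi * z) - a / b       # eq. (21)

rho = 1.0 / 0.2              # electrolyte resistivity, ohm*m (0.2 S/m)
r, L = 0.0254 / 2, 1.0       # tube radius and section length, m
mu1, mu2 = -1.0, -1.0        # edge potentials of the section (assumed)
a, b = 0.02, 0.05            # assumed local fit of eq. (20)

z = np.linspace(0.0, L, 11)
mu = section_profile(mu1, mu2, a, b, rho, r, L, z)
assert np.isclose(mu[0], mu1) and np.isclose(mu[-1], mu2)   # recovers eq. (19)
```

In the full method, this routine would be applied section by section, updating a and b from the local linearization and iterating until the supremum norm of the change falls below the chosen tolerance.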
## 4. Use of Software Beasy-GID

To provide a comparison for the previous method, the commercial software Beasy-GID is used. This package is designed to simulate corrosion systems by the boundary-element method [19]. The information it requires includes the system geometry (entered as a geometrical model with the GID program), the electrical connections between the metal components of the system (modeled as an electrical circuit at the software interface), and the polarization curves of the metals involved.

## 5. Prototype Condenser

The prototype condenser is constructed with an admiralty brass tube (UNS C443 [20]), carbon steel boxes, and Muntz brass plates (UNS C268). The boxes are made of carbon steel, 30 × 30 × 30 cm, sandblasted and painted with a nonaqueous solvent-based acrylic to a final thickness of 120 microns. The tube was cut into sections joined by nonconductive tubing, with the sections electrically connected through an 8 mm² cable in order to measure the circulating current. At certain distances the pipe is perforated to introduce a zinc wire as a reference electrode, and additional zinc reference electrodes are placed in the boxes (see Figure 1). To polarize the system, platinized titanium electrodes, 1″ in diameter and 15 cm long, are installed in the boxes. The polarization is driven by a controlled voltage or current source (20 A maximum current, 30 V maximum voltage). The polarization of the tube sheet is set close to −1 V, approximately the corrosion potential of zinc (the most electronegative component of the brass).

Figure 1: Prototype condenser and diagram of the circuit.

The electrolyte used in the experiment was 0.1 M sodium sulfate, pH 7, with a conductivity of 0.2 S/m.

## 6. Determination of the Boundary Conditions

A cell specifically built for this experiment is used (Figure 2). Inside it, a half pipe of the material to be tested is placed, constituting the working electrode.

Figure 2: Cylindrical cell used under flow conditions.

The cell also houses the counter electrode (platinized platinum) and the reference electrode (saturated calomel electrode, SCE). The electrolyte is recirculated by a centrifugal pump, with the flow measured by a rotameter and controlled by a ball valve. The system temperature is controlled by water circulation through a radiator located in the electrolyte reservoir. The flow used in the experiment is 30 L/min, and the electrolyte temperature is maintained at 20 ± 2°C. The material tested is admiralty brass, the tube diameter is 2.54 cm (1″), and the electrode area is 20 cm². Before the curve was determined under flow conditions, the working electrode was kept in contact with the electrolyte for 24 hours, sealed in the cell. The polarization curves are obtained by linear scanning (5 mV/min, between −600 and +600 mV versus SCE). The experimental curves obtained are used as boundary conditions in the simulation programs.

## 7. Results and Discussion

Figure 3 shows the polarization curve obtained for admiralty brass at a flow of 30 L/min in sodium sulfate, pH 7.

Figure 3: Polarization curve (current density versus applied potential).

The polarization curve in Figure 3 exhibits two plateaus (between −0.04 and −0.01 V versus SCE and above 0.005 V) and an abrupt increase in current in the vicinity of 0 V versus SCE. This response reflects the chemical complexity arising from the electrooxidation of the two principal alloying metals (copper and zinc), the influence of the microalloying element (tin), and the cathodic reactions that take place: hydrogen evolution and oxygen reduction [21–23].

Figure 4 shows the potential profiles for the prototype, the Beasy-GID simulation, and the proposed simulation.

Figure 4: Potential versus distance to the prototype tube edge: proposed and Beasy-GID simulations.

The Beasy-GID simulation yields potential values that evolve more smoothly than those measured on the experimental prototype, and it predicts an influence of the polarization extending beyond the experimental one (approximately 100 diameters ≈ 2.5 m). The results of the proposed simulation are closer to the values determined with the prototype, but they predict a somewhat shorter reach for the protection: 0.15 m (approximately 6 pipe diameters). The data obtained from the prototype show high variability, possibly due to the condition of the zinc-wire reference electrodes and to the turbulence at the measurement points. The influence of the imposed potential can be estimated as 20 diameters (0.5 m), which indicates, for a typical 9 m long tube, that the center of the tube is not reached by the cathodic protection.
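Two back-of-the-envelope checks of these results (ours; K is an assumed, illustrative value, since the paper does not quote one): the Frumkin condition (11) that justifies the one-dimensional treatment, and the observed reach of the protection expressed in tube diameters.

```python
# Illustrative checks (K is assumed; the paper does not report it explicitly).
rho = 1.0 / 0.2           # resistivity of the 0.2 S/m electrolyte, ohm*m
K = 1.0                   # assumed inverse polarization resistance, S/m^2
r = 0.0254 / 2            # tube radius, m (1-inch tube)
print("Frumkin condition r < 2/(rho*K):", r < 2.0 / (rho * K))   # True here

d, reach = 0.0254, 0.5    # tube diameter and observed protection reach, m
print(f"reach ~ {reach / d:.0f} diameters")   # ~20, of a typical 9 m tube
```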
## 8. Final Considerations

First, a model has been presented for the distribution of the electrochemical potential in a tube, and it has been solved by two methods: one based on commercial software and one using a specific program developed by the working group. Second, an experimental cell was built to determine current and potential profiles for cylindrical geometries under electrolyte flow. The results obtained with the developed program compare satisfactorily with the experimental results.

From a technological viewpoint, both methods can therefore predict the potential profile along the tube under changing boundary conditions, edge potentials, and flow conditions, without requiring laborious and costly pilot-scale prototypes. However, the cathodic protection does not extend beyond about 20 pipe diameters, so other methods must be used to generate effective protection elsewhere. Since the protection cannot be extended over the entire tube length, other technological solutions are necessary; one industrial possibility is the dosage of corrosion inhibitors, and in open cooling systems the dosage of ferrous sulfate is commonly used [24]. As a closing remark, we recommend combining the effect of cathodic protection, which preserves the vicinity of the tube plate, with the addition of ferrous sulfate through consumable iron anodes [25].

---

*Source: 102363-2014-12-18.xml*
102363-2014-12-18_102363-2014-12-18.md
17,939
Corrosion Potential Profile Simulation in a Tube under Cathodic Protection
Mauricio Ohanian; Víctor Martínez-Luaces
International Journal of Corrosion (2014)
Engineering & Technology
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2014/102363
102363-2014-12-18.xml
--- ## Abstract The potential distribution in tubes of a heat exchanger is simulated when applying cathodic polarization to its extremes. The comparison of two methods to achieve this goal is presented: a numeric solution based on boundary elements carried out with the commercial software Beasy-GID and a semianalytical method developed by the authors. The mathematical model, the simplifications considered, and the problem solving are shown. Since both approaches use polarization curves as a boundary condition, experimental polarization curves (voltage versus current density) were determined in the laboratory under flow conditions and cylindrical cell geometry. The results obtained suggest the impossibility of extending the protection along the whole tube length; therefore, other protection methods are considered. --- ## Body ## 1. Introduction The corrosion in heat exchangers in a steam cycle of an electric generating plant represents an important problem for maintenance. The morphology of attack is both generalized and localized; the latter involves drilling shutdowns for the exchange surface and contamination of the circulating fluids. The problem is solved by stopping the operation and sealing the punctured tube, which causes a decrease in heat exchange efficiency.Condensers are multimetal systems with special conditions fluid dynamics and geometric complexity [1]. It is usual to find carbon steel boxes and titanium, stainless steel, or even different copper alloy tubes and plates [2]. All these alloys have different electrochemical corrosion characteristics, and due to their electrical contact a galvanic corrosion can be induced [3]. In the event of cathodic protection, applied in the box, the plate is cathodically polarized, but its effect may not be seen by the entire tube and a part of it will work as an anode. Therefore, the electrochemical potential across the tube is not constant, and it is important to determine the profile in the longitudinal direction, taking as parameters the potentials applied to the edges. The current and potential distribution in the tubes is theoretically analyzed by authors as Alkire and Mirarefi [4]. Astley [5], Verbrugge [6], and Song [7] address the problem considering the one-dimensional Laplace equation as the governing relationship, which is assumed to be linear kinetics; under these conditions, the problem is transformed into a distribution system consisting of an ordinary differential equation (ODE) and various boundary conditions. The above authors validate the models with respect to the results obtained through experimental prototypes. Scully and Hack [3] worked on the same research lines and performed a numerical finite element analysis, as well as a qualitative study from Wagner dimensionless number to determine the influence of plate, tube pair. The data are validated by using pilot scale prototypes.This issue has been addressed in previous works [8, 9] using a one-dimensional approximation and, as a consequence, the problem is transformed into a system consisting of an (ODE) and several boundary conditions.In this paper, the resolution of the equations corresponding to current and potential distribution in heat exchanger tubes is studied when a given potential is applied at the edges. We compare the results obtained by using the commercial software Beasy-GID [10] with a method developed by the authors. 
The method involves an analytical resolution of a simplified model based on the one-dimensional electrochemical approximation proposed by Frumkin [11]. We describe the mathematical model, the simplifications considered, and the resolution of the corresponding mathematical equations. The polarization curve—which serves as boundary condition for the proposed model—is obtained experimentally under flow conditions in a cylindrical cell, to simulate the fluid dynamic conditions within the heat exchanger. ## 2. Modeling In order to analyze the current and potential distribution in a tubular system with electrolyte flow, a mass balance in an infinitesimal control volume $dV$ is established [12]. For a generic species $i$, the accumulation equals the input minus the output of the species over a period, plus the corresponding generation ($R$): (1) $dV\,\frac{\partial c_i}{\partial t} = -dA\,\nabla\cdot\mathbf{J}_i + R_i$. Equation (1) states that mass balance, $\mathbf{J}_i$ being the flux of the species through the area $dA$ and $c_i$ the molar concentration of the species. The global balance—for all the species—can be written as follows: (2) $dV\,\frac{\partial}{\partial t}\sum_i c_i = -dA\,\nabla\cdot\sum_i \mathbf{J}_i + \sum_i R_i$. Multiplying these terms by the species charges ($z_i$) and the Faraday constant ($F$), we can convert the mass balance into a charge balance: (3) $dV\,F\,\frac{\partial}{\partial t}\sum_i z_i c_i = -dA\,\nabla\cdot F\sum_i z_i \mathbf{J}_i + F\sum_i z_i R_i$. Considering an electrochemical system with reaction at the interface, it can be assumed that there is no generation in the control volume (within the electrolyte), so $R_i$ is null. Moreover, in steady state the left-hand term of (3) is also zero, resulting in (4) $\nabla\cdot F\sum_i z_i \mathbf{J}_i = \nabla\cdot\mathbf{j} = 0$, where $\mathbf{j}$ is the current density vector. Expanding $\mathbf{j}$ into the transport components corresponding to migration, diffusion, and convection (Nernst-Planck development [13]), (4) can be expressed as (5) $\nabla\cdot\mathbf{j} = -\nabla\cdot(\chi\nabla E) - \nabla\cdot\bigl(F\sum_i D_i z_i \nabla c_i\bigr) + \nabla\cdot\bigl(F\mathbf{v}\sum_i z_i c_i\bigr) = 0$, where $\chi$ is the ionic conductivity, $D_i$ is the diffusivity of species $i$, $E$ is the electric potential, and $\mathbf{v}$ is the advection velocity. If the solution is assumed to be an isotropic medium, then the ionic conductivity and species diffusivities are homogeneous properties and their gradients are zero. Furthermore, considering a developed flow profile in the pipe [14, 15] and an incompressible flow, the divergence of the velocity is zero, so (5) becomes (6) $\nabla\cdot\mathbf{j} = -\chi\nabla^2 E - F\sum_i D_i z_i \nabla^2 c_i + F\sum_i z_i \nabla c_i\cdot\mathbf{v} = 0$. Due to electroneutrality in the solution bulk, the contribution of the third term (i.e., the convective transport) is zero: (7) $-\chi\nabla^2 E - F\sum_i D_i z_i \nabla^2 c_i = 0$. Considering a primary or secondary current distribution [16], the species concentration gradients within the electrolyte can be neglected. Therefore (7) simplifies to (8) $\nabla^2 E = 0$. The equation obtained is formally the same as that governing heat transfer or mass diffusion in steady state, so the solutions obtained for those phenomena can be applied to solve the equation of the electric field in an electrochemical system, with appropriate boundary conditions. The procedures used for this solution are analytic, analogue, or digital. Since the system considered is a tube, it is convenient to express (8) in cylindrical coordinates as follows [17]: (9) $\frac{\partial^2 E}{\partial r^2} + \frac{1}{r}\frac{\partial E}{\partial r} + \frac{1}{r^2}\frac{\partial^2 E}{\partial\theta^2} + \frac{\partial^2 E}{\partial z^2} = 0$. For symmetry reasons, the derivatives with respect to the polar angle $\theta$ vanish. Then the elliptic partial differential equation (PDE) to solve is the following: (10) $\frac{\partial^2 E}{\partial r^2} + \frac{1}{r}\frac{\partial E}{\partial r} + \frac{\partial^2 E}{\partial z^2} = 0$.
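Although the paper solves (10) with boundary elements (Section 4) and with a semianalytical method (Section 3), a minimal finite-difference sketch can make the discretized problem concrete. In the Python sketch below, the geometry, the boundary potentials, and the insulating-wall condition are all assumed for illustration only; the real tube wall carries the polarization-curve condition of Section 6. With these simplified conditions the solution collapses to a one-dimensional profile in $z$, which anticipates the Frumkin approximation introduced next.

```python
import numpy as np

# Illustrative Jacobi iteration for the axisymmetric Laplace equation (10)
# on the electrolyte inside a tube, 0 <= r <= R, 0 <= z <= L. This is only
# a sketch: the paper itself uses boundary elements (Beasy-GID) and a
# semianalytical 1D method, and the wall condition here is simplified.

R, L = 0.0127, 0.5                 # tube radius and length [m] (assumed)
nr, nz = 20, 200                   # grid points in r and z
dr, dz = R / (nr - 1), L / (nz - 1)
r = np.linspace(0.0, R, nr)

E = np.zeros((nr, nz))
E[:, 0] = -1.0                     # Dirichlet: polarized potential at z = 0 [V] (assumed)
E[:, -1] = -0.2                    # Dirichlet: potential at z = L [V] (assumed)

for sweep in range(20000):         # Jacobi sweeps
    En = E.copy()
    for i in range(1, nr - 1):
        # radial stencil weights coming from the (1/r) dE/dr term of (10)
        w_p = 1.0 + dr / (2.0 * r[i])
        w_m = 1.0 - dr / (2.0 * r[i])
        En[i, 1:-1] = (
            (w_p * E[i + 1, 1:-1] + w_m * E[i - 1, 1:-1]) / dr**2
            + (E[i, 2:] + E[i, :-2]) / dz**2
        ) / (2.0 / dr**2 + 2.0 / dz**2)
    # axis r = 0: by symmetry the Laplacian becomes 2*E_rr + E_zz
    En[0, 1:-1] = (
        4.0 * E[1, 1:-1] / dr**2 + (E[0, 2:] + E[0, :-2]) / dz**2
    ) / (4.0 / dr**2 + 2.0 / dz**2)
    # insulating wall (sketch only; the real wall obeys the polarization curve)
    En[-1, 1:-1] = En[-2, 1:-1]
    if np.max(np.abs(En - E)) < 1e-9:
        E = En
        break
    E = En

print("axis potential at mid-length: %.4f V" % E[0, nz // 2])
```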
Frumkin [11] determined experimentally that the potential in an electrolyte varies in one dimension only (along the flow) when the following condition, written here for cylindrical geometry, holds: (11) $r < \frac{2}{\rho K}$. In this formula $r$ is the radius of the tube, $\rho$ is the resistivity of the electrolyte, and $K$ is the proportionality constant derived from the polarization curve (the inverse of the linear polarization resistance). Thus, if condition (11) is satisfied, (10) reduces to (12) $\frac{\partial^2 E}{\partial z^2} = 0$. Under these conditions, the electric potential gradient is parallel to the tube walls, and it is related to the current density flowing in the electrolyte (also parallel to the pipe walls, $i_L$). Applying Ohm's law, (13) $\frac{\partial E}{\partial z} = \pm\rho\, i_L$. Applying Kirchhoff's law, a relation can be established between the current density normal to the tube wall ($i_S$) and the longitudinal current density: (14) $\frac{d(i_L \pi r^2)}{dz} = -2\pi r\, i_S$. Combining (13) and (14), the following equation is obtained: (15) $\frac{\partial^2 \bar{\mu}}{\partial z^2} = \pm\frac{2\rho\, i_S}{r}$. The electrostatic potential $E$ is linearly related to the electrochemical potential $\bar{\mu}$: (16) $\bar{\mu} = \mu + nFE$, where $\mu$ is the chemical potential and $n$ is the number of exchanged electrons. The boundary conditions are usually expressed through linear or semilogarithmic approximations: (17) $\frac{\partial \bar{\mu}}{\partial r} = K\bar{\mu} - A$ or (18) $\frac{\partial \bar{\mu}}{\partial r} = B'\exp(K'\bar{\mu} - A')$. The behaviors expressed by the boundary conditions (17) and (18) are observed in the experimental curves only over bounded intervals; linear or semilogarithmic conditions cannot be extended to the entire range considered. Finally, for the boundary condition at the pipe edges, either Dirichlet conditions (constant potential at the edges) or Neumann conditions (no potential variation at the edges) [18] can be assumed. In this paper only the first one, that is, the Dirichlet condition, is considered. ## 3. Resolution The ODE for the potential corresponds to (15), and the boundary condition at the tube walls is obtained from the polarization curve. Assuming a constant potential at the tube edges, (19) $\bar{\mu}(z=0, r=R) = \bar{\mu}_1$, $\bar{\mu}(z=L, r=R) = \bar{\mu}_2$. It is important to mention that, under the conditions considered in this paper, the inequality proposed by Frumkin (11) is verified. In order to solve the ODE (15) analytically, the boundary condition given by the experimental curve is linearized by the following procedure. For each experimental point, a moving window containing the ten points before and the ten points after is considered. A simple linear model is fitted to this set by least squares, and from this model the slope of the tangent and its intersection with the vertical axis are obtained point by point. Thus, at each point a linear dependency of the form (20) $\frac{\partial \bar{\mu}}{\partial r} = i = a + b\bar{\mu}$ is obtained. To solve the problem, a discretization is performed in the following way: the tube is divided into sections whose lengths are decided by preliminary observation. In fact, shorter sections are taken close to the extremes (where the greatest potential change takes place) and longer ones in the central part of the tube. An edge potential is assigned to each pipe section. The electrochemical properties of each section are considered constant and evaluated as an average of the extreme values. If the coefficient $b$ of (20) is positive, the solution for the potential takes the following form: (21) $\bar{\mu} = c_1 e^{\psi z} + c_2 e^{-\psi z} - \frac{a}{b}$, with (22) $\psi = \sqrt{\frac{2\rho b}{r}}$. Imposing the boundary conditions (19), the coefficients $c_1$ and $c_2$ are obtained as follows: (23) $c_1 = \frac{(\bar{\mu}_2 + a/b) - (\bar{\mu}_1 + a/b)\,e^{-\psi L}}{2\sinh(\psi L)}$, $c_2 = \frac{(\bar{\mu}_1 + a/b)\,e^{\psi L} - (\bar{\mu}_2 + a/b)}{2\sinh(\psi L)}$.
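As an illustration of the procedure just described, the following Python sketch fits the moving-window linear model (20) to a synthetic polarization curve and evaluates the section solution (21)–(23) for one tube section. The polarization data, the applied edge potentials, and the section length are placeholders; $\rho = 5\ \Omega\cdot\mathrm{m}$ corresponds to the 0.2 S/m electrolyte used later in Section 5, and $r = 0.0127$ m to the 1″ tube.

```python
import numpy as np

# Sketch of the linearization (20) and the per-section solution (21)-(23).
# The polarization data below are synthetic placeholders; in the paper the
# curve is measured under flow conditions (Section 6).

mu = np.linspace(-0.6, 0.6, 121)       # electrochemical potential samples [V]
i_pol = 5.0 * np.tanh(4.0 * mu)        # synthetic current density [A/m^2]

def local_linear(mu, i_pol, k, half=10):
    """Least-squares line i = a + b*mu over a window of +/- `half` points
    around sample k, as in eq. (20)."""
    lo, hi = max(0, k - half), min(len(mu), k + half + 1)
    b, a = np.polyfit(mu[lo:hi], i_pol[lo:hi], 1)   # slope b, intercept a
    return a, b

def section_profile(mu1, mu2, a, b, rho, r, Lsec, z):
    """Potential profile (21) in one tube section of length Lsec, with psi
    from (22) and c1, c2 from (23); assumes b > 0."""
    psi = np.sqrt(2.0 * rho * b / r)
    s = np.sinh(psi * Lsec)
    c1 = ((mu2 + a / b) - (mu1 + a / b) * np.exp(-psi * Lsec)) / (2.0 * s)
    c2 = ((mu1 + a / b) * np.exp(psi * Lsec) - (mu2 + a / b)) / (2.0 * s)
    return c1 * np.exp(psi * z) + c2 * np.exp(-psi * z) - a / b

a, b = local_linear(mu, i_pol, k=70)
z = np.linspace(0.0, 0.1, 50)          # one 0.1 m section (assumed)
profile = section_profile(-1.0, -0.9, a, b, rho=5.0, r=0.0127, Lsec=0.1, z=z)
print("potential at section midpoint: %.4f V" % profile[len(z) // 2])
```

One can verify that the profile reproduces the imposed edge potentials at $z=0$ and $z=L_{sec}$, which is the check performed before chaining sections together in the iterative scheme described next.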
Formula (21) is used after obtaining $a$ and $b$ from (20), with the values of $c_1$ and $c_2$ given by (22) and (23). The final result is obtained iteratively, using Microsoft Excel for this purpose. For the first iteration, the corrosion potential is used as the condition at the interior points and the potentials of (19) are imposed at the extreme points. As a closure criterion, the supremum norm of the difference between successive steps must be less than an arbitrary epsilon. ## 4. Use of the Software Beasy-GID For the purpose of comparing the results of the previous method, the commercial software Beasy-GID is used. This package is designed to simulate corrosion systems using the boundary element method [19]. The information required by this software includes the system geometry (which is added as a geometrical model with the GID program), the electrical connections between the different metal components of the system (which are modeled as an electrical circuit at the software interface), and the polarization curves of the metals involved. ## 5. Prototype Condenser The prototype condenser is constructed with an admiralty brass tube (UNS C443 [20]), carbon steel boxes, and Muntz brass plates (UNS C268). The boxes are made of carbon steel, 30 × 30 × 30 cm, sandblasted and painted with a nonaqueous solvent-based acrylic to a final thickness of 120 microns. The tube was cut into sections joined by a nonconductive tube, with the sections electrically connected through an 8 mm² cable in order to measure the circulating current. At certain distances the pipe is perforated to introduce a zinc wire as a reference electrode. Additional zinc reference electrodes are incorporated in the boxes (see Figure 1). In order to polarize the system, platinized titanium electrodes, 1″ in diameter and 15 cm long, are incorporated in the boxes. The polarization of the system is carried out using a controlled voltage or current source (20 A maximum current and 30 V maximum voltage). The polarization of the tube sheet is chosen close to −1 V, approximately the corrosion potential of zinc (the most electronegative component of the brass). Figure 1 Prototype condenser and diagram of the circuit. The electrolyte solution used in the experiment was 0.1 M sodium sulfate, pH 7, with a conductivity of 0.2 S/m. ## 6. Determination of the Boundary Conditions A cell specifically built for this experiment is used (shown in Figure 2). Inside it, a half pipe to be tested is placed, constituting the working electrode. Figure 2 Cylindrical cell used under flow conditions. The counter electrode (platinized platinum) and the reference electrode (saturated calomel electrode, SCE) are also placed in the cell. The electrolyte is recirculated by a centrifugal pump. The flow is measured and controlled through a rotameter and a ball valve, respectively. The system temperature is controlled by water circulation through a radiator located in the electrolyte reservoir. The flow used in the experiment is 30 L/min and the electrolyte temperature is maintained at 20 ± 2°C. The material tested is admiralty brass, the tube diameter is 0.0254 m (1″), and the electrode area is 20 cm². The working electrode was kept in contact with the electrolyte for 24 hours under sealed conditions in the cell before the determination of the curve under flow conditions. The polarization curves are obtained by a linear scan (5 mV/min, between −600 and +600 mV versus SCE). The experimental curves obtained are used as boundary conditions in the simulation programs.
## 7. Results and Discussion Figure 3 shows the polarization curve obtained for admiralty brass under a flow condition of 30 L/min in sodium sulfate, pH 7. Figure 3 Polarization curve (current density versus applied potential). The polarization curve in Figure 3 exhibits two plateaus (between −0.04 and −0.01 V versus SCE and above 0.005 V) and an abrupt increase in current in the vicinity of 0 V versus SCE. This response is a consequence of the chemical complexity arising from the electrooxidation of the two principal alloying metals (copper and zinc), the influence of the microalloying element (tin), and the cathodic reactions that take place: hydrogen evolution and oxygen reduction [21–23]. Figure 4 shows the potential profiles for the prototype, the Beasy-GID simulation, and the proposed simulation. Figure 4 Plot of potential versus distance to the prototype tube edge: proposed and Beasy-GID simulations. The Beasy-GID simulation yields potential values that evolve more smoothly than those measured on the experimental prototype. It therefore predicts an influence of the polarization extending beyond the one measured experimentally with the prototype (approximately 100 diameters ≈ 2.5 m). The results obtained with our proposed simulation are closer to the values determined with the prototype, but it predicts a somewhat shorter reach for the protection: 0.15 m (approximately 6 pipe diameters). It can be observed that the data obtained from the prototype show high variability, possibly due to the condition of the zinc-wire reference electrodes and the turbulence generated at the measurement locations. The influence of the imposed potential can be estimated at 20 diameters (0.5 m), which indicates, for a typical 9 m long tube, that its center is not reached by the cathodic protection. ## 8. Final Considerations In the first place, a model has been presented for the distribution of electrochemical potential in a tube. This model was solved using two methods: the first based on commercial software and the second on a specific program developed by the working group. Secondly, an experimental cell was built in order to determine current and potential profiles for cylindrical geometries under electrolyte flow conditions. The results obtained by the developed program were satisfactory when compared with the experimental results. Consequently, from the technological viewpoint, both methods can predict the potential profile along the tube for varying boundary conditions, edge potentials, and flow conditions, without requiring laborious and costly prototype implementation at pilot scale. However, the cathodic protection does not extend beyond 20 pipe diameters, which indicates that other methods must be used to achieve effective protection in the remaining area. Since it is not possible to extend the protection to the entire length of the tube, other technological solutions must be implemented. One industrial possibility is the dosing of corrosion inhibitors; in particular, ferrous sulfate dosing is commonly used in open cooling systems [24]. As a closing remark, we recommend combining the effect of cathodic protection, which preserves the vicinity of the tube plate, with the effect of ferrous sulfate addition by employing consumable iron anodes [25]. --- *Source: 102363-2014-12-18.xml*
# An Aggressive Sphenoid Wing Meningioma Causing Foster Kennedy Syndrome **Authors:** Harpreet S. Walia; F. Lawson Grumbine; Gagan K. Sawhney; David S. Risner; Neal V. Palejwala; Matthew E. Emanuel; Sandeep S. Walia **Journal:** Case Reports in Ophthalmological Medicine (2012) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2012/102365 --- ## Abstract Foster Kennedy syndrome is a rare neurological condition with ophthalmic significance that can manifest as acute visual loss. It is classically characterised by unilateral optic nerve atrophy and contralateral papilledema resulting from an intracranial neoplasm. Physicians should consider Foster Kennedy syndrome in patients who present with visual loss and who have a history of intracranial neoplasm. In addition to ophthalmologic examination, neuroimaging is essential for the diagnosis of Foster Kennedy syndrome. --- ## Body ## 1. Introduction Foster Kennedy syndrome is a rare condition that classically involves optic nerve atrophy ipsilateral to an intracranial neoplasm with concomitant contralateral papilledema. As few as 37 cases have been completely documented between 1909 and 1989 [1]. We review the pathogenesis and common clinical manifestations of Foster Kennedy syndrome and highlight the role of neuroimaging in diagnosis. ## 2. Case History A 72-year-old white female with a medical history of an aggressive left sphenoid wing meningioma, initially treated with resection and subsequently with radiotherapy for a recurrence four years later, presented complaining of acute visual loss in her left eye. She also endorsed an associated left-sided retroorbital headache; she denied nausea, vomiting, or gait abnormalities. On physical exam, the patient had stable vital signs and was in no acute distress. Ophthalmologic exam revealed visual acuity of 20/50 in the right eye and finger counting in the left eye. A relative afferent pupillary defect was present in the left eye. Confrontation visual fields were full in the right eye but revealed significant generalized constriction in all four quadrants in the left eye. Slit lamp exam was remarkable only for moderate nuclear sclerotic cataracts in both eyes. On funduscopic exam, the right eye revealed a hyperemic, edematous optic disc with tortuous and dilated vessels and scattered drusen in the macula without subretinal fluid, and the left eye revealed a pale optic disc with scattered drusen in the macula without subretinal fluid. Given her history of known sphenoid wing meningioma, neuroimaging with MRI was obtained. Axial MRI scans revealed a large 4.4 cm × 4 cm × 3.4 cm mass in the left cavernous sinus extending into the left optic nerve and optic chiasm (see Figures 1 and 2). A coronal MRI scan confirmed infiltration into the sella turcica and sphenoid sinus (Figure 3). A lumbar puncture confirmed elevated intracranial pressure at 22 mmHg and did not show any signs of infection. Other diagnostic tests including complete blood count, complete metabolic panel, erythrocyte sedimentation rate, C-reactive protein, and electrocardiogram were unremarkable. Neuroimaging evidence of a cavernous sinus mass in the clinical scenario of a patient presenting with recurrent sphenoid wing meningioma with ipsilateral optic disc pallor and contralateral optic disc edema, along with elevated intracranial pressure, confirmed the Foster Kennedy syndrome.
## 3. Discussion Foster Kennedy syndrome is a rare entity found with intracranial neoplasms. First described in 1911, Foster Kennedy syndrome (also known as Gowers-Paton-Kennedy syndrome) [2] originates from a retrobulbar compressive optic neuropathy commonly caused by sphenoid wing meningioma, frontal lobe glioma, optic neuroglioma, olfactory glioma, chiasmal glioma, and craniopharyngioma [3]. Although more commonly associated with neoplasm, the syndrome can also be caused by vascular lesions, meningitis, internal carotid artery sclerosis, and Paget’s disease of the skull [3, 4]. The pathogenesis of the clinical findings cannot be attributed to a single mechanism. It is postulated that initially pressure, often secondary to an intracranial mass, arises on one side of the optic nerve. The increased cerebrospinal fluid pressure causes impaired ocular venous return and compression of the intraorbital part of the optic nerve in the subarachnoid space [5]. This mechanical compression results in atrophy of the ipsilateral nerve fiber layer, which in turn prevents the development of papilledema [5]. Concurrently, the pressure is transmitted to the contralateral nerve so that it is under a relatively higher pressure, but without mechanical compression. This elevated pressure without compression results in papilledema [4]. In the early stages of the syndrome, the contralateral papilledema often precedes the ipsilateral optic atrophy, and visual acuity can be retained with only mild pallor on funduscopic examination [6]. Anosmia and headache are often present in true Foster Kennedy syndrome [4]; however, they are not universal signs. Other common complications depend on the area of intracranial involvement and potentially include emotional lability, memory loss, nausea, vomiting, vertigo, hearing loss, extremity weakness, and facial paresis. Various ophthalmic signs and symptoms can be present depending on the localization of the tumor. Visual loss has been reported in 70% of cases; visual field defects, most notably central scotomata, in 9%; and extraocular motility limitation in 6% [5]. Transient visual obscurations can occur due to fluctuations in intracranial or systemic blood pressure that transiently compromise the optic nerve [7]. Foster Kennedy syndrome is a rare constellation of clinical symptoms and signs that may present emergently with visual loss or decreased visual acuity. Neuroimaging, as warranted by clinical suspicion from the history of ocular complaints and ophthalmologic exam findings, can assist physicians in determining the cause, such as an intracranial neoplasm, of Foster Kennedy syndrome. Physicians should consider Foster Kennedy syndrome in patients who present acutely with visual loss and who have a history of intracranial neoplasm. In addition to ophthalmologic examination, neuroimaging to evaluate for intracranial neoplasm is essential for the diagnosis of Foster Kennedy syndrome. Treatment options are aimed at resolving the underlying cause of the syndrome and are beyond the scope of this case report. --- *Source: 102365-2012-05-17.xml*
# Developing a Distributed Consensus-Based Cooperative Adaptive Cruise Control System for Heterogeneous Vehicles with Predecessor Following Topology **Authors:** Ziran Wang; Guoyuan Wu; Matthew J. Barth **Journal:** Journal of Advanced Transportation (2017) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2017/1023654 --- ## Abstract Connected and automated vehicles (CAVs) have recently become an increasingly popular topic. As an application, Cooperative Adaptive Cruise Control (CACC) systems are of high interest, allowing CAVs to communicate with each other and coordinate their maneuvers to form platoons, where one vehicle follows another with a constant velocity and/or time headway. In this study, we propose a novel CACC system, in which a distributed consensus algorithm and protocol are designed for platoon formation, merging maneuvers, and splitting maneuvers. The predecessor following information flow topology is adopted for the system, where each vehicle only communicates with its following vehicle to reach consensus of the whole platoon, making the vehicle-to-vehicle (V2V) communication fast and accurate. Moreover, unlike most studies, which assume the type and dynamics of all the vehicles in a platoon to be homogeneous, we take into account the length, the location of the GPS antenna on the vehicle, and the braking performance of different vehicles. A simulation study has been conducted under scenarios including normal platoon formation, platoon restoration from disturbances, and merging and splitting maneuvers. We have also carried out a sensitivity analysis of the distributed consensus algorithm, investigating the effect of the damping gain on the convergence rate, driving comfort, and driving safety of the system. --- ## Body ## 1. Introduction Recently, the rapid development of transportation systems has contributed to worldwide economic prosperity, making the transportation of both passengers and goods much more convenient domestically and internationally. The number of motor vehicles worldwide is estimated to be more than 1 billion now and is expected to double within one or two decades [1]. Such a huge quantity of motor vehicles and intensive transportation activities have brought about various socioeconomic issues. For example, more than 30,000 people still perish in roadway crashes on US highways every year [2]. In the past few years, cities that have experienced greater economic improvement have been at higher risk of facing worsening traffic conditions, resulting in increased pollutant emissions and decreased travel efficiency. In terms of average time wasted on the road, Los Angeles, for example, tops the global ranking with 104 hours spent in congestion per commuter during 2016 [3]. It was also estimated in [4] that 3.1 billion gallons of fuel were wasted worldwide due to traffic congestion in 2014, which equates to approximately 19 gallons per commuter. Significant efforts have been made around the world to address these transportation issues. Many propose simply expanding the existing transportation infrastructure to help solve these traffic-related problems. However, not only is this costly, but it also has many negative social and environmental effects. As an alternative solution, the development of connected and automated vehicles (CAVs) can help better manage traffic, thus improving traffic safety, mobility, and reliability without the cost of infrastructure build-out.
One of the more promising CAV applications is Cooperative Adaptive Cruise Control (CACC), which extends Adaptive Cruise Control (ACC) with CAV technology, mainly via vehicle-to-vehicle (V2V) communication [5]. By sharing information among vehicles, a CACC system allows vehicles to form platoons and be driven at harmonized speeds with constant time headways between vehicles. The main advantages of a CACC system are as follows: (a) connected and automated driving is safer than human driving because it minimizes driver distractions; (b) roadway capacity is increased owing to the reduction of intervehicle time gaps without compromising safety; and (c) fuel consumption and pollutant emissions are reduced owing to the reduction of both unnecessary acceleration maneuvers and aerodynamic drag on the vehicles in the platoon [6]. The core of a CACC system is the vehicle-following control model, which depends on the vehicle information flow topology. The topology determines how all CAVs in a CACC system communicate with one another, and it has been well studied. Zheng et al. [7] proposed several typical types of information flow topologies, including the predecessor following, predecessor-leader following, and bidirectional types. In our research, each vehicle in the CACC system only receives information from its predecessor (if it exists), which is exactly the predecessor following type. The vehicle-following controller efficiently describes the vehicle dynamics and cooperative maneuvers residing in the system. The performance and robustness of a CACC consensus algorithm were discussed in [8], where packet loss, network failures, and beaconing frequencies were all taken into consideration when the simulation framework was built with the CACC controller developed in [9]. Di Bernardo et al. [10] designed a distributed control protocol to solve the platooning problem, which depends on a local action of the vehicle itself and a cooperative action received from the leader and neighboring vehicles. Lu et al. [11] used a nonlinear model to describe the vehicle longitudinal dynamics, where the engine power, gravity, road and tire resistance, and aerodynamic drag are all considered. However, since the complexity of such nonlinear models is problematic for system analysis, a linearized model is typically used for field deployment, such as the one in [12]. Wang et al. [13] proposed an Eco-CACC system with a novel platoon gap opening and closing protocol to reduce platoon-wide fuel consumption and pollutant emissions. Based on this study, Hao et al. [14] developed a bilevel model to synthetically analyze the platoon-wide impact of the disturbances caused by vehicles joining and leaving the Eco-CACC system. Amoozadeh et al. [15] developed a platoon management protocol for CACC vehicles, including CACC longitudinal control logic and platoon merge and split maneuvers. In terms of intervehicle distance in motion (at relatively high speed), the existing vehicle-following models can be divided into two categories: one regulates the spatial gap, where a vehicle follows its predecessor at a fixed intervehicle distance [16]; the other is based on a time gap or velocity-dependent distance, where the intervehicle distance may vary with vehicle velocity and vehicle length while keeping a constant time headway. Our approach falls into the second category. Stability is a basic requirement to ensure the safety of a CACC system. The control system should be capable of dealing with various disturbances and uncertainties.
Laumônier et al. [17] proposed a reinforcement learning approach to design the CACC system, where the system is modeled as a Markov Decision Process combined with stochastic game theory. They showed that the system was capable of damping small disturbances throughout the platoon. The uncertainties in the communication network and sensor information were modeled by a Gaussian distribution in [18], which was applied to calculate the minimal time headway for safety reasons. Qin et al. [19] studied the effects of stochastic delays on the dynamics of connected vehicles by analyzing the mean dynamics. Plant and string stability conditions were both derived, and the results showed that the stability domains shrink as the packet drop ratio or the sampling time increases. In [20], the propagation of motion signals was attenuated by adjusting the controller parameters in the system, which guaranteed the so-called string stability of the platoon. Since the inherent communication time delay and vehicle actuator delay significantly limit the minimum intervehicle distance in view of string stability requirements, Xing et al. [21] carried out Padé approximations of the vehicle actuator delay to arrive at a finite-dimensional model. It was shown in [22] that the standard constant time-gap spacing policy can guarantee string stability of the platoon as long as a sufficiently large time gap is maintained. In this study, we also adopt the time-gap spacing policy and select a time gap large enough to ensure the platoon's string stability. A simulation study of platoon restoration after disturbances is presented to further demonstrate the string stability of our system. Communication plays a crucial role in the formation of a CACC system. The United States Department of Transportation (USDOT) developed the Connected Vehicle Reference Implementation Architecture (CVRIA) to provide the communication framework for different applications, including V2V and Vehicle-to-Infrastructure (V2I) communications [23]. IEEE 802.11p-based Dedicated Short Range Communication (DSRC) has been developed by the automotive industry for use in V2V and V2I communication and is considered a promising wireless technology for improving both transportation safety and efficiency. Bai et al. [24] used a large set of empirical measurement data taken in various realistic driving environments to characterize the communication properties of DSRC. Since an increase of CAVs in a given coverage area may lead to a shortage of communication bandwidth, a distributed methodology is more advantageous for vehicular communication. In our study, V2V communication is only conducted between predecessor and follower, making the proposed system more distributed. Essentially, the proposed system differs from a conventional Adaptive Cruise Control (ACC) system for the following reasons. (1) In the proposed system, although forward ranging sensing techniques such as camera, radar, and lidar (Light Detection and Ranging) might be needed as supplementary methods, the core technique enabling CAVs to form platoons is V2V communication. CAVs send their absolute position and instantaneous velocity, measured by on-board sensors (e.g., high-precision GPS, inertial measurement unit, and on-board diagnostic system), to their followers via V2V communication. In a conventional ACC system, by contrast, V2V communication is not enabled, and vehicles need to use their forward ranging sensing equipment to obtain their predecessors' information.
(2) A conventional ACC system can only implement the vehicle-following function, whereas the proposed CACC system also allows an individual vehicle to merge into the platoon by using V2V communication. “Ghost” vehicles are created as predecessors for following vehicles to follow; since they are virtual and exist only in V2V communication, it is impossible for forward ranging sensing techniques to sense them. (3) The measurement delay of forward ranging sensing techniques in a conventional ACC system is clearly different from the V2V communication delay of DSRC in the proposed system, which leads to different system behaviors in different scenarios, especially the one discussed in Section 3.2. Despite the advantages of the consensus-based platooning approach for the CACC system, several issues still need to be addressed to improve its reliability and practicality. (a) The primary V2V communication method in use nowadays is DSRC, which normally has a 300-meter transmission range [24]. As the transmission distance increases, the safety message reception probability decreases dramatically, and the received signal strength indicator (RSSI) at the DSRC antenna also decreases [25, 26]. However, many existing CACC systems, such as [27], adopted the predecessor-leader following information flow topology, which requires the leader of a platoon to communicate with all the vehicles in broadcast mode. Therefore, when a platoon expands to a bigger size, the V2V communication between the leader and the last vehicle may suffer a lower RSSI or be impaired by obstructions along the platoon. In this study, we adopt the predecessor following information flow topology (i.e., “distributed”), where each vehicle in the platoon only communicates with its following vehicle to reach consensus of the whole platoon. Therefore, the platoon size is not limited by the DSRC transmission range, and the V2V communication has a higher safety message reception probability and a higher RSSI than in the predecessor-leader following topology. (b) Most existing CACC-related research has considered the vehicles in the system only as homogeneous point-mass models. In reality, however, vehicles are heterogeneous, with different lengths and braking performances. Therefore, in this study we take into account the vehicle length together with the position of the GPS antenna on the vehicle. Moreover, according to their different braking performances, we assign different braking factors to different types of vehicles in our system, allowing the intervehicle distances to be weighted based on these factors. (c) While information flow topologies and algorithms have been well studied, not many protocols have been developed to apply the theory to real-world transportation systems, especially for different traffic scenarios. In this study, we design protocols for the normal platoon formation scenario and the merging and splitting scenario. A sensitivity analysis is also conducted to study practical issues of the proposed CACC system, including the convergence rate of a platoon, the driving comfort for human passengers, and the driving safety of the whole system. By optimizing the damping gain value of our algorithm, the proposed system is designed to be efficient, comfortable, and safe. The remainder of this paper is organized as follows. Section 2 describes the methodology used for our distributed consensus-based CACC system. Section 3 describes the detailed simulation study and analyzes the results.
Section 4 is focused on a sensitivity analysis for different aspects of driving in our CACC system. The last section provides general conclusions and outlines some future steps. ## 2. Methodology ### 2.1. Mathematical Preliminaries and Nomenclature We represent the information flow topology of a distributed network of vehicles by a directed graph $G=(V,E)$, where $V=\{1,2,\ldots,n\}$ is a finite nonempty node set and $E\subseteq V\times V$ is an edge set of ordered pairs of nodes, called edges. The edge $(i,j)\in E$ denotes that vehicle $j$ can obtain information from vehicle $i$; the converse does not necessarily hold. The neighbors of vehicle $i$ are denoted by $N_i=\{j\in V : (i,j)\in E\}$. The topology of the graph is associated with an adjacency matrix $A=[a_{ij}]\in\mathbb{R}^{n\times n}$, defined such that $a_{ij}=1$ if edge $(j,i)\in E$, $a_{ij}=0$ if edge $(j,i)\notin E$, and $a_{ii}=0$. $L=[l_{ij}]\in\mathbb{R}^{n\times n}$ (i.e., $l_{ij}=-a_{ij}$ for $i\neq j$, and $l_{ii}=\sum_{j=1,\,j\neq i}^{n} a_{ij}$) is the nonsymmetrical Laplacian matrix associated with $G$. A directed spanning tree is a directed tree, formed by graph edges, which connects all the nodes of the graph. Before proceeding to the design of our distributed consensus algorithm for the CACC system, we recall some basic consensus algorithms that apply similar dynamics to the information states of vehicles. If the communication between vehicles in the distributed network is continuous, then a differential equation can be used to model the information state update of each vehicle. The single-integrator consensus algorithm [28] is given by (1) $\dot{x}_i(t) = -\sum_{j=1}^{n} g_{ij} k_{ij}\,[x_i(t) - x_j(t)]$, $i\in V$, where $x_i\in\mathbb{R}$, $k_{ij}>0$, and $g_{ij}=1$ if information flows from vehicle $j$ to $i$ and $0$ otherwise, $\forall i\neq j$. The adjacency matrix $A$ of the information flow topology is defined accordingly as $a_{ii}=0$ and $a_{ij}=g_{ij}k_{ij}$, $\forall i\neq j$. This consensus algorithm guarantees convergence of multiple agents to a collective decision via local interactions. Equation (1) can be extended to second-order dynamics to better model the movement of a physical entity, such as a CAV. For a second-order model, the double-integrator consensus algorithm [29] is given by (2) $\dot{x}_i(t)=v_i(t)$, $\dot{v}_i(t) = -\sum_{j=1}^{n} g_{ij} k_{ij}\,[x_i(t)-x_j(t) + \gamma\,(v_i(t)-v_j(t))]$, $i\in V$, where $x_i\in\mathbb{R}$, $v_i\in\mathbb{R}$, $k_{ij}>0$, $\gamma>0$, and $g_{ij}=1$ if information flows from vehicle $j$ to $i$ and $0$ otherwise, $\forall i\neq j$. ### 2.2. System Specifications and Assumptions Since our study mainly focuses on the communication topology and control algorithm of the system, we make some reasonable assumptions in modelling the general system to enable the theoretical analysis. (a) All vehicles are CAVs with the ability to send and receive information within the same transmission range, and there is no vehicle actuator delay in the proposed system. (b) Every vehicle in the proposed system is equipped with appropriate sensors (e.g., high-precision GPS, inertial measurement unit, and on-board diagnostic system) to measure its absolute position and instantaneous velocity, and the measurements are precise and noise-free. (c) Vehicle types are heterogeneous, with different vehicle lengths, GPS antenna locations, and braking performances. ### 2.3. Distributed Consensus Algorithm for the CACC System The objective of the distributed consensus-based CACC system is to use algorithms and protocols that ensure consensus of a platoon of vehicles. Toward this end, the meaning of consensus is twofold: one is absolute position consensus, where a vehicle maintains a certain distance from its predecessor, and the other is velocity consensus, where a vehicle maintains the same velocity as its predecessor.
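As a minimal numerical illustration of (2), rather than of the platoon algorithm proposed below, the following Python sketch integrates the double-integrator consensus update for a three-vehicle predecessor-following chain; the gains, time step, and initial states are assumed values. Because (2) contains no spacing offsets, the position differences converge to zero (the agents rendezvous); it is precisely the vehicle length and time gap terms added in (3) that turn this rendezvous into spaced platooning.

```python
import numpy as np

# Minimal sketch of the double-integrator consensus update (2) for a
# three-vehicle predecessor-following chain (vehicle 1 is the leader).
# Gains, time step, and initial states are assumed for illustration.

n, dt, steps = 3, 0.01, 6000
k, gamma = 1.0, 2.0                      # coupling gain k_ij and damping gain
g = np.zeros((n, n))                     # g[i][j] = 1 if i receives info from j
g[1, 0] = 1.0                            # vehicle 2 listens to vehicle 1
g[2, 1] = 1.0                            # vehicle 3 listens to vehicle 2

x = np.array([0.0, -15.0, -40.0])        # initial positions [m]
v = np.array([20.0, 18.0, 22.0])         # initial velocities [m/s]

for _ in range(steps):                   # forward-Euler integration of (2)
    dv = np.zeros(n)                     # leader receives no input: dv[0] = 0
    for i in range(n):
        for j in range(n):
            dv[i] -= g[i, j] * k * ((x[i] - x[j]) + gamma * (v[i] - v[j]))
    x = x + v * dt
    v = v + dv * dt

print("position spreads:", np.round(x[0] - x[1], 3), np.round(x[1] - x[2], 3))
print("velocity spreads:", np.round(v[0] - v[1], 3), np.round(v[1] - v[2], 3))
```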
Taking into account second-order vehicle dynamics, we propose the following distributed consensus algorithm for the CACC system, for $i=2,\ldots,n$, $j=i-1$: (3) $\dot{x}_i(t)=v_i(t)$, $\dot{v}_i(t) = -a_{ij}\,\{x_i(t) - x_j(t-\tau_{ij}(t)) + l_i^f + l_j^r + \dot{x}_j(t-\tau_{ij}(t))\,[t_{ij}^g + \tau_{ij}(t)]\,b_i\} - \gamma\,a_{ij}\,[\dot{x}_i(t) - \dot{x}_j(t-\tau_{ij}(t))]$, where vehicle $i$'s predecessor is vehicle $j$; $x_i(t)$ is the absolute position of the GPS antenna on vehicle $i$ at time $t$; $\dot{x}_i(t)$ or $v_i(t)$ is the velocity of vehicle $i$ at time $t$; $\tau_{ij}(t)$ is the unavoidable time-varying communication delay when information is transmitted from vehicle $j$ to vehicle $i$ at time $t$; $l_i^f$ is the length between the GPS antenna and the front bumper of vehicle $i$; $l_j^r$ is the length between the GPS antenna and the rear bumper of vehicle $j$; $t_{ij}^g$ is the desired intervehicle time gap between vehicle $i$ and vehicle $j$; $b_i$ is the braking factor of vehicle $i$; $a_{ij}$ is the $(i,j)$th entry of the adjacency matrix; and $\gamma$ is a damping gain. The part $x_i(t) - x_j(t-\tau_{ij}(t)) + l_i^f + l_j^r + \dot{x}_j(t-\tau_{ij}(t))\,[t_{ij}^g + \tau_{ij}(t)]\,b_i$ is the absolute position consensus term, and the part $\dot{x}_i(t) - \dot{x}_j(t-\tau_{ij}(t))$ is the velocity consensus term. The positions of vehicles in the proposed CACC system are illustrated in Figure 1. Figure 1 Positions of vehicles in the proposed system. With (3), consensus is reached by a platoon of vehicles if, for all $x_i(0)$ and $\dot{x}_i(0)$ and all $i=2,\ldots,n$, $j=i-1$, as $t\to\infty$, (4) $x_j(t-\tau_{ij}(t)) - x_i(t) \to l_i^f + l_j^r + \dot{x}_j(t-\tau_{ij}(t))\,[t_{ij}^g + \tau_{ij}(t)]\,b_i$ and $\dot{x}_i(t) - \dot{x}_j(t-\tau_{ij}(t)) \to 0$, which means the absolute position difference of the two vehicles converges to a velocity-determined distance plus two constant vehicle length terms, while the velocity difference of the two vehicles converges to zero. Details of the convergence analysis of (3) can be found in Appendix B. As mentioned in Section 1, a common issue regarding CACC systems is string stability. This refers to the capability of vehicles in platoons to attenuate traffic shockwaves. Generally, string stability is defined with respect to the propagation of vehicle spacing errors and/or vehicle accelerations [30]. In particular, if we define $d_{ij}$ as the vehicle spacing error (i.e., intervehicle distance) between two consecutive vehicles in a platoon, then string stability with respect to vehicle spacing error requires that (5) $\left\|\frac{D_{i+1,j+1}(s)}{D_{ij}(s)}\right\|_\infty \le 1$, where $D_{ij}(s)$ is the Laplace transform of the vehicle spacing error $d_{ij}$. This criterion can therefore be applied to guarantee that vehicle spacing errors are not amplified upstream in the platoon. Likewise, if we define $a_i$ as the acceleration of vehicle $i$ in a platoon, then string stability with respect to vehicle acceleration implies that (6) $\left\|\frac{A_{i+1}(s)}{A_i(s)}\right\|_\infty \le 1$, where $A_i(s)$ is the Laplace transform of the vehicle acceleration $a_i$. This guarantees that vehicle accelerations are not amplified upstream in the platoon. We adopt (6) to analyze the string stability of our system, and the details are discussed in Appendix C. Simulation results in Section 3.2 show that the tuning parameters in (3) are chosen to guarantee string stability of the system. The braking performance of a vehicle can be affected by many factors, including the mass of the vehicle and its aerodynamic performance. We assign a braking factor $b_i$, assumed to be an aggregate of the aforementioned factors, to each vehicle of the proposed CACC system. This braking factor does not itself affect, but rather reflects, the braking performance of vehicles. Specifically, it works as a weighting factor of the desired intervehicle distance $d_{ij}$ (safety braking distance), making it different for different vehicles in the proposed system.
In this study, the braking factors are assumed to be known constants; the exact methodology for calculating the braking factor is discussed as future research in Section 5. We assume that each vehicle in the proposed system receives its absolute position (location) from a GPS antenna installed at a certain position on the vehicle's roof. Both the length between the antenna and the front bumper, $l_i^f$, and the length between the antenna and the rear bumper, $l_i^r$, of each vehicle are assumed to be known constants. Thus the length of vehicle $i$ is $l_i = l_i^f + l_i^r$. We use the time gap $t_{ij}^g$ to adjust the intervehicle distances, which are subject to changes in the vehicles' velocities. Referring to Figure 1, the relationship between time headway and time gap can be written as $t^h = t_{ij}^g b_i + (l_i^f + l_j^r)/\dot{x}_j(t-\tau_{ij}(t))$. The damping gain $\gamma$ needs to meet a specific requirement to ensure the convergence of the distributed consensus algorithm, which will be analyzed in Section 4 of the paper. Equation (3) is designed for all but the leading vehicle in our CACC system. The dynamics of the leading vehicle can be characterized as (7) $\dot{x}_1(t)=v_1(t)$, $\dot{v}_1(t)=a_1(t)$, where $x_1(t)$, $v_1(t)$, and $a_1(t)$ represent the absolute position, velocity, and acceleration of the leading vehicle, respectively. The leading vehicle of a platoon is set to cruise at a certain velocity, with $a_1(t)=0$. Equation (3) allows all the following vehicles in the platoon to track the dynamics of the leading vehicle in the scenarios above. ### 2.4. Distributed Consensus Protocol for the CACC System Considering the different scenarios in our system, two protocols are designed in the following. #### 2.4.1. Normal Platoon Formation Protocol This protocol is designed for vehicles to form a platoon. After the platoon formation mode is activated, vehicle $i$ in our CACC system needs to check whether there is a predecessor within a certain distance $r$. (a) If yes, then vehicle $i$ will communicate with its predecessor and (3) will be applied, making vehicle $i$ a following vehicle. (b) If no, then vehicle $i$ may become the leading vehicle of a platoon (where $i=1$) and cruise at a constant velocity. The driver can also take over control and drive freely, but the vehicle may still potentially act as the leading vehicle of a platoon. After the above procedure, vehicle $i$ is in the distributed consensus-based CACC system, whether it plays the role of a following vehicle or a leading vehicle. However, the “following” and “leading” roles of vehicle $i$ may switch under the following conditions. (a) For a following vehicle $i$, if all of its predecessors move out of the distance $r$ ahead of vehicle $i$, then vehicle $i$ changes from a following vehicle to a leading vehicle, where $i=1$. (b) For a leading vehicle $i$ (i.e., $i=1$), if one or more vehicles move into the distance $r$ ahead of vehicle $i$, then vehicle $i$ changes from a leading vehicle to a following vehicle, where $i=2,\ldots,n$. Figure 2 shows the flowchart of this protocol for the distributed consensus-based CACC system. Figure 2 Normal platoon formation protocol. #### 2.4.2. Merging and Splitting Maneuvers Protocol The normal platoon formation protocol addresses longitudinal maneuvers, while the merging and splitting maneuvers protocol is aimed at handling lateral maneuvers (i.e., lane changes).
It is introduced in [31] that there are four different cases of lane changes within platoon maneuvers: (1) free-agent-to-free-agent lane change, (2) free-agent-to-platoon lane change, (3) platoon-to-free-agent lane change, and (4) platoon-to-platoon lane change. In this study, we focus on the second and third cases. Since this part is about applying the proposed algorithm (3) to lane change scenarios, focusing on the gap creation and gap closure maneuvers implemented via V2V communication, the specific lane change behavior itself is considered a manual driving behavior. For the case where vehicle $i$ (as a free agent) tries to merge into a platoon on the adjacent lane, after the merging mode is activated, vehicle $i$ will communicate with the platoon leader and decide which position it will take in the platoon, as shown in Figure 3(a). If it decides to be the $j$th vehicle of the platoon after the merging maneuvers, then a “ghost” vehicle mirroring vehicle $j-1$ of the platoon is created in front of vehicle $i$, as shown in Figure 3(b). This “ghost” vehicle has the same parameters as vehicle $j-1$ except for the lateral position. Then, vehicle $i$ automatically adjusts its absolute position and velocity with respect to the “ghost” vehicle by (3). After that, vehicle $i$ sends a merging signal to vehicle $j+1$ in the platoon, as shown in Figure 3(c). Upon receiving the merging signal, a “ghost” vehicle mirroring vehicle $i$ is created in front of vehicle $j+1$, and vehicle $j+1$ starts to adjust its absolute position and velocity by (3) to create a gap for vehicle $i$, as shown in Figure 3(d). After the gap is fully created, vehicle $j+1$ sends a confirmation signal to vehicle $i$, and vehicle $i$ merges into the platoon, as shown in Figure 3(e). Figure 3 Merging maneuvers protocol (assuming merging into the 2nd position), panels (a)–(e). The case where vehicle $j$ (in the platoon) tries to split from the platoon is easier. Two strategies for splitting maneuvers, or so-called CACC string dissolution, are studied in [32]. The most efficient action is for the departing driver to perform a simple lane change in the direction of the off-ramp. The other strategy is for the departing vehicle to deactivate the CACC function by tapping the brakes before changing lanes, creating a split in the CACC string and becoming the manually driven leader of the new platoon until it moves out of the lane. In our system, we adopt the first strategy. After the splitting mode is activated, the driver can take over the lateral control of the vehicle and perform the lane change without adjusting the velocity longitudinally. After vehicle $j$ completes the lane change, vehicle $j+1$ is informed that its predecessor has changed from vehicle $j$ to vehicle $j-1$ and therefore adjusts its velocity to close the gap. A new platoon is formed, where vehicle $j+1$ becomes vehicle $j$, vehicle $j+2$ becomes vehicle $j+1$, and so on.
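To make the algorithm concrete, the following minimal Python sketch simulates a heterogeneous three-vehicle platoon under (3); the communication delay $\tau_{ij}(t)$ is set to zero for simplicity, and all lengths, braking factors, gains, and initial states are assumed illustrative values rather than the settings of the simulation study in Section 3. With these values, each follower's spacing settles near its target $l_i^f + l_j^r + \dot{x}_j\, t_{ij}^g\, b_i$, mirroring the consensus statement (4).

```python
import numpy as np

# Sketch of the proposed consensus algorithm (3) for one leader and two
# heterogeneous followers, with tau_ij = 0 for simplicity. Vehicle lengths
# (antenna to front/rear bumper), braking factors, gains, and the time gap
# are illustrative values, not those used in the paper.

dt, steps = 0.01, 8000
gamma, a_ij, t_gap = 2.0, 1.0, 0.6       # damping gain, adjacency entry, time gap [s]
l_f = np.array([2.0, 2.5, 1.5])          # GPS antenna to front bumper [m]
l_r = np.array([2.0, 2.5, 1.5])          # GPS antenna to rear bumper [m]
b = np.array([1.0, 1.2, 0.9])            # braking factors

x = np.array([0.0, -30.0, -70.0])        # antenna positions [m]; index 0 is the leader
v = np.array([20.0, 15.0, 25.0])         # velocities [m/s]

for _ in range(steps):
    dv = np.zeros(3)                     # leader cruises at constant speed: dv[0] = 0, eq. (7)
    for i in (1, 2):
        j = i - 1                        # predecessor-following topology
        # absolute position consensus term of (3), with tau = 0
        gap_err = x[i] - x[j] + l_f[i] + l_r[j] + v[j] * t_gap * b[i]
        # velocity consensus term of (3)
        dv[i] = -a_ij * gap_err - gamma * a_ij * (v[i] - v[j])
    x = x + v * dt                       # forward-Euler integration
    v = v + dv * dt

for i in (1, 2):
    spacing = x[i - 1] - x[i]
    target = l_f[i] + l_r[i - 1] + v[i - 1] * t_gap * b[i]
    print(f"vehicle {i + 1}: spacing {spacing:.2f} m, target {target:.2f} m, speed {v[i]:.2f} m/s")
```

Note how the braking factor $b_i$ enters only as a weight on the velocity-dependent part of the desired spacing, so a follower with poorer braking performance (larger $b_i$) settles at a proportionally longer gap.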
## 3. Simulation Study

We use MATLAB Simulink [33] to simulate three different scenarios of our distributed consensus-based CACC system. For the sake of brevity, in the simulation study we assume that the communication delay between two CACC-equipped vehicles is $\tau_{ij}(t) = 60$ ms [9]. Results for vehicle velocity and for weighted and unweighted intervehicle distance are shown for the different scenarios.

### 3.1. Normal Platoon Formation

In the first scenario, we assume that there are four CAVs of different types (i.e., 2 sedans, 1 SUV, and 1 truck) driving at randomly varied velocities on the same lane of a highway. At a certain time (t = 0), they all switch on the platoon mode. From then on, they adjust their absolute positions and velocities based on (3) and (7) as well as the normal platoon formation protocol to reach consensus and form a platoon. The vehicle parameters of this distributed consensus-based CACC system are listed in Table 1.

Table 1: Values of vehicle parameters.

| Parameters | Vehicle 1 | Vehicle 2 | Vehicle 3 | Vehicle 4 |
| --- | --- | --- | --- | --- |
| GPS antenna to front bumper $l_i^f$ | 3 m | 3 m | 3 m | 6 m |
| GPS antenna to rear bumper $l_i^r$ | 2 m | 2 m | 2 m | 4 m |
| Braking factor $b_i$ | 1 | 1 | 1.1 | 1.6 |
| Initial velocity $\dot{x}_i(0)$ | 30 m/s | 33 m/s | 36 m/s | 39 m/s |
| Desired velocity $\dot{x}_i$ | 30 m/s | 30 m/s | 30 m/s | 30 m/s |
| Initial time gap $t_{ij}^{g,0}$ | — | 0.91 s | 1.11 s | 1.67 s |
| Initial weighted intervehicle distance $d_{ij}^0$ | — | 30 m | 40 m | 65 m |
| Desired time gap $t_{ij}^g$ | — | 0.43 s | 0.48 s | 0.69 s |
| Desired time headway $t_{ij}^h$ | — | 0.6 s | 0.64 s | 0.86 s |
| Desired weighted intervehicle distance $d_{ij}$ | — | 13 m | 14.3 m | 20.8 m |
| Desired unweighted intervehicle distance $d_{ij}/b_i$ | — | 13 m | 13 m | 13 m |

As can be seen from Table 1, we assume that vehicles 1 and 2 are sedans with vehicle lengths of 5 m and braking factors of 1, vehicle 3 is an SUV with a vehicle length of 5 m and a braking factor of 1.1, and vehicle 4 is a truck with a vehicle length of 10 m and a braking factor of 1.6. We further assume that the GPS antenna is located at a point on each vehicle satisfying $2l_i^f = 3l_i^r$. The weighted intervehicle distances are used instead of time gaps to measure the consensus of vehicles' absolute positions in a more intuitive manner. They can be written as

$$d_{ij}^{0} = \dot{x}_i(0)\, t_{ij}^{g,0}, \qquad d_{ij} = \dot{x}_i(t)\, t_{ij}^{g}. \tag{8}$$

As a key parameter, the damping gain $\gamma$ in (3) affects the convergence rate of the absolute positions and velocities of all the vehicles in the platoon. In this study, $\gamma = 7$ is used in all three simulation scenarios. A more detailed analysis of how the value of $\gamma$ affects the system performance (e.g., driving safety and driving comfort) is conducted in the next section. By implementing our distributed consensus-based strategy, the simulation results of our CACC system are shown in Figures 4(a)–4(c).
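Before turning to the results, the following minimal sketch (ours, in Python rather than the paper's Simulink model) integrates a delay-free version of (3) with the Table 1 parameters. The forward-Euler step, the initial antenna positions reproducing the 30/40/65 m initial gaps, and a common base time gap of 0.433 s scaled by $b_i$ are our assumptions, chosen so that the steady state matches Table 1.

```python
# Minimal, delay-free sketch of the consensus law (3) for the Table 1 platoon.
# Assumptions (ours, not the paper's): forward-Euler integration, tau_ij = 0,
# and a common base time gap TG = 0.433 s that the braking factor b_i scales.
import numpy as np

GAMMA, TG, DT, T_END = 7.0, 0.433, 0.01, 60.0
l_f = [3.0, 3.0, 3.0, 6.0]      # GPS antenna to front bumper [m]
l_r = [2.0, 2.0, 2.0, 4.0]      # GPS antenna to rear bumper [m]
b   = [1.0, 1.0, 1.1, 1.6]      # braking factors

x = np.array([200.0, 165.0, 120.0, 47.0])  # antenna positions -> 30/40/65 m gaps
v = np.array([30.0, 33.0, 36.0, 39.0])     # initial velocities [m/s]

for _ in range(int(T_END / DT)):
    a = np.zeros(4)
    for i in range(1, 4):       # followers only; the leader cruises, cf. (7)
        j = i - 1               # predecessor-following topology
        # position consensus term plus damped velocity consensus term of (3)
        err = x[i] - x[j] + l_f[i] + l_r[j] + v[j] * TG * b[i]
        a[i] = -err - GAMMA * (v[i] - v[j])
    x += v * DT
    v += a * DT

gaps = [x[j] - x[i] - l_f[i] - l_r[j] for i, j in ((1, 0), (2, 1), (3, 2))]
print(np.round(v, 2))     # ~[30, 30, 30, 30] m/s
print(np.round(gaps, 1))  # ~[13.0, 14.3, 20.8] m, the Table 1 weighted gaps
```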
Figure 4 Simulation results of normal platoon formation.

Figure 4(a) shows that, after the platoon mode is activated at t = 0, all three unweighted intervehicle distances converge to 13 m in around 35 seconds. This unweighted intervehicle distance can be considered a “virtual” target value we set for the system to achieve, not the “real” intervehicle distance. Figure 4(b) shows the results for the weighted intervehicle distance. Owing to the braking factor, the steady-state weighted intervehicle distance varies across vehicle pairs. The weighted intervehicle distance indicates the “real” intervehicle distance in our CACC system. In this case, at the steady state of the system, vehicles 1 and 2 have a 13 m (0.43 s) gap, vehicles 2 and 3 have a 14.3 m (0.48 s) gap, and vehicles 3 and 4 have a 20.8 m (0.69 s) gap. Figure 4(c) shows that the velocities of the four vehicles converge within around 35 seconds after the platoon mode is activated. After running the distributed consensus algorithm, they all converge to 30 m/s, which is the constant velocity of the leading vehicle and also the desired velocity of this platoon.

### 3.2. Platoon Restoration from Disturbances

In this scenario, a simulation test is conducted to demonstrate the string stability of our CACC system, where the distributed consensus algorithm has the capability of attenuating the impact of sudden disturbances. In the platoon mode of our distributed consensus-based CACC system, if one vehicle (e.g., the leading vehicle) suddenly brakes and reduces its velocity in an emergency, then the following vehicles will decelerate accordingly to maintain their weighted intervehicle distances.

For example, we assume that all the parameters remain the same as in the first scenario. At time t = 45 s, suppose that the leading vehicle suddenly brakes due to a flat tire, and its velocity decreases from 30 m/s to 15 m/s. To simplify the scenario, we assume that the braking happens instantaneously (Δt ≈ 0), that is, as a step change in the leading vehicle's velocity.

The simulation results of the sudden brake are shown in Figures 5(a)–5(c). Figure 5(a) shows that the unweighted intervehicle distance between vehicle 1 and vehicle 2 suffers an approximately 4 m decrease at time t = 45 s. However, the unweighted intervehicle distance between vehicle 2 and vehicle 3 suffers only an approximately 0.7 m decrease, and the one between vehicle 3 and vehicle 4 decreases even less. This result implies that a sudden disturbance of the intervehicle distance is attenuated along the rest of the platoon.

Figure 5 Simulation results of platoon restoration from disturbances.

The velocities of the vehicles in the platoon are shown in Figure 5(c). The sudden brake originates from vehicle 1, and vehicle 2 brakes hard to avoid a collision with vehicle 1. The braking of vehicle 3 is not as hard as that of vehicle 2 (the slope is smaller), and the braking of vehicle 4 is smoother still. The smoother the braking is, the smaller the absolute value of the acceleration. After the braking event, the velocities of the three following vehicles are slowly restored to the desired velocity. This result implies that a sudden disturbance of the vehicle acceleration is attenuated along the rest of the platoon.

Figure 5(b) presents the results for the weighted intervehicle distance, that is, the unweighted intervehicle distance multiplied by the braking factor of each vehicle.
Overall, the simulation results of this scenario indicate that our distributed consensus-based CACC system is capable of attenuating sudden disturbances and restoring normal conditions; that is, the system is string-stable.

### 3.3. Merging and Splitting Maneuvers

In this scenario, we show the effects when the proposed distributed consensus algorithm is performed together with the merging and splitting maneuvers protocol presented in Section 2.4.2.

For the merging maneuvers, assume that, at time t = 0, a three-vehicle platoon (same parameters as vehicles 1, 3, and 4 in the first scenario) is operating at the steady state (i.e., cruising at a velocity of 30 m/s). Another individual vehicle (same parameters as vehicle 2 in the first scenario) traveling at a velocity of 35 m/s on the adjacent lane plans to merge into the platoon, and the simulation result is shown in Figure 6(a).

Figure 6 Simulation results of merging and splitting maneuvers.

It can be observed from Figure 6(a) that the individual vehicle switches on the merging mode at time t = 5 s. From then on, a “ghost” vehicle with respect to the first vehicle in the platoon is created, and the individual vehicle adjusts its velocity from 35 m/s to 30 m/s by (3). After that, the individual vehicle sends a merging signal to the second vehicle of the platoon. Then a “ghost” vehicle with respect to the merging vehicle is created in front of the second vehicle of the platoon. Based on (3), both the second and third vehicles of the platoon decelerate to create a gap, and the second vehicle sends a signal to the individual vehicle upon completion of the gap opening. Finally, the individual vehicle merges into the platoon, and the velocities of the two other following vehicles are restored to consensus in around 8 s.

For the splitting maneuvers, assume that, at time t = 0, a four-vehicle platoon (same parameters as vehicles 1, 2, 3, and 4 in the first scenario) is cruising at a velocity of 30 m/s. The second vehicle will split from the platoon, and the simulation result is shown in Figure 6(b).

The second vehicle of the platoon switches off the platoon mode and drives away from the platoon (accelerating steadily from 30 m/s to 35 m/s) at time t = 10 s. After the second vehicle completes its lane change, the third vehicle confirms that its predecessor has changed to the first vehicle of the platoon. Then it adjusts its velocity based on (3) to close the gap. The fourth vehicle accordingly adjusts its velocity to follow the movement of its predecessor.

Therefore, the simulation results of the third scenario show that our distributed consensus-based CACC system is capable of carrying out merging and splitting maneuvers.
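As a rough time-domain counterpart to the Section 3.2 test, the sketch below reuses the delay-free model from Section 3.1, applies the leader's 30 m/s → 15 m/s step at steady state, and records each follower's peak acceleration; criterion (6) then corresponds to those peaks decreasing down the platoon. Because the idealized step and the absence of actuator limits are our simplifications, the magnitudes are not comparable to Figure 5; only the ordering is meaningful.

```python
# Time-domain spot check of string stability (criterion (6)): after the
# leader's step velocity drop, peak follower accelerations should shrink
# along the platoon. Same simplified delay-free model as the sketch above.
import numpy as np

GAMMA, TG, DT = 7.0, 0.433, 0.01
l_f, l_r = [3.0, 3.0, 3.0, 6.0], [2.0, 2.0, 2.0, 4.0]
b = [1.0, 1.0, 1.1, 1.6]

x = np.array([200.0, 182.0, 162.7, 133.9])  # ~steady-state Table 1 gaps
v = np.full(4, 30.0)
v[0] = 15.0                 # step change in the leader's velocity (Sec. 3.2)
peak = np.zeros(4)

for _ in range(int(40.0 / DT)):
    a = np.zeros(4)
    for i in range(1, 4):
        j = i - 1
        err = x[i] - x[j] + l_f[i] + l_r[j] + v[j] * TG * b[i]
        a[i] = -err - GAMMA * (v[i] - v[j])
    x += v * DT
    v += a * DT
    peak = np.maximum(peak, np.abs(a))

print(np.round(peak[1:], 2))  # expect peak_2 > peak_3 > peak_4 (attenuation)
```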
## 4. Sensitivity Analysis

In this section, a sensitivity analysis is conducted to study how the damping gain $\gamma$ affects the convergence rate of the system, the acceleration and jerk (the time rate of change of acceleration) of the vehicles in the system, and the minimum weighted intervehicle distance between two consecutive vehicles in the system.

This sensitivity analysis is based on the normal platoon formation scenario, where the information flow topology $G$ contains a directed spanning tree, as shown in Figure 7.

Figure 7 Information flow topology of normal platoon formation scenario.

The adjacency matrix can then be defined as

$$A = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \tag{9}$$

and the nonsymmetrical Laplacian matrix is

$$L = \begin{bmatrix} 0 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 1 \end{bmatrix}. \tag{10}$$

Recall that, in (3), there is a damping gain $\gamma$ before the velocity consensus term. Similar to the second-order consensus algorithm in [34], we can conclude that (3) achieves consensus asymptotically if and only if the directed graph $G$ has a directed spanning tree and

$$\gamma > \max_{\forall \mu_i \neq 0} \frac{\operatorname{Im}(\mu_i)}{\operatorname{Re}(\mu_i) \cdot \left|\mu_i\right|}, \tag{11}$$

where $\mu_i$, $i = 1, \ldots, n$, denotes the $i$th eigenvalue of $-L$. The detailed proof of this conclusion is included in Appendix B. Since the specific value of $\gamma$ has significant influence on our CACC system in different respects, a sensitivity analysis covering three different aspects is conducted in this section.

### 4.1. Convergence Rate Analysis

The convergence rate of the proposed distributed consensus algorithm affects the time required for our CACC system to reach the steady state. The faster the convergence rate, the less time is consumed and the more efficient our CACC system is.

Here, we study the convergence rate of our system without communication delay for the sake of brevity. Define $\tilde{x} = [\tilde{x}_1^T, \ldots, \tilde{x}_i^T, \ldots, \tilde{x}_4^T]^T$ and $\dot{\tilde{x}} = [\dot{\tilde{x}}_1^T, \ldots, \dot{\tilde{x}}_i^T, \ldots, \dot{\tilde{x}}_4^T]^T$, where $\tilde{x}_i = x_i(t) - x_j(t) + l_i^f + l_j^r + \dot{x}_j(t)\, t_{ij}^g\, b_i$ and $\dot{\tilde{x}}_i = \dot{x}_i(t) - \dot{x}_j(t)$. The information states with second-order dynamics of our system, in this four-vehicle platoon case without communication delay, can be written in matrix form as

$$\begin{bmatrix} \dot{\tilde{x}} \\ \ddot{\tilde{x}} \end{bmatrix} = \Gamma \begin{bmatrix} \tilde{x} \\ \dot{\tilde{x}} \end{bmatrix}, \tag{12}$$

where

$$\Gamma = \begin{bmatrix} 0_{4 \times 4} & I_4 \\ -L & -\gamma L \end{bmatrix}. \tag{13}$$

The way to find the eigenvalues of $\Gamma$ is to solve the characteristic polynomial of $\Gamma$, which is

$$\det(\lambda I_8 - \Gamma) = \det\begin{bmatrix} \lambda I_4 & -I_4 \\ L & \lambda I_4 + \gamma L \end{bmatrix} = \det\left(\lambda^2 I_4 + (\gamma\lambda + 1)L\right) = 0. \tag{14}$$

As aforementioned, $\mu_i$ is the $i$th eigenvalue of $-L$.
Therefore,

$$\det(\lambda I_4 + L) = \prod_{i=1}^{4} (\lambda - \mu_i). \tag{15}$$

By comparing (14) to (15), we get

$$\det(\lambda I_8 - \Gamma) = \det\left(\lambda^2 I_4 + (\gamma\lambda + 1)L\right) = \prod_{i=1}^{4} \left(\lambda^2 - (\gamma\lambda + 1)\mu_i\right) = 0, \tag{16}$$

which implies that the solutions of (14) are the same as the solutions of

$$\lambda^2 - (\gamma\lambda + 1)\mu_i = 0. \tag{17}$$

Therefore, the eigenvalues of $\Gamma$ are given by

$$\lambda_{i1} = \frac{\gamma\mu_i + \sqrt{\gamma^2\mu_i^2 + 4\mu_i}}{2}, \qquad \lambda_{i2} = \frac{\gamma\mu_i - \sqrt{\gamma^2\mu_i^2 + 4\mu_i}}{2}. \tag{18}$$

The convergence rate is an exponential decay term of the form $e^{-\eta(\gamma)t}$, where

$$\eta(\gamma) = \max\left\{\operatorname{Re}(\lambda_{ij}) \mid i = 2, 3, 4;\ j = 1, 2\right\}. \tag{19}$$

Since $\operatorname{Re}(\lambda_{i1}) \geq \operatorname{Re}(\lambda_{i2})$, $j = 1$ is set in (19). To find the maximum convergence rate, we need to find $\gamma^* > 0$ such that $\eta(\gamma^*) = \min \eta(\gamma)$. It is proven in [35] that the minimum of $\eta(\gamma)$ is achieved if $\operatorname{Re}(\lambda_{21}) = \lambda_{n1}$; that is,

$$\frac{\gamma\mu_2}{2} = \frac{\gamma\mu_n + \sqrt{\gamma^2\mu_n^2 + 4\mu_n}}{2}. \tag{20}$$

Therefore, the maximum convergence rate is achieved at

$$\gamma = \gamma^* = 2\sqrt{\frac{-\mu_n}{-\mu_2\left(\mu_2 - 2\mu_n\right)}}. \tag{21}$$

Noting that the Laplacian matrix $L$ is given above and $\mu_2$ and $\mu_n$ can be derived, a value of $\gamma = \gamma^* = 2$ is obtained for the maximum convergence rate. When $\gamma < 2$, the larger $\gamma$ is, the faster the convergence rate; when $\gamma > 2$, the larger $\gamma$ is, the slower the convergence rate. Therefore, to reach higher efficiency of our CACC system, we choose the value of the damping gain $\gamma$ as close to 2 as possible.
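This closed-form result is easy to sanity-check numerically. The short sketch below (our illustration, not part of the paper) assembles $\Gamma$ from (13) for the Figure 7 topology and sweeps $\gamma$, recovering the fastest-converging gain $\gamma^* \approx 2$.

```python
# Numerical check of (12)-(21): build Gamma for the Figure 7 topology and
# sweep gamma for the slowest mode's decay; the minimum lands at gamma = 2.
import numpy as np

L = np.array([[ 0,  0,  0, 0],
              [-1,  1,  0, 0],
              [ 0, -1,  1, 0],
              [ 0,  0, -1, 1]], dtype=float)   # Laplacian from (10)

def eta(gamma: float) -> float:
    """eta(gamma) of (19): largest real part over the nonzero modes."""
    Gamma = np.block([[np.zeros((4, 4)), np.eye(4)],
                      [-L, -gamma * L]])        # state matrix (13)
    re = np.linalg.eigvals(Gamma).real
    # exclude the (numerically perturbed) zero eigenvalues of the leader mode
    return max(r for r in re if r < -1e-6)

gammas = np.arange(0.5, 4.01, 0.05)
best = min(gammas, key=eta)     # gamma giving the most negative eta
print(round(float(best), 2))    # -> 2.0
```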
### 4.2. Driving Comfort Analysis

In this part, we analyze the effect of $\gamma$ on driving comfort. The change of vehicle velocity is related to vehicle acceleration and jerk, and it is reported in [36, 37] that limits of ±2.5 m/s² for acceleration and ±10 m/s³ for jerk, respectively, are comfortable for human passengers. We measure the extreme values of acceleration and jerk during the normal platoon formation process and check for which values of $\gamma$ the conditions −2.5 m/s² < a < 2.5 m/s² and −10 m/s³ < jerk < 10 m/s³ are satisfied. If a and jerk are both within range, then driving is comfortable for human passengers.

The parameters of this analysis are set in Table 2 and are exactly the same as for the first two vehicles in the aforementioned simulation scenarios. The result of the sensitivity analysis on driving comfort is shown in Figure 8. As can be seen, when 7 ≤ γ ≤ 7.8, both the acceleration and the jerk are within the “comfort” ranges. Since a faster convergence rate is desired, a value of 7 can be chosen for $\gamma$.

Table 2: Values of vehicle parameters.

| Parameters | Vehicle 1 | Vehicle 2 |
| --- | --- | --- |
| GPS antenna to front bumper $l_i^f$ | 3 m | 3 m |
| GPS antenna to rear bumper $l_i^r$ | 2 m | 2 m |
| Braking factor $b_i$ | 1 | 1 |
| Initial velocity $\dot{x}_i(0)$ | 30 m/s | 33 m/s |
| Desired velocity $\dot{x}_i$ | 30 m/s | 30 m/s |
| Initial weighted intervehicle distance $d_{ij}^0$ | — | 30 m |
| Desired weighted intervehicle distance $d_{ij}$ | — | 13 m |

Figure 8 Driving comfort analysis.

### 4.3. Driving Safety Analysis

In this part, we analyze the effect of $\gamma$ on driving safety. We measure the minimum weighted intervehicle distance during the normal platoon formation process and check whether it becomes negative. If it does, then a collision between the leading vehicle and the following vehicle occurs.

We first analyze how changes in $\gamma$ and the initial weighted intervehicle distance $d_{ij}^0$ affect the minimum weighted intervehicle distance $\min(d_{ij})$. All parameters of this sensitivity analysis except the initial weighted intervehicle distance ($d_{ij}^0$ is a variable in this case) are set as in Table 2. The result is shown in Figure 9.

Figure 9 Driving safety analysis related to initial weighted intervehicle distance.

As shown in the result, the areas indicating $\min(d_{ij}) < 0$ appear mostly when $d_{ij}^0 > 25$ m and, at the same time, $\gamma < 1$. This is because when the absolute position difference is large and the damping gain of the velocity consensus term is small, the system puts more weight on the absolute position consensus term, resulting in a large overshoot of the absolute position consensus. When the initial weighted intervehicle distance is sufficiently large ($d_{ij}^0 > 0.18$ m), we can avoid this by choosing a value of $\gamma$ no smaller than 2. There is also a linear area indicating $\min(d_{ij}) < 0$ where $d_{ij}^0$ is small. A hypothesis is that, at time t = 0, the following vehicle has a higher velocity while the weighted intervehicle distance is rather small, so no $\gamma$ exists that ensures the following vehicle avoids a collision with the leading vehicle. If we fix the value of $\gamma$, we find that the closer $d_{ij}^0$ is to $d_{ij}$ (13 m), the larger $\min(d_{ij})$ is.

We also analyze how changes in $\gamma$ and the initial velocity difference $\delta\dot{x}_{ij}^0$ affect the minimum weighted intervehicle distance $\min(d_{ij})$. All parameters of this sensitivity analysis except the initial velocity (the difference between $\dot{x}_i(0)$ and $\dot{x}_j(0)$ is a variable in this case) are set as in Table 2. The result is shown in Figure 10.

Figure 10 Driving safety analysis related to initial velocity difference.

As shown in the figure, collisions only happen in the areas where $\gamma$ is small. If we fix the value of $\gamma$, we find that the closer $\delta\dot{x}_{ij}^0$ is to 0 m/s, the larger $\min(d_{ij})$ is. A potential explanation is that, although the weighted intervehicle distance will change regardless of its initial value, the change is minimized when the initial velocities of the two vehicles are the same. When $\gamma \geq 2$, no matter how large the initial velocity difference is, the minimum weighted intervehicle distance remains 13 m.

From the results of the driving safety analysis, we know that the preliminary value of $\gamma$ ($\gamma = 7$) chosen for our CACC system is safe, with no collision between any two vehicles. When the parameter setting changes, the procedures of the convergence rate analysis, the driving comfort analysis, and the driving safety analysis can be applied to choose the best value of $\gamma$, ensuring that the platoon in our CACC system is efficient, comfortable, and safe.
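The comfort and safety sweeps can be mimicked with the same simplified two-vehicle model, as sketched below. Since the paper's exact procedure (communication delay, integration scheme, how jerk is measured) is not fully specified, the printed numbers will not reproduce Figures 8–10; the sketch only illustrates the sweep itself. The jerk at the activation instant is skipped because the controller switch-on makes it unbounded in this idealized model.

```python
# Hedged sketch of the gamma sweeps of Sections 4.2-4.3 for the Table 2 pair:
# record peak acceleration, peak jerk, and minimum weighted gap per gamma.
import numpy as np

TG, DT, T_END = 0.433, 0.01, 60.0
L_F, L_R, B = 3.0, 2.0, 1.0            # follower l_f, leader l_r, b_2

def sweep(gamma: float):
    x_lead, x_fol = 100.0, 100.0 - 30.0 - L_F - L_R   # 30 m initial gap
    v_lead, v_fol = 30.0, 33.0
    prev_a, peak_a, peak_jerk, min_d = None, 0.0, 0.0, np.inf
    for _ in range(int(T_END / DT)):
        err = x_fol - x_lead + L_F + L_R + v_lead * TG * B
        a = -err - gamma * (v_fol - v_lead)
        peak_a = max(peak_a, abs(a))
        if prev_a is not None:          # skip the switch-on discontinuity
            peak_jerk = max(peak_jerk, abs(a - prev_a) / DT)
        x_lead += v_lead * DT
        x_fol += v_fol * DT
        v_fol += a * DT
        prev_a = a
        min_d = min(min_d, (x_lead - x_fol - L_F - L_R) * B)  # weighted gap
    return peak_a, peak_jerk, min_d

for g in (0.5, 2.0, 7.0, 7.8):
    a, jerk, d = sweep(g)
    print(f"gamma={g}: max|a|={a:.2f} m/s^2, "
          f"max|jerk|={jerk:.2f} m/s^3, min(d)={d:.2f} m")
```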
## 5. Conclusions and Future Research

In this study, we have proposed a novel CACC system based on a distributed consensus algorithm, which takes into account the unavoidable time-varying communication delay as well as the length, GPS antenna location, and braking ability of different vehicles. We have also developed distributed consensus protocols that allow our CACC system to apply the algorithm to platoon formation, merging, and splitting. The algorithm and protocols have been implemented in MATLAB Simulink, and the system is shown to recover from a variety of disturbances and to carry out merging and splitting maneuvers. In addition, a sensitivity analysis was performed on the algorithm, indicating that the distributed consensus algorithm reaches the maximum convergence rate when $\gamma = 2$, and that $\gamma = 7$ is an optimal value for our system to be efficient, comfortable, and safe under the given parameter setting.

It should be pointed out that although the system level (cyberspace) of vehicles has been taken into account in this study, the actual vehicle dynamics model (physical space) has been neglected. Combining the cyberspace and the physical space may be a future goal of this study.
Also, as discussed in Section 2, the braking factor we proposed may be an aggregate of many different factors, including the mass of the vehicle (light or heavy), the aerodynamic performance of the vehicle (good or bad), the status of the brakes (new or worn), the status of the tires (new or worn), the type of the tires (all-season or snow tires), the condition of the road surface (dry or wet), and the gradient of the road surface (flat or steep). By applying fuzzy logic theory [38], a control model considering the above factors as inputs and the braking factor as output can be developed in the future to decide the value of the braking factor for each vehicle in the system. Moreover, although the proposed distributed consensus algorithm has taken into account some system uncertainties such as communication delay, many other issues that may occur in field implementations have not been addressed in this study, such as packet loss, signal fading, and signal interference. These open up more opportunities for future research.

---

*Source: 1023654-2017-08-06.xml*
1023654-2017-08-06_1023654-2017-08-06.md
80,136
Developing a Distributed Consensus-Based Cooperative Adaptive Cruise Control System for Heterogeneous Vehicles with Predecessor Following Topology
Ziran Wang; Guoyuan Wu; Matthew J. Barth
Journal of Advanced Transportation (2017)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2017/1023654
1023654-2017-08-06.xml
--- ## Abstract Connected and automated vehicle (CAV) has become an increasingly popular topic recently. As an application, Cooperative Adaptive Cruise Control (CACC) systems are of high interest, allowing CAVs to communicate with each other and coordinating their maneuvers to form platoons, where one vehicle follows another with a constant velocity and/or time headway. In this study, we propose a novel CACC system, where distributed consensus algorithm and protocol are designed for platoon formation, merging maneuvers, and splitting maneuvers. Predecessor following information flow topology is adopted for the system, where each vehicle only communicates with its following vehicle to reach consensus of the whole platoon, making the vehicle-to-vehicle (V2V) communication fast and accurate. Moreover, different from most studies assuming the type and dynamics of all the vehicles in a platoon to be homogenous, we take into account the length, location of GPS antenna on vehicle, and braking performance of different vehicles. A simulation study has been conducted under scenarios including normal platoon formation, platoon restoration from disturbances, and merging and splitting maneuvers. We have also carried out a sensitivity analysis on the distributed consensus algorithm, investigating the effect of the damping gain on convergence rate, driving comfort, and driving safety of the system. --- ## Body ## 1. Introduction Recently, the rapid development of our transportation systems has led to a worldwide economic prosperity, where transportation for both passengers and goods is much more convenient both domestically and internationally. The number of motor vehicles worldwide is estimated to be more than 1 billion now and will double again within one or two decades [1]. Such a huge quantity of motor vehicles and intensive transportation activities has brought about various social-economic issues. For example, more than 30,000 people still perish from roadway crashes on US highways every year [2]. For the past few years, cities that have experienced more economic improvements are at a higher risk to face worsening traffic conditions, resulting in increased pollutant emissions and decreased travel efficiency. In terms of average time wasted on the road, Los Angeles, for example, tops the global ranking with 104 hours spent in congestion per commuter during the year of 2016 [3]. It was also estimated by [4] that there were 3.1 billion gallons of energy wasted worldwide due to traffic congestion in 2014, which equated to approximately 19 gallons per commuter.Significant efforts have been made around the world to address these transportation issues. Many propose simply expanding our existing transportation infrastructure to help solve these traffic-related problems. However, not only is this costly but also it has many negative social and environmental effects. As an alternative solution, the development of connected and automated vehicle (CAV) can help better manage traffic, thus improving traffic safety, mobility, and reliability without the cost of infrastructure build-out. One of the more promising CAV applications is Cooperative Adaptive Cruise Control (CACC), which extends Adaptive Cruise Control (ACC) with CAV technology (e.g., mainly via vehicle-to-vehicle (V2V) communication) [5]. By sharing information among vehicles, a CACC system allows vehicles to form platoons and be driven at harmonized speeds with constant time headways between vehicles. 
The main advantages of a CACC system are as follows: (a) connected and automated driving is safer than human driving by minimizing driver distractions; (b) roadway capacity is increased due to the reduction of intervehicle time gaps without compromising safety; and (c) fuel consumption and pollutant emissions are reduced due to the reductions of both unnecessary acceleration maneuvers and aerodynamic drag on the vehicles in the platoon [6].The core of a CACC system is the vehicle-following control model, which depends on the vehicle information flow topology. The topology determines how all CAVs in a CACC system communicate with others, and it has been well studied by researchers. Zheng et al. [7] proposed some typical types of information flow topologies, including predecessor following, predecessor-leader following, and bidirectional types. In our research, each vehicle in the CACC system only receives information from the predecessor (if it exists), which is exactly the predecessor following type. The vehicle-following controller efficiently describes the vehicle dynamics and cooperative maneuvers residing in the system. The performance and robustness of a CACC consensus algorithm were discussed in [8], where packet loss, network failures, and beaconing frequencies were all taken into consideration when the simulation framework is built with the CACC controller developed by [9]. Di Bernardo et al. [10] designed a distributed control protocol to solve the platooning problem, which depends on a local action of the vehicle itself and a cooperative action received from the leader and neighboring vehicles. Lu et al. [11] used a nonlinear model to describe the vehicle longitudinal dynamics, where the engine power, gravity, road and tire resistance, and aerodynamics drag are all considered. However, since the complexity of such nonlinear models is problematic for system analysis, a linearized model is typically used for field deployment, such as the one in [12]. Wang et al. [13] proposed an Eco-CACC system with a novel platoon gap opening and closing protocol to reduce the platoon-wide fuel consumption and pollutant emissions. Based on this study, Hao et al. [14] developed a bilevel model to synthetically analyze the platoon-wide impact of the disturbances when vehicles join and leave the Eco-CACC system. Amoozadeh et al. [15] developed a platoon management protocol for CACC vehicles, including CACC longitudinal control logic and platoon merge and split maneuvers. In terms of intervehicle distance in motion (at relatively high speed), the existing vehicle-following models can be divided into two categories: one that regulates the spatial gap, where one vehicle follows its predecessor with a fixed intervehicle distance [16] and the other that is based on time gap or velocity-dependent distance, where the intervehicle distance may vary with vehicle velocity and vehicle length by keeping a constant time headway. Our approach falls into the second category.Stability is a basic requirement to ensure the safety of a CACC system. The control system should be capable of dealing with various disturbances and uncertainties. Laumônier et al. [17] proposed a reinforcement learning approach to design the CACC system, where the system is modeled as a Markov Decision Process incorporated together with stochastic game theory. They showed that the system was capable of damping small disturbances throughout the platoon. 
The uncertainties in communication network and sensor information were modeled by a Gaussian distribution in [18], which was applied to calculate the minimal time headway for safety reasons. Qin et al. [19] studied the effects of stochastic delays on the dynamics of connected vehicles by analyzing the mean dynamics. Plant and string stability conditions were both derived, and the results showed that stability domains shrink along with the increases of the packet drop ratio or the sampling time. In [20], propagation of motion signals was attenuated by adjusting the controller parameters in the system, which guaranteed the so-called string stability of the platoon. Since the inherent communication time delay and vehicle actuator delay significantly limit the minimum intervehicle distance in view of string stability requirements, Xing et al. [21] carried out Padé approximations of the vehicle actuator delay to arrive at a finite-dimensional model. It was shown in [22] that the standard constant time-gap spacing policy can guarantee string stability of the platoon as long as a sufficient large time gap is maintained. In this study, we also adopted the time-gap spacing policy and selected time gap large enough to ensure the platoon’s string stability. A simulation study of platoon restoration after disturbances is demonstrated to further prove the string stability of our system.Communication plays a crucial role in the formation of a CACC system. The United States Department of Transportation (USDOT) developed a Connected Vehicle Reference Implementation Architecture (CVRIA) to provide the communication framework for different applications, including V2V and Vehicle-to-Infrastructure (V2I) communications [23]. IEEE 802.11p-based Dedicated Short Range Communication (DSRC) has been developed by the automotive industry for use in V2V and V2I communication, considered as a promising wireless technology to improve both transportation safety and efficiency. Bai et al. [24] used a large set of empirical measurement data taken in various realistic driving environments to characterize communication properties of DSRC. Since the increase of CAVs in a certain coverage area may lead to a shortage of communication bandwidth, a distributed methodology is more advantageous for vehicular communication. In our study, the V2V communication is only conducted between predecessor and follower, making the proposed system more distributed.Essentially, the proposed system is different from a conventional Adaptive Cruise Control (ACC) system for the following reasons. (1) In the proposed system, although some forward ranging sensing techniques such as camera, radar, and lidar (Light Detection and Ranging) might be needed as supplementary methods, the core technique for CAVs to form platoon is V2V communication. CAVs send their absolute position and instantaneous velocity information measured by equipped sensors (e.g., high-precision GPS, inertial measurement unit, and on-board diagnostic system) to their followers by V2V communication. However, for a conventional ACC system, V2V communication is not enabled, where vehicles need to use their forward ranging sensing equipment to obtain predecessors’ information. (2) A conventional ACC system can only implement the function of vehicle following; however, the proposed CACC system allows individual vehicle to merge into the platoon by using V2V communication. 
“Ghost” vehicles are created as predecessors for following vehicles to track; since they are virtual and exist only in V2V communication, forward ranging sensors cannot detect them. (3) The measurement delay of the forward ranging sensors in a conventional ACC system differs appreciably from the V2V communication delay of DSRC in the proposed system, which leads to different system behaviors in different scenarios, especially the one discussed in Section 3.2.

Despite the advantages of the consensus-based platooning approach for the CACC system, several issues still need to be addressed to improve its reliability and practicality.

(a) The primary V2V communication method in use today is DSRC, which normally has a 300-meter transmission range [24]. As the transmission distance increases, the safety message reception probability decreases dramatically, and the received signal strength indicator (RSSI) at the DSRC antenna also decreases [25, 26]. Nevertheless, many existing CACC systems such as [27] adopt the predecessor-leader following information flow topology, which requires the leader of a platoon to communicate with all the vehicles in broadcast mode. When such a platoon grows, the V2V communication between the leader and the last vehicle may suffer from low RSSI or be impaired by obstructions along the platoon. In this study, we adopt the predecessor following information flow topology (i.e., a “distributed” one), where each vehicle in the platoon communicates only with its following vehicle to reach consensus across the whole platoon. Therefore, the platoon size is not limited by the DSRC transmission range, and the V2V communication achieves a higher safety message reception probability and a higher RSSI than in the predecessor-leader following topology.

(b) Most existing CACC-related research has treated the vehicles in the system as homogeneous point masses. In reality, vehicles are heterogeneous, with different lengths and braking performances. We therefore take into account the vehicle length together with the position of the GPS antenna on the vehicle. Moreover, according to their braking performances, we assign different braking factors to different types of vehicles in our system, allowing the intervehicle distances to be weighted by these factors.

(c) While information flow topologies and algorithms have been well studied, few protocols have been developed to apply the theory to real-world transportation systems, especially across different traffic scenarios. In this study, we design protocols for the normal platoon formation scenario and for the merging and splitting scenario. A sensitivity analysis is also conducted to study practical aspects of the proposed CACC system, including the convergence rate of a platoon, the driving comfort of human passengers, and the driving safety of the whole system. By optimizing the damping gain of our algorithm, the proposed system is designed to be efficient, comfortable, and safe.

The remainder of this paper is organized as follows. Section 2 describes the methodology of our distributed consensus-based CACC system. Section 3 describes the simulation study in detail and analyzes the results. Section 4 presents a sensitivity analysis of different aspects of driving in our CACC system. The last section provides general conclusions and outlines future steps.

## 2. Methodology

### 2.1. Mathematical Preliminaries and Nomenclature
We represent the information flow topology of a distributed network of vehicles by a directed graph $G=(V,E)$, where $V=\{1,2,\ldots,n\}$ is a finite nonempty node set and $E\subseteq V\times V$ is a set of ordered pairs of nodes, called edges. The edge $(i,j)\in E$ denotes that vehicle $j$ can obtain information from vehicle $i$; the reverse is not necessarily true. The neighbors of vehicle $i$ are denoted by $N_i=\{j\in V:(i,j)\in E\}$. The topology of the graph is associated with an adjacency matrix $A=[a_{ij}]\in\mathbb{R}^{n\times n}$, defined such that $a_{ij}=1$ if $(j,i)\in E$, $a_{ij}=0$ if $(j,i)\notin E$, and $a_{ii}=0$. $L=[l_{ij}]\in\mathbb{R}^{n\times n}$, with $l_{ij}=-a_{ij}$ for $i\neq j$ and $l_{ii}=\sum_{j=1,j\neq i}^{n}a_{ij}$, is the nonsymmetric Laplacian matrix associated with $G$. A directed spanning tree is a directed tree formed by graph edges that connects all the nodes of the graph.

Before designing our distributed consensus algorithm for the CACC system, we recall some basic consensus algorithms that apply similar dynamics to the information states of vehicles. If the communication between vehicles in the distributed network is continuous, a differential equation can be used to model the information state update of each vehicle.

The single-integrator consensus algorithm [28] is given by

$$\dot{x}_i(t)=-\sum_{j=1}^{n}g_{ij}k_{ij}\left[x_i(t)-x_j(t)\right],\quad i\in V,\tag{1}$$

where $x_i\in\mathbb{R}$, $k_{ij}>0$, and $g_{ij}=1$ if information flows from vehicle $j$ to vehicle $i$ and $0$ otherwise, $\forall i\neq j$. The adjacency matrix $A$ of the information flow topology is defined accordingly as $a_{ii}=0$ and $a_{ij}=g_{ij}k_{ij}$, $\forall i\neq j$. This consensus algorithm guarantees convergence of multiple agents to a collective decision via local interactions.

Equation (1) can be extended to second-order dynamics to better model the movement of a physical entity, such as a CAV. For a second-order model, the double-integrator consensus algorithm [29] is given by

$$\dot{x}_i(t)=v_i(t),\qquad \dot{v}_i(t)=-\sum_{j=1}^{n}g_{ij}k_{ij}\left[\left(x_i(t)-x_j(t)\right)+\gamma\left(v_i(t)-v_j(t)\right)\right],\quad i\in V,\tag{2}$$

where $x_i\in\mathbb{R}$, $v_i\in\mathbb{R}$, $k_{ij}>0$, $\gamma>0$, and $g_{ij}=1$ if information flows from vehicle $j$ to vehicle $i$ and $0$ otherwise, $\forall i\neq j$.
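To make the update rule concrete before specializing it to vehicle following, the following minimal Python sketch integrates the double-integrator algorithm (2) with forward Euler on a four-agent predecessor-following chain. The gains ($k_{ij}=1$, $\gamma=1$), the step size, and the initial states are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal forward-Euler integration of the double-integrator consensus
# algorithm (2) on a four-agent predecessor-following chain.
# Assumed for illustration: k_ij = 1, gamma = 1, dt = 0.01 s.
n = 4
g = np.zeros((n, n))
for i in range(1, n):
    g[i, i - 1] = 1.0          # agent i listens only to its predecessor

gamma, dt = 1.0, 0.01
x = np.array([0.0, -30.0, -60.0, -90.0])   # initial positions [m]
v = np.array([30.0, 33.0, 36.0, 39.0])     # initial velocities [m/s]

for _ in range(6000):          # 60 s of simulated time
    dv = np.zeros(n)
    for i in range(n):
        for j in range(n):
            dv[i] -= g[i, j] * ((x[i] - x[j]) + gamma * (v[i] - v[j]))
    x += v * dt
    v += dv * dt

print("final velocities:", np.round(v, 2))  # all close to the leader's 30 m/s
```

Because plain consensus contains no spacing terms, the agents agree on a common position and velocity; the controller (3) proposed below adds the length, time-gap, and braking-factor terms that turn this agreement into safe vehicle following.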
### 2.2. System Specifications and Assumptions

Since our study focuses on the communication topology and control algorithm of the system, we make some reasonable assumptions while modelling the general system to enable the theoretical analysis.

(a) All vehicles are CAVs with the ability to send and receive information within the same transmission range, and there is no vehicle actuator delay in the proposed system.

(b) Every vehicle in the proposed system is equipped with appropriate sensors (e.g., high-precision GPS, inertial measurement unit, and on-board diagnostic system) to measure its absolute position and instantaneous velocity, and the measurements are noise-free.

(c) Vehicle types are heterogeneous, with different vehicle lengths, GPS antenna locations, and braking performances.

### 2.3. Distributed Consensus Algorithm for the CACC System

The objective of the distributed consensus-based CACC system is to use algorithms and protocols that ensure consensus of a platoon of vehicles. Here the meaning of consensus is twofold: absolute position consensus, in which a vehicle maintains a certain distance from its predecessor, and velocity consensus, in which a vehicle maintains the same velocity as its predecessor.

Taking into account second-order vehicle dynamics, we propose the following distributed consensus algorithm for the CACC system, for $i=2,\ldots,n$, $j=i-1$:

$$\begin{aligned}\dot{x}_i(t)&=v_i(t),\\ \dot{v}_i(t)&=-a_{ij}\left[x_i(t)-x_j\left(t-\tau_{ij}(t)\right)+l_i^{f}+l_j^{r}+\dot{x}_j\left(t-\tau_{ij}(t)\right)\left(t_{ij}^{g}+\tau_{ij}(t)\right)b_i\right]-\gamma a_{ij}\left[\dot{x}_i(t)-\dot{x}_j\left(t-\tau_{ij}(t)\right)\right],\end{aligned}\tag{3}$$

where vehicle $i$'s predecessor is vehicle $j$; $x_i(t)$ is the absolute position of the GPS antenna on vehicle $i$ at time $t$; $\dot{x}_i(t)$ (or $v_i(t)$) is the velocity of vehicle $i$ at time $t$; $\tau_{ij}(t)$ is the unavoidable time-varying communication delay when information is transmitted from vehicle $j$ to vehicle $i$ at time $t$; $l_i^{f}$ is the distance between the GPS antenna and the front bumper of vehicle $i$; $l_j^{r}$ is the distance between the GPS antenna and the rear bumper of vehicle $j$; $t_{ij}^{g}$ is the desired intervehicle time gap between vehicles $i$ and $j$; $b_i$ is the braking factor of vehicle $i$; $a_{ij}$ is the $(i,j)$th entry of the adjacency matrix; and $\gamma$ is a damping gain. The expression $x_i(t)-x_j(t-\tau_{ij}(t))+l_i^{f}+l_j^{r}+\dot{x}_j(t-\tau_{ij}(t))(t_{ij}^{g}+\tau_{ij}(t))b_i$ is the absolute position consensus term, and $\dot{x}_i(t)-\dot{x}_j(t-\tau_{ij}(t))$ is the velocity consensus term. The positions of vehicles in the proposed CACC system are illustrated in Figure 1.

Figure 1: Positions of vehicles in the proposed system.

With (3), consensus is reached by a platoon of vehicles if, for all $x_i(0)$ and $\dot{x}_i(0)$ and all $i=2,\ldots,n$, $j=i-1$, as $t\to\infty$,

$$x_i(t)-x_j\left(t-\tau_{ij}(t)\right)\longrightarrow l_i^{f}+l_j^{r}+\dot{x}_j\left(t-\tau_{ij}(t)\right)\left(t_{ij}^{g}+\tau_{ij}(t)\right)b_i,\qquad \dot{x}_i(t)-\dot{x}_j\left(t-\tau_{ij}(t)\right)\longrightarrow 0,\tag{4}$$

which means that the absolute position difference of the two vehicles converges to a velocity-dependent distance plus two constant vehicle length terms, while the velocity difference of the two vehicles converges to zero. Details of the convergence analysis of (3) can be found in Appendix B.

As mentioned in Section 1, a common concern for CACC systems is string stability, that is, the capability of the vehicles in a platoon to attenuate traffic shockwaves. Generally, string stability is defined with respect to the propagation of vehicle spacing errors and/or vehicle accelerations [30]. In particular, if we define $d_{ij}$ as the vehicle spacing error (i.e., intervehicle distance) between two consecutive vehicles in a platoon, then string stability with respect to vehicle spacing error requires that

$$\left\|\frac{D_{i+1,j+1}(s)}{D_{ij}(s)}\right\|_{\infty}\le 1,\tag{5}$$

where $D_{ij}(s)$ is the Laplace transform of the vehicle spacing error $d_{ij}$. This criterion guarantees that the vehicle spacing errors are not amplified upstream in the platoon. Likewise, if we define $a_i$ as the acceleration of vehicle $i$ in a platoon, then string stability with respect to vehicle acceleration requires that

$$\left\|\frac{A_{i+1}(s)}{A_i(s)}\right\|_{\infty}\le 1,\tag{6}$$

where $A_i(s)$ is the Laplace transform of the vehicle acceleration $a_i$. This guarantees that the vehicle accelerations are not amplified upstream in the platoon. We adopt (6) to analyze the string stability of our system; the details are discussed in Appendix C. The simulation results in Section 3.2 show that the tuning parameters in (3) are chosen such that string stability of the system is guaranteed.

The braking performance of a vehicle is affected by many factors, including its mass and aerodynamic properties. We assign to each vehicle of the proposed CACC system a braking factor $b_i$, assumed to aggregate the aforementioned factors. The braking factor does not alter a vehicle's braking performance; it merely reflects it. Specifically, it acts as a weighting factor on the desired intervehicle distance $d_{ij}$ (the safe braking distance), making this distance different for different vehicles in the proposed system.
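To complement the formal description, the sketch below simulates (3) for a single predecessor-follower pair under simplifying assumptions: a constant delay $\tau_{ij}(t)=60$ ms, forward Euler integration, and illustrative parameter values ($a_{ij}=1$, $\gamma=7$, sedan-like lengths). It is a toy rendering, not the Simulink implementation used later in Section 3.

```python
import numpy as np

# Toy forward-Euler rendering of controller (3) for one follower i behind
# predecessor j. Assumed for illustration: constant delay tau = 60 ms,
# a_ij = 1, gamma = 7, sedan-like lengths, t_g = 0.43 s, b_i = 1.
dt, tau = 0.01, 0.06
lag = int(tau / dt)                  # delay expressed in Euler steps
l_if, l_jr = 3.0, 2.0                # antenna-to-front / antenna-to-rear [m]
t_g, b_i, gamma, a_ij = 0.43, 1.0, 7.0, 1.0

x_j, v_j = 0.0, 30.0                 # predecessor cruises at 30 m/s, cf. (7)
x_i, v_i = -40.0, 33.0               # follower starts 40 m behind, faster
hist = [(x_j, v_j)] * (lag + 1)      # buffer of delayed predecessor states

for _ in range(6000):                # 60 s of simulated time
    x_j += v_j * dt                  # leader dynamics (7) with a_1 = 0
    hist.append((x_j, v_j))
    xj_d, vj_d = hist[-(lag + 1)]    # state received tau seconds ago
    # position consensus and velocity consensus terms of (3)
    pos_err = x_i - xj_d + l_if + l_jr + vj_d * (t_g + tau) * b_i
    vel_err = v_i - vj_d
    acc_i = -a_ij * pos_err - gamma * a_ij * vel_err
    x_i += v_i * dt
    v_i += acc_i * dt

print(f"spacing: {x_j - x_i:.2f} m, follower velocity: {v_i:.2f} m/s")
```

The printed spacing settles at the value implied by (4) (the two length terms plus the velocity-dependent gap), offset slightly by the distance the predecessor travels during one delay interval.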
In this study, the braking factors are assumed to be known constants; a methodology for calculating the braking factor is discussed as future research in Section 5.

We assume that each vehicle in the proposed system obtains its absolute position from a GPS antenna installed at a fixed point on the vehicle's roof. Both the distance between the antenna and the front bumper, $l_i^{f}$, and the distance between the antenna and the rear bumper, $l_i^{r}$, of each vehicle are assumed to be known constants; the length of vehicle $i$ is thus $l_i=l_i^{f}+l_i^{r}$. We use the time gap $t_{ij}^{g}$ to adjust the intervehicle distances, which therefore vary with the vehicles' velocities. Referring to Figure 1, the relationship between time headway and time gap is $t^{h}=t_{ij}^{g}b_i+\left(l_i^{f}+l_j^{r}\right)/\dot{x}_j\left(t-\tau_{ij}(t)\right)$. The damping gain $\gamma$ must meet a specific requirement to ensure the convergence of the distributed consensus algorithm, as analyzed in Section 4.

Equation (3) governs all but the leading vehicle in our CACC system. The dynamics of the leading vehicle are characterized by

$$\dot{x}_1(t)=v_1(t),\qquad \dot{v}_1(t)=a_1(t),\tag{7}$$

where $x_1(t)$, $v_1(t)$, and $a_1(t)$ represent the absolute position, velocity, and acceleration of the leading vehicle, respectively. The leading vehicle of a platoon is set to cruise at a constant velocity, with $a_1(t)=0$. Equation (3) then allows all the following vehicles in the platoon to track the dynamics of the leading vehicle in the scenarios above.

### 2.4. Distributed Consensus Protocol for the CACC System

Considering the different scenarios in our system, two protocols are designed in the following.

#### 2.4.1. Normal Platoon Formation Protocol

This protocol is designed for vehicles to form a platoon. Once the platoon formation mode is activated, vehicle $i$ in our CACC system checks whether there is a predecessor within a certain distance $r$ (see the sketch following Figure 2).

(a) If yes, vehicle $i$ communicates with its predecessor and (3) is applied, making vehicle $i$ a following vehicle.

(b) If no, vehicle $i$ becomes the leading vehicle of a platoon (i.e., $i=1$) and cruises at a constant velocity. The driver may also take over control and drive freely, but the vehicle can still act as the leading vehicle of a platoon.

After this procedure, vehicle $i$ is part of the distributed consensus-based CACC system, whether as a following vehicle or as a leading vehicle. However, the “following” and “leading” roles of vehicle $i$ may switch under the following conditions.

(a) For a following vehicle $i$, if all of its predecessors move out of the distance $r$ ahead of vehicle $i$, then vehicle $i$ changes from a following vehicle to a leading vehicle, with $i=1$.

(b) For a leading vehicle $i$ (i.e., $i=1$), if one or more vehicles move into the distance $r$ ahead of vehicle $i$, then vehicle $i$ changes from a leading vehicle to a following vehicle, with $i=2,\ldots,n$.

Figure 2 shows the flowchart of this protocol for the distributed consensus-based CACC system.

Figure 2: Normal platoon formation protocol.
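The role assignment just described can be condensed into a few lines. The sketch below is a schematic Python rendering; the `Vehicle` class and the helper `nearest_predecessor` are hypothetical stand-ins for the V2V neighbor discovery that the protocol leaves abstract.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vehicle:
    x: float                                  # absolute position [m]
    predecessor: Optional["Vehicle"] = None   # set when acting as follower

def nearest_predecessor(vehicle, vehicles, range_r):
    """Closest vehicle ahead within distance r on the same lane, else None."""
    ahead = [v for v in vehicles
             if v is not vehicle and 0.0 < v.x - vehicle.x <= range_r]
    return min(ahead, key=lambda v: v.x - vehicle.x) if ahead else None

def assign_role(vehicle, vehicles, range_r):
    """Leader/follower decision of the normal platoon formation protocol."""
    vehicle.predecessor = nearest_predecessor(vehicle, vehicles, range_r)
    # follower: controller (3) tracks the predecessor;
    # leader: cruise at constant velocity according to (7)
    return "follower" if vehicle.predecessor else "leader"

a, b = Vehicle(x=100.0), Vehicle(x=60.0)
print(assign_role(b, [a, b], range_r=300.0))  # -> follower (a is ahead)
print(assign_role(a, [a, b], range_r=300.0))  # -> leader (nothing ahead)
```

Re-evaluating `assign_role` every control cycle also realizes the two role-switching rules: a follower whose predecessors all leave the range $r$ becomes a leader, and vice versa.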
#### 2.4.2. Merging and Splitting Maneuvers Protocol

The normal platoon formation protocol addresses longitudinal maneuvers, while the merging and splitting maneuvers protocol handles lateral maneuvers (i.e., lane changes). As introduced in [31], there are four cases of lane change within platoon maneuvers: (1) free-agent-to-free-agent lane change, (2) free-agent-to-platoon lane change, (3) platoon-to-free-agent lane change, and (4) platoon-to-platoon lane change. In this study, we focus on the second and third cases. Since this part applies the proposed algorithm (see (3)) to lane change scenarios, focusing on the gap creation and gap closure maneuvers implemented via V2V communication, the lane change itself is treated as a manual driving behavior.

Consider the case where vehicle $i$ (as a free agent) tries to merge into a platoon on the adjacent lane. After the merging mode is activated, vehicle $i$ communicates with the platoon leader and decides which position it will occupy in the platoon, as shown in Figure 3(a). If it decides to become the $j$th vehicle of the platoon, a “ghost” vehicle mirroring vehicle $j-1$ of the platoon is created in front of vehicle $i$, as shown in Figure 3(b). This “ghost” vehicle has the same parameters as vehicle $j-1$ except for the lateral position. Vehicle $i$ then automatically adjusts its absolute position and velocity relative to the “ghost” vehicle by (3). After that, vehicle $i$ sends a merging signal to vehicle $j+1$ in the platoon, as shown in Figure 3(c). Upon receiving the merging signal, a “ghost” vehicle mirroring vehicle $i$ is created in front of vehicle $j+1$, and vehicle $j+1$ starts to adjust its absolute position and velocity by (3) to create a gap for vehicle $i$, as shown in Figure 3(d). After the gap is fully created, vehicle $j+1$ sends a confirmation signal to vehicle $i$, and vehicle $i$ merges into the platoon, as shown in Figure 3(e). A schematic rendering of this handshake is given at the end of this subsection.

Figure 3: Merging maneuvers protocol (assuming merging into the 2nd position).

The case where vehicle $j$ (in the platoon) splits from the platoon is easier. As studied in [32], there are two strategies for splitting maneuvers, or so-called CACC string dissolution. The most efficient action is for the departing driver to perform a simple lane change in the direction of the off-ramp. The other strategy is for the departing vehicle to deactivate the CACC function by tapping the brakes before changing lanes, creating a split in the CACC string and becoming the manually driven leader of the trailing platoon until it moves out of the lane. In our system, we adopt the first strategy. After the splitting mode is activated, the driver takes over the lateral control of the vehicle and performs the lane change without adjusting the velocity longitudinally. After vehicle $j$ completes the lane change, vehicle $j+1$ is informed that its predecessor has changed from vehicle $j$ to vehicle $j-1$ and accordingly adjusts its velocity to close the gap. A new platoon is formed, in which vehicle $j+1$ becomes vehicle $j$, vehicle $j+2$ becomes vehicle $j+1$, and so on.
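For concreteness, the following Python sketch walks through the message sequence of Figure 3. The class names, the `track` method, and the bookkeeping are our hypothetical scaffolding; in the real system, each `track` call would hand a “ghost” reference to controller (3).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Ghost:
    x: float   # longitudinal state copied from a real vehicle [m]
    v: float   # [m/s]

@dataclass
class Car:
    name: str
    x: float
    v: float
    target: Optional[Ghost] = None

    def track(self, ghost):
        # In the full system this feeds the ghost to controller (3);
        # here we merely record the current reference.
        self.target = ghost

def ghost_of(vehicle):
    """Virtual predecessor: same parameters except the lateral position."""
    return Ghost(x=vehicle.x, v=vehicle.v)

def merge_into_platoon(merger, platoon, slot_j):
    """Handshake of Figure 3: merger becomes the member at index slot_j."""
    merger.track(ghost_of(platoon[slot_j - 1]))   # (b) align behind ghost of j-1
    platoon[slot_j].track(ghost_of(merger))       # (c)-(d) vehicle j+1 opens a gap
    platoon.insert(slot_j, merger)                # (e) lane change into the slot
    return platoon

platoon = [Car("v1", 0.0, 30.0), Car("v2", -21.0, 30.0), Car("v3", -42.0, 30.0)]
merge_into_platoon(Car("m", -10.0, 35.0), platoon, slot_j=1)
print([c.name for c in platoon])   # ['v1', 'm', 'v2', 'v3']
```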
## 3. Simulation Study

We use MATLAB Simulink [33] to simulate three different scenarios of our distributed consensus-based CACC system. For brevity, the communication delay between two CACC-equipped vehicles is assumed to be $\tau_{ij}(t)=60$ ms [9]. Results for vehicle velocity and for weighted and unweighted intervehicle distance are shown for the different scenarios.

### 3.1. Normal Platoon Formation

In the first scenario, we assume that four CAVs of different types (i.e., two sedans, one SUV, and one truck) are driving at randomly varied velocities on the same lane of a highway. At a certain time ($t=0$), they all switch on the platoon mode. From then on, they adjust their absolute positions and velocities based on (3) and (7) as well as the normal platoon formation protocol to reach consensus and form a platoon. The vehicle parameters of this distributed consensus-based CACC system are listed in Table 1.

Table 1: Values of vehicle parameters.

| Parameter | Vehicle 1 | Vehicle 2 | Vehicle 3 | Vehicle 4 |
|---|---|---|---|---|
| GPS antenna to front bumper $l_i^{f}$ | 3 m | 3 m | 3 m | 6 m |
| GPS antenna to rear bumper $l_i^{r}$ | 2 m | 2 m | 2 m | 4 m |
| Braking factor $b_i$ | 1 | 1 | 1.1 | 1.6 |
| Initial velocity $\dot{x}_i(0)$ | 30 m/s | 33 m/s | 36 m/s | 39 m/s |
| Desired velocity $\dot{x}_i$ | 30 m/s | 30 m/s | 30 m/s | 30 m/s |
| Initial time gap $t_{ij}^{g}(0)$ | - | 0.91 s | 1.11 s | 1.67 s |
| Initial weighted intervehicle distance $d_{ij}(0)$ | - | 30 m | 40 m | 65 m |
| Desired time gap $t_{ij}^{g}$ | - | 0.43 s | 0.48 s | 0.69 s |
| Desired time headway $t_{ij}^{h}$ | - | 0.6 s | 0.64 s | 0.86 s |
| Desired weighted intervehicle distance $d_{ij}$ | - | 13 m | 14.3 m | 20.8 m |
| Desired unweighted intervehicle distance $d_{ij}/b_i$ | - | 13 m | 13 m | 13 m |

As can be seen from Table 1, we assume that vehicles 1 and 2 are sedans with vehicle lengths of 5 m and braking factors of 1, vehicle 3 is an SUV with a vehicle length of 5 m and a braking factor of 1.1, and vehicle 4 is a truck with a vehicle length of 10 m and a braking factor of 1.6. We further assume that the GPS antenna is located at a point of the vehicle satisfying $2l_i^{f}=3l_i^{r}$. The weighted intervehicle distances are used instead of time gaps to measure the consensus of the vehicles' absolute positions in a more intuitive manner. They can be written as

$$d_{ij}(0)=\dot{x}_i(0)\,t_{ij}^{g}(0),\qquad d_{ij}=\dot{x}_i(t)\,t_{ij}^{g}.\tag{8}$$
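The derived distance rows of Table 1 follow directly from relation (8); the short computation below (our verification, using the rounded time gaps listed in the table) reproduces the desired weighted and unweighted distances up to rounding.

```python
# Reproducing the derived distance rows of Table 1 from relation (8),
# using the rounded desired time gaps listed in the table.
v_des = 30.0                        # desired platoon velocity [m/s]
rows = [                            # (pair, desired time gap t_g [s], b_i)
    ("1-2", 0.43, 1.0),
    ("2-3", 0.48, 1.1),
    ("3-4", 0.69, 1.6),
]
for pair, t_g, b_i in rows:
    d_w = v_des * t_g               # desired weighted distance d_ij, (8)
    print(f"pair {pair}: d_ij = {d_w:.1f} m, d_ij/b_i = {d_w / b_i:.1f} m")
# pair 1-2: d_ij = 12.9 m, d_ij/b_i = 12.9 m   (Table 1: 13 m, 13 m)
# pair 2-3: d_ij = 14.4 m, d_ij/b_i = 13.1 m   (Table 1: 14.3 m, 13 m)
# pair 3-4: d_ij = 20.7 m, d_ij/b_i = 12.9 m   (Table 1: 20.8 m, 13 m)
```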
As a key parameter, the damping gain $\gamma$ in (3) affects the convergence rate of the absolute positions and velocities of all vehicles in the platoon. In this study, $\gamma=7$ is used in all three simulation scenarios; a more detailed analysis of how the value of $\gamma$ affects system performance (e.g., driving safety and driving comfort) is conducted in the next section. By implementing our distributed consensus-based strategy, we obtain the simulation results shown in Figures 4(a)-4(c).

Figure 4: Simulation results of normal platoon formation.

Figure 4(a) shows that, after the platoon mode is activated at $t=0$, all three unweighted intervehicle distances converge to 13 m within about 35 seconds. The unweighted intervehicle distance can be considered a “virtual” target value that we set for the system to achieve, not the “real” intervehicle distance. Figure 4(b) shows the results for the weighted intervehicle distance. Owing to the braking factor, the steady-state weighted intervehicle distance varies across vehicle pairs; the weighted intervehicle distance is the “real” intervehicle distance in our CACC system. In this case, at the steady state of the system, vehicles 1 and 2 keep a 13 m (0.43 s) gap, vehicles 2 and 3 a 14.3 m (0.48 s) gap, and vehicles 3 and 4 a 20.8 m (0.69 s) gap. Figure 4(c) shows that the velocities of the four vehicles converge within about 35 seconds after the platoon mode is activated. Under the distributed consensus algorithm, they all converge to 30 m/s, which is the constant velocity of the leading vehicle and also the desired velocity of the platoon.

### 3.2. Platoon Restoration from Disturbances

In this scenario, a simulation test is conducted to demonstrate the string stability of our CACC system, that is, the capability of the distributed consensus algorithm to attenuate sudden disturbances. In the platoon mode of our distributed consensus-based CACC system, if one vehicle (e.g., the leading vehicle) suddenly brakes and reduces its velocity in an emergency, the following vehicles decelerate accordingly to maintain their weighted intervehicle distances.

As an example, we assume that all parameters remain the same as in the first scenario. At time $t=45$ s, suppose that the leading vehicle suddenly brakes due to a flat tire, and its velocity drops from 30 m/s to 15 m/s. To simplify the scenario, we assume that the braking is instantaneous ($\Delta t\approx 0$), that is, a step change in the leading vehicle's velocity.

The simulation results for the sudden brake are shown in Figures 5(a)-5(c). Figure 5(a) shows that the unweighted intervehicle distance between vehicles 1 and 2 suffers a decrease of approximately 4 m at time $t=45$ s. However, the unweighted intervehicle distance between vehicles 2 and 3 decreases by only approximately 0.7 m, and the decrease between vehicles 3 and 4 is smaller still. This result implies that the sudden disturbance of the intervehicle distance is attenuated along the rest of the platoon.

Figure 5: Simulation results of platoon restoration from disturbances.

The velocities of the vehicles in the platoon are shown in Figure 5(c). The sudden brake originates from vehicle 1, and vehicle 2 avoids a collision with vehicle 1 by braking hard. The braking of vehicle 3 is not as hard as that of vehicle 2 (the slope is smaller), and the braking of vehicle 4 is smoother still. The smoother the braking, the smaller the absolute value of the acceleration. After the braking event, the velocities of the three following vehicles are gradually restored to the desired velocity. This result implies that the sudden disturbance of the vehicle acceleration is attenuated along the rest of the platoon.

Figure 5(b) presents the results for the weighted intervehicle distance, that is, the unweighted intervehicle distance multiplied by the braking factor of each vehicle.
Overall, the simulation results of this scenario indicate that our distributed consensus-based CACC system is capable of attenuating sudden disturbances and restoring normal conditions; that is, the system is string-stable. A compact numerical sketch of this experiment is given at the end of this section.

### 3.3. Merging and Splitting Maneuvers

In this scenario, we show the effects of performing the proposed distributed consensus algorithm together with the merging and splitting maneuvers protocol presented in Section 2.

For the merging maneuvers, assume that, at time $t=0$, a three-vehicle platoon (with the same parameters as vehicles 1, 3, and 4 in the first scenario) is operating at the steady state (i.e., cruising at a velocity of 30 m/s). An individual vehicle (with the same parameters as vehicle 2 in the first scenario) traveling at 35 m/s on the adjacent lane plans to merge into the platoon; the simulation result is shown in Figure 6(a).

Figure 6: Simulation results of merging and splitting maneuvers.

It can be observed from Figure 6(a) that the individual vehicle switches on the merging mode at time $t=5$ s. From then on, a “ghost” vehicle mirroring the first vehicle of the platoon is created, and the individual vehicle adjusts its velocity from 35 m/s to 30 m/s by (3). After that, the individual vehicle sends a merging signal to the second vehicle of the platoon. A “ghost” vehicle mirroring the merging vehicle is then created in front of the second vehicle of the platoon. Based on (3), both the second and third vehicles of the platoon decelerate to create a gap, and the second vehicle sends a signal to the individual vehicle upon completion of the gap opening. Finally, the individual vehicle merges into the platoon, and the velocities of the other two following vehicles are restored to consensus in around 8 s.

For the splitting maneuvers, assume that, at time $t=0$, a four-vehicle platoon (with the same parameters as vehicles 1, 2, 3, and 4 in the first scenario) is cruising at a velocity of 30 m/s. The second vehicle will split from the platoon; the simulation result is shown in Figure 6(b).

The second vehicle of the platoon switches off the platoon mode and drives away (accelerating steadily from 30 m/s to 35 m/s) at time $t=10$ s. After the second vehicle completes its lane change, the third vehicle confirms that its predecessor has changed to the first vehicle of the platoon. It then adjusts its velocity based on (3) to close the gap. The fourth vehicle accordingly adjusts its velocity to follow the movement of its predecessor.

The simulation results of the third scenario therefore show that our distributed consensus-based CACC system is capable of carrying out merging and splitting maneuvers.
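As a cross-check on the disturbance scenario of Section 3.2, the toy Python simulation below applies controller (3) to the four-vehicle platoon (delay omitted and all antenna-to-bumper length sums simplified to 5 m, both our simplifications) and records the peak deceleration of each follower after the leader's step from 30 m/s to 15 m/s. The magnitudes should shrink down the string, mirroring Figure 5(c) and the acceleration criterion (6).

```python
import numpy as np

# Leader steps from 30 to 15 m/s at t = 45 s; controller (3) with zero
# delay and gamma = 7 governs the three followers (Table 1 gaps/factors).
dt, gamma, v0 = 0.01, 7.0, 30.0
t_g = [0.43, 0.48, 0.69]            # desired time gaps per pair
b   = [1.0, 1.1, 1.6]               # braking factors of vehicles 2-4
L   = 5.0                           # l_i^f + l_j^r, simplified to 5 m
gap0 = [L + v0 * t_g[p] * b[p] for p in range(3)]
x = np.array([0.0, -gap0[0], -gap0[0] - gap0[1], -sum(gap0)])
v = np.full(4, v0)
peak_decel = np.zeros(4)

for step in range(9000):            # 90 s of simulated time
    if step == 4500:
        v[0] = 15.0                 # step disturbance of the leader
    a = np.zeros(4)
    for i in range(1, 4):
        p = i - 1                   # pair index; predecessor is i-1
        pos_err = x[i] - x[i - 1] + L + v[i - 1] * t_g[p] * b[p]
        a[i] = -pos_err - gamma * (v[i] - v[i - 1])
    x += v * dt
    v += a * dt
    peak_decel = np.minimum(peak_decel, a)

print("peak decelerations [m/s^2]:", np.round(peak_decel[1:], 2))
# Expected pattern: |a_2| > |a_3| > |a_4|, i.e., the disturbance is
# attenuated along the string, consistent with criterion (6).
```

Because the sketch applies no actuator saturation, the raw peak for vehicle 2 can exceed realistic braking limits; only the attenuation pattern, not the absolute values, is meaningful here.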
## 4. Sensitivity Analysis

In this section, a sensitivity analysis is conducted to study how the damping gain $\gamma$ affects the convergence rate of the system, the acceleration and jerk (the time rate of change of acceleration) of the vehicles in the system, and the minimum weighted intervehicle distance between two consecutive vehicles in the system.

The sensitivity analysis is based on the normal platoon formation scenario, whose information flow topology $G$ contains a directed spanning tree, as shown in Figure 7.

Figure 7: Information flow topology of the normal platoon formation scenario.

The adjacency matrix can then be defined as

$$A=\begin{pmatrix}0&0&0&0\\1&0&0&0\\0&1&0&0\\0&0&1&0\end{pmatrix},\tag{9}$$

and the nonsymmetric Laplacian matrix is

$$L=\begin{pmatrix}0&0&0&0\\-1&1&0&0\\0&-1&1&0\\0&0&-1&1\end{pmatrix}.\tag{10}$$

Recall that, in (3), there is a damping gain $\gamma$ in front of the velocity consensus term. Similar to the second-order consensus algorithm in [34], we can conclude that (3) achieves consensus asymptotically if and only if the directed graph $G$ has a directed spanning tree and

$$\gamma>\max_{\forall\mu_i\neq 0}\sqrt{\frac{\operatorname{Im}^{2}(\mu_i)}{\operatorname{Re}(\mu_i)\left|\mu_i\right|^{2}}},\tag{11}$$

where $\mu_i$, $i=1,\ldots,n$, denotes the $i$th eigenvalue of $-L$. The detailed proof of this conclusion is included in Appendix B. Since the specific value of $\gamma$ significantly influences our CACC system in several respects, a sensitivity analysis comprising three parts is conducted in this section.

### 4.1. Convergence Rate Analysis

The convergence rate of the proposed distributed consensus algorithm determines the time required for our CACC system to reach the steady state: the faster the convergence rate, the less time is consumed and the more efficient our CACC system is.

Here we study the convergence rate of the system without communication delay, for brevity. Define $\tilde{x}=[\tilde{x}_1,\ldots,\tilde{x}_4]^{T}$ and $\dot{\tilde{x}}=[\dot{\tilde{x}}_1,\ldots,\dot{\tilde{x}}_4]^{T}$, where $\tilde{x}_i=x_i(t)-x_j(t)+l_i^{f}+l_j^{r}+\dot{x}_j(t)\,t_{ij}^{g}\,b_i$ and $\dot{\tilde{x}}_i=\dot{x}_i(t)-\dot{x}_j(t)$. The information states with second-order dynamics of our system, in this four-vehicle platoon case without communication delay, can be written in matrix form as

$$\begin{pmatrix}\dot{\tilde{x}}\\\ddot{\tilde{x}}\end{pmatrix}=\Gamma\begin{pmatrix}\tilde{x}\\\dot{\tilde{x}}\end{pmatrix},\tag{12}$$

where

$$\Gamma=\begin{pmatrix}0_{4\times 4}&I_4\\-L&-\gamma L\end{pmatrix}.\tag{13}$$

The eigenvalues of $\Gamma$ are found by solving the characteristic polynomial of $\Gamma$,

$$\det\left(\lambda I_8-\Gamma\right)=\det\begin{pmatrix}\lambda I_4&-I_4\\L&\lambda I_4+\gamma L\end{pmatrix}=\det\left(\lambda^{2}I_4+\left(\gamma\lambda+1\right)L\right)=0.\tag{14}$$

As aforementioned, $\mu_i$ is the $i$th eigenvalue of $-L$.
Therefore,

$$\det\left(\lambda I_4+L\right)=\prod_{i=1}^{4}\left(\lambda-\mu_i\right).\tag{15}$$

Comparing (14) with (15), we get

$$\det\left(\lambda I_8-\Gamma\right)=\det\left(\lambda^{2}I_4+\left(\gamma\lambda+1\right)L\right)=\prod_{i=1}^{4}\left(\lambda^{2}-\left(\gamma\lambda+1\right)\mu_i\right)=0,\tag{16}$$

which implies that the solutions of (14) coincide with the solutions of

$$\lambda^{2}-\left(\gamma\lambda+1\right)\mu_i=0.\tag{17}$$

Therefore, the eigenvalues of $\Gamma$ are given by

$$\lambda_{i1}=\frac{\gamma\mu_i+\sqrt{\gamma^{2}\mu_i^{2}+4\mu_i}}{2},\qquad \lambda_{i2}=\frac{\gamma\mu_i-\sqrt{\gamma^{2}\mu_i^{2}+4\mu_i}}{2}.\tag{18}$$

The convergence rate is an exponential decay term of the form $e^{-\eta(\gamma)t}$, where

$$\eta(\gamma)=\max\left\{\operatorname{Re}\left(\lambda_{ij}\right)\mid i=2,3,4;\ j=1,2\right\}.\tag{19}$$

Since $\operatorname{Re}(\lambda_{i1})\geq\operatorname{Re}(\lambda_{i2})$, $j=1$ is set in (19). To find the maximum convergence rate, we need to find $\gamma^{*}>0$ such that $\eta(\gamma^{*})=\min\eta(\gamma)$. It is proven in [35] that the minimum of $\eta(\gamma)$ is achieved if $\operatorname{Re}(\lambda_{21})=\operatorname{Re}(\lambda_{n1})$; that is,

$$\frac{\gamma\mu_2}{2}=\frac{\gamma\mu_n+\sqrt{\gamma^{2}\mu_n^{2}+4\mu_n}}{2}.\tag{20}$$

Therefore, the maximum convergence rate is achieved at

$$\gamma=\gamma^{*}=2\sqrt{\frac{-\mu_n}{-\mu_2\left(\mu_2-2\mu_n\right)}}.\tag{21}$$

Since the Laplacian matrix $L$ is given above and $\mu_2$ and $\mu_n$ can be derived from it, a value of $\gamma=\gamma^{*}=2$ is obtained for the maximum convergence rate. When $\gamma<2$, the larger $\gamma$ is, the faster the convergence; when $\gamma>2$, the larger $\gamma$ is, the slower the convergence. Therefore, to achieve higher efficiency of our CACC system, we choose the value of the damping gain $\gamma$ as close to 2 as possible.

### 4.2. Driving Comfort Analysis

In this part, we analyze the effect of $\gamma$ on driving comfort. The change of vehicle velocity is related to the vehicle acceleration and jerk, and it has been found in [36, 37] that limits of ±2.5 m/s² for acceleration and ±10 m/s³ for jerk are comfortable for human passengers. We measure the extreme values of the acceleration $a$ and the jerk during the normal platoon formation process and check for which values of $\gamma$ the conditions $-2.5\ \mathrm{m/s^2}<a<2.5\ \mathrm{m/s^2}$ and $-10\ \mathrm{m/s^3}<\mathrm{jerk}<10\ \mathrm{m/s^3}$ are satisfied. If both $a$ and the jerk stay within these ranges, driving is comfortable for human passengers.

The parameters of this analysis are given in Table 2; they are exactly the same as those of the first two vehicles in the aforementioned simulation scenarios. The result of the sensitivity analysis on driving comfort is shown in Figure 8. As can be seen, for $7\leq\gamma\leq 7.8$, both the acceleration and the jerk are within the “comfort” ranges. Since a faster convergence rate is desired, a value of 7 can be chosen for $\gamma$.

Table 2: Values of vehicle parameters.

| Parameter | Vehicle 1 | Vehicle 2 |
|---|---|---|
| GPS antenna to front bumper $l_i^{f}$ | 3 m | 3 m |
| GPS antenna to rear bumper $l_i^{r}$ | 2 m | 2 m |
| Braking factor $b_i$ | 1 | 1 |
| Initial velocity $\dot{x}_i(0)$ | 30 m/s | 33 m/s |
| Desired velocity $\dot{x}_i$ | 30 m/s | 30 m/s |
| Initial weighted intervehicle distance $d_{ij}(0)$ | - | 30 m |
| Desired weighted intervehicle distance $d_{ij}$ | - | 13 m |

Figure 8: Driving comfort analysis.

### 4.3. Driving Safety Analysis

In this part, we analyze the effect of $\gamma$ on driving safety. We measure the minimum weighted intervehicle distance during the normal platoon formation process and check whether it becomes negative. If it does, a collision between the leading vehicle and the following vehicle occurs.

We first analyze how changes of $\gamma$ and of the initial weighted intervehicle distance $d_{ij}(0)$ affect the minimum weighted intervehicle distance $\min(d_{ij})$. All parameters of this sensitivity analysis except the initial weighted intervehicle distance ($d_{ij}(0)$ is a variable in this case) are set as in Table 2. The result is shown in Figure 9.

Figure 9: Driving safety analysis related to the initial weighted intervehicle distance.

As the result shows, the regions with $\min(d_{ij})<0$ appear mostly when $d_{ij}(0)>25$ m and, at the same time, $\gamma<1$.
### 4.2. Driving Comfort Analysis

In this part, we analyze the effect of γ on driving comfort. The change of vehicle velocity is reflected in vehicle acceleration and jerk, and it has been shown in [36, 37] that limits of ±2.5 m/s² on acceleration and ±10 m/s³ on jerk are comfortable for human passengers. We measure the maximum acceleration and the maximum jerk during the normal platoon formation process and check for which values of γ both −2.5 m/s² < a < 2.5 m/s² and −10 m/s³ < jerk < 10 m/s³ are satisfied. If a and jerk are both within range, driving is comfortable for human passengers.

The parameters of this analysis, set in Table 2, are exactly the same as those of the first two vehicles in the aforementioned simulation scenarios. The result of the sensitivity analysis on driving comfort is shown in Figure 8: when 7 ≤ γ ≤ 7.8, both the acceleration and the jerk are in the "comfort" ranges. Since a faster convergence rate is desired, a value of 7 can be chosen for γ.

Table 2: Values of vehicle parameters.

| Parameter | Vehicle 1 | Vehicle 2 |
| --- | --- | --- |
| GPS antenna to front bumper $l_i^f$ | 3 m | 3 m |
| GPS antenna to rear bumper $l_i^r$ | 2 m | 2 m |
| Braking factor $b_i$ | 1 | 1 |
| Initial velocity $\dot{x}_i(0)$ | 30 m/s | 33 m/s |
| Desired velocity $\dot{x}_i$ | 30 m/s | 30 m/s |
| Initial weighted intervehicle distance $d_{ij}(0)$ | 30 m | |
| Desired weighted intervehicle distance $d_{ij}$ | 13 m | |

Figure 8: Driving comfort analysis.

### 4.3. Driving Safety Analysis

In this part, we analyze the effect of γ on driving safety. We measure the minimum weighted intervehicle distance during the normal platoon formation process and check whether it becomes negative; if it does, a collision between the leading vehicle and the following vehicle occurs.

We first analyze how changes in γ and in the initial weighted intervehicle distance $d_{ij}(0)$ affect the minimum weighted intervehicle distance $\min(d_{ij})$. All parameters except the initial weighted intervehicle distance ($d_{ij}(0)$ is the variable in this case) are set as in Table 2. The result is shown in Figure 9.

Figure 9: Driving safety analysis related to the initial weighted intervehicle distance.

As shown in the result, the areas indicating $\min(d_{ij})<0$ appear mostly when $d_{ij}(0)>25$ m and, at the same time, $\gamma<1$. This is because, when the absolute position difference is large and the damping gain of the velocity consensus term is small, the system puts more weight on the absolute position consensus term, resulting in a large overshoot of the absolute position consensus. When the initial weighted intervehicle distance is sufficiently large ($d_{ij}(0)>0.18$ m), this can be avoided by choosing γ no smaller than 2. There is also a linear area indicating $\min(d_{ij})<0$ where $d_{ij}(0)$ is small. A hypothesis is that, at time t = 0, the following vehicle has a higher velocity while the weighted intervehicle distance is very small, so no γ exists that allows the following vehicle to avoid a collision with the leading vehicle. If we fix the value of γ, it is found that the closer $d_{ij}(0)$ approaches the desired distance $d_{ij}$ (13 m), the larger $\min(d_{ij})$ is.

We also analyze how changes in γ and in the initial velocity difference $\delta\dot{x}_{ij}(0)$ affect the minimum weighted intervehicle distance $\min(d_{ij})$. All parameters except the initial velocities (the difference between $\dot{x}_i(0)$ and $\dot{x}_j(0)$ is the variable in this case) are set as in Table 2. The result is shown in Figure 10.

Figure 10: Driving safety analysis related to the initial velocity difference.

As shown in the figures, collisions only happen in the areas where γ is small. If we fix the value of γ, it is found that the closer $\delta\dot{x}_{ij}(0)$ approaches 0 m/s, the larger $\min(d_{ij})$ is. A potential explanation is that, although the weighted intervehicle distance changes regardless of its initial value, the change is minimized when the initial velocities of the two vehicles are the same. When $\gamma\ge 2$, no matter how large the initial velocity difference is, the minimum weighted intervehicle distance is always 13 m.

From the driving safety analysis we know that the preliminary value of γ (γ = 7) chosen for our CACC system is safe, without any collision between two vehicles. When the parameter setting changes, the procedures of convergence rate analysis, driving comfort analysis, and driving safety analysis can be applied to choose the best value of γ, ensuring that the platoon in our CACC system is efficient, comfortable, and safe.
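The comfort and safety checks of Sections 4.2 and 4.3 can be reproduced qualitatively with a small simulation. The sketch below (plain Python; the scalar leader-follower reduction of the error dynamics (12)-(13) and all names are illustrative, not the paper's Simulink model) returns the three metrics of interest for a candidate γ:

```python
def platoon_metrics(gamma, d0=30.0, d_des=13.0, dv0=3.0, dt=1e-3, t_end=20.0):
    """Integrate a scalar leader-follower reduction of the delay-free error
    dynamics (12)-(13) and return (min distance, max |accel|, max |jerk|).
    Initial values follow Table 2; this reduction is illustrative only,
    not the full consensus protocol (3)."""
    e, de = d0 - d_des, -dv0            # spacing error and its rate
    a_prev = -e - gamma * de            # initial acceleration command
    min_d, max_a, max_j = d0, abs(a_prev), 0.0
    for _ in range(int(t_end / dt)):
        a = -e - gamma * de             # second-order consensus term
        max_j = max(max_j, abs((a - a_prev) / dt))
        de += a * dt                    # forward-Euler integration
        e += de * dt
        min_d = min(min_d, d_des + e)
        max_a = max(max_a, abs(a))
        a_prev = a
    return min_d, max_a, max_j

for gamma in (0.5, 2.0, 7.0):
    print(gamma, platoon_metrics(gamma))
```

Sweeping γ (and, analogously, $d_{ij}(0)$ or $\delta\dot{x}_{ij}(0)$) with such a routine reproduces the qualitative shape of Figures 8 to 10: small γ yields oscillatory gaps with small minimum distances, while larger γ damps the transient.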
## 5. Conclusions and Future Research

In this study, we have proposed a novel CACC system based on a distributed consensus algorithm that takes into account the unavoidable time-varying communication delay as well as the length, GPS antenna location, and braking ability of different vehicles. We have also developed a distributed consensus protocol that allows our CACC system to form a platoon and to carry out merging and splitting. The algorithm and protocol have been implemented in MATLAB Simulink, and the system is shown to recover from a variety of disturbances and to carry out merging and splitting maneuvers. In addition, a sensitivity analysis was performed on the algorithm, indicating that the distributed consensus algorithm reaches its maximum convergence rate when γ = 2, while γ = 7 is an optimal value for our system to be efficient, comfortable, and safe under the given parameter setting.

It should be pointed out that, although the system level (cyberspace) of the vehicles has been taken into account in this study, the actual vehicle dynamics model (physical space) has been neglected. Combining the cyberspace and the physical space may be a future goal of this study.
Also, as discussed in Section 2, the braking factor we proposed may be an aggregate of many different factors, including the mass of the vehicle (light or heavy), the aerodynamic performance of the vehicle (good or bad), the condition of the brakes (new or worn), the condition of the tires (new or worn), the type of the tires (all-season or snow tires), the condition of the road surface (dry or wet), and the gradient of the road (flat or steep). By applying fuzzy logic theory [38], a control model that takes the above factors as inputs and the braking factor as output can be developed in the future to decide the value of the braking factor for each vehicle in the system; a sketch of this idea follows at the end of this section. Moreover, although the proposed distributed consensus algorithm has taken some system uncertainties such as communication delay into account, many other issues that may occur in field implementations, such as packet loss, signal fading, and signal interference, have not been addressed in this study. This opens up further opportunities for future research.
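As an illustration of that future direction only, the following crisp weighted-rule stand-in hints at how such a fuzzy model could map conditions to a braking factor (plain Python; all factor names, membership shapes, and weights are hypothetical and not taken from the paper):

```python
# Illustrative sketch only: a crisp weighted-rule stand-in for the fuzzy
# braking-factor model proposed as future work. Factor names, membership
# shapes, and weights are hypothetical assumptions.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def braking_factor(mass_kg, tire_wear, road_wetness):
    """Aggregate a braking factor b_i in (0, 1]; 1 means nominal braking."""
    heavy = tri(mass_kg, 1500, 3000, 4500)   # degree of "heavy vehicle"
    degradation = 0.4 * heavy + 0.3 * tire_wear + 0.3 * road_wetness
    return max(0.2, 1.0 - 0.5 * degradation)

print(braking_factor(mass_kg=2000, tire_wear=0.3, road_wetness=0.0))
```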
---

*Source: 1023654-2017-08-06.xml*
# Correlation Analysis between Exchange Rate Fluctuations and Oil Price Changes Based on Copula Function

**Authors:** Xiaodong Huang
**Journal:** Advances in Multimedia (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1023725

---

## Abstract

In order to explore the relationship between exchange rate fluctuations and oil prices, this paper combines the copula function to study the correlation between exchange rate fluctuations and oil price changes, conducts a more comprehensive study of the copula function, and applies the algorithm to some practical classification problems. Moreover, this paper remedies some defects in the algorithm and, drawing on new learning frameworks in machine learning, generalizes the copula function to a variety of learning models. In addition, this paper studies how to use the covering algorithm to construct classifiers for various problems and proposes corresponding improvement strategies according to the characteristics of each problem. Finally, this paper builds a correlation analysis algorithm model and uses simulation research to verify that there is a relatively obvious correlation between exchange rate fluctuations and oil price changes.

---

## Body

## 1. Introduction

The commodity attribute of oil means that oil, as an exchangeable commodity, has both use value and value. According to the definition in political economy, the use value of a commodity is the attribute of meeting certain human needs, and use value is one of the attributes common to all commodities; conversely, an item without use value does not become a commodity. The value of a commodity is the undifferentiated human labor (both physical and mental) condensed in it. On the one hand, as an important energy material and chemical raw material, oil is widely used in industry, transportation, and national defense; it therefore plays an extremely important role in daily life and has great use value [1]. On the other hand, from finding oil to using it, four links are generally involved: exploration, extraction, transportation, and processing, and the successful completion of each link condenses the mental and physical labor of exploration personnel, mining workers, transportation workers, and production workers. The value attribute of oil is therefore evident. According to the law of commodity value, the price of oil fluctuates around its value according to market supply and demand, and the value of oil is determined by the cost of producing it, including exploration, extraction, transportation, and processing costs [2].

In a narrow sense, the financial attribute of oil means that oil underlies financial derivative products and is one of the basic variables of financial derivative market transactions. In a broad sense, it refers to the interaction and mutual influence between fluctuations of the oil spot market and fluctuations of the financial derivative market, a relationship increasingly dominated by the one-way impact of financial derivative market fluctuations on spot market fluctuations [3]. The financial attributes of oil are determined by the characteristics of oil supply and demand and by the uneven distribution of oil, with the supply and demand characteristics being the fundamental reason.
The characteristics of oil supply and demand and the uneven distribution of oil resources lead the market to use financial means to control price risks, while also causing a large influx of speculative funds into the oil market to speculate on oil prices, so that oil prices are affected not only by supply and demand but also by the financial market, including conventional financial price indicators such as speculative funds, exchange rate indices, stock price indices, and gold price indices [4].

Considering that the futures price has a price discovery effect on the spot price, sharp and sudden fluctuations of the oil futures price are also transmitted to the spot market; by the convergence theory, the futures price and the spot price tend toward consistency [5]. Therefore, continuous jumping behavior in the futures market may affect the spot market. It is thus very necessary to examine the jumping phenomenon of international oil futures prices in depth, which allows better suggestions for investors in related fields, helps stabilize the spot market, and supports timely measures against risk. As the most important adjustment lever of international trade [6], the exchange rate plays a direct adjusting role in a country's trade: its fluctuations directly affect the entire import and export trade and thereby the stability of the country's economic operation. According to the existing literature, many scholars have also proposed that the spot price of oil is the most significant factor affecting the exchange rate, with explanatory power significantly better than that of other factors. Therefore, starting from the fluctuation of the oil price itself, the effect of oil on the exchange rate can be further investigated, so as to better analyze exchange rate trends and stabilize economic operation [7]. Observing oil price fluctuations over the past two years, it can be found that they coincided with many unexpected world events, such as the Iranian nuclear issue and OPEC's refusal to reduce production, which led to excess oil supply, short-term changes in the relationship between oil supply and demand, and frequent jumps in oil prices. Correspondingly, the exchange rates of different countries have also changed significantly, which again shows that it is necessary to study oil price fluctuations and their jumping phenomena [8].

Literature [9] explained from a theoretical level how the exchange rate affects oil prices and carried out an empirical analysis. Literature [10] holds that there is a cointegration relationship between the US dollar exchange rate and the international oil price: changes in international oil prices lead to fluctuations in the US dollar exchange rate, but changes in the US dollar exchange rate do not lead to fluctuations in international oil prices. Literature [11] holds that there is a cointegration relationship between international oil prices and real exchange rates and that, in the long run, international oil prices have high significance in predicting exchange rate changes. Literature [12] conducted an empirical study by adding the international oil price as a variable to an exchange rate determination model and found that the international oil price can significantly explain and predict changes in the US dollar exchange rate.
Literature [13] studied the relationship between the international oil price and the US dollar exchange rate before and after the financial crisis through linear and nonlinear causality analysis and found a one-way linear causal relationship between the international oil price and the US dollar exchange rate before the crisis and a bidirectional nonlinear causal relationship afterwards, with volatility spillovers and institutional changes as important sources of the nonlinear causality. Literature [14] judged at both the theoretical and the empirical level that the impact of rising oil prices on real income and price levels is ambiguous, because the countries that are substantially affected by actual oil prices in a statistical sense are also countries conducting price controls; the resulting price-control bias in the actual GNP data can serve as a reasonable explanation in place of the oil price shock. Literature [15] established a quarterly multivariate VAR model to study the existence and direction of causal relationships among oil prices, oil consumption, actual output, and several other key macroeconomic policy variables and concluded that oil price shocks are not the main cause of the US business cycle and that, besides oil prices, real output also significantly changes oil consumption, and vice versa.

International oil prices can cause changes in the exchange rate level by causing changes in a country's inflation. Oil is a basic industrial energy source, and a rise or fall in international oil prices changes the costs of industrial production enterprises. When the international oil price rises, it first changes the costs of the enterprises and industries closely related to oil, and this change then spreads to the operating costs of the entire industrial chain and even the entire economy. The increase in costs leads to cost-driven inflation [16].

This paper combines the copula function to study the correlation between exchange rate fluctuations and oil price changes and establishes a model through intelligent analysis methods to improve the effect of the correlation analysis between exchange rate fluctuations and oil price changes.

## 2. Copula Function Based on Correlation Analysis

### 2.1. Basic Copula Functions

We assume that $S=\{(x_1,y_1),(x_2,y_2),\dots,(x_p,y_p)\}=X_1\cup X_2\cup\dots\cup X_k$ is a set of learning samples in a given $n$-dimensional space $X$, where $x_i=(x_1^i,x_2^i,\dots,x_n^i)\in X$, $p$ is the number of learning samples, $Y=\{y^1,y^2,\dots,y^k\}$ is a finite set of class labels with each $y_i$ in the sample set belonging to $Y$, and $k$ is the number of categories. It is required to construct a three-layer neural network that, after learning $S$, outputs for a sample $x_j\in X$ of unknown category its category $y_j\in Y$, with a recognition rate as high as possible.

The basic idea of the domain copula function is to construct the coverages of each category in turn, with no intersection between coverages, until all samples are included in some coverage. In constructing a coverage, the main operations are as follows:

(1) Field construction: to construct a spherical field, select any sample $a_1$ not yet covered in the currently processed category $X_i$ and use it as the center to solve the radius $r$, obtaining a coverage.
We write $\langle\cdot,\cdot\rangle$ for the inner product; the solution strategy for the radius $r$ is then

$$d_1=\max\{\langle x,a_1\rangle \mid x\notin X_i\},\tag{1}$$
$$d_2=\min\{\langle x,a_1\rangle \mid \langle x,a_1\rangle>d_1,\ x\in X_i\},\tag{2}$$
$$\theta=\frac{d_1+d_2}{2},\tag{3}$$
$$a=\frac{d_2-d_1}{2}.\tag{4}$$

That is, $d_1$ is the similarity between the current center $a_1$ and the nearest heterogeneous point, and $d_2$ is the similarity between the center and the farthest same-class point that is still closer than the nearest heterogeneous point (because samples are compared by inner product, a larger value means a smaller distance). Taking $r=\theta=(d_1+d_2)/2$ as the radius to construct the coverage, the classification gap is $a$, as shown in Figure 1, where triangles and squares represent the two types of samples.

(2) Center of gravity: after an initial coverage is obtained, the center of gravity $C$ of all samples in the coverage is computed and projected onto the hypersphere. Taking the projected point as the new coverage center, the field is reconstructed as in operation (1) until the newly obtained coverage cannot cover more sample points.

(3) Translation: since in $n$-dimensional space $n+1$ linearly independent points determine a hypersphere, a translation operation is used to cover as many same-class samples as possible; the translation algorithm can be found in the literature.

Figure 1: Definition of the coverage radius $r$.

At this point, the domain copula function can be stated:

Algorithm 1. The learning process is as follows: for the $k$ categories, construct the coverages of each category in turn until all samples are included in some coverage. The coverage construction for the $i$th class:

Step 1.1. Take any point of the $i$th category that has not been covered and denote it $a_1$.
Step 1.2. With $a_1$ as the center, find the threshold $\theta_1$ and obtain a coverage $C(a_1)$ with center $a_1$ and radius $\theta_1$.
Step 1.3. Find the center of gravity $a_1'$ of the coverage $C(a_1)$, then find the new threshold $\theta_1'$ as in Step 1.2 and obtain the new coverage $C(a_1')$. If $C(a_1')$ covers more points than $C(a_1)$, set $a_1'\to a_1$, $\theta_1'\to\theta_1$, and repeat until $C(a_1')$ cannot cover more points.
Step 1.4. Find the translation point $a_1''$ of $a_1$ and the corresponding coverage $C(a_1'')$. If $C(a_1'')$ covers more points than $C(a_1)$, set $a_1''\to a_1$, $\theta_1''\to\theta_1$, and go to Step 1.3. Otherwise, the construction of the coverage $C_i$ is complete: mark the sample points in $C_i$ as covered and go to Step 1.1.

The testing process is as follows:

Step 2.1. For each sample $x$, find its distance $d(x,C_i)=\langle\omega_i,x\rangle-\theta_i$ to every coverage, where $\omega_i$ and $\theta_i$ are the center and radius of $C_i$, respectively.
Step 2.2. Take the category $j$ corresponding to $\max_i d(x,C_i)$ as the final category of the sample.

The idea of this test process is to first find the distance from the test sample $x$ to each coverage, so as to judge whether the sample falls into some coverage. If it falls into a coverage $C_i$, the class $j$ to which $C_i$ belongs is taken as the sample class. Otherwise, according to the principle of proximity, the category of the closest coverage is selected as the classification result. The schematic diagram after completing the field coverage is shown in Figure 2.

Figure 2: Classification results of the domain copula function.
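To make the radius rule (1)-(4) concrete, the following is a minimal sketch (assuming NumPy and unit-normalized samples, so that the inner product acts as a similarity; the function name is illustrative):

```python
import numpy as np

def cover_radius(a1, same, other):
    """Radius threshold for a spherical cover centered at a1, per (1)-(4).
    Samples are assumed unit-normalized, so a larger inner product means
    a smaller distance on the hypersphere."""
    d1 = max(x @ a1 for x in other)            # nearest heterogeneous point (1)
    closer_same = [x @ a1 for x in same if x @ a1 > d1]
    if not closer_same:                        # no same-class point is closer
        return None
    d2 = min(closer_same)                      # farthest same-class point (2)
    theta = (d1 + d2) / 2                      # cover threshold (3)
    gap = (d2 - d1) / 2                        # classification gap (4)
    return theta, gap

rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # project onto the sphere
print(cover_radius(pts[0], same=pts[1:3], other=pts[3:]))
```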
The main difference between cross coverage and domain coverage is that the former constructs the coverages of the categories alternately; that is, after a coverage of category $j$ is constructed, a coverage of category $j+1$ is constructed next. Moreover, after each coverage is completed, the points it contains are deleted from the sample set, so the algorithm adds deletion operations on top of field coverage. According to this idea, the cross copula function is obtained:

Algorithm 2. The learning process is as follows: for the $k$ categories, the coverage of each category is constructed in turn until all samples have been learned. The coverage construction for the $i$th class:

Step 1. Take any point of the $i$th category that has not been covered and denote it $a_1$.
Step 2. With $a_1$ as the center, find the threshold $\theta_1$ and obtain a coverage $C(a_1)$ with center $a_1$ and radius $\theta_1$.
Step 3. Find the center of gravity $a_1'$ of the coverage $C(a_1)$, then find the new threshold $\theta_1'$ as in Step 2 and obtain the new coverage $C(a_1')$. If $C(a_1')$ covers more points than $C(a_1)$, set $a_1'\to a_1$, $\theta_1'\to\theta_1$, and repeat until $C(a_1')$ cannot cover more points.
Step 4. Find the translation point $a_1''$ of $a_1$ and the corresponding coverage $C(a_1'')$. If $C(a_1'')$ covers more points than $C(a_1)$, set $a_1''\to a_1$, $\theta_1''\to\theta_1$, and go to Step 3. Otherwise, a coverage $C_i$ is obtained, and all sample points contained in $C_i$ are deleted.
Step 5. Set $i=(i+1)\bmod k$ and go to Step 1.

The algorithm judges whether a test sample belongs to an area according to the order in which the coverages were constructed and takes the category of the smallest area containing the test sample as the category of the sample. The schematic diagram after the cross coverage is completed is shown in Figure 3.

Figure 3: Classification results of the cross copula function.

### 2.2. Fuzzy Kernel Covering Classifier

The commonly used kernel functions mainly include the following types:

(1) the Gaussian radial basis function $K(x,y)=\exp(-\lVert x-y\rVert^2/q)$, $q=2\sigma^2$;
(2) the polynomial function $K(x,y)=(x\cdot y+1)^d$, $d=1,2,\dots$;
(3) the sigmoid function $K(x,y)=\tanh(b\,x\cdot y-c)$.

We assume that the domain of discourse is $X$; any point of $X$ is then mapped by the radial basis kernel onto the unit hypersphere in the feature space, which is exactly the process of projecting a sample onto the hypersphere in the copula function. Therefore, it is possible to construct the coverage directly in the feature space without transforming the samples.
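For reference, the three kernels can be written directly as follows (a minimal sketch assuming NumPy; the parameter values are illustrative):

```python
import numpy as np

def gaussian_rbf(x, y, sigma=1.0):
    q = 2 * sigma**2                          # q = 2 sigma^2 as in the text
    return np.exp(-np.sum((x - y) ** 2) / q)

def polynomial(x, y, d=2):
    return (x @ y + 1) ** d

def sigmoid(x, y, b=1.0, c=0.0):
    return np.tanh(b * (x @ y) - c)

x, y = np.array([1.0, 0.0]), np.array([0.6, 0.8])
print(gaussian_rbf(x, y), polynomial(x, y), sigmoid(x, y))
```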
At this point, the kernel copula function is obtained, and the distance originally represented by the inner product becomes $K(x,y)=\exp(-\lVert x-y\rVert^2/q)$ with $q=2\sigma^2$. In the kernel copula function, some functions of the original copula function need to be changed as follows:

(1) The calculation of the threshold (radius) θ usually adopts

$$\theta=\max_{x\notin X_k}\max_{a_i\in X_k}K(a_i,x)=\max_{x\notin X_k}\max_{a_i\in X_k}\exp\!\left(-\frac{\lVert x-a_i\rVert^2}{q}\right).\tag{5}$$

Its purpose is to increase the radius of the spherical field, shrink the classification boundary, and reduce the rejection rate.

(2) The decision function is

$$y=\sigma\!\left(K(\omega,x)-\theta\right),\tag{6}$$
$$\sigma(x)=\begin{cases}x,& x\ge 0,\\ 0,&\text{otherwise}.\end{cases}\tag{7}$$

(3) The distance function from a sample $x$ to the domain $C_i$ is

$$d(x,C_i)=\begin{cases}0,& x\in C_i,\\ \theta-K(\omega_i,x),& x\notin C_i.\end{cases}\tag{8}$$

In the basic copula function, the radius of each coverage is determined by formulas (1)-(4): $d_1$ is the similarity between the current field center and the nearest heterogeneous point (because of the inner product operation, $d_1$ takes its maximum value when the distance is smallest, and vice versa), and $d_2$ is the similarity to the farthest same-class point that is closer than $d_1$. Then $\theta=(d_1+d_2)/2$ is taken as the radius, and everything outside the coverages is the rejection area. The purpose of this processing is, on the one hand, to treat the categories equally and, on the other hand, to expand the coverage area as much as possible, so that more test samples fall into an existing coverage and the rejection rate is reduced. In the kernel copula function, owing to the fuzziness of the algorithm itself, formula (5) is used to determine the radius, and its essence is to set $\theta=d_1$.

According to the above analysis, the improved algorithm first changes the radius rule to $\theta_1=d_1$, with $d_1$ as in formula (9) and $d_2$ as in formula (10). This makes each cover describe only what has been learned, commonly phrased as "knowing what you know, not knowing what you don't know." Such a modification inevitably shrinks the coverage area, enlarges the rejection area, and increases the number of rejected points; the recognition rate of the classifier is then improved by improving the treatment of rejected samples.

$$d_1=\max\{K(x,a_1)\mid x\notin X_i\},\tag{9}$$
$$d_2=\min\{K(x,a_1)\mid K(x,a_1)>d_1,\ x\in X_i\}.\tag{10}$$

To judge rejected samples, the membership function of a sample $x$ with respect to the $i$th cover $C_i$ is introduced:

$$\mu_i=\bigl(1+K(x,\omega_i)-\theta_i\bigr)\cdot\frac{K(x,\omega_i)}{\theta_i}.\tag{11}$$

Here, $K(x,\omega_i)$ is the similarity between the rejected sample and the center $\omega_i$ of $C_i$, $\theta_i$ is the radius of $C_i$, and $K(x,\omega_i)-\theta_i$ is the margin between the rejected sample and the edge of $C_i$, which is negative for a rejected sample. The function jointly considers the distance between the sample and the coverage edge, the distance between the sample and the coverage center, and the coverage radius. When the sample falls exactly on the edge of the coverage, $\mu_i=1$, that is, the sample belongs to the coverage; as the sample moves away, $\mu_i$ decreases monotonically and tends to 0.
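A small numeric illustration of the membership rule (11) under the Gaussian kernel (assuming NumPy; the chosen cover center and edge radius are illustrative):

```python
import numpy as np

def rbf(x, y, q=2.0):
    return np.exp(-np.sum((x - y) ** 2) / q)

def membership(x, center, theta, q=2.0):
    """Membership of a rejected sample x in a cover (center, theta), per (11).
    Equals 1 on the cover edge and decays toward 0 as x moves away."""
    k = rbf(x, center, q)
    return (1.0 + k - theta) * k / theta

center = np.zeros(2)
theta = rbf(np.array([1.0, 0.0]), center)   # cover edge at distance 1
for r in (1.0, 1.5, 2.0, 3.0):
    print(r, round(membership(np.array([r, 0.0]), center, theta), 4))
# prints 1.0 at r = 1, then monotonically decreasing values toward 0
```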
At this time, the fuzzy kernel copula function FKCA (Fuzzy Kernel Covering Algorithm) is obtained. The algorithm is divided into two parts, learning and testing, described as follows:

Algorithm 3. The learning algorithm is as follows. Assume the learning sample set $X$ has $k$ classes, $X=\{X_1,X_2,\dots,X_k\}$, and use the Gaussian radial basis kernel $K(x,y)=\exp(-\lVert x-y\rVert^2/q)$ with $q=2\sigma^2$. For the $k$ categories, the algorithm constructs the coverages of each category in turn until all samples are included in some coverage. The coverage construction for the $i$th class:

Step 1. Take any point of the $i$th category that has not been covered and denote it $a_1$.
Step 2. With $a_1$ as the center, calculate the threshold $\theta_1$ according to formulas (9) and (10) and obtain a coverage $C(a_1)$ with center $a_1$ and radius $\theta_1$.
Step 3. Find the center of gravity $a_1'$ of the coverage $C(a_1)$, then find the new threshold $\theta_1'$ as in Step 2 and obtain the new coverage $C(a_1')$. If $C(a_1')$ covers more points than $C(a_1)$, set $a_1'\to a_1$, $\theta_1'\to\theta_1$, and repeat until $C(a_1')$ cannot cover more points; a coverage $C_i$ is then obtained.

The membership function designed in formula (11) weighs several positional factors of a cover to determine membership. When a rejected sample is equidistant from the coverage edges of two different categories, it follows from this function that the sample's membership in the cover with the smaller radius is greater than its membership in the cover with the larger radius. We give an intuitive explanation, shown in Figure 4; for simplicity, we take the two-class case in the two-dimensional plane as an example.

Figure 4: Analysis of the membership function.

Assume that two covers, Cover 1 and Cover 2, have been obtained in Figure 4 and that they cover samples of two different classes. The solid lines represent their ranges, the radius $r_1$ of Cover 1 is greater than the radius $r_2$ of Cover 2, and the rejected point $T$ is equidistant from the two coverage edges. Following the way the radius is determined, place points $A\in C_1$ and $B\in C_2$ at the positions shown in the figure and draw the radii $R_1$ and $R_2$ to obtain the ranges shown by the dotted lines. For Cover 1, expansion stopped when the radius reached $r_1$ because the heterogeneous point $B$ was encountered; a point located on the edge $R_2$ can be considered not to belong to Cover 1 at all. Therefore, along the segment $C_1B$ the membership in Cover 1 gradually decreases, reaching 0 at point $B$; the segment $C_2A$ has the same property with respect to Cover 2. It is then easy to see that the membership of point $T$ in Cover 1 is smaller than its membership in Cover 2, so $T$ is assigned to the category to which Cover 2 belongs.
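The following sketch strings Steps 1-3 together into a naive covering loop (assuming NumPy; it uses the $\theta=d_1$ radius rule of (9) and omits the center-of-gravity refinement for brevity; data and function names are illustrative):

```python
import numpy as np

def rbf(x, y, q=2.0):
    return np.exp(-np.sum((x - y) ** 2) / q)

def build_covers(X, labels, q=2.0):
    """Greedy kernel covering: each cover is (center, class, radius theta),
    with theta = d1, the largest similarity to any heterogeneous point (9)."""
    covers, covered = [], np.zeros(len(X), dtype=bool)
    while not covered.all():
        i = int(np.flatnonzero(~covered)[0])       # next uncovered sample
        a1, ci = X[i], labels[i]
        theta = max(rbf(a1, x, q)                  # d1 from (9)
                    for x, c in zip(X, labels) if c != ci)
        covered[i] = True                          # the center covers itself
        for j, (x, c) in enumerate(zip(X, labels)):
            if c == ci and rbf(a1, x, q) > theta:  # inside this cover
                covered[j] = True
        covers.append((a1, ci, theta))
    return covers

X = np.array([[0, 0], [0.3, 0.1], [3, 3], [3.2, 2.9]], dtype=float)
y = np.array([0, 0, 1, 1])
print(len(build_covers(X, y)))                     # two covers for this data
```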
### 2.3. Multiple Example Copula Functions

A threshold is set, and when the accumulated cost of an expansion exceeds the threshold, the current coverage stops expanding, as shown in Figure 5.

Figure 5: Schematic diagram of MICA-SNP.

The distribution of the bag examples in Figure 5 is as drawn, with white for positive packets and black for negative packets. For example, when constructing coverage in the negative bags, after the coverage C1.1 is obtained, it continues to expand to C1.2 and C1.3, with the cost increasing continuously. When expanding to C1.4, the newly added positive-bag example pushes the cost above the threshold, so C1.4 is cancelled and the cover falls back to C1.3. C2.3 and C3.2 are obtained in the same way, while the negative-packet covers C4, C5, and C6 cannot be expanded because of their small capacity. When the coverage of the negative examples is completed, the coverage of the remaining positive examples is constructed using the general construction method, except that positive examples are not deleted during construction. For example, when C8 has been obtained and C9 is being constructed, some positive examples already covered by C3 are still used for the construction. The target concept area after construction is the shaded area.

In summary, the multi-instance copula function MICA-BSNP (Multi-Instance Covering Algorithm Based on Strong Noise Processing) is obtained as follows.

Algorithm 4. The learning algorithm is as follows: given $K$ learning samples $(B_1,y_1),(B_2,y_2),\dots,(B_K,y_K)$, $y_K\in\{0,1\}$, where $y_K=0$ denotes a negative packet and $y_K=1$ a positive packet, and packet $B_i$ contains $N_i$ examples. A new sample set $\{(b_{11},y_1),\dots,(b_{1N_1},y_1),\dots,(b_{K1},y_K),\dots,(b_{KN_K},y_K)\}$ is obtained by assigning the label of each bag to every example in the bag. Denote the set of negative examples $X_0$ and the set of positive examples $X_1$.

Step 1. Take any example that has not been learned and denote it $a_1$.
Step 2. With $a_1$ as the center, calculate the threshold $\theta_1$ according to $d_2$ in formula (10), obtaining a coverage $C(a_1)$ with center $a_1$ and radius $\theta_1$.
Step 3. Find the center of gravity $a_1'$ of the coverage $C(a_1)$, then find the new threshold $\theta_1'$ as in Step 2 and obtain the new coverage $C(a_1')$. If $C(a_1')$ covers more points than $C(a_1)$, set $a_1'\to a_1$, $\theta_1'\to\theta_1$, and repeat until $C(a_1')$ cannot cover more points; a quasicoverage $C(a_1)$ is then obtained.
Step 4. When the number of examples in $C(a_1)$ is less than epsN, confirm $C(a_1)$ as a coverage $C_i$, mark the examples it contains as learned, and go to Step 1. Otherwise, go to Step 5.
Step 5. With $a_1$ as the center, find $\theta_1''=\max\{K(a_1,x)\mid K(a_1,x)<\theta_1,\ x\in X_1\}$ and compute the total cost of the current coverage expansion. When the total cost is less than the threshold, mark the positive examples included by the expansion as negative, set $\theta_1''\to\theta_1$, obtain a new $C(a_1)$, and go to Step 3. Otherwise, cancel this expansion and confirm a coverage $C_i$ with center $a_1$ and radius $\theta_1$.

Given two finite sets $A=\{a_1,a_2,\dots,a_m\}$ and $B=\{b_1,b_2,\dots,b_n\}$, the Hausdorff distance between $A$ and $B$ is defined by (12)-(14):

$$H(A,B)=\max\{h(A,B),\,h(B,A)\},\tag{12}$$
$$h(A,B)=\max_{a\in A}\min_{b\in B}\lVert a-b\rVert,\tag{13}$$
$$h(B,A)=\max_{b\in B}\min_{a\in A}\lVert b-a\rVert.\tag{14}$$

In (12)-(14), $\lVert\cdot\rVert$ is a distance norm; the Euclidean distance is used in this paper. The Hausdorff distance describes the degree of difference between the two sets $A$ and $B$: the larger the distance, the more obvious the difference.
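A compact sketch of the Hausdorff distance (12)-(14) with the Euclidean norm (assuming NumPy; the example sets are illustrative):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between point sets A (m, d) and B (n, d),
    per (12)-(14), with the Euclidean norm."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise
    h_ab = D.min(axis=1).max()    # h(A, B): farthest A-point from B
    h_ba = D.min(axis=0).max()    # h(B, A): farthest B-point from A
    return max(h_ab, h_ba)

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [5.0, 0.0]])
print(hausdorff(A, B))   # 4.0: (5, 0) lies 4 away from its nearest A-point
```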
Thus, the multi-instance copula function MICA-BBC (Multi-Instance Covering Algorithm Based on Bag Covering) is obtained.

Algorithm 5. The learning algorithm is as follows: given $K$ learning samples $(B_1,y_1),(B_2,y_2),\dots,(B_K,y_K)$, $y_K\in\{0,1\}$, denote the sets of negative and positive packets $X_i$, where $i=0$ means negative packet and $i=1$ means positive packet. The algorithm constructs the spherical covers of the negative and positive packages in turn until all packages fall into some cover; the construction of the $i$th cover is as follows:

Step 1. Select any package of the $i$th category that has not yet been covered, denoted $a_1$.
Step 2. With $a_1$ as the center, solve the threshold θ via

$$d_1=\min\{H(x,a_1)\mid x\notin X_i\},\tag{15}$$
$$d_2=\max\{H(x,a_1)\mid H(x,a_1)<d_1,\ x\in X_i\},\tag{16}$$
$$\theta=\frac{d_1+d_2}{2}.\tag{17}$$

At this point, a spherical cover $C_j$ is obtained with center $a_1$ and radius θ.

The test algorithm is as follows:

Step 1. For a bag $x$ to be classified, calculate its distance to each cover in turn:

$$d(x,C_j)=\theta_j-H(x,a_j).\tag{18}$$

Step 2. If there is a $C_j$ such that $d(x,C_j)\ge 0$, $x$ falls into the spherical cover $C_j$, and the label of the package class to which $C_j$ belongs is used as the label of $x$.
Step 3. If $d(x,C_j)<0$ for all $j$, take the cover $C_j$ with the largest $d(x,C_j)$ and use the label of the package class to which $C_j$ belongs as the label of package $x$.

To properly optimize the coverage results, increase the coverage radii, and reduce the number of covers, we introduce the secondary scanning method of Section 3 and obtain an improved multi-instance copula function based on package covering.

Algorithm 6. The learning algorithm is as follows: given $K$ learning samples $(B_1,y_1),(B_2,y_2),\dots,(B_K,y_K)$, $y_K\in\{0,1\}$, denote the sets of negative and positive packets $X_i$, where $i=0$ means negative packet and $i=1$ means positive packet, as in Algorithm 5. The algorithm constructs the spherical covers of the packages in turn until all packages fall into some cover; the construction of the $i$th cover is as follows:

Step 1. Select any package of the $i$th category that has not yet been covered, denoted $a_1$.
Step 2. With $a_1$ as the center, solve the threshold θ using formula (17) and obtain a spherical cover $C(a_1)$ with center $a_1$ and radius θ; record the number of packets contained in the cover. If there are uncovered packages in this class, go to Step 1; otherwise, go to Step 3.
Step 3. Sort the covers in descending order of the number of samples they contain and reconstruct the covers in turn from the sorted centers. When the number of samples contained in a reconstructed cover is not less than the number it contained the first time, the cover is confirmed. Otherwise, cancel this construction and reinsert the center into the table in order of the number of samples contained this time.

Algorithm 7. The algorithm is as follows: given $K$ learning samples $(B_1,y_1),(B_2,y_2),\dots,(B_K,y_K)$, $y_K\in\{0,1\}$, the algorithm transforms the packages and constructs covers.

Step 1. Arbitrarily select $k$ packages from the set $B$ of packages as the initial cluster centers, denoted $C_1$ to $C_k$; the cluster corresponding to the center $C_j$ is cluster $j$.
Step 2. For each packet $B_i$ in $B\setminus\{C_1,\dots,C_k\}$, use the Hausdorff distance to find the distance $H(B_i,C_j)$ to each cluster center $C_j$, $j=1,\dots,k$. Put $B_i$ into the cluster $j$ of the nearest $C_j$, and repeat until all packets are clustered into $k$ categories.
Step 3. Solve the center bag of each of the $k$ clusters; the new center of the $j$th cluster is

$$C_j=\arg\min_{A\in\mathrm{Cluster}_j}\sum_{B\in\mathrm{Cluster}_j}H(A,B).\tag{19}$$

Step 4. For each package $B_i$ in $B$, find the Hausdorff distances from $B_i$ to $C_j$ ($j=1,\dots,k$) and use them as the components of a feature vector, that is, $x_i=(H(B_i,C_1),H(B_i,C_2),\dots,H(B_i,C_k))$; all $x_i$ form a new sample set $X=\{(x_1,y_1),(x_2,y_2),\dots,(x_K,y_K)\}$.
Step 5. Use the copula function on $X$ to construct a classifier.
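A minimal sketch of the bag-to-feature-vector mapping in Step 4 of Algorithm 7 (assuming NumPy and that the `hausdorff` helper from the previous sketch is in scope; the fixed centers stand in for the clustering of Steps 1-3):

```python
import numpy as np

# assumes hausdorff(A, B) from the earlier sketch is already defined

def bags_to_features(bags, centers):
    """Step 4 of Algorithm 7: represent each bag by its Hausdorff
    distances to the k cluster-center bags."""
    return np.array([[hausdorff(b, c) for c in centers] for b in bags])

bags = [np.array([[0.0, 0.0], [1.0, 0.0]]),
        np.array([[4.0, 4.0], [5.0, 4.0]])]
centers = [bags[0], bags[1]]        # k = 2 illustrative centers
print(bags_to_features(bags, centers))
# each row is x_i = (H(B_i, C_1), ..., H(B_i, C_k)) from Step 4
```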
Algorithm 5. The learning algorithm is as follows: the algorithm is given K learning samples (B1, y1), (B2, y2), ⋯, (BK, yK), yK ∈ {0, 1}, and the sets of negative and positive bags are denoted by Xi, where i = 0 denotes the negative bags and i = 1 the positive bags. The algorithm constructs spherical covers of the positive and negative bags in turn until every bag falls into some cover; the construction process of the ith cover is as follows:

Step 1. The algorithm selects any bag of the ith category that has not yet been covered and denotes it as a1.

Step 2. The algorithm takes a1 as the center and solves for the threshold θ via

(15) $d_1 = \min\{H(x, a_1) \mid x \notin X_i\}$
(16) $d_2 = \max\{H(x, a_1) \mid H(x, a_1) < d_1,\ x \in X_i\}$
(17) $\theta = \dfrac{d_1 + d_2}{2}$

At this point, a sphere cover Cj is obtained whose center is a1 and whose radius is θ.

The test algorithm is as follows:

Step 1. For the bag x to be classified, the algorithm calculates its distance to each cover in turn:

(18) $d(x, C_j) = \theta_j - H(x, a_j)$

Step 2. If there is a Cj such that d(x, Cj) ≥ 0, then x falls into the sphere cover Cj, and the label of the bag class to which Cj belongs is used as the label of bag x.

Step 3. If d(x, Cj) < 0 for all j, the algorithm takes the Cj with the maximum d(x, Cj) and uses the label of the bag class to which that Cj belongs as the label of bag x.

In order to optimize the coverage results, increase the coverage radii, and reduce the number of coverages, we apply the secondary scanning method introduced in Section 3 and obtain an improved multi-instance copula function based on bag covering.

Algorithm 6. The learning algorithm is as follows: the algorithm is given K learning samples (B1, y1), (B2, y2), ⋯, (BK, yK), yK ∈ {0, 1}, and the sets of negative and positive bags are denoted by Xi, where i = 0 denotes the negative bags and i = 1 the positive bags. The algorithm constructs spherical covers of the positive and negative bags in turn until every bag falls into some cover; the construction process of the ith cover is as follows:

Step 1. The algorithm selects any bag of the ith category that has not yet been covered, denoted a1.

Step 2. The algorithm takes a1 as the center, uses formula (17) to solve for the threshold θ, obtains a spherical coverage Ca1 with a1 as the center and θ as the radius, and records the number of bags contained in the coverage. If there are uncovered bags in this class, the algorithm goes to Step 1; otherwise, it goes to Step 3.

Step 3. The algorithm sorts the coverages in descending order of the number of samples they contain and reconstructs the coverages in turn from the sorted centers. When the number of samples contained in a reconstructed coverage is not less than the number it contained the first time, the coverage is accepted. Otherwise, the algorithm cancels this construction and reinserts the center into the sorted list according to the number of samples contained this time.

Algorithm 7. The algorithm is as follows: the algorithm is given K learning samples (B1, y1), (B2, y2), ⋯, (BK, yK), yK ∈ {0, 1}. The algorithm transforms the bags and then constructs a covering.

Step 1. The algorithm arbitrarily selects k bags from the set B of bags as the initial cluster centers, denoted C1 to Ck; the cluster corresponding to the cluster center Cj is cluster j.

Step 2. For each bag Bi in B − {C1, ⋯, Ck}, the algorithm uses the Hausdorff distance to find the distance H(Bi, Cj) to each cluster center Cj, j = 1, …, k, and puts Bi into the cluster j formed by the nearest Cj; this is repeated until all bags are clustered into k categories.

Step 3. The algorithm computes the center bag of each of the k clusters obtained by clustering; the new center of the jth cluster is

(19) $C_j = \arg\min_{A \in \mathrm{Cluster}_j} \sum_{B \in \mathrm{Cluster}_j} H(A, B)$

Step 4. For each bag Bi in B, the algorithm finds the Hausdorff distance from Bi to Cj, j = 1, …, k, and uses it as a component of the feature vector; that is, xi = (H(Bi, C1), H(Bi, C2), ⋯, H(Bi, Ck)). All the xi form a new sample set X = {(x1, y1), (x2, y2), ⋯, (xK, yK)}.

Step 5. The algorithm uses the copula function on X to construct a classifier.
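As a minimal sketch of the bag-transformation route of Algorithm 7, the code below clusters bags under the Hausdorff distance and then maps every bag to its vector of distances to the k center bags (Step 4). It reuses the hausdorff helper from the previous sketch, reads the center update in formula (19) as the medoid (the member bag with the smallest total within-cluster distance), and uses illustrative function names; none of these choices is fixed by the paper itself.

```python
import numpy as np

def k_medoid_bags(bags, k, n_iter=10, seed=0):
    """Steps 1-3: cluster the bags with the Hausdorff distance and return k center bags."""
    rng = np.random.default_rng(seed)
    centers = [bags[i] for i in rng.choice(len(bags), size=k, replace=False)]
    for _ in range(n_iter):
        # Step 2: assign every bag to the cluster of its nearest center bag.
        clusters = [[] for _ in range(k)]
        for bag in bags:
            nearest = min(range(k), key=lambda j: hausdorff(bag, centers[j]))
            clusters[nearest].append(bag)
        # Step 3 (formula (19), medoid reading): the new center of each cluster is the
        # member bag minimizing the total Hausdorff distance to the other members.
        centers = [min(cl, key=lambda a: sum(hausdorff(a, b) for b in cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers

def bag_features(bags, centers):
    """Step 4: map each bag B_i to x_i = (H(B_i, C_1), ..., H(B_i, C_k))."""
    return np.array([[hausdorff(bag, c) for c in centers] for bag in bags])

# Step 5 would then train any single-instance classifier, e.g., a covering
# classifier from Section 2.1, on the rows of bag_features(bags, centers).
```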
## 3. Correlation Analysis of Exchange Rate Fluctuations and Oil Price Changes Based on Copula

Based on the established individual-effect model, panel models based on the Brent and WTI oil futures prices, respectively, are built for the exchange rate in order to forecast the exchange rates of exporting countries. The cases are shown in Figures 6–8.

Figure 6 Prediction figure 1.
Figure 7 Prediction figure 2.
Figure 8 Prediction figure 3.

Figure 6(a) shows the exchange rate prediction based on the WTI oil price, and Figure 6(b) shows the prediction based on the Brent oil price. Comparing the scales in the figure reveals a high degree of similarity between the two. Moreover, the results show that the prediction error of the WTI-based model is 0.000168 and that of the Brent-based model is 0.000145. Although these error values are relatively small, a clear discrepancy remains over the model as a whole, especially in the trend.

Similarly, Figure 7(a) shows the exchange rate series predicted from the WTI oil price, and Figure 7(b) shows the prediction from the Brent oil price. The fluctuation of the whole trend is more consistent with the original series. The total forecast error of the WTI oil futures price series is 2.63E-05, and that of the Brent oil futures price series is 3.2E-06. The figure shows that the entire predicted sequence fits the original sequence well, with a high degree of similarity.

Through the above simulation studies, it is verified that there is a relatively clear correlation between exchange rate fluctuations and oil price changes.

## 4. Conclusion

At present, in the postcrisis era, all kinds of instability and risk are increasing by the day. For example, the instability of the global economy and political changes in major countries have become uncertain factors that can cause sudden jumps in international oil prices at any time, which may in turn lead to continuous fluctuations in oil futures prices. When changes in international oil prices are sustained or large in scale, they are likely to lead to more severe inflation. According to the theory of purchasing power parity, as the price level and inflation rise, the exchange rate rises and the local currency depreciates. At the same time, higher inflation means higher interest rates. According to the interest rate parity formula, a change in the interest rate level causes an opposite change in the value of the local currency; that is, when the international oil price rises, the interest rate rises and the local currency depreciates. This paper combines the copula function to study the correlation between exchange rate fluctuations and oil price changes. The simulation study verifies that there is a relatively clear correlation between exchange rate fluctuations and oil price changes.

---
*Source: 1023725-2022-10-04.xml*
1023725-2022-10-04_1023725-2022-10-04.md
55,017
Correlation Analysis between Exchange Rate Fluctuations and Oil Price Changes Based on Copula Function
Xiaodong Huang
Advances in Multimedia (2022)
Computer Science
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1023725
1023725-2022-10-04.xml
--- ## Abstract In order to explore the relationship between exchange rate fluctuations and oil prices, this paper combines the copula function to study the correlation between exchange rate fluctuations and oil price changes, conducts a more comprehensive study of the copula function, and applies the algorithm to some practical classification problems. Moreover, this paper improves some defects in the algorithm and combines some new learning frameworks in machine learning to generalize the copula function to a variety of learning models. In addition, this paper studies how to use the coverage algorithm to construct classifiers under various problems and proposes corresponding improvement strategies according to the characteristics of various problems. Finally, this paper builds a correlation analysis algorithm model and uses simulation research to verify that there is a relatively obvious correlation between exchange rate fluctuations and oil price changes. --- ## Body ## 1. Introduction The commodity attribute of oil means that oil has use value and value as an exchangeable commodity. According to the definition in political economy, the use value of a commodity refers to the attribute that can meet certain needs of people, and the use value is one of the common attributes that all commodities have. Conversely, an item that has no use value does not become a commodity. The value of a commodity is the undifferentiated human labor (including physical labor and mental labor) condensed in the commodity. On the one hand, as an important energy material and chemical raw material, oil is widely used in industry, transportation, and national defense. Therefore, petroleum plays an extremely important role in people’s daily life and has great use value [1]. On the other hand, from finding oil to using oil, it generally goes through four links: exploration, exploitation, transportation, and processing. Moreover, the successful completion of each link condenses the mental and physical labor of oil exploration personnel, mining workers, transportation workers, and production workers. Therefore, the value attribute of oil is obvious. According to the law of commodity value, the price of oil fluctuates around the value of oil according to the market supply and demand conditions, and the value of oil is determined by the cost of producing oil, including exploration, extraction, transportation, and processing costs [2].In a narrow sense, oil is a financial derivative product, and it is one of the basic variables of financial derivative product market transactions. In a broad sense, it refers to the interaction and mutual influence between the fluctuation of the oil spot market and the fluctuation of the financial derivative product market, and this relationship is more and more inclined to the one-way impact of financial derivative market fluctuations on spot market fluctuations [3]. The financial properties of oil are determined by the characteristics of oil supply and demand and the uneven distribution of oil. The supply and demand characteristics of oil are the fundamental reasons for the financial properties of oil. The characteristics of oil supply and demand and the uneven distribution of oil resources make the market use financial means to control price risks, while also causing a large influx of speculative funds into the oil market to hype oil prices, so that oil prices are affected not only by supply and demand but also by the financial market. 
The influence of conventional financial price indices such as speculative funds, exchange rate indices, stock price indices, and gold price indices [4].Considering that the futures price has a price discovery effect on the spot price, the sharp and sudden fluctuation of the oil futures price will also cause transmission to the spot market. With the effect of the convergence theory, the future price and the spot price will tend to be consistent [5]. Therefore, the continuous jumping behavior in the future market may affect the spot market. Based on this, it is very necessary to discuss the jumping phenomenon of international oil future prices in depth, which can better make suggestions for investors in related fields, stabilize the spot market, and take timely measures to deal with risks. As the most important adjustment lever of international trade [6], the exchange rate plays a direct adjustment role in a country’s trade, and its fluctuations will directly affect the entire import and export trade, thereby affecting the country’s economic stability. Economic operation plays an extremely important role. According to the existing literature, many scholars have also proposed that the spot price of oil is the most significant factor affecting the exchange rate and has a great influence on the exchange rate, and its explanatory power is also significantly better than other factors. Therefore, based on the fluctuation of the oil price itself, the effect of oil on the exchange rate can be further investigated, so as to better analyze the exchange rate trend and stabilize the economic operation [7]. Observing the fluctuations of oil prices in the past two years, it can be found that when oil prices fluctuated, many unexpected events occurred in the world, such as the Iranian nuclear issue and OPEC’s refusal to reduce production, which all led to excess oil supply and short-term changes in the relationship between oil supply and demand. It caused frequent jumps in oil prices. Correspondingly, the exchange rates of different countries have also changed significantly, which also shows that it is necessary to study the fluctuations of oil prices and their jumping phenomena [8].Literature [9] explained how the exchange rate affects oil prices from a theoretical level and carried out an empirical analysis. Literature [10] believes that there is a cointegration relationship between the US dollar exchange rate and the international oil price. Changes in international oil prices will lead to fluctuations in the US dollar exchange rate, but changes in the US dollar exchange rate will not lead to fluctuations in international oil prices. Literature [11] believes that there is a cointegration relationship between international oil prices and real exchange rates, and the prediction of international oil prices to exchange rate changes has a high significance in the long run. Literature [12] conducted an empirical study by adding the international oil price as a variable to the exchange rate determination model and found that the international oil price can significantly explain and predict the changes in the US dollar exchange rate. Literature [13] studied the relationship between the international oil price and the US dollar exchange rate before and after the financial crisis through linear and nonlinear causal analysis and found that there was a single linear causal relationship between the international oil price and the US dollar exchange rate before the financial crisis. 
There is a bidirectional nonlinear causal relationship between the US dollar exchange rate, and volatility spillovers and institutional changes are important factors for the nonlinear causality. Literature [14] judged from both theoretical and empirical levels that the impact of rising oil prices on real income and price levels is ambiguous, because countries that are substantially affected by actual oil prices in a statistical sense are also countries that are conducting price controls, which leads to the existence of price control bias in the actual GNP data which can be used as a reasonable explanation instead of the oil price shock. Literature [15] established a quarterly multivariate VAR model to study the existence and direction of the causal relationship between oil prices, oil consumption, and actual output and several other key macroeconomic policy variables and concluded that oil price shocks are not caused by the main reason for the US business cycle, in addition to oil prices, and real output will significantly change oil consumption and vice versa.International oil prices can cause changes in the exchange rate level by causing changes in a country’s inflation. Oil is a basic industrial energy source. The rise or fall of international oil prices will cause changes in the cost of industrial production enterprises. When the international oil price rises, it first causes changes in the costs of enterprises and industries closely related to oil, and this change further spreads to changes in the operating costs of the entire industrial chain and even the entire economy. The increase in costs leads to cost-driven inflation [16].This paper combines the copula function to study the correlation between exchange rate fluctuations and oil price changes and establishes a model through intelligent analysis methods to improve the correlation analysis effect between exchange rate fluctuations and oil price changes. ## 2. Copula Function Based on Correlation Analysis ### 2.1. Basic Copula Functions We assume thatS=x1,y1,x2,y2,⋯,xp,yp=X1,X2,⋯,Xkis a set of learning samples in a givenn-dimensional space X, where xi=x1i,x2i,⋯,xni∈X, p is the number of learning samples, Y=y1,y2,⋯,yk is a set of finite class labels, and each yi∈Y in the sample set; k is the number of categories. It is required to construct a three-layer neural network. After learning S, it can output the sample of unknown category xj∈X and output its category yj∈Y, and the recognition rate of the network is as high as possible.The basic idea of the domain copula function is to construct the coverage of each category in turn, and there is no intersection between the coverages until all samples are included in a certain coverage. In the process of constructing coverage, the main operations are as follows:(1) The field is constructed: the method of constructing a spherical field is to select any samplea1 that has not been covered in the current processing category Xi and use it as the center to solve the radius r to obtain a coverage. We set <> to represent the inner product operation; then, the solution strategy for the radius r is as follows:(1)d1=max<x,a1>,x∉Xi,(2)d2=max<x,a1>∣<x,a1>>d1,x∈Xi,(3)θ=d1+d22,(4)a=d1−d22That is,d1 represents the distance between the current center a1 and the nearest heterogeneous point, and d2 represents the distance between the current center and the farthest similar point on the premise that the distance is less than d1. 
Taking r=θ=d1+d2/2 as the radius to construct the coverage, the classification gap is a, as shown in Figure 1, where triangles and squares represent two types of samples. (2) The center of gravity is obtained: after obtaining an initial coverage, the center of gravityC of all samples in the coverage is obtained, and C is projected onto the hypersphere. Taking the projected point as the new coverage center, the field is reconstructed according to the processing in operation (1) until the newly obtained coverage cannot cover more sample points(3) Translation: since in then-dimensional space, n+1 linearly unrelated points determine a hypersphere, the translation operation is used to cover as many similar samples as possible, and the translation algorithm can be referred to in the literatureFigure 1 Definition of coverage radiusr.At this point, the domain copula function can be obtained:Algorithm 1. The learning process is as follows: fork categories, it is necessary to construct the coverage of each category in turn, until all samples are included in a certain coverage. The overriding construction process of the ith class is as follows: Step 1.1. The algorithm takes any point in theith category that has not been covered and denote it as a1. Step 1.2. The algorithm takesa1 as the center, finds the threshold θ1, and obtains a coverage Ca1 with the center of θ1 and the radius of θ1. Step1.3. The algorithm finds the center of gravitya1′ of the coverage Ca1 and then finds the new threshold θ1′ according to Step 1.2 and obtains the new coverage Ca1′. If Ca1′ covers more points than Ca1, then a1′⟶a1, θ1′⟶θ1, and the loop is executed until Ca1′ cannot cover more points. Step 1.4. The algorithm finds the translation pointa1′′ of a1 and finds the corresponding coverage Ca1′′. If Ca1′′ covers more points than Ca1, then a1′′⟶a1, θ1′′⟶θ1, and the algorithm goes to Step 1.3. Otherwise, the construction of covering Ci is completed, and the flag of the sample point in Ci is set as covered, and the algorithm goes to Step 1.1. The testing process is as follows: Step 2.1. For each samplex, the algorithm finds the distance dx,Ci≤ωi,x>−θi to all coverages, where ωi and θi are the center and radius of Ci, respectively. Step 2.2. The algorithm takes the categoryj corresponding to maxdx,Ci as the final category of the sample. The idea of the above test process is to first find the distance from the test samplex to each coverage, so as to judge whether the sample falls into a certain coverage. If it falls into the coverage Ci, the class j to which Ci belongs is taken as the sample class. Otherwise, according to the principle of proximity, the category corresponding to the closest coverage is selected as the classification result.The schematic diagram after completing the field coverage is shownin Figure2.Figure 2 Classification results of the domain copula function.The main difference between crosscoverage and domain coverage is that the former constructs the coverage of each category alternately; that is, after constructing a coverage of categoryj this time, the coverage of the j+1th category will be constructed next time. Moreover, after the completion of each coverage construction, the points contained in the coverage are deleted from the sample set, so the algorithm adds deletion operations on the basis of field coverage. According to this idea, the crosscopula function can be obtained:Algorithm 2. 
The learning process is as follows: for thek categories, the coverage of each category is constructed in turn until all samples have completed the learning. The overriding construction process of the ith class is as follows: Step 1. The algorithm takes any point that has not been covered in theith category and denotes it as a1. Step 2. The algorithm takesa1 as the center, finds the threshold θ1, and obtains a coverage Ca1 with the center of a1 and the radius of θ1. Step 3. The algorithm finds the center of gravitya1′of the coverageCa1and then presses Step 1.2 to find the new thresholdθ1′ and obtain the new coverage Ca1′. If Ca1′ covers more points than Ca1, then a1′⟶a1, θ1′⟶θ1, and loop operation until Ca1′ cannot cover more points. Step 4. The algorithm finds the translation pointa1′′ of a1 and finds the corresponding coverage Ca1′′. If Ca1′′ covers more points than Ca1, then a1′′⟶a1, θ1′′⟶θ1, and the algorithm goes to Step 1.3. Otherwise, a coverage Ci is obtained, and all sample points contained in Ci are deleted. Step 5. Ifi=i+1modk, the algorithm goes to Step 1.The algorithm judges whether the test sample belongs to the area according to the sequence of coverage structure and takes the category of the smallest area containing the test sample as the category of the sample.The schematic diagram after the crosscoverage is completed is shown in Figure3.Figure 3 Classification results of the crosscopula function. ### 2.2. Fuzzy Kernel Covering Classifier The commonly used kernel functions mainly include the following types:(1) The Gaussian radial basis function isKx,y=exp−x−y2/q, q=2σ2(2) The polynomial function isKx,y=x·y+1d, d=1,2,⋯(3) The sigmoid function isKx,y=tanh−bx⋅y−cWe assume that the domain of discourse isX; then, any point on X is mapped to the unit hypersphere in the feature space by the radial basis kernel function, which is exactly the process of projecting the sample onto the hypersphere in the copula function. Therefore, it is possible to construct the coverage directly in the feature space without transforming the samples. At this point, the kernel copula function is obtained, and the distance originally represented by the inner product becomes Kx,y=exp−x−y2/q, where q=2σ2.In the kernel copula function, some functions of the original copula function need to be changed as follows:(1) The calculation of the threshold (radius)θ usually adopts the following formula:(5)θ=maxx∉Xkai∈XkKai,x=maxx∉Xkai∈Xkexp−x−ai2qIts purpose is to increase the radius of the spherical field, reduce the classification boundary, and reduce the rejection rate.(2) The function is(6)y=σKω,x−θ,(7)σx=x,ifx≥0,0,otherwise(3) The distance function from the samplex to the domain Ci is(8)dx,Ci=0,ifx∈Ci,θ−Kωi,x,ifx∉CiIn the basic copula function, the radius of each coverage is determined using formulas (1)–(4). d1 represents the distance between the current field center and the nearest heterogeneous point (due to the inner product operation. Therefore, d1 takes the maximum value when the distance is closest, and vice versa). d2 represents the distance between the center of the field and the farthest similar point on the premise that it is greater than d1. Take θ=d1+d2/2 as the radius, and all areas outside the coverage are the rejection areas. The purpose of this processing is to reflect the equality of various categories on the one hand and to expand the coverage area as much as possible, so that more test samples fall into the existing coverage and reduce the rejection rate. 
In the kernel copula function, due to the ambiguity of the algorithm itself, formula (5) is used to determine the radius, and its essence is to set θ=d1.According to the above analysis, in the improved algorithm, the algorithm first modifies the radius calculation principle toθ1=d1 according to formula (10), where d2 is as formula (10). Algorithms make each overlay describe only what has been learned, which is commonly referred to as “knowing what you know, not knowing what you don’t know.” Such modification will inevitably lead to the reduction of the coverage area, the increase of the rejection area, and the increase of rejection points. At this time, the recognition rate of the classifier is improved by improving the processing method of the rejected samples. (9)d1=maxΚx,a1,x∉Xi,(10)d2=minΚx,a1Κx,a1>δ1,x∈Xi.In the process of judging rejected samples, the membership function of samplex to the ith covering Ci is introduced. (11)μi=1+Κx,ωi−θi∗Κx,ωiθi.Among them,Κx,ωi is the distance from the rejected sample to the center ωi of Ci, θi is the radius of Ci, and Κx,ωi−θi is the distance from the rejected sample to the edge of Ci, which is a negative value. The function comprehensively considers factors such as the distance between the sample and the coverage edge, the distance between the sample and the coverage center, and the coverage radius. When the sample happens to fall on the edge of the coverage, the value of μi is 1; that is, it belongs to the coverage. As the samples move away, μi decreases monotonically and gradually tends to 0. At this time, the fuzzy kernel copula function FKCA (Fuzzy Kernel Covering Algorithm) is obtained. The algorithm is divided into two parts: learning and testing, which are described as follows:Algorithm 3. The learning algorithm is as follows. We assume that the learning sampleX has a total of k classes; that is, X=X1,X2,⋯,Xk, and the algorithm uses the Gaussian radial basis function Kx,y=exp−x−y2/q, where q=2σ2. For k categories, the algorithm constructs the coverage of each category in turn, until all samples are included in a certain coverage. The overriding construction process of the ith class is as follows: Step 1. The algorithm takes any point that has not been covered in theith category and denotes it as a1. Step 2. The algorithm takesa1 as the center and calculates the threshold d2 according to d2 in formula (10) and obtains a coverage Ca1 with a1 as the center and θ1 as the radius. Step 3. The algorithm finds the center of gravitya1′ of the coverage Ca1 and then presses Step 1.2 to find the new threshold θ1′ and obtain the new coverage Ca1′. If Ca1′ covers more points than Ca1, then a1′⟶a1, θ1′⟶θ1, and loop operation until Ca1′ cannot cover more points, and then, a coverage Ci is obtained.The membership function designed in formula (11) comprehensively considers a variety of location factors covered to determine the membership. When the distance between a rejected sample and the coverage edge of two different categories is equal, the following conclusion can be obtained according to this function; the membership degree of the sample to the coverage with a smaller radius is greater than that of the coverage with a larger radius. In this regard, we give an intuitive explanation as shown in Figure 4. 
For the sake of simplicity, we take the two-category case under the two-dimensional plane as an example for analysis.Figure 4 Analysis of membership function.We assume that two coverages Cover 1 and Cover 2 have been obtained in Figure4, which cover samples of different classes, respectively. The solid line represents its range, and the radius r1 of Cover 1 is greater than the radius r2 of Cover 2, and the distance between the rejection point T and the two coverage edges is equal. According to the way of determining the radius, it is advisable to set the point A∈C1 and B∈C2 at the position shown in the figure. It does the radii R1 and R2 to get the range shown by the dotted line. For Cover 1, when the radius reaches r1, it stops due to encountering a heterogeneous B in the process of continuing expansion. It can be considered that the point located on the edge R2 does not belong to Cover 1 at all. Therefore, the connection C1B can be made, and the membership of the points on the connection to Cover 1 gradually decreases, and 0 is taken at point B. In the same way, there is a connection C2A to Cover 2, and the properties are the same. It is easy to know that the degree of membership of point T to Cover 1 is smaller than that to Cover 2, so T is determined as the category to which Cover 2 belongs. ### 2.3. Multiple Example Copula Functions A threshold is set, and when the sum of the cost of expansion is higher than the threshold, the current coverage stops expanding, as shown in Figure5.Figure 5 Schematic diagram of MICA-SNP.The distribution of each bag example in Figure5 is consistent with the figure, with white for positive packets and black for negative packets. For example, in construction coverage in the negative bag, when the coverage C1.1 is obtained, it continues to expand to obtain C1.2 and C1.3, and the cost increases continuously. When expanding to C1.4, the newly added positive bag example makes the cost exceed the threshold, so C1.4 cancels and falls back to C1.3. C2.3 and C3.2 are obtained in the same way, while the negative packet covers C4, C5, and C6. It cannot be expanded because of its small capacity. When the coverage of the negative examples is completed, the coverage of the remaining positive examples is constructed, and the general construction method is adopted at this time, but the positive examples are not deleted during the construction process. For example, when obtaining C8 and continuing to construct C9, although some positive examples have been covered by C3, it is still used for the construction of C. The target concept area after construction is completed is shown in the shaded area.In summary, the multi-instance copula function MICA-BSNP (Multi-instance Covering Algorithm Based on Strong Noise Processing) is obtained as follows.Algorithm 4. The learning algorithm is as follows: the algorithm gives K learning samplesB1,y1,B2,y2,⋯,BK,yK, yK∈0,1, yK=0 means negative packet, yK=1 means positive packet, and the number of examples contained in packet Bi is Ni. A new sample set b11,y1,⋯,b1N1,y1,⋯,bK1,yK,⋯,bKNK,yK is obtained by assigning the label of the bag to each example in the bag. We record the set of negative examples as X0 and the set of positive examples as X1. Step 1. The algorithm takes any example that has not been learned and denotes it asa1. Step 2. The algorithm takesa1 as the center and calculates the threshold θ1 according to d2 in formula (10) and obtains a coverage Ca1 with a1 as the center and θ1 as the radius. Step 3. 
The algorithm finds the center of gravitya1′ of the coverage Ca1 and then presses Step 1.2 to find the new threshold θ1′ and obtains the new coverage Ca1′. IfCa1′ covers more points than Ca1, then a1′⟶a1, θ1′⟶θ1, and loop operation until Ca1′ cannot cover more points; then, a quasicoverage Ca1 is obtained. Step 4. When the number of examples inCa1 is less than epsN, the algorithm determines Ca1 and obtains a coverage Ci, the algorithm marks the examples contained in it as learned, and the algorithm goes to Step 1. Otherwise, the algorithm goes to Step 1.5. Step 5. The algorithm takesa1 as the center; finds θ1′′, θ1′′=maxKa1,x<θ1x∈X1; and calculates the total cost of the current cost of coverage expansion. When the total cost is less than the threshold, the algorithm marks the positive examples included in the expansion as negative, θ1′′⟶θ1, and a new Ca1 is obtained, and the algorithm goes to Step 1.3. Otherwise, the algorithm cancels this expansion and obtains a coverage Ci with a1 as the center and θ1 as the radius.The algorithm is given two finite setsA and B with A=a1,a2,⋯,am and B=B1,B2,⋯,Bn. Then, the Hausdorff distance between A and B can be defined by formula (12): (12)HA,B=maxhA,B,hB,A,(13)hA,B=maxa∈Aminb∈Ba−b,(14)hB,A=maxb∈Bmina∈Ab−a.In formulas (12)–(14), · is a certain distance norm, and the Euclidean distance is used in this paper. Hausdorff distance describes the degree of difference between two sets A and B; the larger the distance, the more obvious the difference. Thus, the multi-instance copula function MICA-BBC (Multi-Instance Covering Algorithm Based on Bag Covering) is obtained.Algorithm 5. The learning algorithm is as follows: the algorithm is givenK learning samples B1,y1,B2,y2,⋯,BK,yK, yK∈0,1, and the set of positive and negative packets is denoted as Xi, where i=0 means negative packet and i=1 means positive packet. The algorithm constructs the spherical cover of positive and negative packages in turn, until all packages fall into a certain cover, and the construction process of the ith cover is as follows: Step 1. The algorithm selects any package that has not yet been covered in theith category, denoted as a1. Step 2. The algorithm takesa1 as the center to solve the threshold θ, and the solution method of θ is (15)d1=minHx,a1,x∉Xi,(16)d2=maxHx,a1Hx,a1<d1,x∈Xi,(17)θ=d1+d22. At this point, a sphere coveringCj is obtained, whose center is a1 and whose radius is θ. The test algorithm is as follows: Step 1. For the bagx to be classified, the algorithm calculates the distance from x to each one in turn: (18)dx,Cj=θj−Hx,aj. Step 2. If there isCj such that dx,Cj≥0, it means that x falls into the sphere cover Cj, and the label of the package to which Cj belongs is used as the label of the package x. Step 3. If∀j, dx,Cj<0, the algorithm takes dx,Cj and takes the tag of the package to which Cj belongs as the tag of package x.In order to properly optimize the coverage results, increase the coverage radius, and reduce the number of coverages, we introduce the secondary scanning method introduced in Section3 and then obtain an improved multi-instance copula function based on package coverage.Algorithm 6. The learning algorithm is as follows: the algorithm is givenK learning samples B1,y1,B2,y2,⋯,BK,yK, yK∈0,1, and the set of positive and negative packets is denoted as Xi, where i=0 means positive packet and i=1 means negative packet. 
The algorithm constructs the spherical cover of positive and negative packages in turn, until all packages fall into a certain cover, and the construction process of the ith cover is as follows: Step 1. The algorithm selects any package that has not yet been covered in theith category, denoted as a1. Step 2. The algorithm takesa1 as the center and uses formula (17) to solve the threshold θ and obtains a spherical coverage Ca1 with a1 as the center and θ as the radius and records the number of packets contained in the coverage. If there are uncovered packages in this class, the algorithm goes to Step 1.1; otherwise, the algorithm goes to Step 1.3. Step 3. The algorithm sorts the coverages in a descending order according to the number of samples contained in each coverage and reconstructs coverages in turn according to the sorted centers. When the number of samples contained in the obtained coverage is not less than the number of samples contained in the coverage for the first time, the coverage is determined. Otherwise, the algorithm cancels this construction and reinserts the center into the table in order based on the number of samples contained in this time.Algorithm 7. The algorithm is as follows: the algorithm is givenK learning samples B1,y1,B2,y2,⋯,BK,yK, yK∈0,1. The algorithm transforms the package and constructs an overlay. Step 1. The algorithm arbitrarily selectsk packages from the set B of packages as the initial cluster center, denoted as C1 to Ck, and the cluster corresponding to the cluster center Cj is cluster j. Step 2. The algorithm uses the Hausdorff distance to find the distanceHBi,Cj from Bi to each cluster center Cj for each packet Bi in B−C1,⋯,Ck, where j=1,..,k. The algorithm puts Bi into the cluster j formed by the nearest Cj and repeats until all packets are clustered into k categories. Step 3. The algorithm solves the center bag for thek clusters obtained by clustering, and the new center of the jth cluster is (19)Cj=argminA∈ClusterjHA,B. Step 4. The algorithm finds the Hausdorff distance fromBi to Cj (j=,..,k) for each package Bi in B and uses it as a component of the feature vector. That is, xi=HBi,C1,HBi,C2,⋯,HBi,Ck; all xi form a new sample set X, and X=x1,y1,x2,y2,⋯,xK,yK. Step 5. The algorithm uses the copula function onX to construct a classifier. ## 2.1. Basic Copula Functions We assume thatS=x1,y1,x2,y2,⋯,xp,yp=X1,X2,⋯,Xkis a set of learning samples in a givenn-dimensional space X, where xi=x1i,x2i,⋯,xni∈X, p is the number of learning samples, Y=y1,y2,⋯,yk is a set of finite class labels, and each yi∈Y in the sample set; k is the number of categories. It is required to construct a three-layer neural network. After learning S, it can output the sample of unknown category xj∈X and output its category yj∈Y, and the recognition rate of the network is as high as possible.The basic idea of the domain copula function is to construct the coverage of each category in turn, and there is no intersection between the coverages until all samples are included in a certain coverage. In the process of constructing coverage, the main operations are as follows:(1) The field is constructed: the method of constructing a spherical field is to select any samplea1 that has not been covered in the current processing category Xi and use it as the center to solve the radius r to obtain a coverage. 
We set <> to represent the inner product operation; then, the solution strategy for the radius r is as follows:(1)d1=max<x,a1>,x∉Xi,(2)d2=max<x,a1>∣<x,a1>>d1,x∈Xi,(3)θ=d1+d22,(4)a=d1−d22That is,d1 represents the distance between the current center a1 and the nearest heterogeneous point, and d2 represents the distance between the current center and the farthest similar point on the premise that the distance is less than d1. Taking r=θ=d1+d2/2 as the radius to construct the coverage, the classification gap is a, as shown in Figure 1, where triangles and squares represent two types of samples. (2) The center of gravity is obtained: after obtaining an initial coverage, the center of gravityC of all samples in the coverage is obtained, and C is projected onto the hypersphere. Taking the projected point as the new coverage center, the field is reconstructed according to the processing in operation (1) until the newly obtained coverage cannot cover more sample points(3) Translation: since in then-dimensional space, n+1 linearly unrelated points determine a hypersphere, the translation operation is used to cover as many similar samples as possible, and the translation algorithm can be referred to in the literatureFigure 1 Definition of coverage radiusr.At this point, the domain copula function can be obtained:Algorithm 1. The learning process is as follows: fork categories, it is necessary to construct the coverage of each category in turn, until all samples are included in a certain coverage. The overriding construction process of the ith class is as follows: Step 1.1. The algorithm takes any point in theith category that has not been covered and denote it as a1. Step 1.2. The algorithm takesa1 as the center, finds the threshold θ1, and obtains a coverage Ca1 with the center of θ1 and the radius of θ1. Step1.3. The algorithm finds the center of gravitya1′ of the coverage Ca1 and then finds the new threshold θ1′ according to Step 1.2 and obtains the new coverage Ca1′. If Ca1′ covers more points than Ca1, then a1′⟶a1, θ1′⟶θ1, and the loop is executed until Ca1′ cannot cover more points. Step 1.4. The algorithm finds the translation pointa1′′ of a1 and finds the corresponding coverage Ca1′′. If Ca1′′ covers more points than Ca1, then a1′′⟶a1, θ1′′⟶θ1, and the algorithm goes to Step 1.3. Otherwise, the construction of covering Ci is completed, and the flag of the sample point in Ci is set as covered, and the algorithm goes to Step 1.1. The testing process is as follows: Step 2.1. For each samplex, the algorithm finds the distance dx,Ci≤ωi,x>−θi to all coverages, where ωi and θi are the center and radius of Ci, respectively. Step 2.2. The algorithm takes the categoryj corresponding to maxdx,Ci as the final category of the sample. The idea of the above test process is to first find the distance from the test samplex to each coverage, so as to judge whether the sample falls into a certain coverage. If it falls into the coverage Ci, the class j to which Ci belongs is taken as the sample class. 
Otherwise, according to the principle of proximity, the category corresponding to the closest coverage is selected as the classification result.The schematic diagram after completing the field coverage is shownin Figure2.Figure 2 Classification results of the domain copula function.The main difference between crosscoverage and domain coverage is that the former constructs the coverage of each category alternately; that is, after constructing a coverage of categoryj this time, the coverage of the j+1th category will be constructed next time. Moreover, after the completion of each coverage construction, the points contained in the coverage are deleted from the sample set, so the algorithm adds deletion operations on the basis of field coverage. According to this idea, the crosscopula function can be obtained:Algorithm 2. The learning process is as follows: for thek categories, the coverage of each category is constructed in turn until all samples have completed the learning. The overriding construction process of the ith class is as follows: Step 1. The algorithm takes any point that has not been covered in theith category and denotes it as a1. Step 2. The algorithm takesa1 as the center, finds the threshold θ1, and obtains a coverage Ca1 with the center of a1 and the radius of θ1. Step 3. The algorithm finds the center of gravitya1′of the coverageCa1and then presses Step 1.2 to find the new thresholdθ1′ and obtain the new coverage Ca1′. If Ca1′ covers more points than Ca1, then a1′⟶a1, θ1′⟶θ1, and loop operation until Ca1′ cannot cover more points. Step 4. The algorithm finds the translation pointa1′′ of a1 and finds the corresponding coverage Ca1′′. If Ca1′′ covers more points than Ca1, then a1′′⟶a1, θ1′′⟶θ1, and the algorithm goes to Step 1.3. Otherwise, a coverage Ci is obtained, and all sample points contained in Ci are deleted. Step 5. Ifi=i+1modk, the algorithm goes to Step 1.The algorithm judges whether the test sample belongs to the area according to the sequence of coverage structure and takes the category of the smallest area containing the test sample as the category of the sample.The schematic diagram after the crosscoverage is completed is shown in Figure3.Figure 3 Classification results of the crosscopula function. ## 2.2. Fuzzy Kernel Covering Classifier The commonly used kernel functions mainly include the following types:(1) The Gaussian radial basis function isKx,y=exp−x−y2/q, q=2σ2(2) The polynomial function isKx,y=x·y+1d, d=1,2,⋯(3) The sigmoid function isKx,y=tanh−bx⋅y−cWe assume that the domain of discourse isX; then, any point on X is mapped to the unit hypersphere in the feature space by the radial basis kernel function, which is exactly the process of projecting the sample onto the hypersphere in the copula function. Therefore, it is possible to construct the coverage directly in the feature space without transforming the samples. 
At this point, the kernel copula function is obtained, and the distance originally represented by the inner product becomes Kx,y=exp−x−y2/q, where q=2σ2.In the kernel copula function, some functions of the original copula function need to be changed as follows:(1) The calculation of the threshold (radius)θ usually adopts the following formula:(5)θ=maxx∉Xkai∈XkKai,x=maxx∉Xkai∈Xkexp−x−ai2qIts purpose is to increase the radius of the spherical field, reduce the classification boundary, and reduce the rejection rate.(2) The function is(6)y=σKω,x−θ,(7)σx=x,ifx≥0,0,otherwise(3) The distance function from the samplex to the domain Ci is(8)dx,Ci=0,ifx∈Ci,θ−Kωi,x,ifx∉CiIn the basic copula function, the radius of each coverage is determined using formulas (1)–(4). d1 represents the distance between the current field center and the nearest heterogeneous point (due to the inner product operation. Therefore, d1 takes the maximum value when the distance is closest, and vice versa). d2 represents the distance between the center of the field and the farthest similar point on the premise that it is greater than d1. Take θ=d1+d2/2 as the radius, and all areas outside the coverage are the rejection areas. The purpose of this processing is to reflect the equality of various categories on the one hand and to expand the coverage area as much as possible, so that more test samples fall into the existing coverage and reduce the rejection rate. In the kernel copula function, due to the ambiguity of the algorithm itself, formula (5) is used to determine the radius, and its essence is to set θ=d1.According to the above analysis, in the improved algorithm, the algorithm first modifies the radius calculation principle toθ1=d1 according to formula (10), where d2 is as formula (10). Algorithms make each overlay describe only what has been learned, which is commonly referred to as “knowing what you know, not knowing what you don’t know.” Such modification will inevitably lead to the reduction of the coverage area, the increase of the rejection area, and the increase of rejection points. At this time, the recognition rate of the classifier is improved by improving the processing method of the rejected samples. (9)d1=maxΚx,a1,x∉Xi,(10)d2=minΚx,a1Κx,a1>δ1,x∈Xi.In the process of judging rejected samples, the membership function of samplex to the ith covering Ci is introduced. (11)μi=1+Κx,ωi−θi∗Κx,ωiθi.Among them,Κx,ωi is the distance from the rejected sample to the center ωi of Ci, θi is the radius of Ci, and Κx,ωi−θi is the distance from the rejected sample to the edge of Ci, which is a negative value. The function comprehensively considers factors such as the distance between the sample and the coverage edge, the distance between the sample and the coverage center, and the coverage radius. When the sample happens to fall on the edge of the coverage, the value of μi is 1; that is, it belongs to the coverage. As the samples move away, μi decreases monotonically and gradually tends to 0. At this time, the fuzzy kernel copula function FKCA (Fuzzy Kernel Covering Algorithm) is obtained. The algorithm is divided into two parts: learning and testing, which are described as follows:Algorithm 3. The learning algorithm is as follows. We assume that the learning sampleX has a total of k classes; that is, X=X1,X2,⋯,Xk, and the algorithm uses the Gaussian radial basis function Kx,y=exp−x−y2/q, where q=2σ2. 
For k categories, the algorithm constructs the coverage of each category in turn, until all samples are included in a certain coverage. The overriding construction process of the ith class is as follows: Step 1. The algorithm takes any point that has not been covered in theith category and denotes it as a1. Step 2. The algorithm takesa1 as the center and calculates the threshold d2 according to d2 in formula (10) and obtains a coverage Ca1 with a1 as the center and θ1 as the radius. Step 3. The algorithm finds the center of gravitya1′ of the coverage Ca1 and then presses Step 1.2 to find the new threshold θ1′ and obtain the new coverage Ca1′. If Ca1′ covers more points than Ca1, then a1′⟶a1, θ1′⟶θ1, and loop operation until Ca1′ cannot cover more points, and then, a coverage Ci is obtained.The membership function designed in formula (11) comprehensively considers a variety of location factors covered to determine the membership. When the distance between a rejected sample and the coverage edge of two different categories is equal, the following conclusion can be obtained according to this function; the membership degree of the sample to the coverage with a smaller radius is greater than that of the coverage with a larger radius. In this regard, we give an intuitive explanation as shown in Figure 4. For the sake of simplicity, we take the two-category case under the two-dimensional plane as an example for analysis.Figure 4 Analysis of membership function.We assume that two coverages Cover 1 and Cover 2 have been obtained in Figure4, which cover samples of different classes, respectively. The solid line represents its range, and the radius r1 of Cover 1 is greater than the radius r2 of Cover 2, and the distance between the rejection point T and the two coverage edges is equal. According to the way of determining the radius, it is advisable to set the point A∈C1 and B∈C2 at the position shown in the figure. It does the radii R1 and R2 to get the range shown by the dotted line. For Cover 1, when the radius reaches r1, it stops due to encountering a heterogeneous B in the process of continuing expansion. It can be considered that the point located on the edge R2 does not belong to Cover 1 at all. Therefore, the connection C1B can be made, and the membership of the points on the connection to Cover 1 gradually decreases, and 0 is taken at point B. In the same way, there is a connection C2A to Cover 2, and the properties are the same. It is easy to know that the degree of membership of point T to Cover 1 is smaller than that to Cover 2, so T is determined as the category to which Cover 2 belongs. ## 2.3. Multiple Example Copula Functions A threshold is set, and when the sum of the cost of expansion is higher than the threshold, the current coverage stops expanding, as shown in Figure5.Figure 5 Schematic diagram of MICA-SNP.The distribution of each bag example in Figure5 is consistent with the figure, with white for positive packets and black for negative packets. For example, in construction coverage in the negative bag, when the coverage C1.1 is obtained, it continues to expand to obtain C1.2 and C1.3, and the cost increases continuously. When expanding to C1.4, the newly added positive bag example makes the cost exceed the threshold, so C1.4 cancels and falls back to C1.3. C2.3 and C3.2 are obtained in the same way, while the negative packet covers C4, C5, and C6. It cannot be expanded because of its small capacity. 
When the coverage of the negative examples is completed, the coverage of the remaining positive examples is constructed, and the general construction method is adopted at this time, but the positive examples are not deleted during the construction process. For example, when obtaining C8 and continuing to construct C9, although some positive examples have been covered by C3, it is still used for the construction of C. The target concept area after construction is completed is shown in the shaded area.In summary, the multi-instance copula function MICA-BSNP (Multi-instance Covering Algorithm Based on Strong Noise Processing) is obtained as follows.Algorithm 4. The learning algorithm is as follows: the algorithm gives K learning samplesB1,y1,B2,y2,⋯,BK,yK, yK∈0,1, yK=0 means negative packet, yK=1 means positive packet, and the number of examples contained in packet Bi is Ni. A new sample set b11,y1,⋯,b1N1,y1,⋯,bK1,yK,⋯,bKNK,yK is obtained by assigning the label of the bag to each example in the bag. We record the set of negative examples as X0 and the set of positive examples as X1. Step 1. The algorithm takes any example that has not been learned and denotes it asa1. Step 2. The algorithm takesa1 as the center and calculates the threshold θ1 according to d2 in formula (10) and obtains a coverage Ca1 with a1 as the center and θ1 as the radius. Step 3. The algorithm finds the center of gravitya1′ of the coverage Ca1 and then presses Step 1.2 to find the new threshold θ1′ and obtains the new coverage Ca1′. IfCa1′ covers more points than Ca1, then a1′⟶a1, θ1′⟶θ1, and loop operation until Ca1′ cannot cover more points; then, a quasicoverage Ca1 is obtained. Step 4. When the number of examples inCa1 is less than epsN, the algorithm determines Ca1 and obtains a coverage Ci, the algorithm marks the examples contained in it as learned, and the algorithm goes to Step 1. Otherwise, the algorithm goes to Step 1.5. Step 5. The algorithm takesa1 as the center; finds θ1′′, θ1′′=maxKa1,x<θ1x∈X1; and calculates the total cost of the current cost of coverage expansion. When the total cost is less than the threshold, the algorithm marks the positive examples included in the expansion as negative, θ1′′⟶θ1, and a new Ca1 is obtained, and the algorithm goes to Step 1.3. Otherwise, the algorithm cancels this expansion and obtains a coverage Ci with a1 as the center and θ1 as the radius.The algorithm is given two finite setsA and B with A=a1,a2,⋯,am and B=B1,B2,⋯,Bn. Then, the Hausdorff distance between A and B can be defined by formula (12): (12)HA,B=maxhA,B,hB,A,(13)hA,B=maxa∈Aminb∈Ba−b,(14)hB,A=maxb∈Bmina∈Ab−a.In formulas (12)–(14), · is a certain distance norm, and the Euclidean distance is used in this paper. Hausdorff distance describes the degree of difference between two sets A and B; the larger the distance, the more obvious the difference. Thus, the multi-instance copula function MICA-BBC (Multi-Instance Covering Algorithm Based on Bag Covering) is obtained.Algorithm 5. The learning algorithm is as follows: the algorithm is givenK learning samples B1,y1,B2,y2,⋯,BK,yK, yK∈0,1, and the set of positive and negative packets is denoted as Xi, where i=0 means negative packet and i=1 means positive packet. The algorithm constructs the spherical cover of positive and negative packages in turn, until all packages fall into a certain cover, and the construction process of the ith cover is as follows: Step 1. The algorithm selects any package that has not yet been covered in theith category, denoted as a1. Step 2. 
The algorithm takesa1 as the center to solve the threshold θ, and the solution method of θ is (15)d1=minHx,a1,x∉Xi,(16)d2=maxHx,a1Hx,a1<d1,x∈Xi,(17)θ=d1+d22. At this point, a sphere coveringCj is obtained, whose center is a1 and whose radius is θ. The test algorithm is as follows: Step 1. For the bagx to be classified, the algorithm calculates the distance from x to each one in turn: (18)dx,Cj=θj−Hx,aj. Step 2. If there isCj such that dx,Cj≥0, it means that x falls into the sphere cover Cj, and the label of the package to which Cj belongs is used as the label of the package x. Step 3. If∀j, dx,Cj<0, the algorithm takes dx,Cj and takes the tag of the package to which Cj belongs as the tag of package x.In order to properly optimize the coverage results, increase the coverage radius, and reduce the number of coverages, we introduce the secondary scanning method introduced in Section3 and then obtain an improved multi-instance copula function based on package coverage.Algorithm 6. The learning algorithm is as follows: the algorithm is givenK learning samples B1,y1,B2,y2,⋯,BK,yK, yK∈0,1, and the set of positive and negative packets is denoted as Xi, where i=0 means positive packet and i=1 means negative packet. The algorithm constructs the spherical cover of positive and negative packages in turn, until all packages fall into a certain cover, and the construction process of the ith cover is as follows: Step 1. The algorithm selects any package that has not yet been covered in theith category, denoted as a1. Step 2. The algorithm takesa1 as the center and uses formula (17) to solve the threshold θ and obtains a spherical coverage Ca1 with a1 as the center and θ as the radius and records the number of packets contained in the coverage. If there are uncovered packages in this class, the algorithm goes to Step 1.1; otherwise, the algorithm goes to Step 1.3. Step 3. The algorithm sorts the coverages in a descending order according to the number of samples contained in each coverage and reconstructs coverages in turn according to the sorted centers. When the number of samples contained in the obtained coverage is not less than the number of samples contained in the coverage for the first time, the coverage is determined. Otherwise, the algorithm cancels this construction and reinserts the center into the table in order based on the number of samples contained in this time.Algorithm 7. The algorithm is as follows: the algorithm is givenK learning samples B1,y1,B2,y2,⋯,BK,yK, yK∈0,1. The algorithm transforms the package and constructs an overlay. Step 1. The algorithm arbitrarily selectsk packages from the set B of packages as the initial cluster center, denoted as C1 to Ck, and the cluster corresponding to the cluster center Cj is cluster j. Step 2. The algorithm uses the Hausdorff distance to find the distanceHBi,Cj from Bi to each cluster center Cj for each packet Bi in B−C1,⋯,Ck, where j=1,..,k. The algorithm puts Bi into the cluster j formed by the nearest Cj and repeats until all packets are clustered into k categories. Step 3. The algorithm solves the center bag for thek clusters obtained by clustering, and the new center of the jth cluster is (19)Cj=argminA∈ClusterjHA,B. Step 4. The algorithm finds the Hausdorff distance fromBi to Cj (j=,..,k) for each package Bi in B and uses it as a component of the feature vector. That is, xi=HBi,C1,HBi,C2,⋯,HBi,Ck; all xi form a new sample set X, and X=x1,y1,x2,y2,⋯,xK,yK. Step 5. The algorithm uses the copula function onX to construct a classifier. ## 3. 
## 3. Correlation Analysis of Exchange Rate Fluctuations and Oil Price Changes Based on Copula

Based on the established individual effect model, panel models based on the Brent and WTI oil futures prices are established to forecast the exchange rates of exporting countries, respectively. The cases are shown in Figures 6–8.

Figure 6: Prediction figure 1 (panels (a) and (b)). Figure 7: Prediction figure 2 (panels (a) and (b)). Figure 8: Prediction figure 3 (panels (a) and (b)).

Figure 6(a) shows the prediction based on the WTI oil price, and Figure 6(b) shows the prediction based on the Brent oil price. Comparing the scales of the two panels shows a high degree of similarity between them. Moreover, the results show that the prediction error of the WTI-based model is 0.000168 and that of the Brent-based model is 0.000145. Although these error values are small, the predicted series still differs noticeably from the original series, especially in its trend.

Similarly, Figure 7(a) shows the exchange rate series predicted from the WTI oil price, and Figure 7(b) shows the series predicted from the Brent oil price. The fluctuation of the predicted trend is more consistent with the original sequence. The total forecast error of the WTI oil futures price series is 2.63E-05, and that of the Brent oil futures price series is 3.2E-06. As the figures show, the predicted sequences fit the original sequence well, with high similarity.

Through these simulation studies, it is verified that there is a relatively clear correlation between exchange rate fluctuations and oil price changes.

## 4. Conclusion

At present, in the postcrisis era, instability and risk of all kinds are increasing day by day. For example, the instability of the global economy and political changes in major countries have become uncertain factors that can cause sudden jumps in international oil prices at any time, which may lead to continuous fluctuations in oil futures prices. When changes in international oil prices are sustained or large in scale, they are likely to lead to more severe inflation. According to the theory of purchasing power parity, a rise in the price level and inflation causes the exchange rate to rise and the local currency to depreciate. At the same time, higher inflation means higher interest rates. According to the theoretical formula of interest rate parity, a change in the interest rate level causes an opposite change in the value of the local currency; that is, when the international oil price rises, the interest rate rises and the local currency depreciates. This paper uses the copula function to study the correlation between exchange rate fluctuations and oil price changes, and the simulation study verifies that this correlation is relatively clear.

--- *Source: 1023725-2022-10-04.xml*
# The Perception of Nursing Professionals Working in a Central Sterile Supplies Department regarding Health Conditions, Workload, Ergonomic Risks, and Functional Readaptation

**Authors:** Rosemere Saldanha Xavier; Patrícia dos Santos Vigário; Alvaro Camilo Dias Faria; Patricia Maria Dusek; Agnaldo José Lopes
**Journal:** Advances in Preventive Medicine (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1023728

---

## Abstract

Background. The central sterile supply department (CSSD) is wrongly seen as a place in the hospital environment that does not require skills and physical effort, being commonly a hospital sector for the relocation of functionally-readapted professionals. However, the CSSD is a work environment that demands professional experience and presents itself as a sector that does not have a healthy work environment. This study aims to evaluate the frequency of comorbidities and functionally-readapted people among nursing professionals allocated to a CSSD and, also, to seek the perception of these professionals about the ergonomic risks and the degree of difficulty in performing activities within a CSSD. Methods. This is a cross-sectional study that analyzed the opinions of nursing professionals who work in the CSSD of public hospitals in Rio de Janeiro, Brazil. Nurses, nursing technicians, and nursing assistants aged ≥18 years were included. Results. Seventy-two nursing professionals were consecutively evaluated. It was observed that 43 of them (59.7%) had never worked in a CSSD. The most prevalent comorbidity in the present study was chronic rhinosinusitis, observed in more than half of the sample, although it is interesting to note the high frequency of participants with work-related musculoskeletal disorders (WMSD) and repetitive strain injuries (RSI). There is a relationship between previous work in a CSSD and the ability to identify surgical tweezers by visual recognition (p=0.031). There is a relationship between the time the participant had previously worked in the hospital and the skill regarding the information contained in the conference folders for preparing the trays for surgical procedures (τb = −0.34, p=0.001). Conclusion. Almost a third of nursing professionals working in a CSSD are functionally readapted, with a high prevalence of WMSD and RSI. The commitment of managers to an internal health policy aimed at workers is necessary for health promotion.

---

## Body

## 1. Introduction

The central sterile supply department (CSSD) is a hospital sector dedicated to the reception, cleaning/disinfection, preparation, sterilization, storage, and distribution of materials for the entire hospital unit [1]. However, the CSSD is a stigmatized area within the hospital. Initially, the activities of preparation and sterilization of hospital instruments were tasks of professionals who worked in the operating room, and there was no autonomy or even recognition as a unit [2]. Although the CSSD has full responsibility for the processing of hospital supplies, its recognition as an essential area for the provision of patient care is still limited [3, 4].

Historically and even today, the CSSD has been classified as a sector of indirect patient-care activities and considered a secondary sector [5, 6]. It is a technical support unit responsible for supplying properly processed medical and hospital instruments, with the aim of offering adequate conditions for direct care and assistance to patients.
The CSSD work process has characteristics and peculiarities of its own that fall to nursing professionals [7]. After the material is processed, it is intended to supply the procedures performed in outpatient and inpatient units, especially surgical procedures. Despite the invisibility and low appreciation of the activities carried out in the CSSD compared with those of other hospital sectors, nursing professionals are aware of the importance of their work for patient safety. Because it offers patients processed material that is free from microorganisms and poses no risk to their health, the CSSD is important for patient safety in surgical procedures. Therefore, all actions involved in the CSSD work process are essential requirements for the proper processing of instruments and contribute to the safety of surgery [8].

Improving the quality and safety of the CSSD work process encompasses the entire processing of surgical instruments, the management of surgical trays, and efforts to ensure that the information is efficient. All these steps are intrinsically linked to the success of the surgery [9]. Avoiding CSSD failures is working for patient safety; thus, the sector must have a qualified nursing team and employees aware of the importance of the steps in the cleaning processes, preparation of surgical trays, sterilization, and storage [10]. The direct supervision of the nursing professional must occur at all stages of the process, using validated protocols for this [7].

The CSSD is considered a place that does not require skills and physical effort, being commonly a hospital sector for the relocation of functionally-readapted professionals. However, this invisibility stems from other health professionals' lack of knowledge about the activities performed there, generating dissatisfaction among professionals working in the CSSD [11]. Functional readaptation is directly linked to the relocation of health professionals to other activities or sectors that do not harm their current health condition. However, the CSSD is a work environment that demands professional experience and presents itself as a sector that does not have a healthy work environment due to exposure to chemical agents and contaminants; this contributes to health problems for professionals, whether physical or psychological [12].

Health conditions, workload, physical effort, functional readaptation, and the degree of difficulty in performing manual tasks (including identification of surgical tweezers) among nursing professionals working in a CSSD are still poorly known [5, 6, 9, 12]. Thus, the present study sought to evaluate the frequency of comorbidities and functionally-readapted people among nursing professionals allocated to a CSSD and, also, to seek the perception of these professionals about the ergonomic risks and the degree of difficulty in performing activities within a CSSD.

## 2. Materials and Methods

### 2.1. Study Design and Participants

Between June and July 2021, a cross-sectional study was carried out analyzing the opinions of nursing professionals working in the CSSD of public hospitals in Rio de Janeiro, Brazil.
Nurses, nursing technicians, and nursing assistants, of both sexes, >18 years old and with the following employment relationships were included: statutory, workers governed by the Consolidation of Labor Laws, and service providers working in the CSSD (both on the day and on the night shift).

The project was approved by the Research Ethics Committee of the Augusto Motta University Center under the number CAAE-45992721.9.0000.5235, and all subjects signed the consent form. The protocol followed the recommendations for research in human beings as per the Declaration of Helsinki.

### 2.2. Assessment Instrument

A questionnaire containing sociodemographic data, health conditions, length of professional experience, and opinions about the skills of nursing professionals (including the preparation of surgical trays) was applied. To this end, we resorted to the construction of questions using the Likert scale, a psychometric response scale used particularly in the areas of psychology, health, and education, to which the participant responds through a criterion that can be objective or subjective [13, 14]. Usually what one wants to measure is the level of agreement or disagreement with a statement, typically using five levels of response [14].

### 2.3. Data Analysis

Descriptive analysis was presented in the form of tables, and the observed data were expressed as frequency and percentage. The inferential analysis consisted of the following methods: the comparison of professional ability (5 classes) between professionals with and without previous work in the CSSD was assessed using Fisher's exact test; and the association between professional ability and length of time working at the hospital (3 classes) and performance upon reaching the CSSD (5 classes) was analyzed using Kendall's tau-b (τb) correlation coefficient. The criterion for determining significance was the 5% level. Statistical analysis was performed using SAS 6.11 software (SAS Institute, Inc., Cary, NC, USA).
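The authors ran these tests in SAS; as an informal cross-check, Kendall's τb for the association reported later in Table 5 can be reproduced from the published counts with scipy (`kendalltau` applies the tie-corrected τb variant by default). This is a sketch under the assumption that expanding the cross-tabulation back into one ordinal pair per participant is faithful to the raw data.

```python
import numpy as np
from scipy.stats import kendalltau

# Counts from Table 5: rows are working time at the hospital (1: 1-10 y, 2: 11-20 y,
# 3: >31 y); columns rate the conference-folder information (1: extremely difficult
# ... 5: very easy).
counts = np.array([
    [0, 0,  9, 9, 4],   # 1-10 years  (n = 22)
    [1, 5, 22, 6, 0],   # 11-20 years (n = 34)
    [1, 2, 10, 2, 1],   # >31 years   (n = 16)
])

# Expand the table back into one (time class, rating) pair per participant.
time, rating = [], []
for i, row in enumerate(counts, start=1):
    for j, n in enumerate(row, start=1):
        time += [i] * int(n)
        rating += [j] * int(n)

tau, p = kendalltau(time, rating)
print(f"tau-b = {tau:.2f}, p = {p:.4f}")  # tau-b ~ -0.34, matching the reported value
```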
## 3. Results

Seventy-two nursing professionals were consecutively evaluated: 13 nurses, 56 nursing technicians, and 3 nursing assistants. Regarding age, the group between 51 and 60 years predominated (34.7% of the sample). Among the participants, 54 (75%) were female and 35 (48.6%) declared themselves married. In this sample, 42 participants (58.3%) had secondary education, and most had a statutory relationship with the hospital institutions. The general characteristics of the participants are shown in Table 1.

Table 1: Participants' general characteristics (n = 72).

| Variable | Number (%) |
|---|---|
| **Age group** | |
| 20–30 years | 4 (5.6%) |
| 31–40 years | 12 (16.7%) |
| 41–50 years | 23 (31.9%) |
| 51–60 years | 25 (34.7%) |
| >60 years | 8 (11.1%) |
| **Sex** | |
| Male | 18 (25%) |
| Female | 54 (75%) |
| **Marital status** | |
| Single | 26 (36.1%) |
| Married | 35 (48.6%) |
| Stable union | 9 (12.5%) |
| Divorced | 1 (1.4%) |
| Widower | 1 (1.4%) |
| **Education** | |
| Higher level | 30 (41.7%) |
| High school | 42 (58.3%) |
| **Institutional bond** | |
| Statutory | 37 (51.4%) |
| Consolidation of labor laws | 17 (23.6%) |
| Service provider | 18 (25%) |

Results are expressed as number (%).

As for the length of service provided to the institutions, almost half of the professionals evaluated (47.2%) had between 11 and 20 years of work in the hospital. According to the professional profile of the interviewees, 43 of them (59.7%) had never worked in a CSSD. Regarding the relocation of nursing professionals to the CSSD, 22 (30.6%) responded that they came to the sector due to functional readaptation. Data regarding the professional profile of the participants are shown in Table 2. The most prevalent comorbidity in the present study was chronic rhinosinusitis, observed in more than half of the sample, although the high frequency of participants with work-related musculoskeletal disorders (WMSD) and repetitive strain injuries (RSI) is also noteworthy, as shown in Figure 1.

Table 2: Professional profile of the participants (n = 72).

| Variable | Number (%) |
|---|---|
| **How long have you worked in the hospital?** | |
| 1–10 years | 22 (30.6%) |
| 11–20 years | 34 (47.2%) |
| 21–30 years | 0 (0%) |
| >31 years | 16 (22.2%) |
| **Have you ever worked in a CSSD before?** | |
| Yes | 29 (40.3%) |
| No | 43 (59.7%) |
| **Did you come to the CSSD because of being readapted?** | |
| Yes | 22 (30.6%) |
| No | 50 (69.4%) |

CSSD: central sterile supply department.

Figure 1: Decreasing distribution of comorbidities in the evaluated sample. WMSD: work-related musculoskeletal disorders; RSI: repetitive strain injuries.

Regarding the structure of the workplace, 43 (59.7%) participants strongly disagreed that the CSSD is a place where light activities that require little physical effort are performed. They also disagreed [n = 30 (41.7%)] that the CSSD provides an adequate framework for promoting professional health.
Regarding the tasks performed in the CSSD, 40 (55.6%) agreed that they can be considered harmful to the health of the functionally-readapted professional. Regarding skills, 23 (31.9%) participants responded that it was likely that they could prepare a surgical tray without the use of conference folders. Fifty-four (75%) of the respondents fully agreed that the correct preparation of surgical trays contributes to patient safety during surgery. Thirty-six (50%) participants agreed that the CSSD presents a high risk of contamination compared to other hospital sectors. Thirty-four (47.2%) participants responded that the use of a sustainable product for the identification of surgical clamps is likely to improve their identification. The distribution of participants' opinions regarding health risk, professional skills, and the work environment in the CSSD is shown in Table 3.

Table 3: Distribution of participants' opinions regarding health risk, professional skills, and work environment in the central sterile supply department.

| Variable | Number (%) |
|---|---|
| **Some professionals consider CSSD activities to be easy to perform and to require little effort** | |
| Strongly disagree | 0 (0%) |
| Somewhat disagree | 2 (2.8%) |
| Neutral | 3 (4.2%) |
| Somewhat agree | 24 (33.3%) |
| Strongly agree | 43 (59.7%) |
| **Some tasks performed in the CSSD can be considered harmful to the health of the functionally-readapted professional** | |
| Strongly disagree | 24 (33.3%) |
| Somewhat disagree | 40 (55.6%) |
| Neutral | 3 (4.2%) |
| Somewhat agree | 5 (6.9%) |
| Strongly agree | 0 (0%) |
| **The CSSD offers an environment with an adequate structure to promote the health of the professional** | |
| Strongly disagree | 1 (1.4%) |
| Somewhat disagree | 14 (19.4%) |
| Neutral | 11 (15.3%) |
| Somewhat agree | 30 (41.7%) |
| Strongly agree | 16 (22.2%) |
| **How do you rate performance in the CSSD?** | |
| Extremely difficult | 1 (1.4%) |
| Difficult | 12 (16.7%) |
| Moderate | 41 (56.9%) |
| Easy | 14 (19.4%) |
| Very easy | 4 (5.6%) |
| **How do you evaluate the information contained in the conference folders for preparing the surgical trays?** | |
| Extremely difficult | 2 (2.8%) |
| Difficult | 7 (9.7%) |
| Moderate | 41 (56.9%) |
| Easy | 17 (23.6%) |
| Very easy | 5 (7%) |
| **Do your skills allow you to prepare a surgical tray without the use of conference folders?** | |
| Very unlikely | 7 (9.7%) |
| Unlikely | 20 (27.8%) |
| Neutral | 4 (5.6%) |
| Likely | 23 (31.9%) |
| Very likely | 18 (25%) |
| **How do you classify the identification of surgical tweezers in the way they are numerically recorded?** | |
| Extremely difficult | 2 (2.8%) |
| Difficult | 7 (9.7%) |
| Moderate | 33 (45.8%) |
| Easy | 26 (36.1%) |
| Very easy | 4 (5.6%) |
| **How do you classify the identification of surgical tweezers by visual recognition?** | |
| Extremely difficult | 0 (0%) |
| Difficult | 9 (12.5%) |
| Moderate | 31 (43.1%) |
| Easy | 27 (37.5%) |
| Very easy | 5 (6.9%) |
| **Correct preparation of surgical trays contributes to patient safety** | |
| Strongly disagree | 0 (0%) |
| Somewhat disagree | 0 (0%) |
| Neutral | 1 (1.4%) |
| Somewhat agree | 17 (23.6%) |
| Strongly agree | 54 (75%) |
| **In relation to other hospital sectors, the CSSD is a place with a high risk of contamination** | |
| Strongly disagree | 1 (1.4%) |
| Somewhat disagree | 6 (8.3%) |
| Neutral | 2 (2.8%) |
| Somewhat agree | 36 (50%) |
| Strongly agree | 27 (37.5%) |
| **Actions aimed at preserving the environment must be implemented at the CSSD** | |
| Strongly disagree | 0 (0%) |
| Somewhat disagree | 1 (1.4%) |
| Neutral | 10 (13.9%) |
| Somewhat agree | 38 (52.8%) |
| Strongly agree | 23 (31.9%) |
| **A sustainable product for identifying surgical tweezers improves their identification** | |
| Very unlikely | 2 (2.8%) |
| Unlikely | 0 (0%) |
| Neutral | 12 (16.7%) |
| Likely | 34 (47.2%) |
| Very likely | 24 (33.3%) |

CSSD: central sterile supply department.

Additionally, we evaluated some associations. Table 4 shows a significant association between prior work in a CSSD and the ability to identify surgical clamps by visual recognition using Fisher's exact test (p=0.031).
Observing the distribution of responses, the group of participants with previous work in a CSSD showed a greater propensity to classify the task of identifying tweezers by visual recognition as "very easy" (17.2%) than the group without previous work in a CSSD (0%). Table 5 shows a significant association between the time the participant had previously worked in the hospital and the ability to use the information contained in the conference folders to prepare the surgical trays using Kendall's τb (τb = −0.34, p=0.001).

Table 4: Association between previous work at a central sterile supply department and professional ability ("How do you classify the identification of surgical tweezers by visual recognition?").

| Response | Previous work in CSSD: yes, number (%) | Previous work in CSSD: no, number (%) |
|---|---|---|
| Difficult | 4 (13.8%) | 5 (11.6%) |
| Moderate | 12 (41.4%) | 19 (44.2%) |
| Easy | 8 (27.6%) | 19 (44.2%) |
| Very easy | 5 (17.2%) | 0 (0%) |

Fisher's exact test, p = 0.031. CSSD: central sterile supply department.

Table 5: Association between working time at the hospital and professional ability ("How do you evaluate the information contained in the conference folders for preparing the surgical trays?").

| Response | 1–10 years, number (%) | 11–20 years, number (%) | >31 years, number (%) |
|---|---|---|---|
| Extremely difficult | 0 (0%) | 1 (2.9%) | 1 (6.3%) |
| Difficult | 0 (0%) | 5 (14.7%) | 2 (12.5%) |
| Moderate | 9 (40.9%) | 22 (64.7%) | 10 (62.4%) |
| Easy | 9 (40.9%) | 6 (17.7%) | 2 (12.5%) |
| Very easy | 4 (18.2%) | 0 (0%) | 1 (6.3%) |

Kendall's τb = −0.34, p = 0.001.

## 4. Discussion

The main findings of the present study were that almost 80% of the nursing professionals working in a CSSD have up to 20 years of service in the hospital and almost 60% of them had never previously worked in this hospital sector. Importantly, almost a third of the nursing professionals working in a CSSD came to the sector because of functional readaptation. There was a high prevalence of people with WMSD/RSI. Most of these professionals believe that the CSSD is a hospital sector where activities that demand considerable physical effort are performed, which are harmful to the health of the functionally-readapted professional. There was a relationship between previous work in a CSSD and the ability to identify surgical tweezers by visual recognition, and between the time the participant had previously worked in the hospital and the skill regarding the information contained in the conference folders for preparing the trays for surgical procedures.

Among the sociodemographic characteristics analyzed in this study, there was a clear predominance of female nursing professionals. Historically, the art of caring is linked to the figure of the woman, which is reflected in the growth of the workforce in health units, not only in Brazil but also in other countries [12, 15]. By observing the profile of the nursing professionals, it was possible to identify a clear predominance of nursing technicians, which suggests that these people have occupations that do not require a university education; this highlights the need to encourage professional growth [16]. As for professional experience, it was observed that almost 60% of the participants had never worked in a CSSD, which characterizes the incorrect conception that the CSSD is a sector for workers who do not have professional experience.
According to Bugs et al., the CSSD is a unit that preferably ensures quality care—whether in the inpatient or surgical unit—with hospital instruments being essential elements in this process [17].

Given the invisibility of the CSSD, which unfortunately many managers still believe to be a place exclusively for functionally-readapted professionals, almost a third of our sample responded that they came to work in this sector through functional readaptation. In this context, it should be noted that this scenario is changing. It can be said that the movement of professionals to the CSSD and the growing technological advancement of the instruments and products used in the sector—which requires professionals with technical knowledge and broad experience—are factors that could have positive repercussions on this scenario. Even though it is classified as indirect care, with routine and repetitive activities, the importance of the work of CSSD professionals is reflected in the delivery of a contamination-free product for hospital use [18]. A recent study showed that a CSSD can successfully form closed-loop management of sterilization effectiveness, improve the standardized management of sterile article storage in clinical departments, ensure sterile article safety, and reduce the cost of consumption caused by the loss of sterile packaging [19].

The intense workload and frequent exposure to ergonomic risks are the predominant factors for the emergence or worsening of comorbidities, especially WMSDs and RSIs [20]. In the present study, these comorbidities had a prevalence of 40%. Nursing professionals are highly exposed to intense and strenuous work, being part of the group of workers at risk for occupational diseases and other health problems such as psychiatric disorders [21, 22]. One aspect that needs to be emphasized is that intense workload and occupational diseases are conditions that contribute to the illness of nursing professionals worldwide. In line with the studies by Costa et al. [11] and Morais et al. [23], almost 60% of respondents in our study strongly disagreed that the CSSD is a sector where light activities that require little physical effort are performed. It is noteworthy that more than 40% of our sample disagreed that the CSSD offers an adequate structure to promote the health of professionals, owing to occupational and ergonomic risks: they are exposed to chemical and physical agents despite the routine use of personal protective equipment.

Regarding the tasks performed in the CSSD, more than half of the participants agreed that they can be considered harmful to the health of the functionally-readapted professional. This is interesting because, in general, these professionals are older, with retirement approaching, and present health conditions that prevent them from fully exercising their activities. This is reinforced by the fact that the work assigned to nursing professionals requires physical effort, repetitive tasks, and uncomfortable positions [12]. It is noticeable that the role of health professionals in the CSSD goes beyond routine and repetitive work, requiring specific training for the development of the specific skills needed to work in this sector [11].
This finding is in agreement with the study by Santos et al., which points out that the elimination of unnecessary or redundant instruments from surgical trays can save time, require less operational effort, and bring lower costs to the CSSD, without compromising the surgical procedure or the health of patients [24]. Therefore, simplifying surgical instrument sets can also significantly reduce sterilization time, decrease ergonomic risks, and lower instrument acquisition costs.

The correct preparation of surgical trays can directly impact patient safety during surgery. In our study, 75% of the participants strongly agreed with this statement, which demonstrates these health professionals' perception of the importance of their work. Additionally, we observed a positive association between previous work in a CSSD and nursing professionals' self-declared ability to identify surgical clamps by visual recognition. The World Health Organization cites some worrying situations in patient safety within the hospital environment, including hospital infections and other surgical complications [25]. The CSSD aims to offer patients microorganism-free material, whose processing is carried out in several steps. This is corroborated by the study by Sartelli et al., which shows the need to focus the efforts of all health professionals on the prevention and control of infections [26]. It is worth emphasizing the inverse association between the time the participant previously worked at the hospital and the skill they attribute to themselves regarding the information contained in the conference folders for the preparation of surgical trays. A possible explanation for this paradox is that people with less time working in the hospital are younger and more adapted to new technologies, which allows greater ease in preparing surgical trays.

Finally, it is important to highlight the need to strengthen actions aimed at preserving the environment, as the CSSD is a hospital sector that needs to implement these actions. Interestingly, more than half of the participants (52.8%) agreed with these actions, while 47.2% responded that the use of a sustainable product for the identification of surgical clamps is likely to help. The use of sustainable technological products can be a tool for improving the work process and the quality of care. It should also be noted that, with the necessary knowledge and the use of technologies with well-planned actions, it is possible to prevent harm to patients, improving the quality of the care provided. However, adherence to these improvements is a key factor for better patient care [27, 28].

The strength of this study is that it is the first to address in detail the opinions of nursing professionals who work in a CSSD, encompassing professionals from different hospitals and with varied experience (including the handling of surgical trays). However, some limitations must be pointed out. First, the number of participants was relatively small, although we included professionals from both the day and the night shifts and from different age groups. Second, our measurement was based on the application of a questionnaire; therefore, the use of objective measures is important to confirm our results. Third, we only evaluated the opinion of nursing professionals working in public institutions.
Despite these limitations, our results can serve as a starting point for future research with larger numbers of nursing professionals working in CSSDs, using objective measures to assess safety and the work environment. We also encourage interventional studies aimed at promoting health and facilitating the tasks of these professionals, such as the preparation of surgical trays.

## 5. Conclusions

Our study shows that almost a third of the nursing professionals working in a CSSD are functionally-readapted people, with a high prevalence of WMSD/RSI. Most of them believe that the CSSD is a sector where activities that demand considerable physical effort are carried out and that these activities are harmful to the health of the functionally-readapted professional. Almost half of these professionals disagree that the CSSD offers an adequate structure for the promotion of workers' health, owing to occupational and ergonomic risks. Almost all participants agree that the correct preparation of surgical trays can directly impact patient safety. Furthermore, there is a relationship between previous work in the CSSD and the ability to identify surgical tweezers by visual recognition, and between the time that the nursing professional previously worked at the hospital and the skills regarding the information contained in the conference folders for the preparation of surgical trays. The usual activities of the CSSD are considered exhausting and carry occupational hazards, which makes them unsuitable for the functionally-readapted professional. In this sense, the ideal would be for the CSSD to have skilled professionals, free of serious comorbidities and with knowledge of new technologies; these are fundamental interventions to be implemented by managers. Although CSSD professionals potentially acquire skills with length of service, they need more favorable conditions for the verification of tweezers in the preparation of surgical trays. Managers' commitment to an internal health policy aimed at workers is necessary for the promotion of health and for the implementation of in-service training. In this sense, the role of society in demanding the implementation of these measures is of great importance for valuing the nursing professionals who work in the CSSD, so that the paradigms of invisibility are broken, especially for those who are functionally readapted.

--- *Source: 1023728-2022-04-13.xml*
1023728-2022-04-13_1023728-2022-04-13.md
29,680
The Perception of Nursing Professionals Working in a Central Sterile Supplies Department regarding Health Conditions, Workload, Ergonomic Risks, and Functional Readaptation
Rosemere Saldanha Xavier; Patrícia dos Santos Vigário; Alvaro Camilo Dias Faria; Patricia Maria Dusek; Agnaldo José Lopes
Advances in Preventive Medicine (2022)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1023728
1023728-2022-04-13.xml
--- ## Abstract Background. The central sterile supply department (CSSD) is wrongly seen as a place in the hospital environment that does not require skills and physical effort, being commonly a hospital sector for the relocation of functionally-readapted professionals. However, CSSD is a work environment that demands professional experience and presents itself as a sector that does not have a healthy work environment. This study aims to evaluate the frequency of comorbidities and functionally-readapted people among nursing professionals allocated to a CSSD and, also, to seek the perception of these professionals about the ergonomic risks and the degree of difficulty to perform activities within a CSSD. Methods. This is a cross-sectional study that analyzed the opinions of nursing professionals who work in the CSSD of public hospitals in Rio de Janeiro, Brazil. Nurses, nursing technicians and nursing assistants aged ≥18 years were included. Results. Seventy-two nursing professionals were consecutively evaluated. It was observed that 43 of them (59.7%) had never worked in a CSSD. The most prevalent comorbidity in the present study was chronic rhinosinusitis, observed in more than half of the sample, although it is interesting to note the high frequency of participants with work-related musculoskeletal disorders (WMSD) and repetitive strain injuries (RSI). There is a relationship between previous work in a CSSD and the ability to identify surgical tweezers by visual recognition (p=0.031). There is a relationship between the time the participant had previously worked in the hospital and the skill regarding the information contained in the conference folders for preparing the tray surgical procedures (τb = −0.34, p=0.001). Conclusion. Almost a third of nursing professionals working in a CSSD are rehabilitated, with a high prevalence of WMSD and RSI. The commitment of managers to an internal health policy aimed at workers is necessary for health promotion. --- ## Body ## 1. Introduction The central sterile supply department (CSSD) is a hospital sector dedicated to the reception, cleaning/disinfection, preparation, sterilization, storage, and distribution of materials for the entire hospital unit [1]. However, the CSSD is a stigmatized area within the hospital. Initially, the activities of preparation and sterilization of hospital instruments were the tasks of professionals who worked in the operating room, and there was no autonomy or even recognition as a unit [2]. Although the CSSD has full responsibility for the processing of hospital supplies, its recognition as an essential area for the provision of patient care is still small [3, 4].Historically and nowadays, CSSD is classified as a sector of indirect activities for patient care and considered as a secondary sector [5, 6]. It is a technical support unit responsible for supplying properly processed medical and hospital instruments, with the aim of offering adequate conditions for direct care and assistance to patients. The CSSD work process has its own characteristics and peculiarities to be aimed at nursing professionals [7]. After the material is processed, it is intended to meet the procedures performed in outpatient and inpatient units, especially surgical procedures. Despite the invisibility and little appreciation of the activities carried out in the CSSD to superimpose the activities of other sectors of the hospital, nursing professionals are aware of the importance of their work in patient safety. 
Considering the benefits of offering patients in the hospital environment without risk to their health regarding processed material and free from microorganisms, CSSD is important for patient safety in surgical procedures. Therefore, all actions involved in the CSSD work process are essential requirements that are enabled in the proper processing of instruments and that contribute to the safety of surgery [8].Improving the quality and safety of the CSSD work process encompasses the entire processing of surgical instruments, the management of surgical trays, and efforts to ensure that the information is efficient. All these steps are intrinsically linked to the success of the surgery [9]. Avoiding CSSD failures is working for patient safety and, thus, the sector must have a qualified nursing team and employees aware of the importance of the steps in the cleaning processes, preparation of surgical trays, sterilization, and storage [10]. The direct supervision of the nursing professional must occur at all stages of the process, using validated protocols for this [7].The CSSD is considered a place that does not require skills and physical effort, being commonly a hospital sector for the relocation of functionally-readapted professionals. However, this invisibility is characterized by the lack of knowledge of other health professionals and the activities performed, generating dissatisfaction among professionals working in the CSSD [11]. Functional readaptation is directly linked to the relocation of health professionals to other activities or sectors that do not harm their current health condition. However, CSSD is a work environment that demands professional experience and presents itself as a sector that does not have a healthy work environment due to exposure to chemical agents and contaminants; this contributes to health problems for professionals, whether physical or psychological [12].Health conditions, workload, physical effort, functional readaptation, and degree of difficulty in performing manual tasks (including identification of surgical tweezers) among nursing professionals working in a CSSD are still poorly known [5, 6, 9, 12]. Thus, the present study sought to evaluate the frequency of comorbidities and functionally-readapted people among nursing professionals allocated to a CSSD and, also, to seek the perception of these professionals about the ergonomic risks and the degree of difficulty to perform activities within a CSSD. ## 2. Materials and Methods ### 2.1. Study Design and Participants Between June and July 2021, a cross-sectional study was carried out analyzing the opinions of nursing professionals working in the CSSD of public hospitals in Rio de Janeiro, Brazil. Nurses, nursing technicians, and nursing assistants, of both sexes, >18 years old and with the following employment relationships were included: statutory, workers governed by the Consolidation of Labor Laws, and service providers working in the CSSD (both on the day and on the night shift).The project was approved by the Research Ethics Committee of the Augusto Motta University Center under the number CAAE-45992721.9.0000.5235, and all subjects signed the consent form. The protocol followed the recommendations for research in human beings as per the Declaration of Helsinki. ### 2.2. 
Assessment Instrument A questionnaire containing sociodemographic data, health conditions, length of professional experience, and opinions about the skills of nursing professionals (including the preparation of surgical trays) was applied. To this end, we resorted to the construction of questions using the Likert scale, which is a psychometric response scale used particularly in the area of psychology, health, and education, to which the participant responds through a criterion that can be objective or subjective [13, 14]. Usually what one wants to measure is the level of agreement or non-agreement to the statement, usually using five levels of response [14]. ### 2.3. Data Analysis Descriptive analysis was presented in the form of tables and the observed data were expressed by frequency and percentage. The inferential analysis consisted of the following methods: the comparison of professional ability (5 classes) between professionals with and without previous work in the CSSD was assessed using Fisher’s exact test; and the association between professional ability and length of time working at the hospital (3 classes) and performance upon reaching the CSSD (5 classes) was analyzed using Kendall’s tau-b (τb) correlation coefficient. The criterion for determining the significance adopted was the level of 5%. Statistical analysis was performed using SAS 6.11 software (SAS Institute, Inc., Cary, NC, USA). ## 2.1. Study Design and Participants Between June and July 2021, a cross-sectional study was carried out analyzing the opinions of nursing professionals working in the CSSD of public hospitals in Rio de Janeiro, Brazil. Nurses, nursing technicians, and nursing assistants, of both sexes, >18 years old and with the following employment relationships were included: statutory, workers governed by the Consolidation of Labor Laws, and service providers working in the CSSD (both on the day and on the night shift).The project was approved by the Research Ethics Committee of the Augusto Motta University Center under the number CAAE-45992721.9.0000.5235, and all subjects signed the consent form. The protocol followed the recommendations for research in human beings as per the Declaration of Helsinki. ## 2.2. Assessment Instrument A questionnaire containing sociodemographic data, health conditions, length of professional experience, and opinions about the skills of nursing professionals (including the preparation of surgical trays) was applied. To this end, we resorted to the construction of questions using the Likert scale, which is a psychometric response scale used particularly in the area of psychology, health, and education, to which the participant responds through a criterion that can be objective or subjective [13, 14]. Usually what one wants to measure is the level of agreement or non-agreement to the statement, usually using five levels of response [14]. ## 2.3. Data Analysis Descriptive analysis was presented in the form of tables and the observed data were expressed by frequency and percentage. The inferential analysis consisted of the following methods: the comparison of professional ability (5 classes) between professionals with and without previous work in the CSSD was assessed using Fisher’s exact test; and the association between professional ability and length of time working at the hospital (3 classes) and performance upon reaching the CSSD (5 classes) was analyzed using Kendall’s tau-b (τb) correlation coefficient. 
The criterion for determining the significance adopted was the level of 5%. Statistical analysis was performed using SAS 6.11 software (SAS Institute, Inc., Cary, NC, USA). ## 3. Results Seventy-two nursing professionals were consecutively evaluated as follows: 13 nurses; 56 nursing technicians; and 3 nursing assistants. Regarding age, the age group between 51 and 60 years predominated (34.7% of the sample). Among the participants, 54 (75%) were female and 35 (48.6%) declared themselves married. In this sample, 42 participants (58.3%) had secondary education and most had a statutory relationship with hospital institutions. The general characteristics of the participants are shown in Table1.Table 1 Participants’ general characteristics (n = 72). VariableNumber (%)Age group20–30 years4 (5.6%)31–40 years12 (16.7%)41–50 years23 (31.9%)51–60 years25 (34.7%)>60 years8 (11.1%)SexMale18 (25%)Female54 (75%)Marital statusSingle26 (36.1%)Married35 (48.6%)Stable union9 (12.5%)Divorced1 (1.4%)Widower1 (1.4%)EducationHigher level30 (41.7%)High school42 (58.3%)Institutional bondStatutory37 (51.4%)Consolidation of labor laws17 (23.6%)Service provider18 (25%)Results are expressed as number (%).As for the length of service provided to the institutions, it is observed that almost half of the professionals evaluated (47.2%) had between 11 and 20 years of work in the hospital. According to the professional profile of the interviewees, it was observed that 43 of them (59.7%) had never worked in a CSSD. Regarding the relocation of nursing professionals in the CSSD, 22 (30.6%) responded that they came to the sector due to functional readaptation. Data regarding the professional profile of the participants are shown in Table2. The most prevalent comorbidity in the present study was chronic rhinosinusitis, observed in more than half of the sample, although it is interesting to note the high frequency of participants with work-related musculoskeletal disorders (WMSD) and repetitive strain injuries (RSI), such as shown in Figure 1.Table 2 Professional profile of the participants (n = 72). VariableNumber (%)How long do you work in the hospital?1–10 years22 (30.6%)11–20 years34 (47.2%)21–30 years0 (0%)>31 years36 (22.2%)Have you ever worked on a CSSD before?Yes29 (40.3%)No43 (59.7%)Did you come to CSSD because of being re-adapted?Yes22 (30.6%)No50 (69.4%)CSSD: central sterile supply department.Figure 1 Decreasing distribution of comorbidities in the evaluated sample. WMSD: work-related musculoskeletal disorders. RSI: repetitive strain injuries.Regarding the structure of the workplace, 43 (59.7%) participants strongly disagreed that the CSSD is a place where light activities that require little physical effort are performed. They also disagreed [n = 30 (41.7%)] that the CSSD provides an adequate framework for promoting professional health. Regarding the tasks performed in the CSSD, 40 (55.6%) agreed that they can be considered harmful to the health of the functionally-readapted professional. Regarding skills, 23 (31.9%) participants responded that it was likely to prepare a surgical tray without the use of conference folders. Fifty-four (75%) of the respondents fully agreed that the correct preparation of surgical trays contributes to patient safety during surgery. Thirty-six (50%) participants agreed that CSSD presents a high risk of contamination compared to other hospital sectors. 
Thirty-four (47.2%) participants responded that it is likely that the use of a sustainable product for the identification of surgical clamps improves their identification. The distribution of participants’ opinions regarding health risk, professional skills, and work environment in the CSSD is shown in Table 3.Table 3 Distribution of participants’ opinions regarding health risk, professional skills, and work environment in the central sterile supply department. VariableNumber (%)Some professionals consider CSSD activities to be easy to perform and require little effortStrongly disagree0 (0%)Somewhat disagree2 2.8%)Neutral3 (4.2%)Somewhat agree24 (33.3%)Strongly agree43 (59.7%)Some tasks performed in the CSSD can be considered harmful to the health of the functionally-readapted professionalStrongly disagree24 (33.3%)Somewhat disagree40 (55.6%)Neutral3 (4.2%)Somewhat agree5 (6.9%)Strongly agree0 (0%)The CSSD offers an environment with an adequate structure to promote the health of the professionalStrongly disagree1 (1.4%)Somewhat disagree14 (19.4%)Neutral11 (15.3%)Somewhat agree30 (41.7%)Strongly agree16 (22.2%)How do you rate performance in CSSD?Extremely difficult1 (1.4%)Difficult12 (16.7%)Moderate41 (56.9%)Easy14 (19.4%)Very easy4 (5.6%)How do you evaluate the information contained in the conference folders for preparing the surgical trays?Extremely difficult2 (2.8%)Difficult7 (9.7%)Moderate41 (56.9%)Easy17 (23.6%)Very easy5 (7%)Do your skills allow you to prepare a surgical tray without the use of conference folders?Very unlikely7 (9.7%)Unlikely20 (27.8%)Neutral4 (5.6%)Likely23 (31.9%)Very likely18 (25%)How do you classify the identification of surgical tweezers in the way they are numerically recorded?Extremely difficult2 (2.8%)Difficult7 (9.7%)Moderate33 (45.8%)Easy26 (36.1%)Very easy4 (5.6%)How do you classify the identification of surgical tweezers by visual recognition?Extremely difficult0 (0%)Difficult9 (12.5%)Moderate31 (43.1%)Easy27 (37.5%)Very easy5 (6.9%)Correct preparation of surgical trays contributes to patient safetyStrongly disagree0 (0%)Somewhat disagree0 (0%)Neutral1 (1.4%)Somewhat agree17 (23.6%)Strongly agree54 (75%)In relation to other hospital sectors, the CSSD is a place with a high risk of contaminationStrongly disagree1 (1.4%)Somewhat disagree6 (8.3%)Neutral2 (2.8%)Somewhat agree36 (50%)Strongly agree27 (37.5%)Actions aimed at preserving the environment must be implemented at the CSSDStrongly disagree0 (0%)Somewhat disagree1 (1.4%)Neutral10 (13.9%)Somewhat agree38 (52.8%)Strongly agree23 (31.9%)A sustainable product for identifying surgical tweezers improves their identificationVery unlikely2 (2.8%)Unlikely0 (0%)Neutral12 (16.7%)Likely34 (47.2%)Very likely24 (33.3%)CSSD: central sterile supply department.Additionally, we evaluate some associations. Table4 shows a significant association between prior work in CSSD and the ability to identify surgical clamps by visual recognition using Fisher’s exact test (p=0.031). Observing the distribution of responses, the group of participants with previous work in a CSSD showed a greater propensity to classify “very easy” in the task of identifying tweezers by visual recognition (17.2%) than the group without previous work in a CSSD (0%). 
Table 5 shows a significant association between the time the participant had previously worked in the hospital and the ability to use the information contained in the conference folders to prepare the surgical trays using Kendall’s τb (τb = −0.34, p=0.001).Table 4 Association between previous work at a central sterile supply department and professional ability. VariablePrevious work in CSSDp-valueYesNoNumber (%)Number (%)How do you classify the identification of surgical tweezers by visual recognition?Difficult4 (13.8%)5 (11.6%)0.031Moderate12 (41.4%)19 (44.2%)Easy8 (27.6%)19 (44.2%)Very easy5 (17.2%)0 (0%)CSSD = central sterile supply departmentTable 5 Association between working time at the hospital and professional ability. VariableWorking time at the hospitalp-value1–10 years11–20 years>31 yearsNumber (%)Number (%)Number (%)How do you evaluate the information contained in the conference folders for preparing the surgical trays?Extremely difficult0 (0%)1 (2.9%)1 (6.3%)0.001Difficult0 (0%)5 (14.7%)2 (12.5%)Moderate9 (40.9%)22 (64.7%)10 (62.4%)Easy9 (40.9%)6 (17.7%2 (12.5%)Very easy4 (18.2%)0 (0%)1 (6.3%) ## 4. Discussion The main findings of the present study were that almost 80% of the nursing professionals working in a CSSD have up to 20 years of service in the hospital and almost 60% of them had never previously worked in this hospital sector. Importantly, almost a third of nursing professionals working in a CSSD came to the sector because of functional readaptation. There was a high prevalence of people with WMSD/RSI. Most of these professionals believe that the CSSD is a hospital place where activities that demand a lot of physical effort are performed, which are harmful to the health of the functionally-readapted professional. There was a relationship between previous work in a CSSD and the ability to identify surgical tweezers by visual recognition, and between the time the participant had previously worked in the hospital and the skill regarding the information contained in the conference folders for preparing the tray for surgical procedures.Among the sociodemographic characteristics analyzed in this study, there was a clear predominance of female nursing professionals. Historically, the art of caring is linked to the figure of the woman, which is reflected in the increase in the workforce in health units, not only in Brazil but also in other countries [12, 15]. By observing the profile of nursing professionals, it was possible to identify a clear predominance of nursing technicians, which suggests that these people have other professions that do not require university education. This highlights the need to encourage the construction of professional growth or even encourage professional growth [16]. As for professional experience, it was observed that almost 60% of the participants had never worked in a CSSD, which characterizes an incorrect conception that the CSSD is a sector for workers who do not have professional experience. According to Bugs et al., the CSSD is a unit that preferably ensures quality care—whether in the inpatient or surgical unit—with hospital instruments being essential elements in this process [17].Given the invisibility of the CSSD, which unfortunately many managers still believe is a place exclusively for functionally-readapted professionals, almost a third of our sample responded that they came to work in this sector for functional readaptation. In this context, it should be noted that this scenario is changing. 
It can be said that the movement of professionals to the CSSD and the growing technological advancement of instruments and products used in the sector—which requires professionals with technical knowledge and vast experience—are factors that could have positive repercussions in this scenario. Even though it is classified as indirect care, with routine and repetitive activities, the importance of the work of CSSD professionals is reflected in the delivery of a contamination-free product for hospital use [18]. A recent study showed that CSSD can successfully form closed-loop management of sterilization effectiveness, improve standardized management of sterile article storage in clinical departments, ensure sterile article safety, and reduce the cost of consumption caused due to loss of sterile packaging [19].The intense workload and frequent exposure to ergonomic risks are the predominant factors for the emergence or worsening of comorbidities, especially WMSDs and RSIs [20]. In the present study, these comorbidities had a prevalence rate of 40%. Nursing professionals are highly exposed to intense and strenuous work, being part of the group of workers who are at risk for occupational diseases and other health problems such as psychiatric disorders [21, 22]. One aspect that needs to be emphasized is that the intense workload and occupational diseases are conditions that contribute to the illness of nursing professionals worldwide. In line with the studies by Costa et al. [11] and Morais et al. [23] and, almost 60% of respondents in our study strongly disagreed that the CSSD is a sector where light activities that require little physical effort are performed. It is noteworthy that more than 40% of our sample disagreed that the CSSD offers an adequate structure to promote the health of professionals due to occupational and ergonomic risks, as they are exposed to chemical and physical agents despite the routine use of personal protective equipment.Regarding the tasks performed in the CSSD, more than half of the participants agreed that they can be considered harmful to the health of the functionally-readapted professional. This is interesting because, in general, these professionals are of older age, with their retirement approaching, and present health conditions that prevent them from fully exercising their activities. This is reinforced by the fact that the work assigned to nursing professionals requires physical effort, repetitive tasks, and uncomfortable positions [12]. It is noticeable that the role of health professionals in the CSSD goes beyond routine and repetitive work, requiring specific training for qualification in the development of specific skills to work in this sector [11]. This finding is in agreement with the study by Santos et al., which points out that the elimination of unnecessary or redundant instruments in surgical trays can save time, require less operational effort, and bring lower costs to the CSSD, without compromising the surgical procedure and the health of patients [24]. Therefore, simplifying surgical instruments can also significantly reduce sterilization time, decrease ergonomic risks, and lower instrument acquisition costs.The correct preparation of surgical trays can directly impact patient safety during surgery. In our study, 75% of the participants strongly agreed with this question, which demonstrates the perception of these health professionals regarding the importance of their work. 
Additionally, we observed a positive association between previous work in a CSSD and nursing professionals' self-declared ability to identify surgical tweezers by visual recognition. The World Health Organization cites some worrying situations in patient safety within the hospital environment, including hospital infections and other surgical complications [25]. The CSSD aims to offer patients microorganism-free material, through a process carried out in several steps. This is corroborated by the study by Sartelli et al., which shows the need to focus the efforts of all health professionals on the prevention and control of infections [26]. It is worth emphasizing the inverse association between the time the participant had previously worked at the hospital and the skill they attribute to the information contained in the conference folders for the preparation of surgical trays. A possible explanation for this paradox is that people with less time working in the hospital are younger and more adapted to new technologies, which allows for greater ease in preparing surgical trays.

Finally, it is important to highlight the need to strengthen actions aimed at preserving the environment, as the CSSD is a hospital sector that needs to implement such actions. Interestingly, more than half of the participants (52.8%) agreed with these actions, while 47.2% responded that the use of a sustainable product for the identification of surgical tweezers is likely. The use of sustainable technological products can be a tool for improving the work process and the quality of care. It should also be noted that, with the necessary knowledge and the use of technologies with well-planned actions, it is possible to prevent harm to patients, improving the quality of care provided in health services. However, adherence to these improvements is a key factor in better patient care [27, 28].

A strength of this study is that it is the first to address in detail the opinions of nursing professionals who work in a CSSD, encompassing professionals from different hospitals and with varied experience (including the handling of surgical trays). However, some limitations must be pointed out. First, the number of participants was relatively small, although we included professionals from both the day and night shifts and from different age groups. Second, our measurement was based on the application of a questionnaire; therefore, the use of objective measures is important to confirm our results. Third, we only evaluated the opinion of nursing professionals working in public institutions. Despite these limitations, our results can serve as a starting point for future research with greater numbers of nursing professionals working in CSSDs, using objective measures to assess safety and the work environment. We also encourage interventional studies aimed at promoting health and facilitating the tasks of these professionals, such as the preparation of surgical trays.

## 5. Conclusions

Our study shows that almost a third of the nursing professionals working in a CSSD are functionally-readapted people, with a high prevalence of WMSD/RSI. Most of them believe that the CSSD is a sector where activities that demand a lot of physical effort are carried out, and that these activities are harmful to the health of the functionally-readapted professional. Almost half of these professionals disagree that the CSSD offers an adequate structure for the promotion of workers' health, given its occupational and ergonomic risks.
Almost all participants agree that the correct preparation of surgical trays can directly impact patient safety. Furthermore, there is a relationship between previous work in the CSSD and the ability to identify surgical tweezers by visual recognition, and between the time that the nursing professional previously worked at the hospital and the skills regarding the information contained in the conference folders for the preparation of surgical trays. The usual activities of the CSSD are considered exhausting and carry occupational hazards, which makes the sector unsuitable for the functionally-readapted professional. In this sense, the ideal would be for the CSSD to have professionals with the necessary skills, free of serious comorbidities, and with knowledge of new technologies; ensuring these conditions is a fundamental intervention to be implemented by managers. Although CSSD professionals tend to acquire skills with length of service, they need more favorable conditions for the checking of tweezers during the preparation of surgical trays. Managers' commitment to an internal health policy aimed at workers is necessary for the promotion of health and for the implementation of in-service training. In this sense, society's role in demanding the implementation of these measures is of great importance for valuing the nursing professionals who work in the CSSD, so that the paradigm of invisibility is broken, especially for those who are functionally-readapted. --- *Source: 1023728-2022-04-13.xml*
2022
# Performance Prediction of Solar Adsorption Refrigeration System by ANN

**Authors:** V. Baiju; C. Muraleedharan
**Journal:** ISRN Thermodynamics (2012)
**Publisher:** International Scholarly Research Network
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.5402/2012/102376

---

## Abstract

This paper proposes a new approach for the performance analysis of a single-stage solar adsorption refrigeration system with activated carbon-R134a as the working pair. The use of an artificial neural network (ANN) is proposed to determine the performance parameters of the system, namely, the coefficient of performance, specific cooling power, adsorbent bed (thermal compressor) discharge temperature, and solar cooling coefficient of performance. The ANN used in the performance prediction was built in the MATLAB (version 7.8) environment using the Neural Network Toolbox. In this study the temperature, pressure, and solar insolation are used in the input layer. The backpropagation algorithm with three different variants, namely, scaled conjugate gradient, Polak-Ribière conjugate gradient, and Levenberg-Marquardt (LM), together with the logistic sigmoid transfer function, was used so that the best approach could be found. After training, it was found that the LM algorithm with 9 neurons is most suitable for modeling the solar adsorption refrigeration system. The ANN predictions of the performance parameters agree well with the experimental values, with R² values close to 1 and a maximum percentage error of less than 5%. The RMS and covariance values are also found to be within acceptable limits.

---

## Body

## 1. Introduction

Conventional refrigeration systems require mechanical energy as the driving source and are responsible for the emission of CO2 and other greenhouse gases such as CFCs and HFCs, which are considered a major cause of ozone layer depletion. In this context, adsorption refrigeration systems attracted considerable attention in the 1970s owing to the energy crisis and the ecological problems related to the use of CFCs and HFCs. Research has shown that adsorption refrigeration technology has promising potential to compete with conventional vapour compression refrigeration systems. In comparison with vapour compression refrigeration systems, adsorption refrigeration systems offer energy savings when powered by waste heat or solar energy, as well as simpler control, absence of vibration, and low operating cost.

The major attraction of solar adsorption refrigeration is that its working fluids satisfy the Montreal protocol on ozone layer depletion and the Kyoto protocol on global warming [1]. The consumption of low grade energy by adsorption units does not pose any problems of greenhouse gas emission. Furthermore, a solar-powered refrigerator is simple and adaptable to small, medium, or large systems [2]. Owing to the use of ozone-friendly refrigerants and the ability to utilize renewable energy sources, adsorption systems can be preferred as an alternative to conventional refrigeration systems [3, 4]. The low COP and SCP values compared with conventional refrigeration systems are the main barriers to the commercialization of adsorption refrigeration systems [5, 6]. For the improvement of the system, a detailed computational and thermodynamic analysis must be carried out.

The thermodynamic analyses of adsorption systems are complex because of the complex differential equations involved.
Instead of solving complex differential equations, faster and simpler solutions can be obtained from a limited number of experimental data by using an artificial neural network. ANNs are able to learn the key information patterns within a multidimensional information domain. The use of ANNs for the performance prediction and simulation of complex systems has become increasingly popular in recent years.

The application of an artificial neural network for the exergy performance prediction of a solar adsorption refrigeration system working at different conditions is nowadays desirable for making the analysis simple. Many earlier studies have reported the application of artificial neural networks for the performance prediction of vapour compression refrigeration systems, such as for a direct expansion heat pump [7], for modeling solar cooling systems [8], and for modeling cascade refrigeration systems [9], with acceptable accuracy. Recently, some works on the use of ANNs in energy systems have been published [10–17].

From the brief literature review cited, it can be observed that many investigators have used artificial neural networks for the performance prediction of various thermal energy systems, but little work has been reported on the applicability of artificial neural networks to the performance prediction of solar adsorption refrigeration systems. Hence, in the present work, the ANN approach is used for investigating the performance of a solar adsorption refrigeration system. Utilising the data obtained from the experimental system, an ANN model for the system is developed. With the use of this model, various performance parameters of the system, namely, the coefficient of performance, specific cooling power, thermal compressor discharge temperature, and solar cooling coefficient of performance, are predicted and compared with the actual values.

## 2. Description of the System and Experimental Data

### 2.1. Experimental Setup

The solar-assisted adsorption refrigeration system consists of a parabolic solar concentrator, water tank, adsorbent bed, condenser, expansion device (capillary tube), and evaporator, as shown in Figure 1; its photograph is shown in Figure 2. The specifications of the components used in the system are given in Table 1. The experimental setup is located in the Solar Energy Laboratory at the National Institute of Technology Calicut, Kerala, India. The solar adsorption refrigeration system was tested under the meteorological conditions of Calicut (latitude 11.15°N, longitude 75.49°E) during April 2011.

Table 1 The specifications of the main components of the solar adsorption refrigeration system.

| Component | Technical specification |
|---|---|
| Condenser | Capacity: 200 W; water cooled |
| Evaporator | Material: copper; capacity: 150 W |
| Expansion device | Capillary tube |
| Adsorbent bed | Material: stainless steel |
| Parabolic solar concentrator | Area: 3 m²; made of stainless steel |
| Adsorbent | Activated carbon; granular; particle size: 0.25 mm; mass: 1.5 kg |
| Adsorbate | R134a |

Figure 1 Schematic of solar adsorption refrigeration system.

Figure 2 Photograph of the experimental setup.

### 2.2. Experimentation

Water is heated from morning onwards while flowing through the solar concentrator by natural circulation. When the hot water is circulated around the adsorbent bed, the temperature in the adsorbent bed increases. This causes the vapour pressure of the adsorbed refrigerant to rise to the condensing pressure. The desorbed vapour is liquefied in the condenser.
The high pressure liquid refrigerant is expanded through the expansion device to the evaporator pressure. The low pressure liquid refrigerant then enters the evaporator, where it evaporates by absorbing the latent heat of evaporation. In the evening, the hot water in the tank is drained off and replaced with cold water. The temperature of the adsorbent bed then reduces rapidly and the pressure in the adsorber drops below the evaporator pressure. The experiments are carried out keeping the evaporator temperature constant. The same procedure is repeated for the different evaporator loads.

### 2.3. Measurements

A digital pyranometer with an accuracy of ±5 W/m² is placed near the solar collector to measure the instantaneous solar insolation. Pressure is measured during heating (desorption) of the refrigerant, that is, the condensing pressure, and during cooling (adsorption), that is, the evaporator pressure. The pressure gauges are fixed at the adsorbent bed in order to measure the pressure inside the adsorbent bed at each stage of the adsorption and desorption processes. The temperature at various points in the solar adsorption refrigeration system is measured by calibrated T-type (copper-constantan) thermocouples. The temperatures observed are (1) the temperature of the adsorbent bed during the various processes, (2) the temperature of the refrigerant at the inlet and outlet of the condenser, at the expansion device exit, and at the evaporator outlet, (3) the temperature of water entering the water tank, and (4) the temperature of chilled water in the evaporator box.

## 3. Performance Parameters

The main performance parameters used in the present study are the cycle coefficient of performance, specific cooling power, and solar cooling coefficient of performance [18].

### 3.1. Cycle COP

Cycle COP is defined as the ratio of the cooling effect to the total energy required for the desired cooling effect:

$$\mathrm{COP} = \frac{\text{cooling effect}}{\text{total energy input}} = \frac{Q_e}{Q_T}. \tag{1}$$

The total energy input to the system is given by

$$Q_T = Q_{\text{isosteric heating}} + Q_{\text{desorption}}. \tag{2}$$

The total heat supplied to the system is equal to the enthalpy change of the solar-heated water:

$$Q_T = \dot{m} c_p \left( T_{fi} - T_{fo} \right). \tag{3}$$

The cooling effect is

$$Q_e = m_w c_{p,w} \,\Delta T_w. \tag{4}$$

### 3.2. Specific Cooling Power (SCP)

Specific cooling power indicates the size of the system, as it measures the cooling output per unit mass of adsorbent per unit time. Higher SCP values indicate a more compact system:

$$\mathrm{SCP} = \frac{\text{cooling effect}}{\text{cycle time per unit of adsorbent mass}} = \frac{Q_e}{m_a \times \tau_{\text{cycle}}}. \tag{5}$$

### 3.3. Solar COP

Since the system is solar-powered, a solar coefficient of performance is also defined, as the ratio of the cooling effect to the net solar energy input:

$$\mathrm{Solar\ COP} = \frac{Q_e}{Q_s}. \tag{6}$$
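To make the definitions concrete, the following minimal Python sketch evaluates Eqs. (1)-(6) for a set of measured quantities. The function and variable names are illustrative assumptions, not identifiers from the paper.

```python
# Performance parameters of the solar adsorption refrigeration system,
# following Eqs. (1)-(6). All quantities are in consistent SI units.

def heat_input(m_dot, cp, t_fi, t_fo):
    """Eq. (3): Q_T from the enthalpy change of the solar-heated water."""
    return m_dot * cp * (t_fi - t_fo)

def cooling_effect(m_w, cp_w, dt_w):
    """Eq. (4): Q_e = m_w * cp_w * dT_w for the chilled water."""
    return m_w * cp_w * dt_w

def cycle_cop(q_e, q_t):
    """Eq. (1): ratio of cooling effect to total energy input."""
    return q_e / q_t

def specific_cooling_power(q_e, m_a, tau_cycle):
    """Eq. (5): cooling output per unit adsorbent mass per unit time (W/kg)."""
    return q_e / (m_a * tau_cycle)

def solar_cop(q_e, q_s):
    """Eq. (6): ratio of cooling effect to net solar energy input."""
    return q_e / q_s
```

As a rough consistency check, dividing the average refrigerating effect of 64.4 W reported later in Table 4 by the 1.5 kg of adsorbent gives about 43 W/kg, in line with the reported average SCP of 42.5 W/kg.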
## 4. Uncertainty Analysis

Uncertainties in the experiments can arise from the selection, condition, and calibration of the instruments, the environment, observation, and test planning. A more precise method of estimating uncertainty has been presented by Holman [19]. The method is based on a careful specification of the uncertainties in the various primary experimental measurements, which are then propagated into the desired results of the experiments. In the present study, the pressure, temperatures, and solar insolation were measured using the instruments described above. The total uncertainties of the various calculated parameters are shown in Table 2, and the calculation procedure is given in the appendices.

Table 2 Uncertainty in different parameters.

| Description | Total uncertainty (%) |
|---|---|
| Cycle COP | ±6.04% |
| Heat input to the system | ±1.09% |
| Solar cooling COP | ±3.12% |
| Specific cooling power | ±2.91% |

The total uncertainty arising from the independent variables is given by

$$w_R = \left[ \left( \frac{\partial R}{\partial x_1} w_1 \right)^2 + \left( \frac{\partial R}{\partial x_2} w_2 \right)^2 + \cdots + \left( \frac{\partial R}{\partial x_n} w_n \right)^2 \right]^{1/2}, \tag{7}$$

where the result $R$ is a given function of the independent variables $x_1, x_2, \ldots, x_n$, $w_R$ is the uncertainty in the result, and $w_1, w_2, \ldots, w_n$ are the uncertainties in the independent variables.

## 5. Neural Network Design

Artificial intelligence (AI) systems are widely used as a technology offering an alternative way to tackle complex and ill-defined problems. They can learn from examples, are able to handle noisy and incomplete data, can deal with nonlinear problems, and, once trained, can perform prediction and generalization at high speed [12]. An artificial neural network resembles the human brain in two respects: knowledge is acquired by the network through a learning process, and the neuron connection strengths, known as synaptic weights, are used to store the knowledge. An artificial neural network is an interconnected assembly of simple processing elements, units, or nodes, whose functionality is loosely based on the animal neuron. The fundamental processing element of a neural network is a neuron. Basically, a biological neuron receives input information from other sources, combines it in some way, performs a generally nonlinear operation, and outputs the final result. The network usually consists of an input layer, one or more hidden layers, and an output layer. An artificial neuron is shown in Figure 3.

Figure 3 Artificial neuron.

An important stage of a neural network is the training step, in which an input is introduced to the network together with the desired output, and the weights are adjusted so that the network attempts to produce the desired output. There are different learning algorithms. A popular one is the standard backpropagation algorithm, which has different variants; it is very difficult to know in advance which variant will be fastest for a given problem. An artificial neural network with the backpropagation algorithm learns by changing the weights, and these changes are stored as knowledge. Several statistical measures, namely, the root mean square error (RMS), the correlation coefficient ($R^2$), the covariance (COV), and the mean absolute percentage error (MAPE), are used for comparison [14]:

$$\mathrm{RMS} = \left( \frac{1}{n} \sum_{j=1}^{n} \left( t_j - o_j \right)^2 \right)^{1/2}, \quad R^2 = 1 - \frac{\sum_j \left( t_j - o_j \right)^2}{\sum_j o_j^2}, \quad \mathrm{COV} = \frac{\mathrm{RMS}}{\sum_j o_j} \times 100, \quad \mathrm{MAPE} = \left| \frac{o - t}{o} \right| \times 100, \tag{8}$$

where $t_j$ is the target (experimental) value and $o_j$ is the network output.
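The error statistics of Eq. (8) are straightforward to compute; the sketch below does so with NumPy. The COV denominator is garbled in the source text, so the sum of the network outputs is used here as a stated assumption, and MAPE is averaged over the test points.

```python
import numpy as np

def error_metrics(t, o):
    """RMS, R^2, COV, and MAPE of Eq. (8); t = targets, o = network outputs."""
    t = np.asarray(t, dtype=float)
    o = np.asarray(o, dtype=float)
    rms = np.sqrt(np.mean((t - o) ** 2))
    r2 = 1.0 - np.sum((t - o) ** 2) / np.sum(o ** 2)
    cov = rms / np.sum(o) * 100.0                  # assumed reading of Eq. (8)
    mape = np.mean(np.abs((o - t) / o)) * 100.0    # averaged absolute % error
    return {"RMS": rms, "R2": r2, "COV": cov, "MAPE": mape}
```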
## 6. Modelling of Solar Adsorption Refrigeration System by Artificial Neural Network

The performance parameters, namely, the cycle COP, SCP, discharge temperature, and solar COP, are predicted using an artificial neural network. The architecture of the ANN used for the performance prediction of the SAR system, indicating inputs and outputs, is shown in Figure 4.

Figure 4 Neural network for the performance prediction of solar adsorption refrigeration system.

In this study, the pressure, temperature, and solar intensity are used as input parameters, whereas the cycle coefficient of performance, specific cooling power, discharge temperature, and solar cooling coefficient of performance are predicted in the output layer. The backpropagation algorithm is used in a feed-forward single-hidden-layer network. The variants of the algorithm used in the study are scaled conjugate gradient (SCG), Polak-Ribière conjugate gradient (CGP), and Levenberg-Marquardt (LM). The inputs and outputs are normalized in the range 0-1. The logistic sigmoid (logsig) transfer function is used:

$$f(Z) = \frac{1}{1 + e^{-Z}}, \tag{9}$$

where $Z$ is the weighted sum of the inputs.

The artificial neural network used in SAR modeling was built in the MATLAB (version 7.8) environment using the Neural Network Toolbox. In training, the number of neurons in the hidden layer was increased from 5 to 10 to define the output accurately. The output of the network is compared with the desired output at each presentation, and the errors are computed. These errors are backpropagated to the neural network to adjust the weights such that the errors decrease with each iteration and the ANN model approximates the desired output.

The available data obtained from the experimental observations are divided into training and testing sets. The data set consists of 90 input values; of these, 80 data sets are used for training and the remainder for testing the network. At each step, the performance of the network is studied using different statistical performance parameters such as the R², RMS, and COV values. On this basis, a network consisting of a single hidden layer with 9 neurons trained with the L-M variant was found to be the optimum network for this system. The performance parameters of the networks with the logsig transfer function and the different variants are shown in Table 3.

Table 3 Statistical values of the different networks evaluated.

| Algorithm | R² | RMS | COV |
|---|---|---|---|
| LM-8 | 0.9985 | 0.0204 | 0.308 |
| LM-9 | 0.99987 | 0.01035 | 0.158 |
| LM-10 | 0.9878 | 0.01078 | 0.159 |
| SCG-8 | 0.9874 | 0.0224 | 0.876 |
| SCG-9 | 0.9544 | 0.0245 | 0.190 |
| SCG-10 | 0.9837 | 0.0359 | 0.204 |
| CGP-8 | 0.9874 | 0.0245 | 0.198 |
| CGP-9 | 0.9751 | 0.0213 | 0.163 |
| CGP-10 | 0.9832 | 0.035 | 0.83 |
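The network just described is small enough to sketch directly. The snippet below implements a 3-input, single-hidden-layer, 4-output feed-forward network with the logistic sigmoid of Eq. (9). For brevity it is trained with plain gradient-descent backpropagation rather than the Levenberg-Marquardt variant used in the paper, and biases are omitted, so it is an illustrative simplification rather than a reproduction of the MATLAB model.

```python
import numpy as np

def sigmoid(z):
    """Eq. (9): logistic sigmoid transfer function."""
    return 1.0 / (1.0 + np.exp(-z))

def train(X, Y, hidden=9, lr=0.5, epochs=20000, seed=0):
    """X: (n, 3) normalized inputs; Y: (n, 4) normalized targets."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))   # input -> hidden
    W2 = rng.normal(scale=0.5, size=(hidden, Y.shape[1]))   # hidden -> output
    for _ in range(epochs):
        H = sigmoid(X @ W1)                  # hidden-layer activations
        O = sigmoid(H @ W2)                  # network outputs
        dO = (O - Y) * O * (1.0 - O)         # output-layer delta
        dH = (dO @ W2.T) * H * (1.0 - H)     # backpropagated hidden delta
        W2 -= lr * H.T @ dO                  # gradient-descent weight updates
        W1 -= lr * X.T @ dH
    return W1, W2

def predict(X, W1, W2):
    return sigmoid(sigmoid(X @ W1) @ W2)
```

As in the paper, one would split the 90 experimental records into 80 training and 10 testing sets and select the hidden-layer size by comparing the statistics of Eq. (8) on the test set.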
## 7. Results and Discussion

Average values of the performance parameters of the system obtained from the different experiments are shown in Table 4.

Table 4 Average values of performance parameters.

| Parameter | Value |
|---|---|
| Refrigerating effect (W) | 64.4 |
| Cycle COP | 0.334 |
| Solar COP | 0.0685 |
| SCP (W/kg) | 42.5 |

From Table 4, it is clear that the average value of the solar COP is much lower than the cycle coefficient of performance. The solar COP is defined by considering the performance of the solar concentrator, whereas the cycle COP is calculated using the total heat content of the solar-heated water in the water tank. The maximum energy absorbed by the water in the absorber of a solar concentrator is only 20–35% during peak sunshine; the remaining heat is lost owing to the high irreversibility associated with the heat transfer at the solar concentrator surfaces. This leads to the very low solar coefficient of performance compared with the cycle COP of the solar adsorption refrigeration system.

The experimental and ANN predicted results, with statistical values such as the RMS, COV, R², and MAPE, are shown in Table 5.

Table 5 Comparison between the experimental and ANN predicted results.

| Cycle COP (exp.) | SCP (exp.) | Solar COP (exp.) | DT (exp.) | Cycle COP (ANN) | SCP (ANN) | Solar COP (ANN) | DT (ANN) | Cycle COP (% error) | SCP (% error) | Solar COP (% error) | DT (% error) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.248 | 37 | 0.045 | 58 | 0.246 | 37.5 | 0.0448 | 58.4 | 0.80 | −1.35 | 0.44 | −0.69 |
| 0.275 | 42 | 0.055 | 60.4 | 0.271 | 42.8 | 0.0558 | 60.8 | 1.451 | −1.90 | −0.148 | −0.65 |
| 0.325 | 45 | 0.062 | 61.82 | 0.328 | 45.8 | 0.061 | 61 | −0.92 | −1.77 | 1.617 | 1.42 |
| 0.35 | 49 | 0.072 | 63.25 | 0.354 | 51 | 0.073 | 64 | −1.15 | −4.08 | −1.388 | −1.175 |
| 0.378 | 52 | 0.082 | 64.3 | 0.375 | 51.8 | 0.0826 | 64.5 | 0.798 | 0.384 | −0.72 | −0.305 |
| 0.425 | 58 | 0.084 | 65.86 | 0.42 | 58.2 | 0.0849 | 65.8 | 1.176 | −0.344 | −1.07 | 0.091 |

R² (cycle COP, SCP, solar COP, DT): 0.99872, 0.99503, 0.99905, 0.9814. RMS: 0.003642, 0.9678, 0.0008, 0.5167. COV: 0.00182, 0.003417, 0.002, 0.00148.

The maximum percentage error is 1.451%, 4.08%, 1.17%, and 1.617% for the cycle COP, SCP, discharge temperature (DT), and solar COP, respectively. The results also show that the R² values are very close to 1 for all the data and the RMS values are very small. It is clear that the neural model gives a very accurate representation of the data over the full range of operating conditions, indicating the good accuracy of the neural network in representing the performance of the solar adsorption refrigeration system. As seen from the results, the performance parameters are calculated within the acceptable uncertainties.

The experimental and ANN predicted values of the cycle COP, specific cooling power, discharge temperature, and solar cooling coefficient of performance are shown in Figures 5–8.

Figure 5 Comparison of actual and ANN predicted COP values.

Figure 6 Comparison of experimental and ANN predicted values of specific cooling power.

Figure 7 Comparison of ANN predicted and experimental values of solar cooling COP.

Figure 8 Comparison of ANN predicted and experimentally measured compressor discharge temperature.

Figure 5 shows the ANN predicted results and the experimentally calculated values of the cycle COP for different evaporator loads. In this case the ANN predictions yield a correlation coefficient of 0.99872, and the RMS and COV values are found to be 0.00364 and 0.00182, respectively. The coefficient of performance is an important parameter in the rating of a SAR system; it is calculated from the evaporator capacity and the heat input to the adsorbent bed from the solar-heated water.

A plot of the experimental and ANN predicted values of the specific cooling power against the chilled water temperature is shown in Figure 6. These predictions yield a correlation coefficient of 0.99503, with RMS and COV values of 0.9678 and 0.003417, respectively. The comparison shows that the ANN predicted values are close to the experimental results. Specific cooling power is also an important parameter, as it determines the size of the system.

Figure 7 shows the experimentally measured and ANN predicted values of the solar coefficient of performance for different evaporator loads. The ANN predictions of the solar COP yield a correlation coefficient of 0.99905, with RMS and covariance values of 0.0008 and 0.002, respectively. The results confirm that the ANN accurately predicts the solar cooling COP for different chilled water temperatures.

The ANN predicted and experimentally measured values of the thermal compressor discharge temperature are depicted in Figure 8.
From the figure it is clear that the ANN predictions are in good agreement with the experimental discharge temperatures. The comparison gives a correlation coefficient of 0.9814, and the RMS and COV values are found to be 0.5167 and 0.00148, respectively.

## 8. Conclusions

The artificial neural network approach has been applied to the solar adsorption refrigeration system as an alternative to classical approaches, which are usually complicated, require an enormous amount of experimental data, and may yield inaccurate results. In order to gather the data for training and testing the proposed ANN, an experimental system was set up and tested at different evaporator loads. Using three input parameters, namely, temperature, pressure, and solar insolation, an ANN model based on the backpropagation algorithm was proposed. The performance of the ANN predictions was measured using three criteria: root mean square error, correlation coefficient, and coefficient of variation. The network model demonstrated good results, with correlation coefficients in the range of 0.98146–0.99905 and percentage errors of 0.344%–4.08%. This study reveals that, with the use of a neural network, solar adsorption refrigeration systems can be modeled with a high degree of accuracy. --- *Source: 102376-2012-03-07.xml*
2012
# Automatic Freeway Incident Detection for Free Flow Conditions: A Vehicle Reidentification Based Approach Using Image Data from Sparsely Distributed Video Cameras

**Authors:** Jiankai Wang; Agachai Sumalee
**Journal:** Mathematical Problems in Engineering (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/102380

---

## Abstract

This paper proposes a vehicle reidentification (VRI) based automatic incident detection (AID) algorithm for freeway systems under free flow conditions. An enhanced vehicle feature matching technique is adopted in the VRI component of the proposed system. In this study, the arrival time interval, which is estimated from a historical database, is introduced into the VRI component to improve the matching accuracy and reduce the incident detection time. Also, a screening method based on the ratios of the matching probabilities is introduced into the VRI component to further reduce the false alarm rate. The proposed AID algorithm is tested on a 3.6 km segment of a closed freeway system in Bangkok, Thailand. The results show that, in terms of incident detection time, the proposed AID algorithm outperforms the traditional vehicle count approach.

---

## Body

## 1. Introduction and Literature Review

Traffic incidents have been widely recognized as a serious problem because of their negative effects on traffic congestion and safety [1]. Under heavy traffic conditions, one minor incident can result in gridlock and hence serious traffic congestion. In addition, traffic injuries are likely to be more severe if incidents occur at higher speeds (e.g., under free flow conditions). Statistics also suggest a high chance of a more severe secondary accident following an initial incident on a freeway [2, 3]. The ability to detect incidents in a timely and accurate manner would allow the traffic manager to efficiently remove the incident, to notify the following traffic of the incident, and to better manage the traffic so as to minimize the negative impact caused by the incident. Therefore, considerable research effort has been dedicated to the development of automatic incident detection (AID) algorithms utilizing traditional detectors (i.e., inductive loops) over the past few decades (e.g., the California algorithm series [4] and the McMaster algorithm [5]). The underlying assumption of these algorithms is that the aggregated traffic parameters (e.g., travel time and traffic flow) change dramatically when incidents occur under congested conditions. By comparing real-time traffic data with incident-free data, one can determine the likelihood that an incident has happened. Based on this principle, various advanced data mining approaches (e.g., neural networks [6], Bayesian networks [7, 8], and principal component analysis [9]) have been adopted for detecting abnormal traffic delay or abrupt changes in the traffic flow pattern. However, most existing incident detection algorithms are specifically designed for congested traffic conditions and may not be applicable to free flow situations.

Under free flow conditions, a drop in traffic capacity due to a traffic incident (e.g., one lane blocking) may not cause any traffic delay or substantial change in the traffic flow pattern. Therefore, it is no longer feasible to detect the incident by analyzing aggregated traffic parameters.
To handle the aforementioned challenge, research attention has shifted away from the data mining approach based on macroscopic traffic data towards continuous tracking of individual vehicles using microscopic vehicle data (e.g., vehicle trajectories and individual vehicle features). The rationale behind this idea is straightforward: for a closed freeway system, if one can track all the vehicles along the designated points, the disappearance of any vehicle movement (or a nonmoving vehicle) between consecutive points can be classified as a potential incident. Based on this principle, various emerging technologies, such as GPS technology [10] and cellular phones [11], have been utilized for collecting the Lagrangian measurements (i.e., individual vehicle trajectories), which can eventually be used for identifying the "incident" vehicle. Despite their theoretical simplicity, the success of these approaches relies heavily on a high level of market penetration of the in-vehicle equipment (in principle, a 100% penetration rate is required for devices such as GPS units) and on the drivers' willingness to provide location information. To compensate for this, Shehata et al. [12] conducted a study to detect the nonmoving vehicle (i.e., caused by an incident) from fixed video cameras (i.e., Eulerian measurements) using image processing techniques. Although this method appears theoretically sound, deploying such a system requires the installation of cameras at all key locations along the freeway, which may not be feasible given limited public resources. In practice, fixed traffic sensors (e.g., loop detectors and video cameras) cannot cover the entire freeway system and are generally sparsely distributed [13]. In this regard, Fambro and Ritch [14] conducted a pioneering study to trace and identify the "missing" vehicle through analyzing the vehicle count data obtained at two consecutive loop detectors, which is also referred to as the vehicle count approach. Given the vehicle speed upstream, the arrival time window downstream can be estimated; by comparing the vehicle counts in this arrival time window with the corresponding vehicle counts upstream, one may be able to identify the missing vehicle (if any) for incident detection. However, it is also noteworthy that the overlapping of arrival time windows of different vehicles leads to a significant increase in detection time (this is discussed in more detail in Section 3.1). To further reduce the incident detection time, much attention has been paid to tracking vehicles with emerging automatic vehicle identification (AVI) systems: automatic number plate recognition [15] or Bluetooth identification technology [16]. Although AVI technologies enable more efficient tracking of vehicles across consecutive points by accurately matching their unique identities (e.g., plate number or media access control address), privacy concerns may arise as a consequence of this matching of vehicle identity. In this case, the vehicle reidentification (VRI) scheme, which does not intrude on drivers' privacy, provides a tool for devising a more practical and effective incident detection algorithm under free flow condition. Generally, vehicle reidentification (VRI) is the process of matching nonunique vehicle signatures (e.g., waveform [17], vehicle length [18], and vehicle color [19, 20]) from one detector to the next in the traffic network.
On one hand, the nonuniqueness of the vehicle signature allows the VRI system to track a vehicle anonymously across two consecutive detectors [21] and, hence, to identify the "missing" vehicle due to an incident. On the other hand, this very nonuniqueness imposes a great challenge on the development of the vehicle signature matching method. To improve the matching accuracy, Coifman [22] proposed a vehicle platoon matching method in which the lengths of vehicle platoons are compared. To further account for the noise and uncertainty of the vehicle signature, Kwong et al. [23] introduced a statistical matching method in which the vehicle signature is treated as a random variable and a probabilistic measure is calculated for the matching decision. During the past few years, the authors also developed a VRI system [24, 25] utilizing emerging video image processing systems [26]. Various detailed vehicle features (e.g., vehicle color, length, and type) were extracted, and a probabilistic data fusion rule was introduced to combine these features into a matching probability (i.e., a posterior probability) for reidentification purposes. To account for the large variance in travel time under dynamic traffic conditions, the proposed VRI system also introduced a prior (fixed) time window constraint, which sets the upper and lower bounds of the vehicle travel time, to rule out unlikely candidate vehicles. However, it is noteworthy that the aforementioned VRI systems were specifically designed for the purpose of traffic data collection (e.g., travel time). To our knowledge, very few studies have explicitly investigated the feasibility of utilizing a VRI system for incident detection. Also, as existing VRI systems cannot guarantee an accurate matching due to the nonuniqueness of the vehicle signatures, mismatches between upstream and downstream vehicles may lead to false alarms when such systems are applied to incident detection. In short, current VRI systems are not readily transferable to the field of incident detection. To this end, this paper proposes a VRI-based automatic incident detection algorithm for free flow conditions. The revised VRI system is based on the authors' previous work [24] with several major changes to suit the purpose of incident detection, which give rise to the incident-detection-oriented VRI. (i) Note that in the work of [24] a unified, fixed time window constraint is imposed on all vehicles. Under free flow condition, however, vehicles maintain a relatively stable speed, which allows a flexible time window to be estimated for each individual vehicle. The incident-detection-oriented VRI therefore introduces a flexible time window to further improve the matching accuracy and reduce the incident detection time. (ii) Rather than finding the matching results between two sets of vehicles (i.e., the upstream and downstream sets), the incident-detection-oriented VRI attempts to make an instant matching decision for each individual vehicle so that the "missing" vehicle can be identified promptly.
In this study, the statistical model of the vehicle features is built up, and the matching probability for each pair of vehicles (i.e., the posterior probability of each potential match given the observed vehicle signatures) is explicitly calculated. (iii) Last but not least, a screening method (i.e., a thresholding process) based on the ratios of the matching probabilities is introduced to screen out mismatched vehicles and reduce the false alarm rate.

The rest of the paper is organized as follows. Section 2 describes the traffic dataset collected for algorithm development and evaluation; mathematical notation is also provided in this section. In Section 3, the overall framework of the proposed automatic incident detection system is introduced. The description and analysis of the incident-detection-oriented VRI system under free flow condition are presented in the following two sections (Sections 4 and 5). In Section 6, simulated tests and real-world case studies are carried out to evaluate the performance of the proposed AID system against the traditional vehicle count approach. Finally, we close the paper with conclusions and future work.

## 2. Dataset for Algorithm Development and Evaluation

The test site is a 3.6 km long section of a closed three-lane freeway in Bangkok, Thailand (i.e., the green section in Figure 1). At each station (i.e., locations 10B and 08B in Figure 1), a gantry-mounted video camera viewing in the upstream direction is installed, and two hours of video (from 10 a.m. to noon on March 15, 2011) were collected. The frame rate of the video record is 25 FPS and the still image size is 563 × 764. As detailed traffic data (especially individual vehicle data) are not readily obtainable from the raw video record, video image processing systems (VIPs) are employed to extract the required information (e.g., vehicle feature data and spot speed). In general, VIPs involve two major steps: vehicle detection and feature extraction. The first step is to digitize and store the raw video record so that the detection subsystem can detect and capture each moving vehicle; the still image of the individual vehicle is stored for further processing. In the second step, various image processing techniques are applied to the vehicle images to obtain intrinsic feature data (e.g., color, length, and type). In the following, a brief review of the VIPs and the associated image processing techniques is presented.

Figure 1: Test site in Bangkok, Thailand.

### 2.1. Video Image Processing Systems

#### 2.1.1. Vehicle Detection

The success of vehicle detection largely depends on the degree to which the moving object (vehicle) can be distinguished from its surroundings (background). In light of this, background estimation is employed in the detection subsystem. By calculating the median of a sequence of video frames, the background of the video image is obtained. Image segmentation is then performed to identify the foreground object (vehicle), and the still image containing the detected vehicle is clipped and stored for feature extraction. Along with the detection of the vehicle, the associated arrival time $t$ and spot speed $v$ are also collected. The normalized height of the vehicle image is adopted to represent the vehicle length $L$.
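The median-based background estimation described above is simple enough to sketch directly. The snippet below is a minimal NumPy illustration, not the authors' implementation; the function names, the fixed difference threshold, and the synthetic frame stack are assumptions made for the example.

```python
import numpy as np

def estimate_background(frames):
    """Estimate the static background as the per-pixel median of a frame stack."""
    # frames: array of shape (num_frames, height, width) of grayscale intensities
    return np.median(frames, axis=0)

def foreground_mask(frame, background, threshold=30.0):
    """Segment moving objects by thresholding the absolute difference to the background."""
    return np.abs(frame.astype(float) - background) > threshold

# Toy usage: a bright "vehicle" blob moving across an otherwise static scene.
rng = np.random.default_rng(0)
frames = rng.normal(100.0, 2.0, size=(25, 60, 80))
for t in range(25):
    frames[t, 20:30, 3 * t:3 * t + 10] += 120.0  # moving object
bg = estimate_background(frames)
mask = foreground_mask(frames[12], bg)
print(mask.sum(), "foreground pixels detected")
```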
#### 2.1.2. Vehicle Color Recognition

Color is one of the most essential features for characterizing a vehicle. To reduce the negative effect of illumination changes, this paper adopts the HSV (hue-saturation-value) color space to represent the vehicle image. Vehicle color recognition is conducted in two steps. First, the RGB color images are converted into HSV color model-based images; the hue and saturation values are then exploited for color detection, whereas the V (value) channel is separated out from the color space. Second, a two-dimensional color histogram $C$ is formed to represent the distribution (frequency) of colors across a vehicle image. Specifically, the hue and saturation channels are divided into 36 and 10 bins, respectively, yielding a color feature vector $C$ with 360 elements. Each element of the feature vector is calculated as

$$C_i = \frac{N_i}{N}, \quad 1 \le i \le 360 \tag{1}$$

where $N_i$ is the number of pixels whose values fall within bin $i$ and $N$ is the total number of pixels in the image.

#### 2.1.3. Vehicle Type Recognition

The vehicle type feature provides another important piece of information for describing a vehicle. The template matching method [27] is utilized to recognize the vehicle type. This method uses the L2 distance metric to measure the similarity between the vehicle image and the template images. Specifically, vehicles are classified into 6 categories, and for each category a corresponding template image $T_k$ is built. The normalized similarity value between the vehicle image $I$ and the $k$th template image $T_k$ is given by

$$S_k = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N} \left( I(m,n) - T_k(m,n) \right)^2}{G\, M\, N} \tag{2}$$

where $G$ denotes the maximum gray level (255) and $M$ and $N$ are the dimensions of the vehicle image. Thus, the vehicle type/shape feature $S$ is a vector consisting of the similarity score for each template. Detailed implementation of the VIP systems for traffic data extraction can be found in [24]. A formal description of the dataset obtained from the video record is presented in the following subsection.
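Before turning to the dataset description, the two feature extraction steps above can be summarized in code. The following sketch implements Eqs. (1) and (2) under the stated 36 × 10 hue-saturation binning; the helper names and the synthetic test image are illustrative assumptions, and matplotlib is used here only as one convenient RGB-to-HSV converter.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def color_histogram(rgb_image):
    """Normalized 360-element hue-saturation histogram (Eq. (1))."""
    hsv = rgb_to_hsv(rgb_image.astype(float) / 255.0)  # H, S, V all in [0, 1]
    h, s = hsv[..., 0].ravel(), hsv[..., 1].ravel()    # the value channel is discarded
    hist, _, _ = np.histogram2d(h, s, bins=(36, 10), range=[[0, 1], [0, 1]])
    return hist.ravel() / hist.sum()                   # C_i = N_i / N

def template_similarity(gray_image, template, max_gray=255.0):
    """Normalized L2 similarity between a vehicle image and one type template
    (Eq. (2)); both images are assumed to have identical dimensions."""
    m, n = gray_image.shape
    diff = gray_image.astype(float) - template.astype(float)
    return np.sum(diff ** 2) / (max_gray * m * n)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(48, 64, 3))
C = color_histogram(img)
print(C.shape, round(C.sum(), 6))                      # (360,) 1.0
print(round(template_similarity(img.mean(axis=2), np.zeros((48, 64))), 3))
```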
### 2.2. Dataset Description and Notation

VIPs provide a large amount of traffic data with which to develop and validate automatic incident detection algorithms. Let $U = \{1, 2, \ldots, N\}$ denote the $N$ vehicles detected at the upstream station during the time interval, and let $D = \{1, 2, \ldots, M\}$ be the set of downstream vehicles. In addition, $t_i^U$ and $v_i^U$ are the arrival time and spot speed of the $i$th upstream vehicle, respectively; accordingly, $t_j^D$ and $v_j^D$ are the arrival time and spot speed of the $j$th downstream vehicle. As discussed above, for each detected vehicle the intrinsic feature data (e.g., color, type, and length) are also obtained. Let $X_i^U = \{C_i^U, S_i^U, L_i^U\}$ denote the signature of the $i$th upstream vehicle, where $C_i^U$ and $S_i^U$ are the normalized color and type (shape) feature vectors, respectively, and $L_i^U$ denotes the normalized length of vehicle $i$. Similarly, $X_j^D = \{C_j^D, S_j^D, L_j^D\}$ is the signature of the $j$th downstream vehicle. To sum up, the dataset from the VIPs during a time interval consists of the upstream vehicle set $\{(t_i^U, v_i^U, X_i^U),\ i = 1, 2, \ldots, N\}$ and the downstream vehicle set $\{(t_j^D, v_j^D, X_j^D),\ j = 1, 2, \ldots, M\}$. In order to quantify the difference between each pair of upstream and downstream vehicle signatures, several distance measures are incorporated. Specifically, for a pair of signatures $(X_i^U, X_j^D)$, the Bhattacharyya distance [28] is utilized to calculate the degree of similarity between the color features:

$$d_{color}(i,j) = \left( 1 - \sum_{k=1}^{360} \sqrt{C_i^U(k)\, C_j^D(k)} \right)^{1/2} \tag{3}$$

where $k$ denotes the $k$th component of the color feature vector. The L1 distance measure is introduced to represent the difference between the type feature vectors:

$$d_{type}(i,j) = \sum_{k=1}^{q} \left| S_i^U(k) - S_j^D(k) \right| \tag{4}$$

where $q$ is the number of vehicle type templates and is taken as 6 in this study. The length difference is given by

$$d_{length}(i,j) = \left| L_i^U - L_j^D \right| \tag{5}$$

Based on the video record collected at the test site, 3,628 vehicles were detected at both stations (10B and 08B) during the two-hour recording. For the purpose of algorithm development and evaluation, these 3,628 pairs of vehicles were manually matched (i.e., reidentified) by human operators viewing the video record frame by frame; in other words, the ground-truth matching results of the 3,628 pairs of vehicles are known in advance. The mean travel time is 170.9 seconds. The first 800 pairs of vehicle data are used for model training and calibration (discussed in the following sections), while the rest of the dataset is used for the simulation test of the proposed automatic incident detection algorithm.
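The three distance measures (3)-(5) translate directly into code. A minimal sketch follows; the function names are illustrative, and the Bhattacharyya distance is written in its usual histogram form with a small clamp against floating-point rounding.

```python
import numpy as np

def d_color(c_up, c_down):
    """Bhattacharyya-type distance between two normalized color histograms (Eq. (3))."""
    bc = np.sum(np.sqrt(c_up * c_down))  # Bhattacharyya coefficient, 1.0 for identical inputs
    return np.sqrt(max(1.0 - bc, 0.0))   # clamp tiny negatives caused by rounding

def d_type(s_up, s_down):
    """L1 distance between the q-element type (shape) feature vectors (Eq. (4))."""
    return np.sum(np.abs(np.asarray(s_up) - np.asarray(s_down)))

def d_length(l_up, l_down):
    """Absolute difference of the normalized vehicle lengths (Eq. (5))."""
    return abs(l_up - l_down)

uniform = np.full(360, 1.0 / 360.0)
print(d_color(uniform, uniform))         # identical histograms -> 0.0
print(d_type([0.1, 0.4, 0.2], [0.2, 0.1, 0.2]), d_length(0.55, 0.50))
```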
## 3. Overall Framework of Automatic Incident Detection System

The basic idea of incident detection under free flow condition is to track individual vehicles so as to identify the missing vehicle due to an incident. Owing to its computational and theoretical simplicity, the vehicle count approach [14] is the most well-known free-flow incident detection algorithm; it is therefore worth revisiting this method in detail.

### 3.1. Vehicle Count Approach

The basic operation of the vehicle count approach is illustrated in Figure 2. When a vehicle $U_i$ arrives at the upstream station at time $t_i^U$, the expected arrival time window $[t_i^U + Lb_i,\ t_i^U + Ub_i]$ of this vehicle at the downstream station is estimated, where $Lb_i$ and $Ub_i$ represent the lower and upper bounds of the vehicle's travel time, respectively. If another vehicle $U_j$ is detected at the upstream station, the corresponding arrival time window $[t_j^U + Lb_j,\ t_j^U + Ub_j]$ can also be obtained. Unsurprisingly, these two time windows may overlap, and both vehicles are then likely to arrive downstream during the interval $[t_j^U + Lb_j,\ t_i^U + Ub_i]$. An incident is detected by comparing the collected vehicle count data with the expected number of vehicles in this time interval.
In the case that vehicle $U_i$ is missing, if vehicle $U_j$ arrives downstream during the interval $[t_j^U + Lb_j,\ t_i^U + Ub_i]$, the incident alarm will not be triggered until time $t_j^U + Ub_j$, which is clearly later than the upper bound of the arrival time of vehicle $U_i$ (i.e., $t_i^U + Ub_i$). Because of the overlap between the time windows, the vehicle count approach, which is based solely on comparing vehicle count data, cannot promptly detect the incident (i.e., there is a delay in incident detection). In general, the incident detection time increases significantly with the size of the vehicle platoon at the upstream detector, which increases the number of overlapping arrival time windows at the downstream detector.

Figure 2: Illustrative example of the vehicle count approach.

To reduce the detection time, this research proposes a novel incident detection algorithm incorporating the vision-based VRI system. As shown in Figure 2, vehicles $U_i$ and $U_j$ are detected and their detailed feature data (e.g., color, type, and length) are extracted. Once a vehicle is detected at the downstream site, the proposed VRI system is run to find a matched upstream vehicle based on the vehicle feature data. In the case that vehicle $U_i$ is missing, if the downstream vehicle can be matched to vehicle $U_j$ based on its features, an incident alarm is triggered at time $t_i^U + Ub_i$, as vehicle $U_i$ has not been reidentified during the window $[t_i^U + Lb_i,\ t_i^U + Ub_i]$. As shown by this "toy" example, the additional VRI component can reduce the incident detection time to some extent. However, the concept of VRI is not readily transferable to the field of incident detection (mismatches of the VRI may trigger false alarms), and several modifications must be made to the vehicle matching process. (i) First, instead of finding the matching result for each upstream vehicle, the incident-detection-oriented VRI attempts to match the vehicles at the downstream site so that the proposed AID algorithm can be implemented in real time. (ii) Second, once a vehicle passes the downstream station, the incident-detection-oriented VRI should be capable of making a matching decision immediately, so that the missing vehicle (i.e., the vehicle that does not appear downstream) can be promptly identified. Therefore, this study calculates the matching probability for each pair of vehicles, on which the screening method described later can be imposed to further reduce the false alarm rate. The overall framework of the proposed algorithm is presented in the following subsection.
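The delayed-alarm effect of the vehicle count approach can be reproduced with a toy calculation. The sketch below assumes a single group of mutually overlapping arrival windows and is only an illustration of the Figure 2 example, not a full implementation of [14].

```python
def vehicle_count_alarm_time(windows, arrivals):
    """Alarm time of the vehicle count approach for one group of overlapping
    arrival windows: an alarm can be raised only once the latest upper bound
    has passed with fewer downstream arrivals than expected.
    windows: list of (lower, upper) expected downstream arrival intervals
    arrivals: downstream arrival times actually observed"""
    group_lower = min(lb for lb, _ in windows)
    group_upper = max(ub for _, ub in windows)  # merged deadline of the group
    expected = len(windows)
    observed = sum(1 for t in arrivals if group_lower <= t <= group_upper)
    return group_upper if observed < expected else None

# Vehicle U_i (window 100-140) goes missing; U_j (window 120-160) arrives at t = 130.
# The count approach fires only at t = 160, although U_i was due by t = 140.
print(vehicle_count_alarm_time([(100, 140), (120, 160)], [130]))
```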
### 3.2. AID Algorithm Based on VRI System

The detailed implementation of the VRI-based incident detection system is summarized in the flowchart of Figure 3. First, the system initializes the timestamp $t$ and checks whether a vehicle is detected at the upstream and/or downstream station. If a vehicle is detected at the upstream detector, the expected arrival time window of this vehicle at the downstream station is estimated based on historical data, and the record of the detected vehicle is stored in the database as an unmatched upstream vehicle. On the other hand, if a vehicle is captured at the downstream station, the system runs the incident-detection-oriented VRI subsystem to check whether this vehicle matches any of the unmatched upstream vehicles; the time window constraint is used to identify the potential matches for the vehicle detected downstream. Once a match is found, the matched vehicle data are removed from the list of unmatched upstream vehicles.

Figure 3: Overall framework of the AID system.

After these two steps for handling the detected vehicles at the upstream and downstream stations, the system proceeds to determine whether an incident has occurred on the monitored segment. For incident detection, the system screens through the list of unmatched vehicles. If the current time $t$ is outside the expected arrival time window of an unmatched vehicle (i.e., greater than the upper bound of the arrival time interval), an incident alarm is issued. If not, $t$ is set to $t+1$ and the system moves forward to the next time step. The performance of the incident detection system is clearly dependent on two critical components: the flexible time window constraint and the incident-detection-oriented VRI system. For the aforementioned framework, three comments should be taken into account. (i) First, detection error is not considered in this study; in other words, it is assumed that all vehicles crossing the video cameras will be detected. This is achievable under free flow condition, as there is no occlusion between vehicles, and consequently the VIPs generally perform well and are able to detect most individual vehicles. (ii) Second, under free flow condition, the traveling behavior of individual vehicles is more predictable. This enables the estimation of a flexible arrival time window for each vehicle based on the current spot speed and historical data. Accurate estimation of the arrival time window can improve the matching accuracy of the VRI method and hence reduce the incident detection time. (iii) Third, the proposed VRI cannot guarantee an accurate matching because of the nonuniqueness of the vehicle signatures; it can only provide a matching probability between the downstream and upstream vehicles. Some mismatches resulting from these probabilities could therefore lead to false alarms. To handle this, a ratio method is introduced to screen out such mismatches and reduce false alarms.
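As a rough illustration of the Figure 3 flowchart, the loop below maintains the list of unmatched upstream vehicles and raises an alarm for any vehicle whose arrival window has expired. It is a schematic sketch, not the deployed system: the event dictionaries, the pluggable `estimate_window` and `reidentify` callbacks, and the naive demo matcher are all assumptions made for the example.

```python
def run_aid(upstream_events, downstream_events, estimate_window, reidentify, horizon):
    """Minimal AID loop: upstream_events / downstream_events map a time step to a
    detected vehicle (or are absent); estimate_window(vehicle, t) returns travel
    time bounds (lb, ub); reidentify(vehicle, candidates) returns the matched
    unmatched-upstream record or None."""
    unmatched, alarms = [], []
    for t in range(horizon):
        up = upstream_events.get(t)
        if up is not None:                            # store with its arrival window
            lb, ub = estimate_window(up, t)
            unmatched.append({"vehicle": up, "lb": t + lb, "ub": t + ub})
        down = downstream_events.get(t)
        if down is not None:                          # try to reidentify downstream
            candidates = [u for u in unmatched if u["lb"] <= t <= u["ub"]]
            match = reidentify(down, candidates)
            if match is not None:
                unmatched.remove(match)
        for u in list(unmatched):                     # overdue vehicles -> incident alarm
            if t > u["ub"]:
                alarms.append((t, u["vehicle"]))
                unmatched.remove(u)
    return alarms

print(run_aid(upstream_events={0: "U_i", 2: "U_j"},
              downstream_events={14: "D_1"},
              estimate_window=lambda vehicle, t: (10, 15),
              reidentify=lambda down, cands: cands[-1] if cands else None,
              horizon=30))                            # U_i never reidentified -> [(16, 'U_i')]
```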
## 4. Flexible Time Window Estimation

Under free flow condition, each vehicle maintains a relatively stable speed (i.e., the variance in travel time is low). In this case, the arrival time of a vehicle at the downstream station can be estimated from its spot speed and historical data. Let $U_i$ represent an upstream vehicle detected at time $t_i^U$ with upstream spot speed $v_i^U$. The expected arrival time $Arr$ of vehicle $U_i$ is given by

$$Arr = t_i^U + \frac{l}{0.5\,\left( v_i^U + v_i^D \right)} \tag{6}$$

where $l$ is the distance between the upstream and downstream detectors and $v_i^D$ is the vehicle speed at the downstream detector estimated from the historical speed database. To account for the error in estimating the downstream spot speed, upper and lower bounds on $v_i^D$ are given by

$$v_{ub}^D = \sigma_{ub}\, V_{hist}^D(t')\, \frac{v_i^U}{V^U}, \qquad v_{lb}^D = \sigma_{lb}\, V_{hist}^D(t')\, \frac{v_i^U}{V^U} \tag{7}$$

where $v_{ub}^D$ and $v_{lb}^D$ are, respectively, the upper and lower bounds of the vehicle speed at the downstream detector; $V^U$ is the current average speed at the upstream detector; $\sigma_{ub} \ge 1$ and $\sigma_{lb} \le 1$ are the associated upper and lower bound factors; and $V_{hist}^D(t')$ is the historical average speed at the downstream detector at time $t'$. The time $t'$ is chosen to match the arrival time at the downstream detector estimated from a linear speed profile over the modeled section. The estimation of downstream spot speeds can thus be viewed as a prediction-correction process: first, the historical average speed $V_{hist}^D(t')$ predicts the speed of the vehicle at the downstream site; this prediction is then corrected by the factor $v_i^U / V^U$ to better represent the current traffic condition; finally, the bound factors $\sigma_{ub}$ and $\sigma_{lb}$ determine the upper and lower bounds of the downstream spot speed. With the estimated downstream speeds, the corresponding upper and lower bounds of the travel time of vehicle $U_i$ are calculated as

$$Ub_i = \frac{l}{0.5\,\left( v_i^U + v_{lb}^D \right)}, \qquad Lb_i = \frac{l}{0.5\,\left( v_i^U + v_{ub}^D \right)} \tag{8}$$

However, the proposed incident detection system is not confined to the above method of estimating the time window; any other estimation method is equally applicable to the proposed AID algorithm. With the estimated time windows, vehicles on the monitored freeway section can be "partially" tracked and reidentified in a timely and accurate manner.
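Equations (7) and (8) amount to a short prediction-correction computation. The sketch below transcribes them directly; the bound factors and speed values are placeholders, since the calibrated values are not reported at this point in the paper.

```python
def flexible_time_window(l, v_up, V_up, V_hist_down, sigma_ub=1.2, sigma_lb=0.8):
    """Travel time bounds (Lb_i, Ub_i) of one vehicle over a segment of length l.
    v_up: the vehicle's upstream spot speed; V_up: current average upstream speed;
    V_hist_down: historical average downstream speed at the extrapolated time t'.
    sigma_ub / sigma_lb are calibrated bound factors (placeholder values here)."""
    correction = v_up / V_up                    # adjust history to current traffic (Eq. (7))
    v_ub = sigma_ub * V_hist_down * correction  # upper bound on downstream speed
    v_lb = sigma_lb * V_hist_down * correction  # lower bound on downstream speed
    ub = l / (0.5 * (v_up + v_lb))              # slow downstream speed -> long trip (Eq. (8))
    lb = l / (0.5 * (v_up + v_ub))              # fast downstream speed -> short trip
    return lb, ub

# 3.6 km segment, vehicle at 22 m/s while both averages are 20 m/s: bounds in seconds,
# bracketing the 170.9 s mean travel time reported in Section 2.2.
print(flexible_time_window(3600.0, 22.0, 20.0, 20.0))   # about (148.8, 181.8)
```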
## 5. Incident-Detection-Oriented VRI

As explained previously, the proposed VRI system is devised based on the video image data provided by VIPs technology. By applying various image processing techniques, detailed vehicle feature data (e.g., color, type, and length) can be obtained, and the vehicle matching process is performed by comparing these feature data. In this section, the methodologies involved in the incident-detection-oriented VRI system are presented.

### 5.1. Vehicle Matching Problem

For a vehicle $D_k$ arriving at the downstream station at time $t_k^D$, the vehicle signature, denoted $X_k^D = \{C_k^D, S_k^D, L_k^D\}$, is obtained from the VIPs. A search space $S(k)$, representing the potential matches at the upstream station for vehicle $D_k$, is determined from the calculated arrival time windows. Specifically, $S(k)$ is given by

$$S(k) = \left\{ U_i \in U \mid t_i^U + Lb_i \le t_k^D \le t_i^U + Ub_i \right\} \tag{9}$$

where $U_i$ represents a vehicle detected at the upstream station and $[Lb_i, Ub_i]$ is the associated travel time window. The vehicle reidentification problem is to find the corresponding upstream vehicle for $D_k$ within the search space $S(k)$. Herein, we introduce the assignment function $\psi$ to represent the matching result, that is,

$$\psi(k) : \{D_k\} \longrightarrow \{U_i \in S(k) \mid i = 1, 2, \ldots, N\}, \qquad k \longmapsto i \tag{10}$$

where $\psi(k) = i$ indicates that vehicle $D_k$ is the same as $U_i$. Recall that for each vehicle $U_i \in S(k)$, one may assign to the pair of signatures $(X_i^U, X_k^D)$ the distances $d_{color}(i,k)$, $d_{type}(i,k)$, and $d_{length}(i,k)$ based on (3), (4), and (5). One simple method (i.e., the distance-based method) is to select the upstream vehicle with the minimum feature distance. However, the vehicle signatures derived from the VIPs contain noise and are not unique, so the distance measures alone cannot fully reflect the similarities between vehicles. Instead of directly comparing the feature distances, this study adopts a statistical matching method: based on the calculated feature distances $d_{color}(i,k)$, $d_{type}(i,k)$, and $d_{length}(i,k)$, a matching probability $P(\psi(k) = i \mid d_{color}, d_{type}, d_{length})$ between vehicles $U_i$ and $D_k$ is provided for the matching decision making.
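Constructing the search space of Eq. (9) is a one-line filter once each upstream record carries its arrival window. A small sketch follows, with an assumed record layout:

```python
def search_space(t_down, upstream_records):
    """Eq. (9): upstream candidates whose predicted arrival window at the downstream
    station contains the observed downstream arrival time t_down. Each record is
    assumed to look like {"t_up": ..., "lb": ..., "ub": ..., "signature": ...}."""
    return [r for r in upstream_records
            if r["t_up"] + r["lb"] <= t_down <= r["t_up"] + r["ub"]]

records = [{"t_up": 0, "lb": 149, "ub": 182}, {"t_up": 60, "lb": 149, "ub": 182}]
print(len(search_space(170, records)))   # only the first window contains t = 170
```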
### 5.2. Calculation of Matching Probability

The matching probability, also referred to as the posterior probability, plays a fundamental role in the proposed VRI system. By applying Bayes' rule, we have

$$P(\psi(k)=i \mid d_{color}, d_{type}, d_{length}) = \frac{p(d_{color}, d_{type}, d_{length} \mid \psi(k)=i)\, P(\psi(k)=i)}{p(d_{color}, d_{type}, d_{length})} \tag{11}$$

where $p(d_{color}, d_{type}, d_{length} \mid \psi(k)=i)$ is the likelihood function and $P(\psi(k)=i)$ is the prior knowledge of the assignment function. To obtain the explicit matching probability, the denominator in (11) can be expanded as

$$p(d_{color}, d_{type}, d_{length}) = p(d_{color}, d_{type}, d_{length} \mid \psi(k)=i)\, P(\psi(k)=i) + p(d_{color}, d_{type}, d_{length} \mid \psi(k) \ne i)\, P(\psi(k) \ne i) \tag{12}$$

On the basis of (11) and (12), the calculation of the matching probability depends on the likelihood function and the prior probability. In this particular case, the prior probability is set to $P(\psi(k)=i) = 0.5$, which means that the matching is based solely on the comparison of the vehicle feature data. The calculation of the likelihood function is completed in two steps. (i) First, individual statistical models for the three feature distances are constructed and the corresponding likelihood functions are obtained (i.e., $p(d_{color} \mid \psi(k))$, $p(d_{type} \mid \psi(k))$, and $p(d_{length} \mid \psi(k))$). (ii) Second, a data fusion rule is employed to provide the overall likelihood function, that is, the term $p(d_{color}, d_{type}, d_{length} \mid \psi(k))$ in (11) and (12).

#### 5.2.1. Statistical Modeling of Feature Distance

Without loss of generality, only the probabilistic modeling of the color feature distance is described. In the framework of statistical modeling, the distance measure is assumed to be a random variable. Thus, for a pair of color feature vectors $(C_i^U, C_k^D)$, the distance $d_{color}(i,k)$ follows a certain statistical distribution. The conditional probability (i.e., likelihood function) of $d_{color}(i,k)$ is given by

$$p(d_{color}(i,k) \mid \psi(k)) = \begin{cases} p_1(d_{color}(i,k)), & \text{if } \psi(k) = i \\ p_2(d_{color}(i,k)), & \text{if } \psi(k) \ne i \end{cases} \tag{13}$$

where $p_1$ denotes the probability density function (pdf) of the distance $d_{color}(i,k)$ when the color feature vectors $C_i^U$ and $C_k^D$ belong to the same vehicle, while $p_2$ is the pdf of the distance between different vehicles. A historical training dataset containing a number of pairs of correctly matched vehicles is built up for estimating the pdfs $p_1$ and $p_2$. A finite Gaussian mixture model is used to approximate the pdfs, and the well-known Expectation-Maximization (EM) algorithm is applied to solve the associated parameter estimation problem. The likelihood functions for the type and length distances are obtained in a similar manner.

#### 5.2.2. Data Fusion Rule

In this study, the logarithmic opinion pool (LOP) approach is employed to fuse the individual likelihood functions. The LOP is evaluated as a weighted product of the probabilities:

$$p(d_{color}, d_{type}, d_{length} \mid \psi(k)) = \frac{1}{Z_{LOP}}\, p(d_{color} \mid \psi(k))^{\alpha}\, p(d_{type} \mid \psi(k))^{\beta}\, p(d_{length} \mid \psi(k))^{\gamma}, \qquad \alpha + \beta + \gamma = 1 \tag{14}$$

where the fusion weights $\alpha$, $\beta$, and $\gamma$ indicate the degree of contribution of each likelihood function and can be calibrated from the training dataset. By substituting (12), (13), and (14) into (11), the desired matching probability for each pair of vehicles $(U_i, D_k)$ is obtained. For simplicity, let $P_{ik}$ denote the matching probability between vehicles $U_i$ and $D_k$. We thus obtain a set of probabilistic measures $\{P_{ik} \mid i = 1, 2, \ldots, N\}$ representing the likelihood of a correct match between $D_k$ and the vehicles in the search space $S(k)$. The final matching decision based on these probabilities is the subject of the following subsection.
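The posterior of Eq. (11) with GMM likelihoods (Eq. (13)) and LOP fusion (Eq. (14)) can be sketched as below. This is an assumed reconstruction, not the authors' calibration code: scikit-learn's `GaussianMixture` stands in for the EM fit, the training distances are synthetic, and the LOP normalizer $Z_{LOP}$ is simply dropped, that is, treated as common to both hypotheses.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_pdf(samples, n_components=2):
    """Fit a finite Gaussian mixture via EM to 1-D distance samples (Sec. 5.2.1)
    and return a callable density estimate."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(np.asarray(samples).reshape(-1, 1))
    return lambda d: float(np.exp(gmm.score_samples([[d]])[0]))

def matching_probability(dists, pdfs_same, pdfs_diff, weights, prior=0.5):
    """Eqs. (11), (12), and (14): fuse per-feature likelihoods with the logarithmic
    opinion pool, then apply Bayes' rule; Z_LOP is omitted in this sketch."""
    log_same = sum(w * np.log(p(d)) for d, p, w in zip(dists, pdfs_same, weights))
    log_diff = sum(w * np.log(p(d)) for d, p, w in zip(dists, pdfs_diff, weights))
    s = np.exp(log_same) * prior
    t = np.exp(log_diff) * (1.0 - prior)
    return s / (s + t)

# Synthetic training distances: small for true matches, larger for mismatches.
rng = np.random.default_rng(3)
p1 = fit_pdf(rng.normal(0.10, 0.05, 500))   # same-vehicle color distances
p2 = fit_pdf(rng.normal(0.50, 0.15, 500))   # different-vehicle color distances
print(round(matching_probability([0.12], [p1], [p2], [1.0]), 4))   # close to 1
```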
### 5.3. Ratio Method for Final Matching Decision

An intuitive decision-making process (i.e., the greedy method) is to sort the matches by matching probability $\{P_{ik} \mid i = 1, 2, \ldots, N\}$ and choose the vehicle $U_i$ with the maximum matching likelihood, that is,

$$\psi(k) = i, \quad \text{if } P_{jk} \le P_{ik}\ \ \forall j \in \{1, 2, \ldots, N\} \tag{15}$$

However, since the proposed VRI system is used for incident detection, the final matching decision has a significant impact on the performance of the AID system, and the greedy method (15) can trigger false alarms. As shown in Figure 4, the downstream vehicle $D_k$ arrives at 10:39:39 a.m. $U_j$ and $U_i$ are, respectively, the two candidate vehicles with the largest and second largest matching probabilities for $D_k$ (i.e., $P_{jk} = 0.9295$ and $P_{ik} = 0.8392$). Although vehicle $D_k$ actually matches vehicle $U_i$ (based on the manual matching), the greedy method yields the matching result $\psi(k) = j$, which leads to a false alarm at time $t_i^U + Ub_i$.

Figure 4: Illustrative example of a false alarm.

To reduce such false alarms, a ratio method is introduced for the final matching decision. Let $\{P_i \mid i = 1, 2, \ldots, N\}$ denote the set of matching probabilities in descending order. The ratio method proposed in this study involves two major steps. First, by imposing a threshold $\tau$ on the ratio between neighboring probabilities in the ordered set, one can screen through the search space and rule out unlikely matches. The screening process is described in Procedure 1.

Procedure 1: Algorithmic framework for the screening process.
Input: A finite set $\{P_i \mid i = 1, 2, \ldots, N\}$ of matching probabilities in descending order.
Output: The set of unlikely matches for downstream vehicle $D_k$.
(1) $i \leftarrow 1$;
(2) while $i \le N-1 \wedge P_i / P_{i+1} \le \tau$ do
(3)   $i \leftarrow i + 1$;
(4) return $\{i+1, i+2, \ldots, N\}$;

The underlying implication of Procedure 1 is that if the ratio $P_i / P_{i+1}$ is sufficiently large, one may conclude that vehicles $\{i+1, i+2, \ldots, N\}$ are unlikely matches owing to their relatively small matching probabilities. Otherwise, if $P_i / P_{i+1} \le \tau$, vehicles $i$ and $i+1$ are not distinguishable from each other and a matching decision cannot be made at this stage. Upon completion of the above screening process, unlikely matches are ruled out and the search space is further reduced. The second step is to make a matching decision based on the remaining search space $S_R(k)$. Let $S_R(k) = \{U_m \mid m = 1, 2, \ldots, i\}$ (clearly $i \le N$); the matching result is then given by

$$\psi(k) = m^*, \quad \text{if } t_l^U + Ub_l \ge t_{m^*}^U + Ub_{m^*}\ \ \forall l \in \{1, 2, \ldots, i\} \tag{16}$$

That is, vehicle $D_k$ is matched to the vehicle in $S_R(k)$ with the smallest upper bound on the predicted arrival time window. The rationale behind this approach is that, when a matching decision cannot be made from the matching probabilities (because the probabilities of the vehicles in $S_R(k)$ are not significantly different from one another), $D_k$ is matched to the upstream vehicle with the smallest upper bound on the predicted arrival time window so as to avoid potential false alarms. In fact, this second step can be viewed as a standard vehicle count approach in which only the count data are used. To sum up, the matching decision-making process of the incident-detection-oriented VRI is a hybrid of vehicle feature matching and the classic vehicle count approach.
The overall procedure for the matching decision-making is given in Procedure 2.

Procedure 2: Algorithmic framework for the final matching decision-making.
Input: A set $\{P_i \mid i = 1, 2, \ldots, N\}$ of matching probabilities and the set $\{t_i^U + Ub_i \mid i = 1, 2, \ldots, N\}$ of upper bounds on the arrival time intervals.
Output: The final matching decision for vehicle $D_k$.
Screening method:
(1) $i \leftarrow 1$;
(2) while $i \le N-1 \wedge P_i / P_{i+1} \le \tau$ do
(3)   $i \leftarrow i + 1$;
(4) $S_R(k) \leftarrow \{1, 2, \ldots, i\}$;
Vehicle count approach:
(5) $m^* \leftarrow \arg\min_l \{t_l^U + Ub_l \mid l \in S_R(k)\}$;
(6) return $\psi(k) = m^*$;
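Procedures 1 and 2 combine into one small function. The sketch below assumes each candidate is a (probability, arrival upper bound, vehicle id) tuple; rerunning the Figure 4 example with $\tau = 2$ shows how the fallback to the arrival-window bound avoids the false alarm.

```python
def final_match(candidates, tau=2.0):
    """Procedures 1 and 2: screen candidates by neighboring probability ratios,
    then resolve the remaining indistinguishable ones by the vehicle count rule,
    i.e. the smallest predicted arrival upper bound wins."""
    ranked = sorted(candidates, key=lambda c: c[0], reverse=True)
    i = 0
    while i < len(ranked) - 1 and ranked[i][0] / ranked[i + 1][0] <= tau:
        i += 1                                    # neighbors not distinctive, keep scanning
    survivors = ranked[: i + 1]                   # ranked[i + 1:] are screened out
    return min(survivors, key=lambda c: c[1])[2]  # earliest upper bound wins

# Figure 4 example: 0.9295 / 0.8392 < 2, so the probabilities are not distinctive
# and the decision falls back to the smaller arrival upper bound, i.e. U_i.
print(final_match([(0.9295, 160.0, "U_j"), (0.8392, 140.0, "U_i")]))
```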
The second step is to make a matching decision based on the remaining search space $S_R(k)$. Let $S_R(k) = \{U_m \mid m = 1,2,\dots,i\}$ (clearly $i \leq N$); the matching result is then given by

$$\psi(k) = m^*, \quad \text{if } t_l^U + Ub_l \geq t_{m^*}^U + Ub_{m^*} \;\; \forall l \in \{1,2,\dots,i\}. \tag{16}$$

That is, vehicle $D_k$ is matched to the vehicle in $S_R(k)$ with the smallest upper bound of the predicted arrival time window. The rationale is that, because the matching probabilities of the vehicles in $S_R(k)$ are not significantly different from each other, a decision cannot be based on the probabilities alone; matching $D_k$ to the upstream vehicle with the smallest upper bound avoids potential false alarms. In fact, this second step can be viewed as a standard vehicle count approach in which only the count data are utilized.

To sum up, the matching decision-making process of the incident-detection-oriented VRI is a hybrid of vehicle feature matching and the classic vehicle count approach. The overall procedure is given by Procedure 2 (a compact sketch follows the procedure).

Procedure 2: Algorithmic framework for the final matching decision.
Input: a set $\{P_i \mid i = 1,2,\dots,N\}$ of matching probabilities and the set $\{t_i^U + Ub_i \mid i = 1,2,\dots,N\}$ of upper bounds of the arrival time intervals.
Output: the final matching decision for vehicle $D_k$.
Screening method:
(1) $i \leftarrow 1$;
(2) while $i \leq N-1 \,\wedge\, P_i/P_{i+1} \leq \tau$ do
(3)  $i \leftarrow i+1$;
(4) $S_R(k) \leftarrow \{1,2,\dots,i\}$;
Vehicle count approach:
(5) $m^* \leftarrow \arg\min_l \{t_l^U + Ub_l \mid l \in S_R(k)\}$;
(6) return $\psi(k) = m^*$.
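Reusing `screen_candidates` from the previous sketch, Procedure 2 reduces to a few lines. The `(probability, upper bound)` tuple layout of the candidate list is an illustrative assumption.

```python
def final_matching_decision(candidates, tau):
    """Procedure 2: `candidates` is a list of (matching_probability,
    t_U + Ub) pairs sorted by probability in descending order. Returns
    the (zero-based) index of the matched upstream vehicle."""
    # Step 1 (screening, Procedure 1): rule out the unlikely matches.
    survivors = screen_candidates([p for p, _ in candidates], tau)
    # Step 2 (vehicle count fallback): among the survivors, pick the
    # vehicle whose predicted arrival window closes earliest, i.e. the
    # smallest upper bound t_U + Ub, per (16).
    return min(survivors, key=lambda i: candidates[i][1])
```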
## 6. Test Results

In this section, the performance of the proposed AID algorithm is evaluated against the classical vehicle count approach in terms of mean time-to-detect and false alarm rate (i.e., false alarms per hour). Because the performance of the proposed AID system relies on its two critical components (i.e., flexible time window estimation and the incident-detection-oriented VRI), different time window sizes and final-matching thresholds are tested. The dataset described in Section 2 is used for the simulated tests, and real-world case studies are also carried out.

### 6.1. Simulated Tests

For calibrating and testing the proposed AID system, the 3,628 pairs of vehicle matching results from the collected dataset are divided into two parts. First, a dataset of 800 pairs of correctly matched vehicles is used for model calibration and training. The upper and lower bound factors for time window estimation (i.e., $\sigma_{ub}$ and $\sigma_{lb}$) are calibrated using the travel time data of these 800 vehicles and the historical average speeds on Thursdays, the same weekday as the test day (i.e., 16/2/2012, 23/2/2012, 1/3/2012, and 8/3/2012). In addition, the parameters of the statistical model (i.e., $p_1$ and $p_2$) are estimated from the feature data extracted from the captured images of these 800 pairs of vehicles. Second, the remaining 2,828 pairs of vehicles detected at both the upstream and downstream detectors are fed into the calibrated AID system for model evaluation. To mimic an incident between the upstream and downstream detectors, the downstream record of a vehicle is intentionally removed, simulating a vehicle that has passed the upstream detector but not the downstream one. The AID algorithm is run 2,828 times, each run removing the downstream record of one of the 2,828 vehicles, to determine the mean detection time. Specifically, the incident detection time is defined as

$$T_D = t_{incident} - t_U. \tag{17}$$

Since the exact time at which the incident happened is unknown, the incident detection time is defined as the difference between the time at which an alarm is issued (i.e., $t_{incident}$) and the arrival time of the incident vehicle at the upstream station (i.e., $t_U$).

With the threshold value set to 2 (i.e., $\tau = 2$), the mean detection time of the proposed AID algorithm is 203.2 seconds, whereas that of the classical vehicle count approach is 644.1 seconds. As expected, the mean detection time is reduced substantially by incorporating the modified VRI system. Figure 5 shows the performance of the VRI-based incident detection algorithm for different threshold values adopted in the final matching decision. The false alarm rate decreases as the threshold value increases. When the threshold value equals one, the VRI system always matches the downstream vehicle to the upstream vehicle with the largest matching probability, which leads to a large number of false alarms (see Section 5.3). As the threshold value increases, the modified VRI system relies more on the traditional vehicle count approach, and the false alarm rate decreases. On the other hand, as the proposed VRI system comes to rely more heavily on the vehicle count approach (e.g., $\tau \to \infty$), the mean detection time also increases (see Section 3.1). In short, lowering the false alarm rate comes at the expense of detection time, so a balance must be struck between rapid incident detection and a low false alarm rate.

Figure 5: Mean detection time and false alarm rate.

The estimation of the arrival time window also has a significant impact on the performance of the proposed AID algorithm: a smaller time window results in faster incident detection. To test the algorithm under different time window sizes, a time window of fixed size is assigned to each individual vehicle. Figure 6 shows the mean detection time of the algorithm for different window sizes. The mean detection time of the vehicle count approach increases dramatically as the time window grows, and the vehicle count approach fails to detect the missing vehicle once the window size exceeds 50 seconds. In short, when a large arrival time window is applied, the proposed AID algorithm clearly outperforms the vehicle count approach.

Figure 6: Comparison between the proposed AID algorithm and the vehicle count approach.
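The evaluation metric itself is simple to state in code. In the minimal sketch below, `run_aid` and the `(run, t_u)` pairing are hypothetical stand-ins for the full detection loop of Figure 3 and the leave-one-out bookkeeping, which the paper does not specify at this level of detail.

```python
from statistics import mean

def detection_time(t_alarm, t_upstream):
    """Incident detection time per (17): T_D = t_incident - t_U."""
    return t_alarm - t_upstream

def mean_detection_time(runs, run_aid):
    """Leave-one-out evaluation over the simulated incidents: each run
    removes one downstream record, `run_aid` (hypothetical stand-in for
    the AID loop of Figure 3) returns the alarm time, and t_u is the
    removed vehicle's upstream arrival time."""
    return mean(detection_time(run_aid(r), t_u) for r, t_u in runs)
```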
### 6.2. Real-World Case Studies

Apart from the above-mentioned simulated tests, two real-world case studies are also carried out. Based on the records of the freeway authority, the first incident is reported on June 13, 2012, at 16:03. The reported incident location is at 20+600 westbound, in the section between cameras 7A/8A and 9A/10A (see Figure 1). Based on this information, the research team screened the captured videos to identify the incident vehicle. On June 13, 2012, the incident vehicle passed the upstream detector (7A/8A) at 15:55 (Figure 7(a)) and had an incident before reaching the downstream detector (9A/10A). Four minutes later, a tow truck, presumably called by the driver of the incident vehicle, passed the upstream detector (Figure 7(b)) and towed the incident vehicle past the downstream detector at 16:09 (Figure 7(c)).

Figure 7: Real-world case study #1: (a) incident vehicle passes the upstream detector; (b) tow truck passes the upstream detector; (c) incident vehicle and tow truck pass the downstream detector.

Based on this information about the incident vehicle, a 35-minute video record (from 15:33 to 16:08 on June 13, 2012) at locations 8A and 10A is extracted and input into the proposed AID system under the free flow condition. Apart from the incident vehicle, 739 vehicles are detected at both stations during the 35-minute record. With the threshold on the ratio of matching probabilities set to 8.5, the incident detection time and the false alarm rate for this case study are 15:58:22 and 3.42 false alarms per hour, respectively. Compared with the classic vehicle count approach, which would trigger an incident alarm at 16:01:28, the proposed AID system performs better in terms of incident detection time.

The incident vehicle of the second real-world case study is shown in Figure 8. This incident is reported on June 17, 2012, at 10:31 a.m., at location 19+300A westbound (between 7A/8A and 9A/10A). With the threshold on the ratio of matching probabilities again set to 8.5, the incident detection time and the false alarm rate for this case study are 10:28:22 and 2 false alarms per hour, respectively. Compared with the classic vehicle count approach, which would trigger an incident alarm at 10:33:50, the proposed AID system again performs better in terms of incident detection time.

Figure 8: Real-world case study #2: (a) incident vehicle passes the upstream detector; (b) incident vehicle and tow truck pass the downstream detector.

On the basis of these two real-world case studies, we observe that the incident detection time of the proposed AID algorithm depends largely on the actual characteristics of the incident vehicle (e.g., the distinctiveness of its features and the size of the surrounding vehicle platoon). In real-world case study #2, the incident vehicle has a distinctive color distribution (see Figure 8(a)); consequently, the disappearance of this particular vehicle is identified earlier than with the vehicle count approach. The platoon size also has a significant impact on the performance of the AID algorithm: the larger the platoon, the more likely the arrival time windows of its vehicles overlap at the downstream site, which leads to a significant increase in incident detection time.
## 7. Conclusion and Future Works

This paper investigates the feasibility of utilizing a vehicle reidentification system for incident detection on a freeway section under the free flow condition. A modified vision-based VRI system is proposed to partially track individual vehicles and identify the vehicle "missing" due to an incident. A flexible arrival time window is estimated for each individual vehicle at the upstream station to improve the matching accuracy. To reduce potential false alarms, a screening method based on the ratios of the matching probabilities and the arrival time windows is introduced to rule out potential mismatches.

The proposed AID algorithm is tested on a 3.6 km segment of a closed freeway in Bangkok, Thailand. The test results show that the detection time of the proposed AID algorithm is substantially shorter than that of the traditional vehicle count approach. There is, however, a trade-off between the false alarm rate and the detection time, so a balance should be struck between rapid incident detection and a low false alarm rate by adjusting the threshold value $\tau$. As demonstrated in Procedure 2, the proposed AID algorithm is a hybrid of the vehicle feature comparison method and the classical vehicle count approach, and the threshold value $\tau$ can be viewed as a switch between the two; its selection is therefore of great importance. In this study, the threshold value is adjusted manually according to the reliability of the VRI system (the performance of the VRI system may vary slightly across time periods owing to changes in the outdoor environment, and $\tau$ should be adjusted accordingly). Automatic thresholding processes [29] will be investigated in future work.

Note that the proposed AID algorithm is specifically devised to detect incidents on a freeway system under free flow conditions. As a natural and necessary extension, the ability to detect incidents under dynamic traffic conditions is required for the further development of the incident detection system. The key component would be an additional VRI-based detection algorithm for congested situations. In principle, this can be achieved by analyzing the temporal changes in the travel time information obtained from the VRI system under dynamic traffic conditions [25].

---

*Source: 102380-2015-07-05.xml*
--- ## Abstract This paper proposes a vehicle reidentification (VRI) based automatic incident algorithm (AID) for freeway system under free flow condition. An enhanced vehicle feature matching technique is adopted in the VRI component of the proposed system. In this study, arrival time interval, which is estimated based on the historical database, is introduced into the VRI component to improve the matching accuracy and reduce the incident detection time. Also, a screening method, which is based on the ratios of the matching probabilities, is introduced to the VRI component to further reduce false alarm rate. The proposed AID algorithm is tested on a 3.6 km segment of a closed freeway system in Bangkok, Thailand. The results show that in terms of incident detection time, the proposed AID algorithm outperforms the traditional vehicle count approach. --- ## Body ## 1. Introduction and Literature Review Traffic incidents have been widely recognized as a serious problem for its negative effects on traffic congestion and safety [1]. Under heavy traffic condition, one minor incident could result in gridlock and hence serious traffic congestion. In addition, traffic injuries are likely to be more severe if incidents occur at higher speeds (e.g., under free flow condition). Statistics also suggest the high chance of a more sever secondary accident following the initial incident on freeway [2, 3]. An ability to detect incident in a timely and accurate manner would allow the traffic manager to efficiently remove the incident, to notify the follow-up traffic of the incident, and to better manage the traffic for minimizing the negative impact caused by the incident. Therefore, considerable research efforts have been dedicated to the development of automatic incident detection (AID) algorithms by utilizing the traditional detectors (i.e., inductive loops) over the past few decades (e.g., the California algorithm series [4] and McMaster algorithm [5]). The underlying assumption of these algorithms is that the aggregated traffic parameters (e.g., travel time and traffic flow) would change dramatically when incidents occur under congested situation. By comparing the real-time traffic data with the incident-free data, one can determine the likelihood that an incident has happened. Based on the above-mentioned principle, various advanced data mining approaches (e.g., neural network [6], Bayesian network [7, 8], and principal component analysis [9]) were adopted for detecting the abnormal traffic delay or abrupt change in traffic flow pattern. However, most of the existing incident detection algorithms are specifically designed for congested traffic conditions and may not be applicable for free flow situations.Under free flow condition, a drop in traffic capacity due to a traffic incident (e.g., one lane blocking) may not cause any traffic delay or substantial change in traffic flow pattern. Therefore, it is no longer feasible to detect the incident through analyzing the aggregated traffic parameters. To handle the aforementioned challenge, research attention has shifted away from the data mining approach based on macroscopic traffic data, towards considering the continuous tracking of individual vehicle by using the microscopic vehicle data (e.g., vehicle trajectory and individual vehicle feature). The rationale behind this idea is straightforward. 
For a closed freeway system, if one can track all the vehicles along the designated points, a disappearance of any vehicle movement (or nonmoving vehicle) between consecutive points can be classified as a potential incident. Based on this principle, various emerging technologies, such as GPS technology [10] and cellular phones [11], has been utilized for collecting theLagrangian measurements (i.e., individual vehicle trajectory), which eventually could be used for identifying the “incident” vehicle. Despite their theoretical simplicity, the successes of these approaches heavily rely on the high level of market penetration (in principle, 100% of penetration rate is required) of the in-vehicle equipment (e.g., GPS device) and the driver’s willingness to provide the location information. To compensate for this, Shehata et al. [12] conducted a study to detect the nonmoving vehicle (i.e., caused by incident) from the fixed video camera (i.e.,Eulerian measurement) by using image processing techniques. Although this method appears to be theoretically sound, the deployment of such system requires the installations of cameras at all key locations along the freeway, which may not be feasible due to the limitation of public resources. In practice, the fixed traffic sensors (e.g., loop detectors and video cameras) cannot cover the entire freeway system and are generally sparsely distributed [13]. In this regard, Fambro and Ritch [14] conducted a pioneering study to trace and identify the “missing” vehicle through analyzing the vehicle count data obtained at two consecutive loop detectors, which is also referred to as the vehicle count approach. Given the vehicle speed at upstream, the arrival time window at downstream could be estimated. By comparing the vehicle counts in this arrival time window with the corresponding vehicle counts at upstream, one may be able to identify the missing vehicle (if any) for incident detection. However, it is also noteworthy that the overlapping of arrival time windows of different vehicles would lead to a significant increase in the detection time (this will be discussed in more detail in Section 3.1). To further reduce the incident detection time, much attention has been paid to track the vehicle by utilizing the emerging automatic vehicle identification (AVI) systems: automatic number plate recognition [15] or Bluetooth identification technology [16]. Although the AVI technologies enable a more efficient tracking of vehicles across consecutive points by accurately matching their unique identity (e.g., plate number and media access control address), privacy concerns may arise as the consequence of this matching process of vehicle identity. In this case, the vehicle reidentification (VRI) scheme, which does not intrude driver’s privacy, provides a tool to devise a more practical and effective incident detection algorithm under free flow condition.Generally, vehicle reidentification (VRI) is a process of matching nonunique vehicle signatures (e.g., waveform [17], vehicle length [18], and vehicle color [19, 20]) from one detector to the next one in the traffic network. On one hand, the nonuniqueness of the vehicle signature would allow the VRI system to track the vehicle anonymously at two consecutive detectors [21] and, hence, identify the “missing” vehicle due to an incident. On the other hand, this property of very nonuniqueness imposes a great challenge on the development of the vehicle signature matching method. 
To improve the matching accuracy, Coifman [22] proposed a vehicle platoon matching method such that the lengths of vehicle platoons were compared. To further consider the noise and uncertainty of vehicle signature, Kwong et al. [23] introduced a statistical matching method in which the vehicle signature is treated as a random variable, and a probabilistic measure is calculated for matching decision making. During the past few years, the authors also developed a VRI system [24, 25] by utilizing the emerging video image processing systems [26]. Various detailed vehicle features (e.g., vehicle color, length, and type) were extracted and a probabilistic data fusion rule was then introduced to combine these features to generate a matching probability (i.e.,posterior probability) for reidentification purpose. To account for the large variance in travel time under dynamic traffic condition, the proposed VRI system also introduced aprior (fixed) time window constraint, which sets the upper and lower bounds of the vehicle travel time, to rule out the unlikely candidate vehicles. However, it is noteworthy that the aforementioned VRI systems were specifically designed for the purpose of traffic data collection (e.g., travel time). To our knowledge, very few studies were explicitly conducted to investigate the potential feasibility of utilizing VRI system for incident detection. Also, as the existing VRI systems cannot guarantee an accurate matching due to the nonuniqueness of the vehicle signatures, the mismatches between the upstream and downstream vehicles may potentially lead to false alarms when they were applied for incident detection. To sum up, the current VRI systems are not readily transferable to the field of incident detection.To this end, this paper aims to propose a VRI-based automatic incident detection algorithm under free flow condition. The revised VRI system is based on authors’ previous work [24] with several major changes to cope with the purpose of incident detection, which eventually give rise to the incident-detection-oriented VRI.(i) Note that in the work of [24] a unified and fixed time window constraint is imposed on all the vehicles. However, the vehicles would maintain a relatively stable speed under free flow condition, which allows for the estimation of a flexible time window for each individual vehicle. Therefore, this incident-detection-oriented VRI would introduce a flexible time window to further improve the matching accuracy and reduce the incident detection time.(ii) Rather than finding the matching results between two sets of vehicles (i.e., upstream and downstream sets of vehicles), the incident-detection-oriented VRI attempts to make an instant matching decision for each individual vehicle such that the “missing” vehicle can be identified promptly. In this study, the statistical model regarding the vehicle feature is built up, and the matching probability for each pair of vehicles (i.e., theposterior probability of each potential match based on the observation of the vehicle signatures) isexplicitly calculated.(iii) Last but not least, a screening method (i.e., thresholding process), which is based on the ratios of the matching probabilities, is introduced to screen out the mismatched vehicles for reducing the false alarm rate.The rest of the chapter is organized as follows. Section 2 describes the traffic dataset collected for the algorithm development and evaluation. Mathematical notations are also provided in this section. 
In Section 3, the overall framework of the proposed automatic incident detection system is introduced. The description and analysis of the incident-detection-oriented VRI system under free flow condition are proposed in the following two sections (Sections 4 and 5). In Section 6, simulated tests and real-world case studies are carried out to evaluate the performance of the proposed AID system against the traditional vehicle count approach. Finally, we close this chapter with the conclusion and future works. ## 2. Dataset for Algorithm Development and Evaluation The test site is a 3.6 km long section of the closed three-lane freeway in Bangkok, Thailand (i.e., the green section in Figure1). At each station (i.e., location 10B and 08B in Figure 1) a gantry-mounted video camera, which is viewed in the upstream direction, is installed and two hours of video record (10 a.m. and noon on March 15, 2011) was collected. The frame rate of the video record is 25 FPS and the still image size is 563 × 764. As the detailed traffic data (especially the individual vehicle data) are not readily obtainable from the raw video record, the video image processing systems (VIPs) are then employed for extracting the required information (e.g., vehicle feature data and spot speed). In general, VIPs involve two major steps: vehicle detection and feature extraction. The first step is to digitize and store the raw video record for the detection subsystem to detect and capture the moving vehicle. The still image regarding the individual vehicle is stored for further application. In the second step, various image processing techniques are performed on the vehicle images to obtain the intrinsic feature data (e.g., color, length, and type). In the following, a brief review on the VIPs and associated image processing techniques are presented.Figure 1 Test site in Bangkok, Thailand. ### 2.1. Video Image Processing Systems #### 2.1.1. Vehicle Detection The success of vehicle detection largely depends on the degree that the moving object (vehicle) can be distinguished from its surroundings (background). In light of this, background estimation technology is employed in the detection subsystem. By calculating the media of a sequence of video frames, the background of the video image is obtained. Then, image segmentation technique is performed to identify the foreground object (vehicle). The still image including the detected vehicle is then clipped and stored for further feature extraction. Along with the detection of the vehicle, the associated arrival timet and the spot speed v are also collected. The normalized height of the vehicle image is adopted for representing the vehicle length L. #### 2.1.2. Vehicle Color Recognition Color is one the most essential features for characterizing a vehicle. To reduce the negative effect of illumination changes, this paper has adopted the HSV (hue-saturation-value) color space to represent the vehicle image. Vehicle color recognition illustrated in this paper is conducted in two steps. First, the general RGB color images are converted into HSV color model-based images. Hue and Saturation values are then exploited for color detection, whereasV (value) information is separated out from the color space. Second, a two-dimensional color histogram C is formed to represent the distribution (frequency) of colors across a vehicle image. To be more specific, the hue and saturation channels are divided into 36 and 10 bins, respectively. Thus, a color feature vector C with 360 elements is obtained. 
Each element of the feature vector is calculated as(1)Ci=NiN,1≤i≤360,where Ni is the number of pixels whose values fall within the bin i and N is the total number of pixels in the image. #### 2.1.3. Vehicle Type Recognition Vehicle type feature provides the other important information to describe a vehicle. The template matching method [27] is utilized to recognize vehicle type. This method uses L2 distance metric to measure the similarity between vehicle image and template images. Specifically, vehicles are classified into 6 categories. For each category, the corresponding template image (T) is built. Finally, the normalized similarity value between the vehicle image (I) and the kth template image (T) is given by (2)Sk=∑m=1M∑n=1NIm,n-Tm,n2GMN,where G denotes the maximum gray level (255); M and N are the dimensions of the vehicle image. Thus, the vehicle type/shape feature S is a vector that consists of the similarity score for each template. Detailed implementation of the VIP systems to traffic data extraction can be found in [24]. A formal description of the dataset obtained from the video record is presented in the following subsection. ### 2.2. Dataset Description and Notation VIPs provide a large amount of traffic data to develop and validate the automatic incident detection algorithms. LetU={1,2,…,N} denote the N vehicles detected at upstream station during the time interval. D={1,2,…,M} is the set of downstream vehicles. In addition, tiU and viU are the associated arrival time and the spot speed of the ith upstream vehicle, respectively. Accordingly, tiD and viD are the corresponding arrival time and spot speed of the ith downstream vehicle. As discussed above, for each detected individual vehicle, the intrinsic feature data (e.g., color, size, and length) are also obtained. Let XiU={CiU,SiU,LiU} denote the signature of the ith upstream vehicle, where CiU and SiU are the normalized color feature vector and type (shape) feature vector, respectively. LiU denotes the normalized the length of vehicle i. Similarly, XjD={CjD,SjD,LjD} is the signature of the jth downstream vehicle. To sum up, dataset from the VIPs during a time interval consists of the upstream vehicle dataset {(tiU,viU,XiU),i=1,2,…,N} and the downstream vehicle set {(tjD,vjD,XjD),j=1,2,…,M}. In order to quantify the difference between each pair of upstream and downstream vehicle signatures, several distance measures are then incorporated. Specifically, for a pair of signatures (XiU,XjD), the Bhattacharyya distance [28] is utilized to calculate the degree of similarity between color features:(3)dcolori,j=1-∑k=1360CiUk·CjDk1/2,where k denoted the kth component of the color feature vector. The L1 distance measure is introduced to represent the difference between the type feature vectors:(4)dtypei,j=∑k=1qSiUk-SjDk,where q is the number of vehicle type template and is taken as 6 in this study. The length difference is given by(5)dlengthi,j=LiU-LjD.Based on the video record collected at the test site, 3,628 vehicles are detected at both stations (10B and 08B) during the two-hour video record. For the purpose of the algorithm development and evaluation, these 3,628 pairs of vehicles are manually matched (i.e., reidentified) by the human operators viewing the video record frame by frame. In other words, the ground-truth matching results of the 3,628 pairs of vehicles are obtained in advance. The mean travel time is 170.9 seconds. 
The first 800 pairs of vehicle data are used for the model training and calibration (which are discussed in the following sections), while the rest of the vehicle dataset are used for the simulation test of the proposed automatic incident detection algorithm. ## 2.1. Video Image Processing Systems ### 2.1.1. Vehicle Detection The success of vehicle detection largely depends on the degree that the moving object (vehicle) can be distinguished from its surroundings (background). In light of this, background estimation technology is employed in the detection subsystem. By calculating the media of a sequence of video frames, the background of the video image is obtained. Then, image segmentation technique is performed to identify the foreground object (vehicle). The still image including the detected vehicle is then clipped and stored for further feature extraction. Along with the detection of the vehicle, the associated arrival timet and the spot speed v are also collected. The normalized height of the vehicle image is adopted for representing the vehicle length L. ### 2.1.2. Vehicle Color Recognition Color is one the most essential features for characterizing a vehicle. To reduce the negative effect of illumination changes, this paper has adopted the HSV (hue-saturation-value) color space to represent the vehicle image. Vehicle color recognition illustrated in this paper is conducted in two steps. First, the general RGB color images are converted into HSV color model-based images. Hue and Saturation values are then exploited for color detection, whereasV (value) information is separated out from the color space. Second, a two-dimensional color histogram C is formed to represent the distribution (frequency) of colors across a vehicle image. To be more specific, the hue and saturation channels are divided into 36 and 10 bins, respectively. Thus, a color feature vector C with 360 elements is obtained. Each element of the feature vector is calculated as(1)Ci=NiN,1≤i≤360,where Ni is the number of pixels whose values fall within the bin i and N is the total number of pixels in the image. ### 2.1.3. Vehicle Type Recognition Vehicle type feature provides the other important information to describe a vehicle. The template matching method [27] is utilized to recognize vehicle type. This method uses L2 distance metric to measure the similarity between vehicle image and template images. Specifically, vehicles are classified into 6 categories. For each category, the corresponding template image (T) is built. Finally, the normalized similarity value between the vehicle image (I) and the kth template image (T) is given by (2)Sk=∑m=1M∑n=1NIm,n-Tm,n2GMN,where G denotes the maximum gray level (255); M and N are the dimensions of the vehicle image. Thus, the vehicle type/shape feature S is a vector that consists of the similarity score for each template. Detailed implementation of the VIP systems to traffic data extraction can be found in [24]. A formal description of the dataset obtained from the video record is presented in the following subsection. ## 2.1.1. Vehicle Detection The success of vehicle detection largely depends on the degree that the moving object (vehicle) can be distinguished from its surroundings (background). In light of this, background estimation technology is employed in the detection subsystem. By calculating the media of a sequence of video frames, the background of the video image is obtained. Then, image segmentation technique is performed to identify the foreground object (vehicle). 
The still image including the detected vehicle is then clipped and stored for further feature extraction. Along with the detection of the vehicle, the associated arrival timet and the spot speed v are also collected. The normalized height of the vehicle image is adopted for representing the vehicle length L. ## 2.1.2. Vehicle Color Recognition Color is one the most essential features for characterizing a vehicle. To reduce the negative effect of illumination changes, this paper has adopted the HSV (hue-saturation-value) color space to represent the vehicle image. Vehicle color recognition illustrated in this paper is conducted in two steps. First, the general RGB color images are converted into HSV color model-based images. Hue and Saturation values are then exploited for color detection, whereasV (value) information is separated out from the color space. Second, a two-dimensional color histogram C is formed to represent the distribution (frequency) of colors across a vehicle image. To be more specific, the hue and saturation channels are divided into 36 and 10 bins, respectively. Thus, a color feature vector C with 360 elements is obtained. Each element of the feature vector is calculated as(1)Ci=NiN,1≤i≤360,where Ni is the number of pixels whose values fall within the bin i and N is the total number of pixels in the image. ## 2.1.3. Vehicle Type Recognition Vehicle type feature provides the other important information to describe a vehicle. The template matching method [27] is utilized to recognize vehicle type. This method uses L2 distance metric to measure the similarity between vehicle image and template images. Specifically, vehicles are classified into 6 categories. For each category, the corresponding template image (T) is built. Finally, the normalized similarity value between the vehicle image (I) and the kth template image (T) is given by (2)Sk=∑m=1M∑n=1NIm,n-Tm,n2GMN,where G denotes the maximum gray level (255); M and N are the dimensions of the vehicle image. Thus, the vehicle type/shape feature S is a vector that consists of the similarity score for each template. Detailed implementation of the VIP systems to traffic data extraction can be found in [24]. A formal description of the dataset obtained from the video record is presented in the following subsection. ## 2.2. Dataset Description and Notation VIPs provide a large amount of traffic data to develop and validate the automatic incident detection algorithms. LetU={1,2,…,N} denote the N vehicles detected at upstream station during the time interval. D={1,2,…,M} is the set of downstream vehicles. In addition, tiU and viU are the associated arrival time and the spot speed of the ith upstream vehicle, respectively. Accordingly, tiD and viD are the corresponding arrival time and spot speed of the ith downstream vehicle. As discussed above, for each detected individual vehicle, the intrinsic feature data (e.g., color, size, and length) are also obtained. Let XiU={CiU,SiU,LiU} denote the signature of the ith upstream vehicle, where CiU and SiU are the normalized color feature vector and type (shape) feature vector, respectively. LiU denotes the normalized the length of vehicle i. Similarly, XjD={CjD,SjD,LjD} is the signature of the jth downstream vehicle. To sum up, dataset from the VIPs during a time interval consists of the upstream vehicle dataset {(tiU,viU,XiU),i=1,2,…,N} and the downstream vehicle set {(tjD,vjD,XjD),j=1,2,…,M}. 
In order to quantify the difference between each pair of upstream and downstream vehicle signatures, several distance measures are then incorporated. Specifically, for a pair of signatures (XiU,XjD), the Bhattacharyya distance [28] is utilized to calculate the degree of similarity between color features:(3)dcolori,j=1-∑k=1360CiUk·CjDk1/2,where k denoted the kth component of the color feature vector. The L1 distance measure is introduced to represent the difference between the type feature vectors:(4)dtypei,j=∑k=1qSiUk-SjDk,where q is the number of vehicle type template and is taken as 6 in this study. The length difference is given by(5)dlengthi,j=LiU-LjD.Based on the video record collected at the test site, 3,628 vehicles are detected at both stations (10B and 08B) during the two-hour video record. For the purpose of the algorithm development and evaluation, these 3,628 pairs of vehicles are manually matched (i.e., reidentified) by the human operators viewing the video record frame by frame. In other words, the ground-truth matching results of the 3,628 pairs of vehicles are obtained in advance. The mean travel time is 170.9 seconds. The first 800 pairs of vehicle data are used for the model training and calibration (which are discussed in the following sections), while the rest of the vehicle dataset are used for the simulation test of the proposed automatic incident detection algorithm. ## 3. Overall Framework of Automatic Incident Detection System The basic idea of incident detection under free flow condition is to track the individual vehicle so as to identify the missing vehicle due to an incident. Owning to its computational and theoretical simplicity, the vehicle count approach [14] is the most well-known free-flow incident detection algorithm. Thus, it is necessary to revisit this method in detail. ### 3.1. Vehicle Count Approach The basic operation of the vehicle count approach is illustrated in Figure2. When a vehicle Ui arrives at upstream station at time tiU, the expected arrival time window [tiU+Lbi,tiU+Ubi] of this vehicle at downstream station is estimated, where Lbi and Ubi, respectively, represent the lower and upper bounds of the vehicle’s travel time. If another vehicle Uj is detected at upstream station, the corresponding arrival time window [tjU+Lbj,tjU+Ubj] can also be obtained. Unsurprisingly, there may be overlap between these two time windows, and both of these two vehicles are likely to arrive at downstream during time interval [tjU+Lbj,tiU+Ubi]. The incident would then be detected by comparing the collected vehicle count data to the expected number of vehicles in the time interval. In the case that vehicle Ui is missing, if vehicle Uj arrives at downstream during time interval [tjU+Lbj,tiU+Ubi], then the incident alarm will not be triggered until time tjU+Ubj, which is clearly later than the upper bound of the arrival time of vehicle Ui (i.e., tiU+Ubi). Because of the overlapping between the time windows, the vehicle count approach, which is solely based on comparing the vehicle counts data, cannot promptly detect the incident (i.e., delay in incident detection). 
In general, the incident detection time would significantly increase with respect to the increase in size of vehicle platoon at the upstream detector, which increases the number of overlapping in arrival time intervals at the downstream detector.Figure 2 Illustrative example of vehicle count approach.To reduce the detection time, this research proposes a novel incident detection algorithm by incorporating the vision-based VRI system. As shown in Figure2, vehicle Ui and Uj are detected and their detailed feature data (e.g., color, type and length) are also extracted. Once a vehicle is detected at downstream site, the proposed VRI system is performed to find a matched upstream vehicle based on the vehicle feature data. In the case that vehicle Ui is missing, if the downstream vehicle could be matched to the vehicle Uj based on the vehicle feature, an incident alarm would be triggered at time tiU+Ubi, as vehicle Ui is not reidentified during time window [tiU+Lbi,tiU+Ubi]. As shown by this “toy” example, the additional VRI component could potentially reduce the incident detection time to some extent. However, it is also observed that the concept of VRI is not readily transferable to the field of incident detection (mismatches of VRI may trigger false alarms) and several modifications should be made regarding the vehicle matching process.(i) First, instead of finding the matching result for upstream vehicle, the incident-detection-oriented VRI attempts to match the vehicles at downstream site such that the proposed AID algorithm can be implemented in real-time.(ii) Second, once a vehicle passes the downstream station, the incident-detection-oriented VRI should be capable of making matching decision immediately such that the missing vehicle (i.e., the vehicle that does not appear at downstream) could be promptly identified. Therefore, this study calculates the matching probability for each pair of vehicles on which the following screening method could be imposed to further reduce the false alarm rate.The overall framework of the proposed algorithm is presented in the following subsection. ### 3.2. AID Algorithm Based on VRI System The detailed implementation of the VRI-based incident detection system is summarized in the following flowchart (Figure3). First, the system will initialize the timestamp, t, and check whether a vehicle is detected at upstream and/or downstream station. If a vehicle is detected at upstream detector, the expected arrival time window of this vehicle at downstream station will be estimated based on the historical data. The record of the detected vehicle at upstream will be stored in the database as unmatched upstream vehicle. On the other hand, if a vehicle is captured at the downstream station, the system will perform incident-detection-oriented VRI subsystem to check whether this detected vehicle match with any of the unmatched upstream vehicle. The time window constraint is utilized to identify the potential matches for this vehicle detected at downstream station. Once the match is found, the matched vehicle data will be removed from the list of the unmatched upstream vehicles.Figure 3 Overall framework of AID system.After the previous two steps for handling the detected vehicles at upstream and downstream stations, the system will proceed to determine whether there is an incident occurs on the monitored segment. For incident detection, the system will screen through the list of unmatched vehicles. 
If the current time (t) is out of the expected arrival time window (i.e., greater than the upper bound of the arrival time interval) of the unmatched vehicle, an incident alarm will be issued. If not, t will be set to t+1 and the system will move forward to the next time step. It could be easily observed that the performance of the incident detection system is heavily dependent on two critical components, that is, flexible time window constraint and incident-detection-oriented VRI system.For the aforementioned framework, the following three comments should be taken into account.(i) First, the detection error is not considered in this study. In other words, it is assumed that all the vehicles cross the video cameras will be detected. This is achievable under free flow condition, as there is no occlusion between the vehicles and, consequently, VIPs perform generally well and are able to detect most of the individual vehicles.(ii) Second, under free flow condition, the traveling behavior of the individual vehicle is more predictable. This phenomenon enables the estimation of the flexible arrival time window for each individual vehicle based on the current spot speed and the historical data. It is expected that the accurate estimation of the arrival time window could potentially lead to an improved matching accuracy of the VRI method, and hence reduce the incident detection time.(iii) Third, it should be noted that the proposed VRI cannot guarantee an accurate matching because of the nonuniqueness of the vehicle signatures. Instead, the proposed VRI scheme in this paper can only provide the matching probability between the downstream and upstream vehicles. Therefore, some of the mismatches resulted from the matching probability could potentially lead to false alarms. To handle this, a ratio method is introduced to screen out those mismatches for reducing the false alarms. ## 3.1. Vehicle Count Approach The basic operation of the vehicle count approach is illustrated in Figure2. When a vehicle Ui arrives at upstream station at time tiU, the expected arrival time window [tiU+Lbi,tiU+Ubi] of this vehicle at downstream station is estimated, where Lbi and Ubi, respectively, represent the lower and upper bounds of the vehicle’s travel time. If another vehicle Uj is detected at upstream station, the corresponding arrival time window [tjU+Lbj,tjU+Ubj] can also be obtained. Unsurprisingly, there may be overlap between these two time windows, and both of these two vehicles are likely to arrive at downstream during time interval [tjU+Lbj,tiU+Ubi]. The incident would then be detected by comparing the collected vehicle count data to the expected number of vehicles in the time interval. In the case that vehicle Ui is missing, if vehicle Uj arrives at downstream during time interval [tjU+Lbj,tiU+Ubi], then the incident alarm will not be triggered until time tjU+Ubj, which is clearly later than the upper bound of the arrival time of vehicle Ui (i.e., tiU+Ubi). Because of the overlapping between the time windows, the vehicle count approach, which is solely based on comparing the vehicle counts data, cannot promptly detect the incident (i.e., delay in incident detection). 
In general, the incident detection time would significantly increase with respect to the increase in size of vehicle platoon at the upstream detector, which increases the number of overlapping in arrival time intervals at the downstream detector.Figure 2 Illustrative example of vehicle count approach.To reduce the detection time, this research proposes a novel incident detection algorithm by incorporating the vision-based VRI system. As shown in Figure2, vehicle Ui and Uj are detected and their detailed feature data (e.g., color, type and length) are also extracted. Once a vehicle is detected at downstream site, the proposed VRI system is performed to find a matched upstream vehicle based on the vehicle feature data. In the case that vehicle Ui is missing, if the downstream vehicle could be matched to the vehicle Uj based on the vehicle feature, an incident alarm would be triggered at time tiU+Ubi, as vehicle Ui is not reidentified during time window [tiU+Lbi,tiU+Ubi]. As shown by this “toy” example, the additional VRI component could potentially reduce the incident detection time to some extent. However, it is also observed that the concept of VRI is not readily transferable to the field of incident detection (mismatches of VRI may trigger false alarms) and several modifications should be made regarding the vehicle matching process.(i) First, instead of finding the matching result for upstream vehicle, the incident-detection-oriented VRI attempts to match the vehicles at downstream site such that the proposed AID algorithm can be implemented in real-time.(ii) Second, once a vehicle passes the downstream station, the incident-detection-oriented VRI should be capable of making matching decision immediately such that the missing vehicle (i.e., the vehicle that does not appear at downstream) could be promptly identified. Therefore, this study calculates the matching probability for each pair of vehicles on which the following screening method could be imposed to further reduce the false alarm rate.The overall framework of the proposed algorithm is presented in the following subsection. ## 3.2. AID Algorithm Based on VRI System The detailed implementation of the VRI-based incident detection system is summarized in the following flowchart (Figure3). First, the system will initialize the timestamp, t, and check whether a vehicle is detected at upstream and/or downstream station. If a vehicle is detected at upstream detector, the expected arrival time window of this vehicle at downstream station will be estimated based on the historical data. The record of the detected vehicle at upstream will be stored in the database as unmatched upstream vehicle. On the other hand, if a vehicle is captured at the downstream station, the system will perform incident-detection-oriented VRI subsystem to check whether this detected vehicle match with any of the unmatched upstream vehicle. The time window constraint is utilized to identify the potential matches for this vehicle detected at downstream station. Once the match is found, the matched vehicle data will be removed from the list of the unmatched upstream vehicles.Figure 3 Overall framework of AID system.After the previous two steps for handling the detected vehicles at upstream and downstream stations, the system will proceed to determine whether there is an incident occurs on the monitored segment. For incident detection, the system will screen through the list of unmatched vehicles. 
## 4. Flexible Time Window Estimation

Under free flow conditions, each individual vehicle maintains a relatively stable speed (i.e., low variance in travel time). In this case, the arrival time of a vehicle at the downstream station can be estimated from its spot speed and historical data. Let $U_i$ represent an upstream vehicle detected at time $t_i^U$, with associated upstream spot speed $v_i^U$. The expected arrival time $Arr$ of vehicle $U_i$ is given by

$$Arr = t_i^U + \frac{l}{0.5\left(v_i^U + v_i^D\right)}, \tag{6}$$

where $l$ is the distance between the upstream and downstream detectors and $v_i^D$ is the vehicle speed at the downstream detector estimated from the historical speed database. To account for the error in estimating the downstream spot speed, the upper and lower bounds of $v_i^D$ are given by

$$v_{ub}^D = \sigma_{ub}\, V_{hist}^D(t')\, \frac{v_i^U}{V^U}, \qquad v_{lb}^D = \sigma_{lb}\, V_{hist}^D(t')\, \frac{v_i^U}{V^U}, \tag{7}$$

where $v_{ub}^D$ and $v_{lb}^D$ are, respectively, the upper and lower bounds of the vehicle's speed at the downstream detector; $V^U$ is the current average speed at the upstream detector; $\sigma_{ub} \geq 1$ and $\sigma_{lb} \leq 1$ are the associated upper and lower bound factors; and $V_{hist}^D(t')$ is the historical average speed at the downstream detector at time $t'$. The time $t'$ is chosen to match the arrival time at the downstream detector, estimated using a linear speed profile over the modeled section. The estimation of downstream spot speeds can thus be viewed as a prediction-correction process. First, the historical average speed $V_{hist}^D(t')$ is adopted to predict the speed of the vehicle at the downstream site. Then, this prediction is corrected by the factor $v_i^U / V^U$ to better represent the current traffic condition. Finally, the upper and lower bound factors ($\sigma_{ub}$ and $\sigma_{lb}$) are applied to obtain the upper and lower bounds of the downstream spot speed. With the estimated downstream speeds, the corresponding upper and lower bounds of the travel time of vehicle $U_i$ can be calculated as follows:

$$Ub_i = \frac{l}{0.5\left(v_i^U + v_{lb}^D\right)}, \qquad Lb_i = \frac{l}{0.5\left(v_i^U + v_{ub}^D\right)}. \tag{8}$$

It should be noted, however, that the proposed incident detection system is not confined to the above method for estimating the time window; any other estimation method is equally applicable to the proposed AID algorithm. With the estimated time windows, vehicles on the monitored freeway section can be "partially" tracked and reidentified in a timely and accurate manner.
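As a concrete illustration of (6)–(8), the short sketch below computes one vehicle's arrival time window. It is a minimal example under assumed parameter values: the bound factors, the historical speed lookup `hist_speed`, and the simple choice of $t'$ are placeholders, not calibrated values from the paper.

```python
# Sketch of the flexible arrival time window of (6)-(8).
# hist_speed, the bound factors, and the t' choice are illustrative.

def arrival_window(t_u, v_u, V_u, l, hist_speed, sigma_ub=1.2, sigma_lb=0.8):
    """t_u: upstream detection time [s]; v_u: upstream spot speed [m/s];
    V_u: current average upstream speed [m/s]; l: detector spacing [m];
    hist_speed: callable t -> historical downstream average speed [m/s]."""
    # Pick t' from a crude predicted arrival time (stand-in for the paper's
    # linear speed profile over the modeled section).
    v_d_hist = hist_speed(t_u + l / v_u)
    # Prediction-correction with bound factors, as in (7).
    v_ub = sigma_ub * v_d_hist * (v_u / V_u)
    v_lb = sigma_lb * v_d_hist * (v_u / V_u)
    # Travel time bounds from (8); the slower bound speed gives the upper bound.
    ub = l / (0.5 * (v_u + v_lb))
    lb = l / (0.5 * (v_u + v_ub))
    return t_u + lb, t_u + ub

# Example: 3.6 km section, 20 m/s spot speed, flat historical profile of 22 m/s.
print(arrival_window(0.0, 20.0, 21.0, 3600.0, lambda t: 22.0))
```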
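Continuing the illustrative `UpstreamRecord` type from the earlier sketch (here assumed to carry an added `signature` field), the following code assembles the search space of (9) and, for contrast, the naive minimum-distance match that the paper argues against. The distance functions merely stand in for (3)–(5), whose exact forms are defined earlier in the paper.

```python
# Search space construction per (9), plus the naive distance-based match that
# the statistical method replaces. Distance functions are placeholders
# standing in for the paper's (3)-(5).
import numpy as np

def search_space(t_k_down, upstream_records):
    """Candidates whose arrival window [t_u + Lb, t_u + Ub] covers t_k_down."""
    return [r for r in upstream_records
            if r.t_arrival + r.lb <= t_k_down <= r.t_arrival + r.ub]

def feature_distances(sig_up, sig_down):
    """Placeholder color/type/length distances between two signatures."""
    d_color = np.linalg.norm(sig_up["color"] - sig_down["color"])
    d_type = float(sig_up["type"] != sig_down["type"])
    d_length = abs(sig_up["length"] - sig_down["length"])
    return d_color, d_type, d_length

def naive_match(sig_down, candidates):
    """Minimum summed feature distance -- unreliable with noisy signatures."""
    return min(candidates,
               key=lambda r: sum(feature_distances(r.signature, sig_down)))
```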
### 5.2. Calculation of Matching Probability

The matching probability, also referred to as the posterior probability, plays a fundamental role in the proposed VRI system. By applying Bayes' rule, we have

$$P(\psi(k)=i \mid d_{color}, d_{type}, d_{length}) = \frac{p(d_{color}, d_{type}, d_{length} \mid \psi(k)=i)\, P(\psi(k)=i)}{p(d_{color}, d_{type}, d_{length})}, \tag{11}$$

where $p(d_{color}, d_{type}, d_{length} \mid \psi(k)=i)$ is the likelihood function and $P(\psi(k)=i)$ is the prior knowledge of the assignment function. To obtain the explicit matching probability, the denominator in (11) can be further expanded as

$$p(d_{color}, d_{type}, d_{length}) = p(d_{color}, d_{type}, d_{length} \mid \psi(k)=i)\, P(\psi(k)=i) + p(d_{color}, d_{type}, d_{length} \mid \psi(k) \neq i)\, P(\psi(k) \neq i). \tag{12}$$

From (11) and (12), it is clear that the calculation of the matching probability depends on deriving the likelihood function and the prior probability. In this study, the prior probability is set to $P(\psi(k)=i) = 0.5$, which means that the matching is based solely on the comparison of the vehicle feature data. The calculation of the likelihood function is completed in two steps; a sketch of both steps is given at the end of this subsection. (i) First, individual statistical models for the three feature distances are constructed and the corresponding likelihood functions are obtained (i.e., $p(d_{color} \mid \psi(k))$, $p(d_{type} \mid \psi(k))$, and $p(d_{length} \mid \psi(k))$). (ii) Second, a data fusion rule is employed to provide the overall likelihood function, that is, the term $p(d_{color}, d_{type}, d_{length} \mid \psi(k))$ in (11) and (12).

#### 5.2.1. Statistical Modeling of Feature Distance

Without loss of generality, only the probabilistic modeling of the color feature distance is described. In the framework of statistical modeling, the distance measure is treated as a random variable. Thus, for a pair of color feature vectors $(C_i^U, C_k^D)$, the distance $d_{color}(i,k)$ follows a certain statistical distribution. The conditional probability (i.e., likelihood function) of $d_{color}(i,k)$ is given by

$$p(d_{color}(i,k) \mid \psi(k)) = \begin{cases} p_1(d_{color}(i,k)), & \text{if } \psi(k) = i, \\ p_2(d_{color}(i,k)), & \text{if } \psi(k) \neq i, \end{cases} \tag{13}$$

where $p_1$ denotes the probability density function (pdf) of the distance $d_{color}(i,k)$ when the color feature vectors $C_i^U$ and $C_k^D$ belong to the same vehicle, while $p_2$ is the pdf of the distance $d_{color}(i,k)$ between different vehicles. A historical training dataset containing a number of pairs of correctly matched vehicles is built up for estimating the pdfs $p_1$ and $p_2$. A finite Gaussian mixture model is used to approximate the pdfs, and the well-known Expectation-Maximization (EM) algorithm is applied to solve the associated parameter estimation problem. The likelihood functions for the type and length distances are obtained in the same manner.

#### 5.2.2. Data Fusion Rule

In this study, the logarithmic opinion pool (LOP) approach is employed to fuse the individual likelihood functions. The LOP is evaluated as a weighted product of the probabilities:

$$p(d_{color}, d_{type}, d_{length} \mid \psi(k)) = \frac{1}{Z_{LOP}}\, p(d_{color} \mid \psi(k))^{\alpha}\, p(d_{type} \mid \psi(k))^{\beta}\, p(d_{length} \mid \psi(k))^{\gamma}, \quad \alpha + \beta + \gamma = 1, \tag{14}$$

where the fusion weights $\alpha$, $\beta$, and $\gamma$ indicate the degree of contribution of each likelihood function and can be calibrated from the training dataset. By substituting (12), (13), and (14) into (11), the desired matching probability for each pair of vehicles $(U_i, D_k)$ can be obtained. For simplicity, let $P_{ik}$ denote the matching probability between vehicles $U_i$ and $D_k$. We thus obtain a set of probabilistic measures $\{P_{ik} \mid i = 1, 2, \ldots, N\}$ representing the likelihood of a correct match between $D_k$ and the vehicles in the search space $S(k)$. The final matching decision based on these probabilities is the subject of the following subsection.
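As a sketch of the two-step likelihood construction and the Bayesian fusion in (11)–(14), the following code fits the same-vehicle and different-vehicle distance densities with Gaussian mixtures (via scikit-learn, one plausible way to run the EM step) and combines them under the LOP rule. The mixture sizes and fusion weights are illustrative assumptions, not calibrated values from the paper, and the LOP normalizer is assumed common to both hypotheses so that it cancels.

```python
# Sketch of (11)-(14): GMM likelihoods fitted by EM, fused by a logarithmic
# opinion pool, then turned into a posterior with a 0.5 prior.
# Mixture sizes and fusion weights are illustrative, not calibrated values.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_pdfs(same_dists, diff_dists, n_components=2):
    """Fit p1 (same vehicle) and p2 (different vehicles) for one feature."""
    p1 = GaussianMixture(n_components).fit(np.asarray(same_dists).reshape(-1, 1))
    p2 = GaussianMixture(n_components).fit(np.asarray(diff_dists).reshape(-1, 1))
    return p1, p2

def matching_probability(dists, models, weights=(0.4, 0.3, 0.3), prior=0.5):
    """dists: (d_color, d_type, d_length); models: per-feature (p1, p2) pairs."""
    log_like_same, log_like_diff = 0.0, 0.0
    for d, (p1, p2), w in zip(dists, models, weights):
        x = np.array([[d]])
        log_like_same += w * p1.score_samples(x)[0]   # w * log p(d | same)
        log_like_diff += w * p2.score_samples(x)[0]   # w * log p(d | diff)
    # Bayes' rule (11) with denominator (12); Z_LOP is taken as common to
    # both hypotheses and cancels in the ratio.
    num = prior * np.exp(log_like_same)
    den = num + (1.0 - prior) * np.exp(log_like_diff)
    return num / den
```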
### 5.3. Ratio Method for Final Matching Decision

An intuitive decision-making process (i.e., the greedy method) is to sort the matches by their matching probabilities $\{P_{ik} \mid i = 1, 2, \ldots, N\}$ and choose the vehicle $U_i$ with the maximum matching likelihood; that is,

$$\psi(k) = i, \quad \text{if } P_{jk} \leq P_{ik} \ \forall j \in \{1, 2, \ldots, N\}. \tag{15}$$

However, since the proposed VRI system is used for incident detection, the final matching decision has a significant impact on the performance of the AID system, and the greedy method (15) can trigger false alarms. As shown in Figure 4, the downstream vehicle $D_k$ arrives at 10:39:39 a.m. $U_j$ and $U_i$ are, respectively, the two candidate vehicles with the largest and second largest matching probabilities with respect to $D_k$ (i.e., $P_{jk} = 0.9295$ and $P_{ik} = 0.8392$). Although vehicle $D_k$ actually matches vehicle $U_i$ (based on manual matching), the greedy method yields the matching result $\psi(k) = j$, which could lead to a false alarm at time $t_i^U + Ub_i$.

Figure 4: Illustrative example of a false alarm.

To reduce such false alarms, a ratio method is introduced for the final matching decision. Let $\{P_i \mid i = 1, 2, \ldots, N\}$ denote the set of matching probabilities in descending order. The ratio method proposed in this study involves two major steps. First, by imposing a threshold $\tau$ on the ratio between neighboring probabilities in the ordered set $\{P_i \mid i = 1, 2, \ldots, N\}$, one can screen the search space and rule out unlikely matches. The screening process is described in Procedure 1.

Procedure 1: Algorithmic framework for the screening process.
Input: A finite set $\{P_i \mid i = 1, 2, \ldots, N\}$ of matching probabilities in descending order.
Output: The set of unlikely matches for downstream vehicle $D_k$.
(1) $i \leftarrow 1$;
(2) while $i \leq N-1 \wedge P_i / P_{i+1} \leq \tau$ do
(3)   $i \leftarrow i + 1$;
(4) return $\{i+1, i+2, \ldots, N\}$;

The underlying implication of Procedure 1 is that if the ratio $P_i / P_{i+1}$ is sufficiently large, it can be concluded that vehicles $\{i+1, i+2, \ldots, N\}$ are unlikely matches owing to their relatively smaller matching probabilities. Otherwise, if $P_i / P_{i+1} \leq \tau$, vehicles $i$ and $i+1$ cannot be distinguished from each other and a matching decision cannot be made at this stage.

Upon completion of the screening process, unlikely matches are ruled out and the search space is reduced. The second step is to make a matching decision based on the remaining search space $S_R(k)$. Let $S_R(k) = \{U_m \mid m = 1, 2, \ldots, i\}$ (clearly $i \leq N$); the matching result is then given by

$$\psi(k) = m^*, \quad \text{if } t_l^U + Ub_l \geq t_{m^*}^U + Ub_{m^*}, \ \forall l \in \{1, 2, \ldots, i\}. \tag{16}$$

That is, vehicle $D_k$ is matched to the vehicle in $S_R(k)$ with the smallest upper bound of the predicted arrival time window. The rationale is that when a decision cannot be made from the matching probabilities (because the probabilities of the vehicles in $S_R(k)$ are not significantly different from each other), matching $D_k$ to the upstream vehicle with the smallest upper bound avoids potential false alarms. In fact, this second step can be viewed as a standard vehicle count approach in which only the count data is utilized.

To sum up, the matching decision-making process of the incident-detection-oriented VRI is a hybrid of vehicle feature matching and the classic vehicle count approach.
The overall procedure for the matching decision-making is given by Procedure 2; a Python sketch of this hybrid rule follows below.

Procedure 2: Algorithmic framework for final matching decision-making.
Input: A set $\{P_i \mid i = 1, 2, \ldots, N\}$ of matching probabilities and the set $\{t_i^U + Ub_i \mid i = 1, 2, \ldots, N\}$ of upper bounds of the arrival time intervals.
Output: The final matching decision for vehicle $D_k$.
Screening Method:
(1) $i \leftarrow 1$;
(2) while $i \leq N-1 \wedge P_i / P_{i+1} \leq \tau$ do
(3)   $i \leftarrow i + 1$;
(4) $S_R(k) \leftarrow \{1, 2, \ldots, i\}$;
Vehicle Count Approach:
(5) $m^* \leftarrow \arg\min_l \{t_l^U + Ub_l \mid l \in S_R(k)\}$;
(6) return $\psi(k) = m^*$;
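The following is a direct transcription of Procedure 2 into Python, offered as a sketch; candidates are assumed to arrive as (probability, window upper bound, record) tuples, an interface invented here for illustration.

```python
# Transcription of Procedure 2: ratio-based screening followed by the
# vehicle count decision. The candidate tuple layout is illustrative.

def final_match(candidates, tau):
    """candidates: list of (P, t_u + Ub, record) tuples, in any order;
    tau: ratio threshold. Returns the record chosen as the match."""
    # Sort by matching probability in descending order, as Procedure 2 assumes.
    ranked = sorted(candidates, key=lambda c: c[0], reverse=True)
    # Screening: grow the ambiguous prefix while neighboring candidates are
    # indistinguishable, i.e., P_i / P_{i+1} <= tau.
    i = 0
    while i < len(ranked) - 1 and ranked[i][0] / ranked[i + 1][0] <= tau:
        i += 1
    reduced = ranked[: i + 1]          # S_R(k)
    # Vehicle count step: the earliest-expiring arrival window wins.
    return min(reduced, key=lambda c: c[1])[2]
```

Note how the threshold acts as a switch: with $\tau = 1$ the reduced set collapses to the single most probable candidate (pure greedy matching), while $\tau \to \infty$ keeps every candidate and the rule degenerates to the vehicle count approach, which is exactly the trade-off examined in Section 6.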
## 6. Test Results

In this section, the performance of the proposed AID algorithm is evaluated against the classical vehicle count approach in terms of mean time-to-detect and false alarm rate (i.e., false alarms per hour). Since the performance of the proposed AID system relies on its two critical components (i.e., flexible time window estimation and the incident-detection-oriented VRI), different time window sizes and final-matching thresholds are tested. The dataset described in Section 2 is used to perform the simulated tests for the algorithm evaluation, and real-world case studies are also carried out.

### 6.1. Simulated Tests

For calibrating and testing the proposed AID system, the 3,682 pairs of vehicle matching results from the collected dataset are divided into two parts. First, a dataset of 800 pairs of correctly matched vehicles is used for model calibration and training. The upper and lower bound factors for time window estimation (i.e., $\sigma_{ub}$ and $\sigma_{lb}$) are calibrated using the travel time data of these 800 vehicles and the historical average speeds on Thursdays (i.e., 16/2/2012, 23/2/2012, 1/3/2012, and 8/3/2012), the same day of the week as the test day. In addition, the parameters of the statistical models (i.e., $p_1$ and $p_2$) are estimated using the feature data extracted from the captured images of these 800 pairs of vehicles. Second, the remaining 2,828 pairs of vehicles detected at both the upstream and downstream detectors are fed into the calibrated AID system for model evaluation. To mimic an incident between the upstream and downstream detectors, the downstream record of a vehicle is intentionally removed, simulating the situation in which the vehicle has passed the upstream detector but not the downstream one. In the testing of the proposed AID system, the AID algorithm is run 2,828 times, with each run removing the downstream record of one of the 2,828 vehicles, to determine the mean detection time.
Specifically, the incident detection time is defined as

$$T_D = t_{incident} - t^U. \tag{17}$$

Since the exact time at which the incident happened is unknown, the incident detection time is defined as the difference between the time when an alarm is issued (i.e., $t_{incident}$) and the arrival time of the incident vehicle at the upstream station (i.e., $t^U$). With the threshold value set to 2 (i.e., $\tau = 2$), the mean detection time of the proposed AID algorithm is 203.2 seconds, whereas that of the classical vehicle count approach is 644.1 seconds. As expected, the mean detection time is reduced substantially by incorporating the modified VRI system. Figure 5 shows the performance of the VRI-based incident detection algorithm for different threshold values adopted in the final matching decision. The false alarm rate decreases as the threshold value increases. When the threshold value equals one, the VRI system always matches the downstream vehicle to the upstream vehicle with the largest matching probability, which leads to a large number of false alarms (see Section 5.3). As the threshold value increases, the modified VRI system relies more on the traditional vehicle count approach, and the false alarm rate decreases. On the other hand, as the system comes to rely more on the vehicle count approach (e.g., $\tau \to \infty$), the mean detection time also increases (see Section 3.1). In short, for the proposed VRI system, lower false alarm rates come at the expense of longer incident detection times, so a balance should be struck between rapid incident detection and a low false alarm rate.

Figure 5: Mean detection time and false alarm rate.

The estimation of the arrival time window also has a significant impact on the performance of the proposed AID algorithm: a smaller time window results in faster incident detection. To test the performance of the proposed AID algorithm under different time window sizes, a fixed-size time window is assigned to each individual vehicle. Figure 6 shows the mean detection time of the algorithm for different time window sizes. The mean detection time of the vehicle count approach increases dramatically as the time window grows, and the vehicle count approach becomes incapable of detecting the missing vehicle once the time window size exceeds 50 seconds. In summary, when a large arrival time window is applied, the proposed AID algorithm clearly outperforms the vehicle count approach.

Figure 6: Comparison between the proposed AID algorithm and the vehicle count approach.

### 6.2. Real-World Case Studies

Apart from the above-mentioned simulated tests, two real-world case studies are carried out. Based on the record from the freeway authority, the first incident was reported on June 13, 2012, at 16:03. The reported incident location is at 20 + 600 westbound, in the section between cameras 7A/8A and 9A/10A (see Figure 1). Based on this information, the research team screened the captured videos to identify the incident vehicle. It was found that on June 13, 2012, the incident vehicle passed the upstream detector (7A/8A) at 15:55 (Figure 7(a)) and had an incident before reaching the downstream detector (9A/10A).
Four minutes later, a tow truck, probably called by the driver of the incident vehicle, passed the upstream detector (Figure 7(b)) and towed the incident vehicle past the downstream detector at 16:09 (Figure 7(c)).

Figure 7: Real-world case study #1: (a) incident vehicle passes the upstream detector; (b) tow truck passes the upstream detector; (c) incident vehicle and tow truck pass the downstream detector.

Based on the above information about the incident vehicle, a 35-minute video record (from 15:33 to 16:08 on June 13, 2012) from locations 8A and 10A is extracted and input into the proposed AID system under free flow conditions. In this case, apart from the incident vehicle, 739 vehicles are detected at both stations during the 35-minute record. With the threshold on the ratio of matching probabilities set to 8.5, the incident detection time and the false alarm rate for this case study are 15:58:22 and 3.42 false alarms per hour, respectively. Compared with the classic vehicle count approach, which would trigger an incident alarm at 16:01:28, the proposed AID system performs better in terms of incident detection time.

The incident vehicle of the second real-world case study is shown in Figure 8. This incident was reported on June 17, 2012, at 10:31 a.m., at location 19 + 300A westbound (between 7A/8A and 9A/10A). With the threshold on the ratio of matching probabilities again set to 8.5, the incident detection time and the false alarm rate for this case study are 10:28:22 and 2 false alarms per hour, respectively. Compared with the classic vehicle count approach, which would trigger an incident alarm at 10:33:50, the proposed AID system again performs better in terms of incident detection time.

Figure 8: Real-world case study #2: (a) incident vehicle passes the upstream detector; (b) incident vehicle and tow truck pass the downstream detector.

On the basis of these two real-world case studies, we observe that the incident detection time of the proposed AID algorithm depends largely on the actual information associated with the incident vehicle (e.g., the distinctiveness of its features and the size of the vehicle platoon). In real-world case study #2, the incident vehicle has a distinctive color distribution (see Figure 8(a)); consequently, its disappearance can be identified earlier than with the vehicle count approach. The size of the vehicle platoon may also have a significant impact on the performance of the AID algorithm: the larger the platoon, the more likely the arrival time windows of the vehicles overlap at the downstream site, which significantly increases the incident detection time.
## 7. Conclusion and Future Works
This paper investigates the feasibility of utilizing a vehicle reidentification system for incident detection on a freeway section under free flow conditions. A modified vision-based VRI system is proposed to partially track individual vehicles and identify a vehicle "missing" due to an incident. A flexible arrival time window is estimated for each individual vehicle at the upstream station to improve the matching accuracy. To reduce potential false alarms, a screening method based on the ratios of the matching probabilities and on the arrival time windows is introduced to rule out potential mismatches.

The proposed AID algorithm is tested on a 3.6 km segment of a closed freeway in Bangkok, Thailand. The test results show that the detection time of the proposed AID algorithm is substantially shorter than that of the traditional vehicle count approach. There is also a trade-off between the false alarm rate and the detection time, so a balance should be struck between rapid incident detection and a low false alarm rate by adjusting the threshold value $\tau$. As demonstrated in Procedure 2, the proposed AID algorithm is a hybrid of the vehicle feature comparison method and the classical vehicle count approach, and the threshold value $\tau$ can be viewed as a switch between these two methods; its selection is therefore of great importance. In this study, the threshold value is adjusted manually based on the reliability of the VRI system (the performance of the VRI system may vary slightly across time periods owing to changes in the outdoor environment, and the threshold value $\tau$ should be adjusted accordingly). Other automatic thresholding processes [29] will be investigated in future work.

Note that the proposed AID algorithm is specifically devised to detect incidents on freeway systems under free flow conditions. As a natural and necessary extension, the ability to detect incidents under dynamic traffic conditions is required for the further development of the incident detection system. The key component would be an additional VRI-based detection algorithm for congested situations. Theoretically, this can be achieved by analyzing the temporal changes in the travel time information obtained from the VRI system under dynamic traffic conditions [25].

---
*Source: 102380-2015-07-05.xml*
2015
# Evolution Characteristics of Advanced Nonferrous Metal Industry Patent Cooperation Network in China from the Perspective of Multilayer Network

**Authors:** Qingxiao Wang; Wenqian Zhu
**Journal:** Mathematical Problems in Engineering (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1023816

---

## Abstract

The advanced nonferrous metal industry is a national strategic emerging industry, and its innovation capability is crucial for the transformation of downstream industries such as aerospace, rail transit, electronic information, and automobiles toward the high end of the industrial value chain. Research on the evolution characteristics of its cooperative network supports the study of mechanisms for improving the industry's innovation capability. Based on cooperative invention patent application data from 2002–2020, and using social network analysis and Gephi 0.9.2 visualization to construct a patent cooperation network and a knowledge network, this paper analyzes the overall characteristics and evolution of the patent cooperation network in the advanced nonferrous metal industry. Three findings emerge. First, the structure of the innovation network in China's advanced nonferrous metal industry is becoming increasingly complex: the scale of the cooperation network and the knowledge network is growing, and the small-world property is evident. Second, some innovation subjects, as key nodes, play the role of "bridge" and grasp the general direction of technology. The composition of the key nodes in the network has changed, and the industry-university-research cooperative innovation network has gradually evolved from enterprise-led to university-led. Third, the number of technical categories has increased and the depth of technology has been enhanced in China's advanced nonferrous metal field. Hot technical fields are characterized by high intensity, wide range, and stable advancement. Based on these findings, suggestions are put forward to promote the development of the advanced nonferrous metal industry.

---

## Body

## 1. Introduction

The advanced nonferrous metal industry is one of the strategic emerging industries. Developing the advanced nonferrous metal industry not only helps to promote the adjustment of China's current economic structure and the upgrading of its industrial structure but also deeply affects the effectiveness of building an innovative country in China. In recent years, the Chinese government has given strong policy support to the advanced nonferrous metal industry. China's advanced nonferrous metal industry has developed rapidly and made great progress, but there is still a large gap, compared with international giants, in the industry's core technology fields.

Patents are an important means of transmitting and exchanging technological innovations and achievements, and an important index for judging the degree of technological innovation. Patents have an important impact on the development of, and technological breakthroughs in, related industries. Patent innovation is a driving force of technological development, which is mainly reflected in optimizing patent layout, promoting technological innovation, and realizing industrial applications. The development of the modern manufacturing industry cannot be separated from cooperative innovation, and the need for cooperative innovation is especially prominent in the advanced nonferrous metal industry.
Patent cooperation is an important form of cooperative innovation and can provide important innovation impetus for the development of enterprises. By studying the patent innovation network, we can understand the development trend of technological innovation, the layout characteristics, and the evolution of patents in China's advanced nonferrous metal industry, so that the innovation and development status of advanced nonferrous metals can be scientifically analyzed and reasonable suggestions put forward.

In recent years, against the background of emerging and cross-integrated new technologies, more and more nonferrous metal manufacturing enterprises have cooperated with universities and research institutes in R & D and have jointly applied for patents. They spontaneously form a patent cooperation network, which enables resource sharing and complementary advantages. Such a patent cooperation network can greatly reduce innovation cost and risk and improve the technological innovation capability of an industry. At present, however, the exploration and utilization of patent cooperation networks in China's nonferrous metal manufacturing industry are far from sufficient, and the innovation and aggregation effects of patent cooperation networks have not been fully exerted. Thus, exploring, utilizing, and optimizing patent cooperation networks to improve industrial technological innovation capability has become an important way for China's advanced nonferrous metal industry to narrow the technological gap with international giants and break the technological blockade. With the development of China's advanced nonferrous metal industry, its innovation network, including the knowledge network and the cooperation network, has changed significantly and become more complex. So, what are the evolution characteristics of the patent cooperation networks and knowledge networks in China's advanced nonferrous metal industry? In the evolution of the cooperative network, which enterprises and research institutions are the key nodes that grasp the general direction of technology, and how does their composition change? What is the distribution of technology fields and the evolution of hot technology fields in China's advanced nonferrous metal industry? These are problems worth studying. The purpose of this study is to reveal the evolution of the structural characteristics of the innovation network of China's advanced nonferrous metal industry and to provide suggestions for optimizing the innovation network and strengthening technological innovation.

## 2. Literature Review

### 2.1. Research on Multilayer Innovation Network

Wang et al. pointed out that enterprise innovation is doubly embedded in the knowledge network composed of knowledge elements and the social network formed by the cooperative relationships of R & D personnel, and that these two networks are decoupled [1]. The process of combining knowledge elements is the production process of innovation behavior [2], and all knowledge elements are connected in the process of combination, forming a knowledge network [3].
Innovation is actually the interaction process between the innovation subject's knowledge base and its combination mode, and the structural characteristics of the knowledge network represent the combination potential of core knowledge fields [4], which in turn becomes an important factor affecting the innovation performance of enterprises.

From the perspective of the social attributes of innovation activities, enterprises embedded in cooperative networks can obtain favorable resources for innovation. The innovative behavior of enterprises is realized through the process of expanding and deepening specific social networks [5]. It has been found that enterprises in different positions in the network have different degrees of control over resources and information. Cooperative network structure and relationship characteristics affect cooperative innovation behavior and performance, resulting in nonequilibrium behavior of innovation subjects [6, 7] and differences in innovation performance [8]. In fact, the innovation activities of enterprises are multidimensional. The knowledge network and the cooperation network in which R & D personnel are located each have unique operational characteristics, which have different impacts on innovation activities. At present, most research on the evolution characteristics of innovation cooperation analyzes evolution characteristics or patterns from a single network level based on patent data.

### 2.2. Research on the Evolution of Innovation Network

In terms of innovation network evolution, existing scholars have mainly conducted research in three directions. The first focuses on the cooperative relationship between universities and research institutes and explores the evolution of cooperative networks at different stages through the cooperative publication of papers and works. For example, Balconi et al. [9] analyzed the role of university professors in the Italian inventor patent cooperation network; Lissoni [10] found that universities occupy a core position in the patent cooperation network and have stable cooperative relationships with other types of inventors; Li et al. [11] found that comprehensive universities, science and engineering universities, and energy-based enterprises occupy the core positions in the school-enterprise patent cooperation network. The second studies the evolution of innovation networks in strategic emerging technology industries, involving information foundations, intelligent manufacturing, biomedicine, and other industries. For example, Zhang et al. [12] for the software service industry, Cao et al. [13] for the new energy automobile industry, Li et al. [14] for the satellite and application industry, and Chen et al. [15] for the chip industry confirmed this point. These fields belong to national strategic emerging industries, and research on them helps to improve China's comprehensive innovation level; unfortunately, these studies have not covered the advanced nonferrous metal industry. The third studies the structure and evolution of regional patent cooperation networks. For example, Ejermo and Karlsson [16] argued that geographic distance is a key factor affecting patent cooperation networks and that intraregional aggregation is common in such networks.
Pan et al. [17] studied the patent cooperation networks in 31 provinces of China and found that geographical distance affects the linkage of patent cooperation networks. However, other scholars put forward different views. For example, Wilhelmsson [18] found that cities with denser populations and more diverse industrial structures tended to have lower levels of patent cooperation; Zheng et al. [19] constructed the cross-city patent cooperation network of Fujian, Xiamen, and Quanzhou, measured the order of network evolution using the concept of "entropy," and found that the cross-city cooperation network showed an obvious entropy increase during its evolution.

### 2.3. Research on the Innovation Network of the Nonferrous Metals Industry

The advanced nonferrous metal industry is a national strategic leading industry and an important part of new materials. Its innovation capability is crucial for the transformation of downstream industries such as aerospace, rail transit, electronic information, and automobiles toward the high end of the industrial value chain, so it is worth studying. Regarding the development and innovation paths of the industry, Tian [20] proposed strengthening the industry's innovation capacity building, strengthening knowledge and technology alliances, and realizing the integration of technology and management; he further proposed choosing the path of independent innovation, enhancing competitiveness by participating in international market resources, and improving the efficiency of the nonferrous metal industry. Lin et al. [21] used an exploratory single-case study method to examine the paths of both exploratory innovation and utilization innovation in the process transformation of typical research institutes in the rare metal industry, finding that the engineering center has become the key link in the transformation from exploratory innovation to utilization innovation. Regarding research on the nonferrous metal innovation network, Zhou et al. [22] drew on chaos theory to expound the chaotic characteristics of the innovation network of nonferrous metal industry clusters: an aggregation capability of the integrated units that is either too strong or too weak is not conducive to the development of a nonferrous metal cluster innovation network, so in the development of a cluster innovation network it is necessary to adjust its evolution speed through the strength of the aggregation capability. They also proposed that nonferrous metal cluster enterprises can be divided into four stages in the life cycle of the cluster innovation network: generation, growth, maturity, and decline. The above research on the advanced nonferrous metal industry provides a useful reference for this study.

The "Triple Helix" innovation theory was established by Etzkowitz and Leydesdorff in 1995 [23]. The theory borrows a concept from biology and puts forward that all three parties can be the main body of innovation and that the three parties cooperate and interact closely in innovation. At present, the "Triple Helix" theory has been widely used in academic circles to evaluate the collaborative interaction and dynamic evolution of universities, industries, and governments (including scientific research institutions) [24].
Many scholars have introduced the "Triple Helix" theory into research on industry-university-research innovation networks [25, 26]. Based on this theory, enterprises, universities, and research institutes can be regarded as three types of innovation subjects that, with the support of the government and other relevant institutions, carry out innovation cooperation activities under a clear division of functions. This paper accordingly introduces the "Triple Helix" theory into the research framework of China's advanced nonferrous metal innovation network.

To sum up, existing research on the evolution of innovation networks provides a solid foundation and useful enlightenment for this study. However, studies combining advanced nonferrous metals with innovation networks remain scarce, and in the field of advanced nonferrous metals scholars have mostly taken a static perspective; comprehensive, systematic, and dynamic research on the industry's patent status and patent cooperation network needs to be deepened. Therefore, based on data on cooperative patent applications in the advanced nonferrous metal industry, this study uses social network analysis and Gephi network visualization to analyze the current situation of the nonferrous metal industry in China. The patent cooperation network and knowledge network are constructed from a multilayer-network perspective and analyzed along the dimensions of patentee, knowledge element, and geographical distribution, revealing the structural characteristics and evolution laws of patent cooperation in the advanced nonferrous metal industry from multiple dimensions. This provides a reference for optimizing the industry's patent cooperation network, enhancing its technological innovation, and formulating an industrial development strategy.
## 3. Research Design

### 3.1. Research Methodology

This paper adopts research methods such as patent bibliometrics, social network analysis, and data visualization. Social network analysis and visualization tools such as Gephi and Python are used to construct the topological structure diagrams of patent cooperation and the heat maps of patent technology theme evolution in China's advanced nonferrous metal industry, and to analyze network structure indicators, network structure evolution, and technology topic evolution. The indicators for measuring the patent cooperation network of the advanced nonferrous metal industry mainly include the following:

(1) Network size, that is, the number of nodes in the network. The larger the value, the larger the network scale.

(2) The number of network edges, that is, the total number of connections produced by cooperation among network nodes, which reflects the network structure. The larger the value, the more complex the network structure.

(3) The average path length. Let $d_{ij}$ be the distance between nodes $i$ and $j$, that is, the number of edges in the shortest path between them; the average of the distances over all node pairs is the average path length of the network, which reflects the average distance between two nodes (Watts and Strogatz, Nature, 1998). The larger the value, the sparser the network and the lower its transmission performance and efficiency. The calculation formula is

$$PL = \frac{2}{n(n-1)} \sum_{i,j} d_{ij}. \tag{1}$$

(4) The average clustering coefficient, a measure of how densely the nodes in the network cluster, is the average of the clustering coefficients of all nodes, where $l_i$ is the actual number of edges among the nodes connected to node $i$ and $d_i$ is the degree of node $i$. The larger the value, the easier it is for adjacent nodes to establish cooperative relations. The calculation formula is

$$\bar{C} = \frac{1}{n} \sum_{i=1}^{n} \frac{2 l_i}{d_i (d_i - 1)}. \tag{2}$$

(5) Network density, the ratio of the number of actual relationships in the network to the number of theoretically possible relationships (Liu Jun, An Introduction to Social Network Analysis, 2004), where $n$ is the number of nodes and $l$ the actual number of edges. It reflects the closeness of the network relationships, and the calculation formula is

$$D = \frac{2l}{n(n-1)}. \tag{3}$$
The larger the value, the denser the network structure and the closer the relationships among network members.

(6) Network diameter, the maximum distance between any two nodes $i$ and $j$ in the network. The larger the value, the sparser the network and the lower its transmission performance and efficiency. The calculation formula is

$$L = \max_{i,j} d_{ij}. \tag{4}$$
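To make these indicator definitions concrete, the following is a minimal sketch, assuming Python with networkx as the tooling (the paper itself reports Gephi outputs), that computes indicators (1)-(6) on a toy co-application edge list. Note that average path length and diameter are only defined on connected graphs, so for a disconnected cooperation network such as the one studied here they are typically computed on the largest connected component.

```python
import networkx as nx

# Toy weighted co-application edge list: (applicant A, applicant B, joint patents).
edges = [
    ("Univ1", "FirmA", 3), ("Univ1", "FirmB", 1), ("FirmA", "FirmB", 2),
    ("Univ2", "FirmC", 1), ("InstX", "Univ1", 4),
]
G = nx.Graph()
G.add_weighted_edges_from(edges)

print("(1) network size:", G.number_of_nodes())
print("(2) network edges:", G.number_of_edges())
print("(5) density:", nx.density(G))              # D = 2l / (n(n-1))
print("(4) avg clustering:", nx.average_clustering(G))

# (3) and (6) are defined per connected component; a disconnected
# cooperation network is usually evaluated on its giant component.
giant = G.subgraph(max(nx.connected_components(G), key=len))
print("(3) avg path length:", nx.average_shortest_path_length(giant))
print("(6) diameter:", nx.diameter(giant))
```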
### 3.2. Sample Selection and Data Processing

The patent data used in this research come from the PatSnap patent database. Patent data for the advanced nonferrous metal industry were downloaded according to the Strategic Emerging Industry Classification and International Patent Classification Reference Relationship Table issued by the State Intellectual Property Office. The specific steps of data search and cleaning are as follows. First, the high-frequency keywords and IPC classification numbers of the advanced nonferrous metals industry were determined according to the industry's classification in the document Classification of Strategic Emerging Industries, and the patent search expression for the industry was determined accordingly. The document specifies that the technologies covered by the advanced nonferrous metal industry include high-precision copper pipe, rod, and wire profiles; aerospace high-strength aluminum alloy forgings; high-strength and high-conductivity copper materials; electrolytic copper foil, rolled copper foil, and electronic copper; medical titanium alloy; metal fiber porous materials; porous titanium and titanium alloys; foam copper, aluminum, and nickel; nonferrous metal fiber porous materials; etc. Second, patent data were retrieved from the PatSnap database using the search expression, and 1018491 original patent records were obtained and downloaded. Third, invention patents embody a higher knowledge level of inventors, carry more value, and are more representative than utility model and design patents, so this study selects only invention applications and granted invention patents, leaving 259579 patents. Finally, patents with two or more original patentees were screened, and patents involving natural-person applicants were excluded, yielding a total of 34116 cooperative invention patents in the advanced nonferrous metals industry as the data basis of this study. Since there is at least an 18-month lag from patent application to publication, and invention patents tend to take even longer, the data set extends only up to 2020. Because cooperative patent applications in the advanced nonferrous metal industry first appeared in 2002, the study period is 2002 to 2020.
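A minimal sketch of the screening steps above, assuming a hypothetical tabular PatSnap export with columns `patent_type`, `applicants`, `applicant_types`, and `year` (the actual export schema may differ):

```python
import pandas as pd

df = pd.read_csv("patsnap_export.csv")  # hypothetical export file

# Step 3: keep only invention applications and granted invention patents.
df = df[df["patent_type"].isin(["invention application", "granted invention"])]

# Step 4a: keep patents with two or more original applicants
# (applicants assumed to be ';'-separated in a single field).
df = df[df["applicants"].str.split(";").str.len() >= 2]

# Step 4b: exclude patents listing any natural-person applicant.
df = df[~df["applicant_types"].str.contains("natural person", case=False, na=False)]

# Publication lag motivates the 2020 cutoff; cooperation data start in 2002.
df = df[(df["year"] >= 2002) & (df["year"] <= 2020)]

print(len(df), "cooperative invention patents retained")
```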
From Figure 1, it can be seen that the number of joint patent applications for advanced nonferrous metals in China increases year by year, and the technological development of the industry is directly affected by the policies issued by China. This study therefore divides the period of joint patent applications into four development stages: 2002–2007, 2008–2011, 2012–2016, and 2017–2020.

(1) 2002–2007 is the initial period. In 2002, cooperative patent applications in the advanced nonferrous metals industry started from scratch and developed slowly until 2007. At this stage, only a few large enterprises cooperated to apply for patents, and the level of patent cooperation was low.

(2) 2008–2011 is a period of rapid growth; the "12th Five-Year Plan" marked a key period for China in building an innovative country. Building a well-off society in an all-round way and accelerating the transformation of the economic development mode put forward higher and more urgent requirements for innovation capacity building. At this stage, the number of cooperative patents increased year by year at an obvious rate.

(3) 2012–2016 is a period of high-quality development. The "13th Five-Year Plan" for the development of the nonferrous metals industry, promulgated by the Ministry of Industry and Information Technology, outlined eight major tasks: implementing an innovation drive, accelerating industrial restructuring, vigorously developing high-end materials, promoting green sustainable development, improving resource supply capacity, promoting the deep integration of informatization and industrialization, actively expanding application fields, and deepening international cooperation. After the Fifth Plenary Session of the 18th CPC Central Committee, the nonferrous metals industry entered a new stage, shifting from rapid development to high-quality development, and the number of patents increased overall.

(4) 2017–2020 is a period of stable growth, or a growth bottleneck period. Although cooperative patent applications remained intensive at this stage, the growth rate slowed, far from the rapid development of the previous stage. This may be related to technological bottlenecks and the economic slowdown during the epidemic.

Figure 1: Trend chart of the number of joint patent applications for advanced nonferrous metals in China.

According to the above analysis, this paper uses the cooperative application patent data to explore the evolution characteristics of cooperative patents in the advanced nonferrous metal industry from three aspects: the type of original patentee, the region where the original patentee is located, and the technical subject areas involved in cooperative patent applications. The aim is to help enterprises in the industry maintain a stable level of technological innovation under the normalization of the epidemic situation, thus promoting their core competitiveness.
## 4. Analysis of the Evolution Characteristics of Cooperation Networks in the Advanced Nonferrous Metal Industry

### 4.1. Analysis of the Characteristics of the Overall Industrial Cooperation Network

According to the sample patent data, the patent cooperation network of China's nonferrous metal industry from 2002 to 2020 was constructed with Gephi 0.9.2, as shown in Figure 2. The cooperative patentees were divided by year and imported into Gephi as node and edge relationships, respectively. In the diagram, nodes represent the cooperating subjects, distinguished by color into four types: enterprises, universities, research institutes, and natural persons. The size of a node represents the number of links between that node and other nodes, that is, its cooperation strength; a connection between nodes represents a cooperative relationship, and its thickness represents the cooperation frequency between the linked nodes, with thicker connections indicating more frequent cooperation during the research period. Generally, the larger a node, the larger its degree and the wider its cooperation range; the thicker a connecting edge, the greater the cooperation frequency between adjacent nodes and the more stable their cooperative relationship. As Figure 2 shows, the patent cooperation network of China's advanced nonferrous metal industry is disconnected as a whole; some nodes occupy core positions, forming one larger subnet and several small subnets.

Figure 2: The invention patent cooperation network of the advanced nonferrous metals industry in China from 2002 to 2020.
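The node and edge relationships imported into Gephi can be reconstructed as follows: each patent with k co-applicants contributes the k(k-1)/2 unordered applicant pairs, and repeated pairs accumulate into edge weights, the cooperation frequency that Figure 2 renders as edge thickness. The sketch below is an assumed reconstruction of this import step, not the authors' exact script, using hypothetical applicant names and a ';'-separated applicant field; it writes a Source/Target/Weight CSV that Gephi reads directly as an undirected weighted edge list.

```python
import itertools
import pandas as pd

# Hypothetical input: one row per cooperative patent.
patents = pd.DataFrame({
    "applicants": ["Univ1;FirmA", "Univ1;FirmA;InstX", "FirmB;FirmC"],
})

# Expand each co-applied patent into its pairwise cooperation edges.
pairs = []
for names in patents["applicants"].str.split(";"):
    for a, b in itertools.combinations(sorted(set(names)), 2):
        pairs.append((a, b))

# Edge weight = number of jointly applied patents for each pair.
edges = (pd.DataFrame(pairs, columns=["Source", "Target"])
           .value_counts()
           .reset_index(name="Weight"))

edges.to_csv("edges.csv", index=False)  # importable into Gephi as an edge list
```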
The topological structure indicators of patent cooperation in China's advanced nonferrous metal industry from 2002 to 2020 are shown in Table 1. Many nodes in the network diagram have not yet formed direct connections, patent cooperation is insufficient, and the network lacks vitality. Most patent applicants cooperate little and are weakly connected, while some key applicants act as intermediaries, forming ties that transmit information and connect other nodes.

Table 1: Basic attribute indexes of the invention patent cooperation network of the advanced nonferrous metals industry in China from 2002 to 2020.

| Network size | Number of network edges | Network diameter | Graph density | Average clustering coefficient | Average path length |
| --- | --- | --- | --- | --- | --- |
| 3010 | 4124 | 16 | 0.001 | 0.729 | 5.083 |

Centrality analysis is a key tool for measuring the importance of network nodes. This study selects two indexes, degree centrality and betweenness centrality, to identify the important nodes in the patent cooperation network of China's advanced nonferrous metal industry from 2002 to 2020. The top ten invention patent applicants by degree centrality and by betweenness centrality are shown in Table 2. Applicants with high degree centrality occupy higher positions in the network and have more cooperation partners, which creates cooperative relations and makes the network closer; applicants with high betweenness centrality have a strong ability to influence the whole network through cooperation. Some applicants, such as State Grid Corp. and Tsinghua University, rank at the forefront in both centralities; that is, they hold a higher status in the network and exert greater influence on it.

Table 2: The degree centrality and betweenness centrality of the whole patent cooperation network of the advanced nonferrous metal industry.

| Applicant | Degree centrality | Applicant | Betweenness centrality |
| --- | --- | --- | --- |
| State Grid Corp. | 346 | State Grid Corp. | 590711.80 |
| China Petrochemical Corporation Limited | 55 | PetroChina Co., Ltd. | 213672.42 |
| Central South University | 55 | Tsinghua University | 178997.21 |
| Shanghai Jiao Tong University | 49 | Shanghai Jiao Tong University | 172177.20 |
| Tsinghua University | 46 | Central South University | 154672.13 |
| University of Science and Technology Beijing | 39 | China Petrochemical Corporation Limited | 143446.67 |
| Northeastern University | 34 | University of Science and Technology Beijing | 130471.53 |
| Zhejiang University | 32 | Baoshan Iron and Steel Co., Ltd | 129023.57 |
| State Grid Smart Grid Research Institute | 31 | Xi'an Jiaotong University | 92493.06 |
| PetroChina Co., Ltd. | 30 | Peking University | 83922.18 |

### 4.2. Analysis of Four-Stage Evolution Characteristics of Cooperative Networks

Gephi 0.9.2 is used to draw the patent cooperation network maps of the advanced nonferrous metal industry in four stages: 2002–2007, 2008–2011, 2012–2016, and 2017–2020, as shown in Figures 3–6.

Figure 3: The invention patent cooperation network of the advanced nonferrous metals industry in China from 2002 to 2007.

Figure 4: The invention patent cooperation network of the advanced nonferrous metals industry in China from 2008 to 2011.

Figure 5: The invention patent cooperation network of the advanced nonferrous metals industry in China from 2012 to 2016.

Figure 6: The invention patent cooperation network of the advanced nonferrous metals industry in China from 2017 to 2020.

The four evolution maps show the following:

(1) Over time, the intensity of cooperation has gradually increased. The first two stages feature mainly small-scale cooperation, for example, two or three enterprises cooperating without forming a complex cooperation network; the latter two stages feature many small-scale, centralized cooperative relationships with certain nodes as cooperation cores. This shows that enterprises encounter the limitations of their own technology and knowledge during development and begin to seek more stable cooperative relationships. Such relationships not only help enterprises open up new technical fields and enhance their technical breadth and depth but also spread the uncertainty and risk of technological innovation across the cooperation network, greatly reducing the losses that enterprises must bear when innovation fails.

(2) The evolution of research subjects increasingly follows the trend of industry-university-research integration.
Colleges and universities hold abundant patent knowledge and theoretical resources but cannot transform patented technology well without product demand and financial strength; enterprises have the economic strength to bring patented technology to market but lack the knowledge support of patent developers. The full combination of industry, universities, and research institutes helps realize the collaborative transformation from patent knowledge to product profit. Figures 3 to 6 show that in the initial stage there was more cooperation between natural persons and enterprises; over time, more and more cooperative relations formed with universities as the "stamens" and various types of enterprises as the "petals." Such relationships promote the flow and transformation of knowledge and technology, enhancing both the success of invention patents in enterprises and the efficiency of patent transformation in universities.

To further analyze the structural changes of the advanced nonferrous metal patent cooperation network, its structural indexes are measured using social network analysis; the changes in topological structure characteristics are shown in Table 3.

Table 3: Structural characteristic values of the invention patent cooperation networks in different stages of China's advanced nonferrous metal industry.

| Index | Stage 1: 2002–2007 | Stage 2: 2008–2011 | Stage 3: 2012–2016 | Stage 4: 2017–2020 |
| --- | --- | --- | --- | --- |
| Network size | 275 | 1025 | 2088 | 2567 |
| Number of network edges | 254 | 988 | 2338 | 3404 |
| Network diameter | 6 | 11 | 14 | 16 |
| Graph density | 0.006 | 0.002 | 0.001 | 0.001 |
| Average clustering coefficient | 0.738 | 0.721 | 0.713 | 0.732 |
| Average path length | 2.77 | 5.03 | 4.99 | 5.29 |

The number of network nodes reflects the network scale. As Table 3 shows, the patent cooperation network of China's advanced nonferrous metal industry keeps growing: the number of nodes is 275 in stage 1, 1025 in stage 2, 2088 in stage 3, and 2567 in stage 4. From stage 1 to stage 4, the number of network edges follows the same trend as the number of nodes but changes more elastically than the network scale. This is reflected in the graph density, which decreases gradually across the stages even as the cooperative relationships and cooperation frequency between nodes keep increasing.

The average clustering coefficient reflects the closeness of the whole network. The clustering coefficients of the four stages are much higher than the network densities of the same stages, which shows that invention patent cooperation relationships in China's advanced nonferrous metal industry are not established at random: there is preferential attachment, and stable cooperative relationships gradually form. The network diameter and average path length are also rising, which shows that the cooperation network of China's advanced nonferrous metal industry is becoming sparser; the growing number of nodes increases the distances between them, reducing the network's transmission performance and efficiency. Thus, although more and more subjects participate in cooperation in this field in China, there is still large cooperation space among the innovation subjects.

Generally speaking, across all four evolution stages the network combines a high clustering coefficient with a short average path length, showing a distinct small-world effect. This is conducive to the sharing of resources and information in the network and facilitates exchange and cooperation between applicants.
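The small-world reading is conventionally checked by comparing the observed clustering coefficient and average path length against a random graph with the same numbers of nodes and edges: a much higher clustering coefficient at a comparable path length indicates small-world structure. A sketch of this check, again assuming networkx as tooling rather than the authors' own procedure:

```python
import networkx as nx

def small_world_check(G, seed=42):
    """Compare clustering and path length of G's giant component with an
    Erdos-Renyi random graph of the same size and edge count."""
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    n, m = giant.number_of_nodes(), giant.number_of_edges()
    rand = nx.gnm_random_graph(n, m, seed=seed)
    rand_giant = rand.subgraph(max(nx.connected_components(rand), key=len))
    return {
        "C_actual": nx.average_clustering(giant),
        "C_random": nx.average_clustering(rand),
        "PL_actual": nx.average_shortest_path_length(giant),
        "PL_random": nx.average_shortest_path_length(rand_giant),
    }

# Small-world reading: C_actual much larger than C_random while
# PL_actual remains close to PL_random.
```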
### 4.3. Evolution Analysis of Important Nodes

The analysis of the cooperative networks shows that the networks of the individual stages are comparable. Degree centrality and betweenness centrality are again selected, and the top ten nodes by each index are examined for each stage; this reflects the importance of nodes and reveals how the important nodes change from stage to stage.

As shown in Table 4, in terms of degree centrality the most active applicants are enterprises, universities, and scientific research institutions, which form a relatively stable "industry-university-research" cooperation mode. In terms of centrality values, the core shifted from companies in the early stages to universities later on, with the weight of universities growing steadily. From 2002 to 2007, patent cooperation related to the petrochemical industry was frequent, placing Sinopec and its research institutes first and second, with roughly equal proportions of enterprises, universities, and research institutions. From 2008 to 2011, the composition of patent applicants was similar to the first stage and enterprises still occupied an important position, but Zhejiang University and Tsinghua University gradually became subcore nodes of the network, indicating that universities have strong technological innovation capabilities and rich patent cooperation relations. From 2012 to 2016, unlike the previous two stages, five universities accounted for half of the top applicants, and Tsinghua University was the main core node of the network, reflecting the strong technological innovation of universities and their strong intention to cooperate on patent applications. From 2017 to 2020, as in the third stage, universities still dominated, with seven university applicants, most of them at the forefront, and the status of universities increased further.

Table 4: The degree centrality and betweenness centrality of the patent cooperation network in each stage of the advanced nonferrous metal industry.
| Time | Applicant | Degree centrality | Applicant | Betweenness centrality |
| --- | --- | --- | --- | --- |
| 2002–2007 | China Petrochemical Corporation Limited | 30 | China Petrochemical Corporation Limited | 665.67 |
| 2002–2007 | Sinopec Company Petrochemical Science Research Institute | 9 | Zhejiang University | 494 |
| 2002–2007 | Baoshan Iron and Steel Co., Ltd | 9 | Baoshan Iron and Steel Co., Ltd | 368 |
| 2002–2007 | Consortium corporation Industrial Technology Research Institute | 7 | China Petrochemical Corporation Limited | 205.67 |
| 2002–2007 | Zhejiang University | 7 | East China University of Science and Technology | 132 |
| 2002–2007 | Wanguo Computer Co., Ltd | 6 | Sinopec Ningbo Engineering Co., Ltd | 117.33 |
| 2002–2007 | East China University of Science and Technology | 5 | Petrochemical Science Research Institute of China Stone Engineering Company | 110.67 |
| 2002–2007 | Tsinghua University | 5 | Dalian Institute of Chemical Physics, Chinese Academy of Sciences | 89 |
| 2002–2007 | China University of Petroleum (Beijing) | 5 | Antai Technology Co., Ltd | 45 |
| 2002–2007 | Xiwang Technology Co., Ltd | 5 | University of Chongqing | 45 |
| 2008–2011 | China Petrochemical Corporation Limited | 23 | China National Petroleum Corporation Limited | 2230.5 |
| 2008–2011 | Baoshan Iron and Steel Co., Ltd | 12 | China Petrochemical Corporation Limited | 2192.36 |
| 2008–2011 | Zhejiang University | 11 | Dalian Institute of Chemical Physics, Chinese Academy of Sciences | 2090.33 |
| 2008–2011 | Tsinghua University | 10 | Tsinghua University | 1483.5 |
| 2008–2011 | Dalian Institute of Chemical Physics, Chinese Academy of Sciences | 10 | Baoshan Iron and Steel Co., Ltd | 1464.67 |
| 2008–2011 | University of Science and Technology Beijing | 9 | University of Science and Technology Beijing | 1225.67 |
| 2008–2011 | Hon Hai Precision Industries Company Limited | 7 | Zhejiang University | 1208.6 |
| 2008–2011 | China National Petroleum Corporation Limited | 6 | Shanghai Jiao Tong University | 756.64 |
| 2008–2011 | Shanghai Jiao Tong University | 6 | Shandong Aluminum Co., Ltd | 557 |
| 2008–2011 | East China University of Science and Technology | 6 | Jiangsu Thorpe (group) Co., Ltd | 465 |
| 2012–2016 | Tsinghua University | 22 | China National Petroleum Corporation Limited | 5530.62 |
| 2012–2016 | China Petrochemical Corporation Limited | 17 | Tsinghua University | 4273.33 |
| 2012–2016 | Shanghai Jiao Tong University | 16 | Baoshan Iron and Steel Co., Ltd | 3769.08 |
| 2012–2016 | Central South University | 15 | University of Science and Technology Beijing | 3277.58 |
| 2012–2016 | Zhejiang University | 13 | China Petrochemical Corporation Limited | 3113.59 |
| 2012–2016 | Dalian Institute of Chemical Physics, Chinese Academy of Sciences | 12 | Dalian University of Technology | 2950.09 |
| 2012–2016 | Hon Hai Precision Industries Company Limited | 11 | Shanghai Jiao Tong University | 2843.51 |
| 2012–2016 | University of Science and Technology Beijing | 11 | Central South University | 2026.67 |
| 2012–2016 | China Petrochemical Corporation Limited | 10 | Dalian Institute of Chemical Physics, Chinese Academy of Sciences | 1583 |
| 2012–2016 | Baoshan Iron and Steel Co., Ltd | 9 | Zhejiang University | 1350.1 |
| 2017–2020 | Shanghai Jiao Tong University | 23 | University of Science and Technology Beijing | 3945 |
| 2017–2020 | Tsinghua University | 18 | Central South University | 3824 |
| 2017–2020 | Central South University | 18 | Baoshan Iron and Steel Co., Ltd | 3548 |
| 2017–2020 | University of Science and Technology Beijing | 15 | Tsinghua University | 2465 |
| 2017–2020 | China Petrochemical Corporation Limited | 14 | Shanghai Jiao Tong University | 2269 |
| 2017–2020 | Zhejiang University | 12 | CRRC Industrial Research Institute Co., Ltd | 2088 |
| 2017–2020 | Hon Hai Precision Industries Company Limited | 11 | China Petrochemical Corporation Limited | 1039.5 |
| 2017–2020 | Shanghai University | 9 | Hon Hai Precision Industries Company Limited | 954 |
| 2017–2020 | China University of Petroleum (East China) | 9 | Dalian University of Technology | 946 |
| 2017–2020 | Baoshan Iron and Steel Co., Ltd | 9 | Beijing Steel Research and Advanced Technology Co., Ltd | 824.33 |
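A sketch of how such a stage-wise top-ten ranking can be reproduced, assuming networkx and hypothetical per-stage edge lists. Betweenness centrality is left unnormalized here because the large absolute values in Tables 2 and 4 (e.g., 590711.80) suggest raw shortest-path counts rather than normalized scores; this is an inference from the tables, not a documented choice.

```python
import networkx as nx

# Hypothetical per-stage lists of co-applicant pairs.
stage_edges = {
    "2002-2007": [("SinopecA", "UnivB"), ("SinopecA", "InstC")],
    "2008-2011": [("UnivB", "FirmD"), ("SinopecA", "UnivB"), ("FirmD", "InstC")],
}

for stage, edges in stage_edges.items():
    G = nx.Graph(edges)
    degree = dict(G.degree())                                  # Table 4, degree columns
    between = nx.betweenness_centrality(G, normalized=False)   # Table 4, betweenness columns
    top_deg = sorted(degree, key=degree.get, reverse=True)[:10]
    top_btw = sorted(between, key=between.get, reverse=True)[:10]
    print(stage, "| top degree:", top_deg)
    print(stage, "| top betweenness:", top_btw)
```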
According to the betweenness centrality of each stage, the composition of applicants is still dominated by enterprises, universities, and research institutes, and the betweenness centrality values gradually increase across the stages; the influence in the third and fourth stages is more obvious, and control over the network is stronger. From 2002 to 2007, enterprise applicants accounted for half of the list and exerted strong control over the network, followed by universities with relatively weak influence. From 2008 to 2011, similar to the previous stage, the applicants with more patent cooperation were still enterprises, mostly companies in the petroleum, chemical, and natural gas fields; their betweenness centrality values increased, showing stronger control of the network than in the first stage. From 2012 to 2016, many university applicants appeared and occupied strong network control, although enterprises still had the strongest ability to lead patent cooperation. From 2017 to 2020, university applicants increased in both number and intensity of control and can effectively influence patent cooperation.

It can be seen that the rankings by betweenness centrality and by degree centrality differ, and universities have risen in both rankings. This reflects that colleges and universities play an increasingly important role in the network and have gradually acquired strong control, effectively influencing patent cooperation. Some applicants, such as China Petrochemical Corporation, Tsinghua University, and China National Petroleum Corporation, rank at the forefront in both centralities, showing that these members play an important intermediary role in connecting other members, building "bridges" for cooperation among them, and grasping the general direction of technical cooperation.
Science and Technology Beijing11Central South University2026.67China Petrochemical Corporation Limited10Dalian Institute of Chemical Physics, Chinese Academy of Sciences1583Baoshan Iron and Steel Co., Ltd9Zhejiang University1350.12017–2020Shanghai Jiao Tong University23University of Science and Technology Beijing3945Tsinghua University18Central South University3824Central South University18Baoshan Iron and Steel Co., Ltd3548University of Science and Technology Beijing15Tsinghua University2465China Petrochemical Corporation Limited14Shanghai Jiao Tong University2269Zhejiang University12CRRC Industrial Research Institute Co., Ltd2088Hon Hai Precision Industries Company Limited11China Petrochemical Corporation Limited1039.5Shanghai University9Hon Hai Precision Industries Company Limited954China University of Petroleum (East China)9Dalian University of Technology946Baoshan Iron and Steel Co., Ltd9Beijing Steel Research and Advanced Technology co., Ltd824.33According to each stage of betweenness centrality, it can be seen that the composition structure of applicants is still dominated by enterprises, universities, and research institutes, and the value of betweenness centrality gradually increases in different stages. The influence of the third and fourth stages is more obvious, and the control power of the network is stronger. From 2002 to 2007, enterprise applicants accounted for half, which had strong control over the network, followed by universities, which showed relatively weak influence; from 2008 to 2011, similar to the previous stage, the applicants with more patent cooperation are still enterprises, and most of them are concentrated in companies in the fields of petroleum, chemical industry, and natural gas. Also, the betweenness centrality value increases, which shows that it has stronger control power than the first stage in the network; from 2012 to 2016, there are a large number of college applicants, occupying strong network control, but enterprises still have the strongest leading patent cooperation ability; from 2017 to 2020, the number of applicants from colleges and universities has increased in both the number and the intensity of control, which can effectively influence patent cooperation.It can be seen that the ranking structure of the betweenness centrality and the degree centrality of the network is different, and the universities have increased the ranking of the two types of centralities. It reflects that colleges and universities are playing an increasingly important role in the network and have gradually acquired strong control power, which can effectively influence patent cooperation. Some applicants have a position before the exam in both types of centralities, such as China Petrochemical Corporation, Tsinghua University, and China National Petroleum Corporation. It shows that these members play an important intermediary role in contacting other members, building a “bridge” for cooperation among other members, and grasping the general direction of technical cooperation. ## 5. Analysis of the Evolution Characteristics of Knowledge Networks in the Advanced Nonferrous Metal Industry ### 5.1. 
### 5.1. Evolution Characteristics of Cooperative Patent Application Technology Field

In order to gain a deeper understanding of the fields involved in patented technology and the R & D intensity invested in each field in the advanced nonferrous metal industry, Python is used to draw the heat map of patented technology theme evolution and Gephi 0.9.2 is used to draw the patented technology co-occurrence map, so that the depth and co-occurrence of technology can be analyzed visually. Figure 7 shows the technology-intensive fields, Figures 8 and 9 the tentative technology fields, and Figure 10 the technology development fields. In the figures, the horizontal axis represents the fields in which patented technologies are distributed, the vertical axis represents the passage of years, and the color bar on the right represents the number of patented technologies from few to many. Reading the heat map vertically shows the development process of a technical field over the years; reading it horizontally shows the development and changes of the various technical fields in a given year.

Figure 7 Technology-intensive fields of the advanced nonferrous metal industry.

Figure 8 The tentative field of advanced nonferrous metal industry technology (1).

Figure 9 The tentative field of advanced nonferrous metal industry technology (2).

Figure 10 Technical development fields of the advanced nonferrous metal industry.

From Figure 7, we can see that the technology-intensive distribution areas of the advanced nonferrous metal industry are B01, B21, B22, B23, C01, C07, C09, C22, C23, and H01, each with a total of more than 800 occurrences. The top three in total are B01 (physical or chemical processes or apparatus in general), H01 (basic electric elements), and C01 (inorganic chemistry), with 7001, 6444, and 5176 occurrences, respectively. The figure shows that each of these technical fields evolves from shallow to deep; they are the main research fields of the advanced nonferrous metal industry.

Figures 8 and 9 show the evolution characteristics of technical topics in fields with frequencies of 0–10. On the ordinate, this part spans a wide range, with a total of 72 categories, but the total number of occurrences is only 840; these are tentative technical fields that are wide in range but small in number. They divide into two types with different characteristics. One type is research that advances continuously over time, with research intensity increasing year by year, such as C30 (crystal growth), E21 (earth or rock drilling; mining), and G06 (computing; calculating; counting); the patented technologies in these fields are likely to grow into developmental fields and into auxiliary technologies for the intensive fields through continuous cooperation and development. The other type appears only occasionally over time, forming intermittent research with a very small total amount, for example, C05 (fertilizers; their manufacture), C06 (explosives; matches), and A23 (foods or foodstuffs not covered by other classes; their treatment). The practical effect of patented inventions in these fields is limited, or they are difficult to research and develop and yield small benefits, so such technologies do not have good research value.

Figure 10 shows the technical fields in the frequency range of 0–80. Their number of occurrences is about one-tenth of that of the technology-intensive fields, so they play an auxiliary role to patents in the intensive fields; it is also about 8 times that of the tentative fields, so these are fields of higher research value and significance. As with the tentative fields, there are two types of technologies here. One may become an intensive technology field in the future, such as C25 (electrolytic or electrophoretic processes; apparatus therefor). Such technology shows continuous research over time, with research intensity above that of previous years and more investment than other technologies; a considerable research gap remains, but it has its own research value and significance, and enterprises, universities, and research institutes can increase research on this part of technology. The other is an experimental development technology that shows signs of fading as it matures over time, with a few studies appearing early on, such as H04 (electric communication technique), or that shows phased research and development, such as B32 (layered products). Such technology does not strategically support the innovation and development of enterprises, so enterprises should reduce their R & D investment in it.

From the above analysis, technological innovation institutions should keep up continuous research in the intensive technology fields to lay the foundation of innovation competitiveness; enterprises that master the core competitive advantages of the industry can build solid development capabilities in the advanced nonferrous metal industry. In addition, innovation subjects can also set aside some innovation resources, cooperate with enterprises, universities, and research institutes holding heterogeneous resources, and seek the "blue ocean" of technology in the development and tentative fields. Good research in these fields can provide stable technical support for enterprises and may even secure leading resources in the future development of these technical fields, providing a high-quality guarantee for their innovation competitiveness.
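Heat maps of the kind shown in Figures 7–10 can be reproduced in outline with pandas and matplotlib; the following minimal sketch assumes a hypothetical `patents.csv` with one row per patent and columns `year` and `ipc_class` (e.g., "B01"), which is not the original dataset.

```python
# Sketch: heat map of patent counts per IPC class and year, in the spirit of
# Figures 7-10. The input file and column names are illustrative assumptions.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("patents.csv")                  # assumed columns: year, ipc_class
counts = df.pivot_table(index="year", columns="ipc_class",
                        aggfunc="size", fill_value=0)

fig, ax = plt.subplots(figsize=(12, 6))
im = ax.imshow(counts.values, aspect="auto", cmap="YlOrRd")
ax.set_xticks(range(len(counts.columns)), labels=counts.columns, rotation=90)
ax.set_yticks(range(len(counts.index)), labels=counts.index)
ax.set_xlabel("IPC class")                       # horizontal axis: technology fields
ax.set_ylabel("Year")                            # vertical axis: passage of years
fig.colorbar(im, label="Number of patents")      # color bar: few to many patents
plt.tight_layout()
plt.show()
```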
### 5.2. Analysis of Four-Stage Evolution Characteristics of Knowledge Networks

The analysis of patent categories above shows in which categories the advanced nonferrous metal industry should conduct in-depth research. Within each category, the success rate and usability of patent research and development can be enhanced by studying the cooperative research and development of patent subcategories. This paper is based on the subclass data of the IPC classification numbers of cooperative patent applications; Gephi 0.9.2 is used to construct the IPC co-occurrence network diagram of patented technology and to analyze the characteristics and evolution law of the knowledge network. Figures 11–14 show the co-occurrence evolution of IPC classification numbers in the four stages, and Table 5 shows the co-occurrence topological indexes of cooperative patented technologies in the advanced nonferrous metal industry. A node in the network represents an IPC subclass, and the larger the node, the more often that IPC appears; a connection represents the co-occurrence of two classification numbers, and the thicker the connection, the higher the co-occurrence frequency of the two end nodes, that is, the more closely the two technologies are related.

Figure 11 Co-occurrence of IPC from 2002 to 2007.

Figure 12 Co-occurrence of IPC from 2008 to 2011.

Figure 13 Co-occurrence of IPC from 2012 to 2016.

Figure 14 Co-occurrence of IPC from 2017 to 2020.

Table 5 Co-occurrence topological indexes of cooperative patent technology in the advanced nonferrous metal industry.

| Period | Network size | Network connections | Average degree | Network diameter | Graph density | Mean clustering coefficient | Average path length |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2002–2007 | 112 | 331 | 5.911 | 7 | 0.053 | 0.732 | 3.021 |
| 2008–2011 | 206 | 834 | 8.097 | 5 | 0.039 | 0.725 | 2.717 |
| 2012–2016 | 284 | 1630 | 11.479 | 4 | 0.041 | 0.668 | 2.505 |
| 2017–2020 | 325 | 2086 | 12.837 | 4 | 0.04 | 0.707 | 2.446 |

Together with the evolution diagrams, the co-occurrence topological indicators in Table 5 can be analyzed as follows:

(1) The network scale increases gradually, from 112 nodes at the beginning to 325, which shows that the cooperative patents of the advanced nonferrous metal industry contain more and more knowledge elements. The growth of network connections and of the average degree shows that the frequency of co-occurrence between technologies has increased by an even greater margin: the intensity of technological co-occurrence keeps rising, and patents gather more and more technical cooperation. This requires enterprises to seek more partners with heterogeneous resources for patent technology cooperation.

(2) The network diameter and graph density show that the technology co-occurrence network is becoming ever closer. As the network scale and connections grow in number and complexity, the network diameter decreases step by step and the graph density gradually stabilizes, which shows that the technology is characterized by overall dispersion and local tightness. More research on the IPC subcategories included in the intensive technology fields is conducive to promoting core competitiveness.

(3) The average clustering coefficient and average path length show that all four stages combine a high average clustering coefficient with a low average path length, strong network cohesion, and a small-world effect; some technologies are relatively closely related. R & D institutions should identify these technologies and cooperate in research to develop more comprehensive patents, thereby enhancing their innovation ability and core competitiveness.
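To make the construction of such a co-occurrence network concrete, the following Python sketch builds a weighted IPC subclass co-occurrence graph with networkx and prints the topological indicators of Table 5; the three example patents are invented purely for illustration.

```python
# Sketch: building an IPC subclass co-occurrence network like Figures 11-14.
# Each patent is assumed to be given as the list of IPC subclasses it carries;
# the example records below are illustrative, not real data.
from itertools import combinations
import networkx as nx

patents = [
    ["C22C", "B22F", "C22F"],
    ["C22C", "H01B"],
    ["B22F", "C22C"],
]

G = nx.Graph()
for ipcs in patents:
    # Every unordered pair of subclasses on the same patent co-occurs once.
    for a, b in combinations(sorted(set(ipcs)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1     # thicker edge = more frequent co-occurrence
        else:
            G.add_edge(a, b, weight=1)

# Topological indicators of the kind reported in Table 5.
n, l = G.number_of_nodes(), G.number_of_edges()
print("network size:", n)
print("network connections:", l)
print("average degree:", 2 * l / n)
print("graph density:", nx.density(G))
print("mean clustering coefficient:", nx.average_clustering(G))
if nx.is_connected(G):                 # diameter and path length need connectivity
    print("network diameter:", nx.diameter(G))
    print("average path length:", nx.average_shortest_path_length(G))
```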
## 6. Conclusion

Taking the advanced nonferrous metal industry as an example, this study constructs the invention patent cooperation network, the heat map of patent technology theme evolution, and the patent technology IPC co-occurrence network of China's advanced nonferrous metal industry from 2002 to 2020, from the perspectives of the cooperation network and the knowledge network. On this basis, we study in depth the structural characteristics and evolution law of the invention patent network in the advanced nonferrous metal industry, draw the following conclusions, and give relevant suggestions.

### 6.1. Crucial Findings

(1) The scale of the patent cooperation network in China's advanced nonferrous metal industry is expanding day by day. Network density is increasing, network cohesion is strong, and the small-world effect is obvious. The network shows overall dispersion and local tightness, and there are few "bridges" between groups, so it still has great room for development. The "Triple Helix" feature of the cooperation network is becoming gradually more obvious: through long-term formal and informal cooperation and exchanges, universities, enterprises, and scientific research institutes have intersected, cooperated closely, and interacted with each other, and their cooperative relationships have become more stable.
The evolution of the knowledge network over time reflects the dynamic change of technologies.

(2) Patent applicants such as China Petrochemical Corporation, Tsinghua University, China National Petroleum Corporation, Zhejiang University, Baoshan Iron and Steel Co., Ltd., and Shanghai Jiao Tong University are important nodes of the network. These nodes have more contact with other nodes, play an intermediary role, serve as the "bridge" for cooperation among other subjects, and grasp the general direction of technical cooperation. These core nodes should therefore be the key points through which to guide and adjust cooperation behavior in the network. Over time, the structure of the leading key nodes has shifted from enterprise-led at the beginning to university-led.

(3) The total number of technology categories involved in the advanced nonferrous metal industry is increasing, and their depth is also gradually increasing. Different types of technology fields emerge, including technology-intensive fields, tentative technology fields, and technology development fields, characterized by high intensity, wide range, and steady progress.

### 6.2. Policy Implications

To speed up the cultivation of China's advanced nonferrous metal industry, realize the optimal allocation of innovation resources, and improve international competitiveness, this paper puts forward the following theoretical and practical implications.

(1) The government should strengthen and guide the cooperative relationships of the "Triple Helix" subjects to promote the stability and development of the industry-university-research innovation network. The government should introduce relevant policies to promote breakthroughs in the core technologies of China's advanced nonferrous metal industry, guided by industrial demand and aimed at tackling key problems with new technologies, and should encourage innovative organizations to focus joint research on the core common technologies of the industry so as to achieve industry-level breakthroughs with the strong support of government science and technology programs. First, the government should set up an incentive mechanism to actively guide research institutions such as universities and research institutes to transform patented technology through cooperation with enterprises. Second, the government should establish a dynamic management and evaluation system for industry-university-research cooperation projects, constantly adjust their deployment, and strengthen the tracking and implementation of policies. Finally, the government should improve the intellectual property system and build a long-term protection mechanism for intellectual property rights.

(2) In the knowledge innovation stage, universities have created a large number of scientific research achievements through collaborative innovation with governments, enterprises, and research institutions. However, most of these exist in the form of papers or patents and do not transform knowledge into capital, which leads to the waste of technological innovation.
Therefore, universities should give full play to their advantages in knowledge innovation and extend this advantage into technological innovation to realize the capitalization of knowledge. In addition, universities and research institutions should focus on the core common technologies of industrial development to fundamentally solve the "two skins" problem between science and technology and the economy. Because universities and research institutions have overlapping functions in scientific research and personnel training, they can break original organizational boundaries and develop synergistically through the deep integration of scientific and technological resources. By pooling resources such as science, technology, and talent, they can promote the deep integration of basic research with applied research and personnel training, realize the superposition of high-quality resources and complementary advantages, and thus drive the sustainable development of regional economies and societies.

(3) Strengthen the dominant position of enterprises in technological innovation, support enterprises in enhancing their independent innovation ability, and establish and improve an enterprise-led collaborative innovation mechanism of industry, universities, and research. Enterprises, as the main battlefield and the realizing party of scientific and technological achievements, can accelerate knowledge capitalization and technology industrialization through collaborative innovation. However, most Chinese enterprises pay insufficient attention to basic research, which weakens their dominant position in the innovation network, and their lack of scientific and technological innovation ability leads to loose cooperative relations; this has become an important factor restricting the development of China's "Triple Helix" collaborative innovation system. On the one hand, enterprises can enhance their innovation abilities while carrying out technical cooperation with other enterprises to promote the formation and implementation of patent alliances and thereby strengthen their innovation advantages. On the other hand, they can cooperate with universities and research institutions to realize the transformation and docking of resources and provide strong support for technological innovation and achievement transformation.

(4) In practice, the government should guide advanced nonferrous metal enterprises to increase investment in basic research on the industry's core technologies through leverage such as financial subsidies and interest subsidies, and should encourage enterprises to improve their scientific research ability by building laboratories or R & D centers jointly with universities and research institutes. Beyond macropolicy support, the government can also build innovation carriers that participate in collaborative innovation, for example, new industry research institutes and technology incubators. A new industry research institute is an innovation platform for the deep integration of industry, education, and research, jointly sponsored, under the guidance of the government, by key universities and research institutions and key enterprises in the industry to carry out collaborative innovation, achieve core technology breakthroughs, and tackle key problems.
The new industry research institute pays attention to the interconnection of basic research and applied research so as to realize the effective chain of "basic research-technological tackling-technical application-successful industrialization." The technology incubator is one of the carriers of science and technology innovation services and a training base for innovative and entrepreneurial talents; the government provides infrastructure such as premises as well as preferential policies such as seed funds and tax incentives. A technology incubator helps to improve the conversion rate of scientific and technological achievements and to reduce the risks and costs of start-up enterprises. These measures help to resolve the obstacles to cross-organizational knowledge transfer caused by the mismatch of knowledge and ability between China's advanced nonferrous metal enterprises and its universities and research institutes, and to fundamentally solve the "two skins" phenomenon between science and technology and the economy.

---
*Source: 1023816-2022-11-08.xml*
# Evolution Characteristics of Advanced Nonferrous Metal Industry Patent Cooperation Network in China from the Perspective of Multilayer Network

**Authors:** Qingxiao Wang; Wenqian Zhu
**Journal:** Mathematical Problems in Engineering (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1023816
---

## Abstract

The advanced nonferrous metal industry is a national strategic emerging industry, and its innovation ability is crucial for downstream industries such as aerospace, rail transit, electronic information, and automobiles to move toward the high end of the industrial value chain. Research on the evolution characteristics of its cooperation network supports research on the mechanisms for improving the innovation capability of the industry. Based on cooperative invention patent application data from 2002 to 2020, and using the social network analysis method and Gephi 0.9.2 visualization to construct a patent cooperation network and a knowledge network, this paper analyzes the overall characteristics and evolution law of the patent cooperation network in the advanced nonferrous metal industry. It is found that, first, the structure of the innovation network in China's advanced nonferrous metal industry is becoming more and more complex: the scale of the cooperation network and the knowledge network is increasing, and the small-world effect is obvious. Second, some innovation subjects, as key nodes, play the role of "bridge" and grasp the general direction of technology; the structure of the key nodes in the network has changed, and the industry-university-research cooperative innovation network has gradually evolved from enterprise-led to university-led. Third, the number of technical categories has increased and the depth of technology has been enhanced in China's advanced nonferrous metal field; hot technical fields are characterized by high intensity, wide range, and stable advancement. Based on these findings, suggestions are put forward to promote the development of the advanced nonferrous metal industry.

---

## Body

## 1. Introduction

The advanced nonferrous metal industry is one of the strategic emerging industries. Developing it not only helps to promote the adjustment of China's current economic structure and the upgrading of its industrial structure but also deeply affects the effectiveness of building an innovative country. In recent years, the Chinese government has given strong policy support to the advanced nonferrous metal industry, which has developed rapidly and made great progress; nevertheless, a big gap remains in the industry's core technology fields compared with international giants.

The patent is an important means of transmitting and exchanging technological innovation achievements and an important index for judging the degree of technological innovation. Patents have an important impact on the development and technological breakthroughs of related industries. Patent innovation is a driving force of technological development, mainly reflected in optimizing patent layout, promoting technological innovation, and realizing industrial application. The development of modern manufacturing cannot be separated from cooperative innovation, and the need for cooperative innovation in the advanced nonferrous metal industry is especially prominent. Patent cooperation is an important form of cooperative innovation, which can provide important innovation power for the development of enterprises.
By studying the patent innovation network, we can understand the development trend of technological innovation and the layout characteristics and evolution laws of patents in China's advanced nonferrous metal industry, scientifically analyze the innovation and development status of advanced nonferrous metals, and put forward reasonable suggestions.

In recent years, against the background of emerging and cross-integrated new technologies, more and more nonferrous metal manufacturing enterprises have cooperated with universities and research institutes in R & D and have jointly applied for patents. They spontaneously form a patent cooperation network that realizes resource sharing and complementary advantages. Such a network can greatly reduce innovation cost and risk and improve the technological innovation ability of an industry. However, at present, the exploration and utilization of patent cooperation networks in China's nonferrous metal manufacturing industry are far from sufficient, and their innovation and aggregation effects have not been fully exerted. Thus, exploring, utilizing, and optimizing patent cooperation networks to improve industrial technological innovation ability has become an important way for China's advanced nonferrous metal industry to close the technological gap with international oligarchs and break through technological blockades. With the development of China's advanced nonferrous metal industry, its innovation network, including the knowledge network and the cooperation network, has changed significantly and become more complex. So, what are the evolution characteristics of the patent cooperation networks and knowledge networks in China's advanced nonferrous metal industry? In the evolution of the cooperation network, which enterprises and research institutions are the key nodes that grasp the general direction of technology, and how does their composition change? What are the distribution of technology fields and the evolution law of hot technology fields in China's advanced nonferrous metal industry? These problems are worth studying. The purpose of this study is to reveal the evolution law of the network's structural characteristics by constructing the innovation network of China's advanced nonferrous metal industry and to provide suggestions for optimizing the innovation network and strengthening technological innovation.

## 2. Literature Review

### 2.1. Research on Multilayer Innovation Network

Wang et al. pointed out that enterprise innovation is doubly embedded in the knowledge network composed of knowledge elements and the social network formed by the cooperative relationships of R & D personnel, and that these two networks are decoupled [1]. The process of combining knowledge elements is the production process of innovation behavior [2], and knowledge elements become connected in the process of combination, forming a knowledge network [3].
Innovation is actually the interaction of the innovation subject's knowledge base and its combination modes, and the structural characteristics of the knowledge network represent the combination potential of core knowledge fields [4], which in turn becomes an important factor affecting the innovation performance of enterprises.

From the perspective of the social attributes of innovation activities, being embedded in cooperation networks can provide favorable resources for enterprise innovation. The innovative behavior of enterprises is realized through the process of expanding and deepening specific social networks [5]. It is found that enterprises in different network positions have different degrees of control over resources and information. Cooperation network structure and relationship characteristics affect cooperative innovation behavior and performance, resulting in nonequilibrium behavior of innovation subjects [6, 7] and differences in innovation performance [8]. In fact, the innovation activities of enterprises are multidimensional: the knowledge network and the cooperation network in which R & D personnel are located each have their own operational characteristics and affect innovation activities differently. At present, most research on the evolution characteristics of innovation cooperation is based on patent data and analyzes evolution characteristics or patterns at a single network level.

### 2.2. Research on the Evolution of Innovation Network

Existing scholars mainly study innovation network evolution from three aspects. The first focuses on the cooperative relationships of universities and research institutes, exploring the evolution of cooperation networks at different stages through jointly published papers and works. For example, Balconi et al. [9] analyzed the role of university professors in the Italian inventor patent cooperation network; Lissoni [10] found that universities occupy a core position in the patent cooperation network and have stable cooperative relationships with other types of inventors; Li et al. [11] found that comprehensive universities, science and engineering universities, and energy-based enterprises occupy the core positions in the school-enterprise patent cooperation network. The second studies the evolution of innovation networks in strategic emerging technology industries, involving information infrastructure, intelligent manufacturing, biomedicine, and other industries; for example, Zhang et al. [12] in the software service industry, Cao et al. [13] in the new energy automobile industry, Li et al. [14] in the satellite and application industry, and Chen et al. [15] in the chip industry confirmed this point. These fields belong to national strategic emerging industries, and research on them helps to improve China's comprehensive innovation level, but unfortunately these studies have not covered the advanced nonferrous metal industry. The third studies the structure and evolution of regional patent cooperation networks. For example, Ejermo and Karlsson [16] believed that geographical distance is a key factor affecting patent cooperation networks and that intraregional aggregation is common in such networks.
Pan et al. [17] studied the patent cooperation networks of 31 provinces of China and found that geographical distance affects the linkage of patent cooperation networks. Other scholars, however, put forward different views: Wilhelmsson [18] found that cities with denser populations and more diverse industrial structures tend to have lower levels of patent cooperation; Zheng et al. [19] constructed the cross-city patent cooperation network of Fujian, Xiamen, and Quanzhou, measured the order of network evolution using the concept of "entropy," and found that the cross-city cooperation network showed an obvious entropy increase during its evolution.

### 2.3. Research on the Innovation Network of the Nonferrous Metals Industry

The advanced nonferrous metal industry is a national strategic leading industry and an important part of new materials. Its innovation ability is crucial for downstream industries such as aerospace, rail transit, electronic information, and automobiles to move toward the high end of the industrial value chain, so it is worth studying. Concerning research on the advanced nonferrous metal industry, some scholars study the path of industrial development and innovation. Tian [20] proposed strengthening the innovation capacity building of the industry, strengthening knowledge and technology alliances, realizing the integration of technology and management, choosing the path of independent innovation, enhancing competitiveness by participating in international market resources, and improving the efficiency of the nonferrous metal industry. Lin et al. [21], using an exploratory single case study, examined the path of combining exploratory innovation and exploitative innovation in the process transformation of typical research institutes in the rare metal industry and found that the engineering center has become the key link in the transformation from exploratory innovation to exploitative innovation. On the nonferrous metals innovation network, Zhou et al. [22] used chaos theory to expound the chaotic characteristics of the innovation network of nonferrous metal industry clusters: an aggregation capability of integrated units that is too strong or too weak is not conducive to the development of a nonferrous metal cluster innovation network, and in the development process the evolution speed of the cluster innovation network must be adjusted according to the strength of its aggregation ability. They also proposed that nonferrous metal cluster enterprises pass through four stages in the life cycle of the cluster innovation network: generation, growth, maturity, and decline. The above research on the advanced nonferrous metal industry provides a useful reference for this study.

The "Triple Helix" innovation theory was established by Etzkowitz and Leydesdorff in 1995 [23]. It borrows a concept from biology and puts forward that all three parties can be subjects of innovation and that the three cooperate and interact closely in innovation. At present, the "Triple Helix" theory is widely used in academic circles to evaluate the collaborative interaction and dynamic evolution of universities, industries, and governments (including scientific research institutions) [24].
Many scholars have introduced the "Triple Helix" theory into the innovation network research of industry, education, and research [25, 26]. Therefore, based on the "Triple Helix" theory, enterprises, universities, and research institutes can be regarded as three types of innovation subjects that, with the support of the government and other relevant institutions, carry out innovation cooperation activities based on a clear division of functions. This paper draws on the "Triple Helix" theory and introduces it into the research framework of China's advanced nonferrous metal innovation network.

To sum up, the existing research on the evolution of innovation networks provides a solid foundation and beneficial enlightenment for this study. However, there is little research combining advanced nonferrous metals and innovation networks; moreover, in the field of advanced nonferrous metals, scholars mostly start from a static perspective, and comprehensive, systematic, and dynamic research on the industry's patent status and patent cooperation network needs to be deepened.

Therefore, based on the patent cooperation application data of the advanced nonferrous metal industry, this study uses social network analysis and Gephi network visualization to analyze the current situation of the nonferrous metal industry in China. The patent cooperation network and knowledge network are constructed from the perspective of the multilayer network and analyzed along the dimensions of patentee, knowledge element, and geographical distribution, revealing the structural characteristics and evolution laws of patent cooperation in the advanced nonferrous metal industry from multiple dimensions. This provides a reference for optimizing the patent cooperation network of the advanced nonferrous metal industry, enhancing its technological innovation, and formulating industrial development strategy.
In fact, the innovation activities of enterprises are multidimensional. The knowledge network and cooperation network in which R & D personnel are located have their own unique operational characteristics, which will have different impacts on innovation activities. At present, most of the research on the evolution characteristics of innovation cooperation is based on patent data to analyze its evolution characteristics or patterns from a single network level. ## 2.2. Research on the Evolution of Innovation Network In terms of innovation network evolution, existing scholars mainly conduct research from three aspects. First, focus on the cooperative relationship between universities and research institutes, and explore the evolution of cooperative networks at different stages through the cooperative publication of papers and works. For example, Balconi et al. [9] analyzed the role of university professors in the Italian inventor patent cooperation network; Lissoni [10] found that universities occupy a core position in the patent cooperation network and have a stable cooperative relationship with other types of inventors; Li et al. [11] found that comprehensive universities, science and engineering universities, and energy-based enterprises occupy the core position in the school-enterprise patent cooperation network. The second is to study the evolution of the innovation network of strategic emerging technology industries, involving information foundations, intelligent manufacturing, biomedicine, and other industries. For example, Zhang et al. [12] from the software service industry, Cao et al. [13] from the new energy automobile industry, Li et al. [14] from the satellite and application industry, and Chen et al. [15] from the chip industry, respectively, confirmed this point. These research fields belong to national strategic emerging industries. The research on these strategic emerging industries is helpful in improving the comprehensive innovation level of China. But unfortunately, these studies have not involved the advanced nonferrous metal industry. The third is to study the structure and evolution of the regional patent cooperation network. For example, Ejermo and Karlsson [16] believed that geographic distance is a key factor affecting patent cooperation networks, and there is generally a phenomenon of intraregional aggregation in patent cooperation networks. Pan et al. [17] studied the patent cooperation networks in 31 provinces of China and found that geographical distance would have an impact on the linkage of patent cooperation networks. However, other scholars put forward different views. For example, Wilhelmsson [18] found that cities with denser populations and more diverse industrial structures tended to have lower levels of patent cooperation; Zheng et al. [19] constructed the cross-city patent cooperation network of Fujian, Xiamen, and Quanzhou, measured the order of network evolution by using the concept of “entropy,” and found that the cross-city cooperation network showed an obvious entropy increase in the evolution process. ## 2.3. Research on the Innovation Network of the Nonferrous Metals Industry The advanced nonferrous metal industry is a national strategic leading industry and an important part of new materials. Its innovation ability is crucial to the transformation and development of downstream industries such as aerospace, rail transit, electronic information, and automobiles to the high end of the industrial value chain, so it is worth studying. 
Concerning research on the advanced nonferrous metal industry, some scholars study paths of industrial development and innovation. Tian [20] proposed strengthening the industry's innovation capacity building and its knowledge and technology alliances, integrating technology and management, choosing a path of independent innovation, enhancing competitiveness by drawing on international market resources, and improving the efficiency of the nonferrous metal industry. Lin et al. [21] used an exploratory single-case study to examine the paths of exploratory and exploitative innovation in the process transformation of a typical research institute in the rare metal industry, finding that the engineering center is the key link in the transformation from exploratory to exploitative innovation. Other work addresses the nonferrous metals innovation network itself. Zhou et al. [22] drew on chaos theory to describe the chaotic characteristics of the innovation network of nonferrous metal industry clusters: an aggregation capability of the integrating units that is either too strong or too weak is not conducive to the development of a cluster innovation network, so the evolution speed of the cluster innovation network needs to be regulated through the strength of its aggregation capability. They also proposed that, over the life cycle of the cluster innovation network, nonferrous metal cluster enterprises pass through four stages of generation, growth, maturity, and decline. The above research on the advanced nonferrous metal industry provides a useful reference for this study.

The "Triple Helix" innovation theory was established by Etzkowitz and Leydesdorff in 1995 [23]. It borrows the helix concept from biology and proposes that all three parties can act as innovation subjects and cooperate and interact closely in innovation. The theory is now widely used in academia to evaluate the collaborative interaction and dynamic evolution of universities, industries, and governments (including scientific research institutions) [24], and many scholars have introduced it into research on industry-university-research innovation networks [25, 26]. Based on the "Triple Helix" theory, enterprises, universities, and research institutes can therefore be regarded as three types of innovation subjects that, with the support of the government and other relevant institutions, carry out innovation cooperation activities under a clear division of functions. This paper draws on the "Triple Helix" theory and introduces it into the research framework of China's advanced nonferrous metal innovation network.

To sum up, existing research on the evolution of innovation networks provides a solid foundation and useful insights for this study. However, little research combines advanced nonferrous metals with innovation networks, and scholars in the advanced nonferrous metals field mostly take a static perspective.
Comprehensive, systematic, and dynamic research on the industry's patent status and patent cooperation network still needs to be deepened. Therefore, based on data on patent cooperation applications in the advanced nonferrous metal industry, this study uses social network analysis and Gephi network visualization to analyze the current situation of China's nonferrous metal industry. The patent cooperation network and knowledge network are constructed from a multilayer-network perspective and analyzed along the dimensions of patentee, knowledge element, and geographical distribution, revealing the structural characteristics and evolution laws of patent cooperation in the advanced nonferrous metal industry from multiple dimensions. This provides a reference for optimizing the industry's patent cooperation network, enhancing its technological innovation, and formulating an industrial development strategy.

## 3. Research Design

### 3.1. Research Methodology

This paper adopts patent bibliometrics, social network analysis, and data visualization. It uses social network analysis and visualization tools such as Gephi and Python to construct the topological structure diagram of patent cooperation and the heat map of patent technology theme evolution in China's advanced nonferrous metal industry, and it analyzes the network structure indexes, the evolution of the network structure, and the evolution of technology topics. The main indicators for measuring the patent cooperation network of the advanced nonferrous metal industry are the following:

(1) Network size, that is, the number of nodes in the network. The larger the value, the larger the network scale.

(2) Number of network edges, that is, the total number of connections produced by cooperation among network nodes, reflecting the network structure. The larger the value, the more complex the network structure.

(3) Average path length. The distance $d_{i,j}$ between two nodes $i$ and $j$ is the number of edges on the shortest path between them, and the average of these distances over all node pairs is the average path length of the network (Watts and Strogatz, Nature, 1998). The larger the value, the sparser the network and the lower its transmission performance and efficiency. The calculation formula is

$$PL = \frac{2}{n(n-1)} \sum_{i<j} d_{i,j}. \tag{1}$$

(4) Average clustering coefficient, a measure of the density of nodes in the network, defined as the mean of the clustering coefficients of all nodes. The larger the value, the easier it is for adjacent nodes to establish cooperative relations. The calculation formula is

$$\bar{C} = \frac{1}{n} \sum_{i=1}^{n} \frac{2\,l_i}{d_i\,(d_i - 1)}, \tag{2}$$

where $d_i$ is the degree of node $i$ and $l_i$ is the actual number of edges among the nodes connected to node $i$.

(5) Network density, the ratio of the number of actual relationships in the network to the number of theoretically possible relationships (Liu Jun, An Introduction to Social Network Analysis, 2004). The calculation formula is

$$D = \frac{2l}{n(n-1)}, \tag{3}$$

where $n$ is the number of nodes and $l$ is the actual number of edges. Network density reflects the closeness of network relationships: the larger the value, the tighter the network structure and the closer the relationships between network members.

(6) Network diameter, the maximum distance between any two nodes in the network. The larger the value, the sparser the network and the lower its transmission performance and efficiency. The calculation formula is

$$L = \max_{i,j} d_{i,j}, \tag{4}$$

where $i$ and $j$ range over all node pairs.
### 3.2. Sample Selection and Data Processing

The patent data used in this research come from the PatSnap patent database. Patent data for the advanced nonferrous metal industry were downloaded according to the Strategic Emerging Industry Classification and International Patent Classification Reference Relationship Table issued by the State Intellectual Property Office. The data search and cleaning proceeded as follows. First, the high-frequency keywords and IPC classification numbers of the advanced nonferrous metals industry were determined according to the industry's definition in the Classification of Strategic Emerging Industries, and the patent search expression was constructed accordingly. The document indicates that the technologies covered by the advanced nonferrous metal industry include high-precision copper pipe, rod, wire, and profile products; aerospace high-strength aluminum alloy forgings; high-strength and high-conductivity copper materials; electrolytic copper foil, rolled copper foil, and electronic copper; medical titanium alloys; metal fiber porous materials; porous titanium and titanium alloys; foamed copper, aluminum, and nickel; nonferrous metal fiber porous materials; and so on. Second, patent data were retrieved from the PatSnap database according to the search expression, yielding 1,018,491 raw patent records. Third, because invention patents embody a higher level of inventor knowledge, carry more value, and are more representative than utility model and design patents, this study retains only invention applications and granted invention patents, leaving 259,579 patents. Finally, patents with at least two original patentees were screened, and patents listing natural persons among the applicants were excluded, yielding 34,116 cooperative invention patents in the advanced nonferrous metals industry as the data basis of this study. Since there is a time lag of at least 18 months from patent application to publication, and invention patents tend to take even longer, the data set extends only to 2020. Because patent applications in the advanced nonferrous metal industry first appeared in 2002, the study period is 2002 to 2020.
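The cleaning steps just described can be expressed compactly with pandas. The sketch below assumes a hypothetical PatSnap export whose column names (`patent_type`, `applicants`, `applicant_types`, `year`) are invented for illustration; a real export schema may differ.

```python
# Sketch of the four cleaning steps described above, under assumed
# column names; "applicants" and "applicant_types" are taken to be
# semicolon-separated strings.
import pandas as pd

df = pd.read_csv("patsnap_export.csv")  # raw download (1,018,491 records)

# Step 1: keep invention applications and granted invention patents only.
df = df[df["patent_type"] == "invention"]

# Step 2: keep patents with at least two original patentees.
df = df[df["applicants"].str.split(";").str.len() >= 2]

# Step 3: exclude patents listing natural persons among the applicants.
df = df[~df["applicant_types"].str.contains("natural person")]

# Step 4: restrict to the study window motivated by the publication lag.
df = df[df["year"].between(2002, 2020)]

print(len(df), "cooperative invention patents retained")
```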
From Figure 1, it can be seen that the number of joint patent applications for advanced nonferrous metals in China increases year by year, and that the industry's technological development is directly affected by the various policies issued by China. This study divides the period of joint patent applications into four development stages: 2002–2007, 2008–2011, 2012–2016, and 2017–2020.

(1) 2002–2007 is the initial period. In 2002, cooperative patent applications in the advanced nonferrous metals industry began to grow from scratch and developed slowly until 2007. At this stage, only a few large enterprises cooperated to apply for patents, and the level of patent cooperation was low.

(2) 2008–2011 is a period of rapid growth. The "12th Five-Year Plan" was a key period for China to build an innovative country; building a moderately prosperous society in an all-round way and accelerating the transformation of the economic development mode imposed higher and more urgent requirements on innovation capacity building. At this stage, the number of cooperative patents increased year by year at a marked rate.

(3) 2012–2016 is a period of high-quality development. The "13th Five-Year Plan" for the development of the nonferrous metals industry, promulgated by the Ministry of Industry and Information Technology, outlined eight major tasks: implementing innovation-driven development, accelerating industrial restructuring, vigorously developing high-end materials, promoting green sustainable development, improving resource supply capacity, promoting the deep integration of informatization and industrialization, actively expanding application fields, and deepening international cooperation. The nonferrous metals industry entered a new stage of development after the Fifth Plenary Session of the 18th CPC Central Committee, shifting from rapid development to high-quality development, and the number of patents increased overall.

(4) 2017–2020 is a period of stable growth, or a growth bottleneck period. Although cooperative patent applications remained at a high level, the growth rate slowed, falling far short of the rapid development of the previous stage. This may be related to technological bottlenecks and the economic slowdown during the epidemic.

Figure 1: Trend in the number of jointly applied advanced nonferrous metal patents in China.

According to the above analysis, and based on the cooperative application patent data, this paper explores the evolution characteristics of cooperative patents in the advanced nonferrous metal industry from three aspects: the type of original patentee, the region of the original patentee, and the technical subject areas involved in the cooperative patent application. The aim is to help enterprises in the industry maintain a stable level of technological innovation under epidemic normalization and thus strengthen their core competitiveness.
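As an illustration of how the yearly counts behind Figure 1 and the four stage boundaries can be derived, the following sketch continues from the cleaned frame `df` above; the plotting details are assumptions, not a reconstruction of the original figure.

```python
# Sketch: yearly joint-application counts with the stage cut points
# (2007/2008, 2011/2012, 2016/2017) marked as dashed lines.
import matplotlib.pyplot as plt

counts = df.groupby("year").size()  # assumes consecutive years 2002-2020
ax = counts.plot(kind="bar", figsize=(8, 4))
for boundary in (2007.5, 2011.5, 2016.5):
    # bar positions are 0..n-1, so shift by the first year
    ax.axvline(boundary - counts.index.min(), color="grey", linestyle="--")
ax.set_xlabel("application year")
ax.set_ylabel("joint invention patent applications")
plt.tight_layout()
plt.show()
```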
## 4. Analysis of the Evolution Characteristics of Cooperation Networks in the Advanced Nonferrous Metal Industry

### 4.1. Analysis of the Characteristics of the Overall Industrial Cooperation Network

Based on the sample patent data, the patent cooperation network of China's nonferrous metal industry from 2002 to 2020 was constructed with Gephi 0.9.2, as shown in Figure 2. The cooperative patentees were grouped by year and imported into Gephi 0.9.2 as node and edge tables, respectively. In the diagram, nodes represent the types of subjects that cooperate with each other and are divided into four types, with different colors denoting enterprises, universities, research institutes, and natural persons. The size of a node represents the number of links between that node and others, that is, the node's cooperation strength. A connection between nodes represents a cooperative relationship, and its thickness represents the frequency of cooperation between the linked nodes: the thicker the connection, the more frequent the cooperation during the study period. In general, the larger a node, the larger its degree and the wider its cooperation range; the thicker a connecting edge, the higher the cooperation frequency between adjacent nodes, that is, the more stable their cooperative relationship. As Figure 2 shows, the patent cooperation network of China's advanced nonferrous metal industry is disconnected as a whole; some nodes occupy core positions in the network, forming one larger subnet and several small subnets.

Figure 2: The invention patent cooperation network of the advanced nonferrous metals industry in China from 2002 to 2020.
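A minimal sketch of this node-and-edge construction is given below: each patent's applicant list is expanded into weighted pairwise cooperation edges and written in the Source/Target/Weight CSV layout that Gephi's import wizard accepts. The input file and column name are hypothetical.

```python
# Sketch: from co-applicant lists to a weighted edge table for Gephi.
import csv
from collections import Counter
from itertools import combinations

import pandas as pd

df = pd.read_csv("cooperative_patents.csv")  # assumed cleaned data set

edge_weights = Counter()
for applicants in df["applicants"].str.split(";"):
    # every unordered pair of co-applicants on one patent is one edge
    for a, b in combinations(sorted(set(applicants)), 2):
        edge_weights[(a, b)] += 1

with open("edges.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Source", "Target", "Weight"])  # Gephi edge headers
    for (a, b), w in edge_weights.items():
        writer.writerow([a, b, w])
```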
The topological structure indicators of patent cooperation in China's advanced nonferrous metal industry from 2002 to 2020 are shown in Table 1. Many nodes in the network have not yet formed direct connections, the degree of patent cooperation is insufficient, and network vitality is low. Most patent applicants cooperate little and are weakly connected, while some key applicants act as intermediaries, forming ties that transmit information and connect other nodes.

Table 1: Basic attribute indexes of the invention patent cooperation network of the advanced nonferrous metals industry in China from 2002 to 2020.

| Network size | Number of network edges | Network diameter | Graph density | Average clustering coefficient | Average path length |
|---|---|---|---|---|---|
| 3010 | 4124 | 16 | 0.001 | 0.729 | 5.083 |

Centrality analysis is a key tool for measuring the importance of network nodes. This study selects two indexes, degree centrality and betweenness centrality, to identify the important nodes in the patent cooperation network of China's advanced nonferrous metal industry from 2002 to 2020. The top ten invention patent applicants by degree centrality and by betweenness centrality are shown in Table 2. Applicants with high degree centrality hold higher positions in the network and cooperate with more subjects, which creates cooperative relations and makes the network tighter. Applicants with high betweenness centrality have a strong ability to influence the whole network through cooperation. Some applicants, such as State Grid Corp. and Tsinghua University, rank at the forefront of both centralities; that is, they hold higher status in the network and exert greater influence on it.

Table 2: Degree centrality and betweenness centrality in the overall patent cooperation network of the advanced nonferrous metal industry.

| Applicant | Degree centrality | Applicant | Betweenness centrality |
|---|---|---|---|
| State Grid Corp. | 346 | State Grid Corp. | 590711.80 |
| China Petrochemical Corporation Limited | 55 | PetroChina Co., Ltd. | 213672.42 |
| Central South University | 55 | Tsinghua University | 178997.21 |
| Shanghai Jiao Tong University | 49 | Shanghai Jiao Tong University | 172177.20 |
| Tsinghua University | 46 | Central South University | 154672.13 |
| University of Science and Technology Beijing | 39 | China Petrochemical Corporation Limited | 143446.67 |
| Northeastern University | 34 | University of Science and Technology Beijing | 130471.53 |
| Zhejiang University | 32 | Baoshan Iron and Steel Co., Ltd | 129023.57 |
| State Grid Smart Grid Research Institute | 31 | Xi'an Jiaotong University | 92493.06 |
| PetroChina Co., Ltd. | 30 | Peking University | 83922.18 |
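The two indexes in Table 2 can be computed along the following lines; `G` is the cooperation graph from the earlier sketch, and unnormalized betweenness is used so that the values stay on the same raw scale as the table.

```python
# Sketch: top-ten applicants by degree and by betweenness centrality.
import networkx as nx

degree = dict(G.degree())  # degree = number of distinct cooperation ties
betweenness = nx.betweenness_centrality(G, normalized=False)

top_degree = sorted(degree, key=degree.get, reverse=True)[:10]
top_between = sorted(betweenness, key=betweenness.get, reverse=True)[:10]
for d, b in zip(top_degree, top_between):
    print(f"{d:<50}{degree[d]:<8}{b:<50}{betweenness[b]:.2f}")
```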
### 4.2. Analysis of the Four-Stage Evolution Characteristics of the Cooperation Network

Gephi 0.9.2 was used to draw the patent cooperation network maps of the advanced nonferrous metal industry in four stages, 2002–2007, 2008–2011, 2012–2016, and 2017–2020, as shown in Figures 3–6.

Figure 3: The invention patent cooperation network of the advanced nonferrous metals industry in China from 2002 to 2007.
Figure 4: The invention patent cooperation network of the advanced nonferrous metals industry in China from 2008 to 2011.
Figure 5: The invention patent cooperation network of the advanced nonferrous metals industry in China from 2012 to 2016.
Figure 6: The invention patent cooperation network of the advanced nonferrous metals industry in China from 2017 to 2020.

The four evolution maps show the following. (1) Over time, the intensity of cooperation has gradually increased. The first two stages mainly feature small-scale cooperation, for example, two or three enterprises cooperating without forming a complex cooperation network; in the latter two stages, a large number of small-scale, centralized cooperative relationships form around certain core nodes. This indicates that enterprises, encountering the limits of their own technology and knowledge during development, begin to seek more stable cooperative relationships. Such relationships not only help enterprises open up new technical fields and enhance their technical breadth and depth but also spread the uncertainty and risk of technological innovation across the cooperation network, greatly reducing the losses an enterprise must bear when innovation fails. (2) The evolution of research subjects increasingly follows the trend of industry-university-research integration. Universities have abundant patent knowledge and rich theoretical resources but cannot transform patented technology well without product demand and financial strength; enterprises have the economic strength to bring patented technology to market but lack the knowledge support of patent developers. The full combination of industry, universities, and research institutes helps realize the collaborative transformation from patent knowledge to product profit. Figures 3–6 show that in the initial stage there is more cooperation between natural persons and enterprises; over time, however, more and more cooperative relationships form with universities as the "stamens" and various types of enterprises as the "petals." Such relationships promote the flow and transformation of knowledge and technology, improving both the success of invention patents in enterprises and the efficiency of patent transformation in universities.

To further analyze the structural changes of the patent cooperation network of advanced nonferrous metals, its structural indexes were measured with social network analysis. The changes in topological structure characteristics are shown in Table 3.

Table 3: Structural characteristic values of the invention patent cooperation network in different stages of China's advanced nonferrous metal industry.

| Index | Stage 1: 2002–2007 | Stage 2: 2008–2011 | Stage 3: 2012–2016 | Stage 4: 2017–2020 |
|---|---|---|---|---|
| Network size | 275 | 1025 | 2088 | 2567 |
| Number of network edges | 254 | 988 | 2338 | 3404 |
| Network diameter | 6 | 11 | 14 | 16 |
| Graph density | 0.006 | 0.002 | 0.001 | 0.001 |
| Average clustering coefficient | 0.738 | 0.721 | 0.713 | 0.732 |
| Average path length | 2.77 | 5.03 | 4.99 | 5.29 |

The number of network nodes reflects the network scale. As Table 3 shows, the patent cooperation network of China's advanced nonferrous metal industry keeps growing: 275 nodes in stage 1, 1025 in stage 2, 2088 in stage 3, and 2567 in stage 4. From stage 1 to stage 4 the number of edges follows the same trend as the number of nodes but grows more elastically than the network scale. This is reflected in the graph density, which decreases gradually across the stages even as the cooperative relationships and cooperation counts between nodes increase. The average clustering coefficient reflects how closely knit the whole network is. The clustering coefficients of all four stages are far higher than the network density of the same stage, which shows that invention patent cooperation in China's advanced nonferrous metal industry is not formed by random choice: there is preferential attachment, and stable cooperative relationships gradually form. The network diameter and average path length also rise, showing that the cooperation network of China's advanced nonferrous metal industry is becoming sparser: as nodes are added, the distances between them grow, lowering the network's transmission performance and efficiency. Thus, although more and more subjects in China participate in cooperation in this field, considerable room for cooperation among innovation subjects remains. Generally speaking, all four stages of the network's evolution combine a high clustering coefficient with a short average path length, showing a distinct small-world effect, which is conducive to sharing resources and information in the network and facilitates exchange and cooperation between applicants. A simple way to check this reading is sketched below.
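One hedged way to substantiate the small-world reading is to compare a stage network with a random graph of the same size and edge count; in the sketch below, `G_stage` stands for any one stage's cooperation graph and is an assumed variable.

```python
# Sketch: small-world check against an Erdos-Renyi reference graph.
import networkx as nx

n, m = G_stage.number_of_nodes(), G_stage.number_of_edges()
random_ref = nx.gnm_random_graph(n, m, seed=42)

def giant(g):
    """Largest connected component, where path lengths are defined."""
    return g.subgraph(max(nx.connected_components(g), key=len))

# Small-world signature: clustering far above the random reference while
# the average path length stays comparably short.
print(nx.average_clustering(giant(G_stage)), "vs random:",
      nx.average_clustering(giant(random_ref)))
print(nx.average_shortest_path_length(giant(G_stage)), "vs random:",
      nx.average_shortest_path_length(giant(random_ref)))
```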
### 4.3. Evolution Analysis of Important Nodes

The analysis of the cooperation networks shows that the networks of the individual stages are comparable. Degree centrality and betweenness centrality are again used to analyze the top ten nodes on each index in each stage, reflecting node importance and the changing characteristics of important nodes across stages.

As Table 4 shows, in terms of degree centrality the most active applicants are enterprises, universities, and scientific research institutions, which form a relatively stable "industry-university-research" cooperation mode. Judging by the centrality values, the core shifts from companies in the early stages to universities later on, with the weight of universities growing steadily. From 2002 to 2007, patent cooperation related to the petrochemical industry dominated, placing Sinopec and its research institutes in the first and second ranks, with enterprises, universities, and scientific research institutions roughly equally represented. From 2008 to 2011, the composition of patent applicants resembled the first stage and enterprises still held an important position, but Zhejiang University and Tsinghua University gradually became subcore nodes of the network, indicating that universities possess strong technological innovation capabilities and rich patent cooperation relations. From 2012 to 2016, unlike the previous two stages, five universities accounted for half of the top applicants, with Tsinghua University as the main core node of the network, revealing universities' strong technological innovation and strong intention to file cooperative patent applications. From 2017 to 2020, as in the third stage, universities remained dominant, with seven university applicants, most at the forefront, and the status of universities rose further.

Table 4: Degree centrality and betweenness centrality of the patent cooperation network in each stage of the advanced nonferrous metal industry.
| Period | Applicant | Degree centrality | Applicant | Betweenness centrality |
|---|---|---|---|---|
| 2002–2007 | China Petrochemical Corporation Limited | 30 | China Petrochemical Corporation Limited | 665.67 |
| | Sinopec Company Petrochemical Science Research Institute | 9 | Zhejiang University | 494 |
| | Baoshan Iron and Steel Co., Ltd | 9 | Baoshan Iron and Steel Co., Ltd | 368 |
| | Consortium corporation Industrial Technology Research Institute | 7 | China Petrochemical Corporation Limited | 205.67 |
| | Zhejiang University | 7 | East China University of Science and Technology | 132 |
| | Wanguo Computer Co., Ltd | 6 | Sinopec Ningbo Engineering Co., Ltd | 117.33 |
| | East China University of Science and Technology | 5 | Petrochemical Science Research Institute of China Stone Engineering Company | 110.67 |
| | Tsinghua University | 5 | Dalian Institute of Chemical Physics, Chinese Academy of Sciences | 89 |
| | China University of Petroleum (Beijing) | 5 | Antai Technology Co., Ltd | 45 |
| | Xiwang Technology Co., Ltd | 5 | University of Chongqing | 45 |
| 2008–2011 | China Petrochemical Corporation Limited | 23 | China National Petroleum Corporation Limited | 2230.5 |
| | Baoshan Iron and Steel Co., Ltd | 12 | China Petrochemical Corporation Limited | 2192.36 |
| | Zhejiang University | 11 | Dalian Institute of Chemical Physics, Chinese Academy of Sciences | 2090.33 |
| | Tsinghua University | 10 | Tsinghua University | 1483.5 |
| | Dalian Institute of Chemical Physics, Chinese Academy of Sciences | 10 | Baoshan Iron and Steel Co., Ltd | 1464.67 |
| | University of Science and Technology Beijing | 9 | University of Science and Technology Beijing | 1225.67 |
| | Hon Hai Precision Industries Company Limited | 7 | Zhejiang University | 1208.6 |
| | China National Petroleum Corporation Limited | 6 | Shanghai Jiao Tong University | 756.64 |
| | Shanghai Jiao Tong University | 6 | Shandong Aluminum Co., Ltd | 557 |
| | East China University of Science and Technology | 6 | Jiangsu Thorpe (Group) Co., Ltd | 465 |
| 2012–2016 | Tsinghua University | 22 | China National Petroleum Corporation Limited | 5530.62 |
| | China Petrochemical Corporation Limited | 17 | Tsinghua University | 4273.33 |
| | Shanghai Jiao Tong University | 16 | Baoshan Iron and Steel Co., Ltd | 3769.08 |
| | Central South University | 15 | University of Science and Technology Beijing | 3277.58 |
| | Zhejiang University | 13 | China Petrochemical Corporation Limited | 3113.59 |
| | Dalian Institute of Chemical Physics, Chinese Academy of Sciences | 12 | Dalian University of Technology | 2950.09 |
| | Hon Hai Precision Industries Company Limited | 11 | Shanghai Jiao Tong University | 2843.51 |
| | University of Science and Technology Beijing | 11 | Central South University | 2026.67 |
| | China Petrochemical Corporation Limited | 10 | Dalian Institute of Chemical Physics, Chinese Academy of Sciences | 1583 |
| | Baoshan Iron and Steel Co., Ltd | 9 | Zhejiang University | 1350.1 |
| 2017–2020 | Shanghai Jiao Tong University | 23 | University of Science and Technology Beijing | 3945 |
| | Tsinghua University | 18 | Central South University | 3824 |
| | Central South University | 18 | Baoshan Iron and Steel Co., Ltd | 3548 |
| | University of Science and Technology Beijing | 15 | Tsinghua University | 2465 |
| | China Petrochemical Corporation Limited | 14 | Shanghai Jiao Tong University | 2269 |
| | Zhejiang University | 12 | CRRC Industrial Research Institute Co., Ltd | 2088 |
| | Hon Hai Precision Industries Company Limited | 11 | China Petrochemical Corporation Limited | 1039.5 |
| | Shanghai University | 9 | Hon Hai Precision Industries Company Limited | 954 |
| | China University of Petroleum (East China) | 9 | Dalian University of Technology | 946 |
| | Baoshan Iron and Steel Co., Ltd | 9 | Beijing Steel Research and Advanced Technology Co., Ltd | 824.33 |

In terms of betweenness centrality at each stage, the composition of applicants is still dominated by enterprises, universities, and research institutes, and the betweenness centrality values gradually increase across stages.
The influence in the third and fourth stages is more pronounced, with stronger control over the network. From 2002 to 2007, enterprise applicants accounted for half of the top nodes and exercised strong control over the network, followed by universities with relatively weak influence. From 2008 to 2011, much as in the previous stage, the applicants with the most patent cooperation were still enterprises, mostly concentrated in the petroleum, chemical, and natural gas fields; their betweenness centrality values increased, indicating stronger network control than in the first stage. From 2012 to 2016, university applicants appeared in large numbers and held strong network control, but enterprises retained the strongest capacity to lead patent cooperation. From 2017 to 2020, university applicants increased in both number and intensity of control, allowing them to influence patent cooperation effectively.

It can be seen that the rankings by betweenness centrality and by degree centrality differ, and that universities have risen in both. This reflects that universities play an increasingly important role in the network and have gradually acquired strong control power, with which they can effectively influence patent cooperation. Some applicants, such as China Petrochemical Corporation, Tsinghua University, and China National Petroleum Corporation, rank near the top of both centralities, showing that these members play an important intermediary role in connecting other members, building "bridges" for cooperation among them, and steering the general direction of technical cooperation.
## 5. Analysis of the Evolution Characteristics of Knowledge Networks in the Advanced Nonferrous Metal Industry

### 5.1. Evolution Characteristics of Cooperative Patent Application Technology Fields
To gain a deeper understanding of the fields involved in patented technology and the R&D intensity invested in each field of the advanced nonferrous metal industry, Python was used to draw the heat map of patented technology theme evolution and Gephi 0.9.2 was used to draw the patented technology co-occurrence map, allowing a visual analysis of the depth and co-occurrence of technologies. Figures 7–10 show technology-intensive fields, technology-tentative fields, and technology-developmental fields, respectively. In the figures, the horizontal axis represents the fields in which patented technologies are distributed, the vertical axis represents the passage of years, and the color bar on the right encodes the number of patented technologies from few to many. Reading the heat map vertically shows the development of a technical field over time; reading it horizontally shows how the various technical fields change from year to year.

Figure 7: Technology-intensive fields of the advanced nonferrous metal industry.
Figure 8: Tentative technology fields of the advanced nonferrous metal industry (1).
Figure 9: Tentative technology fields of the advanced nonferrous metal industry (2).
Figure 10: Technology-developmental fields of the advanced nonferrous metal industry.
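A heat map of this kind can be produced along the following lines; the data frame `df`, its `ipc` column, and the truncation of each code to its three-character IPC class are illustrative assumptions about the underlying data.

```python
# Sketch: year-by-IPC-class counts rendered as a heat map, in the spirit
# of Figures 7-10 (years on the vertical axis, fields on the horizontal).
import matplotlib.pyplot as plt

pivot = (df.assign(ipc_main=df["ipc"].str[:3])   # e.g., "B01", "H01"
           .groupby(["year", "ipc_main"]).size()
           .unstack(fill_value=0))

fig, ax = plt.subplots(figsize=(10, 6))
im = ax.imshow(pivot.values, aspect="auto", cmap="YlOrRd")
ax.set_xticks(range(len(pivot.columns)))
ax.set_xticklabels(pivot.columns, rotation=90)
ax.set_yticks(range(len(pivot.index)))
ax.set_yticklabels(pivot.index)
fig.colorbar(im, label="number of patented technologies")
plt.tight_layout()
plt.show()
```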
Figures 8 and 9 show the evolution of technical topics in fields whose frequencies lie in the range 0–10. On the vertical axis this part spans a wide range, 72 categories in total, yet the total count is only 840, so these are tentative technical fields: broad in coverage but small in number. They divide into two types with different characteristics. One type is research that advances continuously over time with year-by-year increases in intensity, such as C30 (crystal growth), E21 (earth or rock drilling; mining), and G06 (computing; calculating or counting); through continued cooperation and development, patented technologies in these fields are likely to grow into the developmental fields and then support the intensive fields. The other type appears only occasionally over time, forming intermittent research with very small totals, for example C05 (fertilisers; manufacture thereof), C06 (explosives; matches), and A23 (foods or foodstuffs not covered by other classes; their treatment). The practical effect of patented inventions in these fields is limited, or they are difficult and unrewarding to develop, so such technologies offer little research value.

Figure 10 shows the technical fields in the frequency range 0–80. Their number is about one-tenth of that of the technology-intensive fields, and they play an auxiliary role for patents in the intensive fields; at about eight times the size of the tentative fields, they carry higher research value and significance. As with the tentative fields, two types can be distinguished. One is technology that may become an intensive field in the future, such as C25 (electrolytic or electrophoretic processes; apparatus therefor): research on it continues over time, its intensity exceeds that of earlier years, and it attracts more investment than other technologies. Although a large research gap remains, it has clear research value, and enterprises, universities, and research institutes can increase their research on it. The other is experimental development technology that shows signs of disappearing as it matures, with only a few early studies, such as H04 (electric communication technique), or that undergoes staged research and development, such as B32 (layered products). Such technology does not strategically support enterprises' innovation and development, so enterprises should reduce their R&D investment in it.

From the above analysis, technological innovation institutions should maintain continuous research in the intensive technology fields to lay the foundation of their innovation competitiveness; enterprises that master the industry's core competitive advantages gain sound basic development abilities in the advanced nonferrous metal industry. In addition, innovation actors can set aside some innovation resources, cooperate with enterprises, universities, and research institutes holding heterogeneous resources, and seek the "blue ocean" of technology in the developmental and tentative fields. Good development research there can provide stable technical advantages, and even a leading position in future technical fields, offering a high-quality guarantee for innovation competitiveness.
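The three field types above are distinguished essentially by cumulative frequency. A minimal sketch of that binning follows; the cut-offs track the ranges quoted in the text (0–10 tentative, 0–80 developmental, intensive above that), and all totals except the quoted B01/H01/C01 figures are invented for illustration.

```python
# Sketch of the frequency-based field typing described in Section 5.1.
# The thresholds are assumptions derived from the ranges quoted in the text.
def classify_field(total_count: int) -> str:
    if total_count <= 10:
        return "tentative field"              # wide span, very low counts
    if total_count <= 80:
        return "technology development field"  # auxiliary, moderate counts
    return "technology-intensive field"        # core fields, high counts

for code, total in {"B01": 7001, "H01": 6444, "C01": 5176,
                    "C25": 64, "C05": 3}.items():
    print(f"{code}: {total} -> {classify_field(total)}")
```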
### 5.2. Analysis of the Four-Stage Evolution Characteristics of the Knowledge Network

The analysis of patent categories above shows the categories in which the advanced nonferrous metal industry should conduct in-depth research. Within each category, the success rate and usability of patent R&D can be improved by studying the cooperative development of patent subclasses. Based on the subclass-level IPC classification numbers of cooperative patent applications, Gephi 0.9.2 is used to construct the IPC co-occurrence network of the patented technologies and to analyze the characteristics and evolution law of the knowledge network. Figures 11–14 show the co-occurrence evolution of IPC classification numbers over the four stages, and Table 5 gives the co-occurrence topological indices of the cooperative patented technologies in the advanced nonferrous metal industry. A node in the network represents an IPC subclass, and the larger the node, the more often that IPC code appears; a link represents the co-occurrence of two classification numbers, and the thicker the link, the more frequently the two end nodes co-occur, that is, the more closely the two technologies are related.

Figure 11: Co-occurrence of IPC from 2002 to 2007.
Figure 12: Co-occurrence of IPC from 2008 to 2011.
Figure 13: Co-occurrence of IPC from 2012 to 2016.
Figure 14: Co-occurrence of IPC from 2017 to 2020.

Table 5: Co-occurrence topological indices of cooperative patented technologies in the advanced nonferrous metal industry.

| Stage | Network size | Network connections | Average degree | Network diameter | Graph density | Mean clustering coefficient | Average path length |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2002–2007 | 112 | 331 | 5.911 | 7 | 0.053 | 0.732 | 3.021 |
| 2008–2011 | 206 | 834 | 8.097 | 5 | 0.039 | 0.725 | 2.717 |
| 2012–2016 | 284 | 1630 | 11.479 | 4 | 0.041 | 0.668 | 2.505 |
| 2017–2020 | 325 | 2086 | 12.837 | 4 | 0.04 | 0.707 | 2.446 |

Together with the evolution diagrams, the topological indicators in Table 5 support the following observations.

(1) The network scale grows steadily, from 112 nodes at the beginning to 325, showing that the cooperative patents of the advanced nonferrous metal industry contain more and more knowledge elements. The growth of network connections and of the average degree shows that the co-occurrence frequency between technologies has increased by an even greater margin: technological co-occurrence is intensifying, and patents bundle ever more technical cooperation. This calls for enterprises to seek more partners with heterogeneous resources for patent technology cooperation.

(2) The network diameter and graph density show the co-occurrence network becoming tighter. As the scale and the connections grow in number and complexity, the diameter decreases step by step while the density rises and gradually stabilizes, so the technology network is characterized by overall dispersion with local tightness. More research on the IPC subclasses within the intensive technology fields would promote core competitiveness.

(3) The average clustering coefficient and average path length show that all four stages combine a high clustering coefficient with a short path length: the network is strongly cohesive and exhibits a small-world effect, with clusters of closely related technologies. R&D institutions should identify these technologies and cooperate on research to develop more comprehensive patents, thereby enhancing their innovation ability and core competitiveness.
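The indicators in Table 5 are standard graph metrics and can be recomputed from the co-occurrence network. Below is a NetworkX sketch on a toy graph, assuming, as is common, that diameter and average path length are evaluated on the largest connected component (the real network need not be connected).

```python
import networkx as nx

# Toy IPC co-occurrence graph; in the study, nodes are IPC subclasses and an
# edge's weight is how often two subclasses co-occur on one patent.
G = nx.Graph()
G.add_weighted_edges_from([
    ("B01J", "C01B", 5), ("B01J", "C22C", 2), ("C22C", "C23C", 4),
    ("H01M", "C01B", 3), ("H01M", "B01J", 1), ("C23C", "H01M", 2),
])

giant = G.subgraph(max(nx.connected_components(G), key=len))

print("network size       :", G.number_of_nodes())
print("network connections:", G.number_of_edges())
print("average degree     :", 2 * G.number_of_edges() / G.number_of_nodes())
print("network diameter   :", nx.diameter(giant))            # giant component
print("graph density      :", nx.density(G))
print("mean clustering    :", nx.average_clustering(G))
print("average path length:", nx.average_shortest_path_length(giant))
```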
## 6. Conclusion

Taking the advanced nonferrous metal industry as an example, this study constructs the invention-patent cooperation network, the heat map of patent technology theme evolution, and the patent technology IPC co-occurrence network of the industry in China from 2002 to 2020, from the perspectives of the cooperation network and the knowledge network. On this basis, we study the structural characteristics and evolution law of the industry's invention-patent networks, draw the following conclusions, and give relevant suggestions.

### 6.1. Crucial Findings

(1) The scale of the patent cooperation network in China's advanced nonferrous metal industry is expanding steadily. Network density is increasing, network cohesion is strong, and the small-world effect is obvious. The network shows overall dispersion with local tightness, and there are few "bridges" between groups, so it still has great room for development. The "Triple Helix" character of the cooperation network is gradually becoming apparent: through long-term formal and informal cooperation and exchange, universities, enterprises, and research institutes have intersected, cooperated closely, and interacted with one another, and their cooperative relationships have become more stable.
The evolution of the knowledge network over time reflects the dynamic change of technologies.

(2) Patent applicants such as China Petrochemical Corporation, Tsinghua University, China National Petroleum Corporation, Zhejiang University, Baoshan Iron and Steel Co., Ltd., and Shanghai Jiao Tong University are important nodes of the network. They have more contacts with other nodes, play an intermediary role, serve as the "bridge" for cooperation among other subjects, and grasp the general direction of technical cooperation. These core nodes should therefore be the key points for guiding and adjusting the cooperation behavior of important nodes. Over time, the structure of the leading key nodes has shifted from enterprise-led to university-led.

(3) The total number of technology categories involved in the advanced nonferrous metal industry keeps increasing, and their depth also increases gradually. Different types of technology fields emerge, namely technology-intensive fields, tentative technology fields, and technology development fields, characterized by high intensity, wide range, and steady progress.

### 6.2. Policy Implications

To speed up the cultivation of China's advanced nonferrous metal industry, realize the optimal allocation of innovation resources, and improve international competitiveness, this paper offers the following theoretical and practical implications.

(1) The government should strengthen and guide the cooperative relationships among the "Triple Helix" subjects to promote the stability and development of the industry-university-research innovation network. It should introduce policies that promote breakthroughs in the core technologies of China's advanced nonferrous metal industry, guided by industrial demand and aimed at tackling key problems with new technologies, and it should encourage innovative organizations to conduct joint research on the industry's core common technologies with the strong support of government science and technology programs. First, the government should set up incentive mechanisms to guide universities and research institutes toward commercializing patented technology through cooperation with enterprises. Second, it should establish a dynamic management and evaluation system for industry-university-research cooperation projects, continually adjust their deployment, and strengthen the tracking and implementation of policies. Finally, it should improve the intellectual property system and build a long-term protection mechanism for intellectual property rights.

(2) In the knowledge innovation stage, universities have created a large number of scientific research achievements through collaborative innovation with governments, enterprises, and research institutions. However, most of these exist as papers or patents and are never turned into capital, which wastes technological innovation.
Universities should therefore give full play to their advantages in knowledge innovation and extend them into technological innovation to realize the capitalization of knowledge. In addition, universities and research institutions should focus on the core common technologies of industrial development to fundamentally solve the "two skins" problem, in which science and technology and the economy develop separately. Because universities and research institutions overlap in scientific research and personnel training, they can break the original organizational boundaries and develop synergistically: by deeply integrating resources such as science, technology, and talent, they can couple basic research with applied research and personnel training, superimpose high-quality resources, and complement one another's advantages, thereby driving the sustainable development of regional economies and societies.

(3) Strengthen the dominant position of enterprises in technological innovation, support enterprises in enhancing their independent innovation ability, and establish and improve an enterprise-led industry-university-research collaborative innovation mechanism. Enterprises, as the main battlefield where scientific and technological achievements are realized, can accelerate knowledge capitalization and technology industrialization through collaborative innovation. However, most Chinese enterprises pay insufficient attention to basic research, which weakens their dominant position in the innovation network; their limited capacity for scientific and technological innovation leads to loose cooperative relations, which has become an important factor restricting the development of China's "Triple Helix" collaborative innovation system. Enterprises can enhance their own innovation abilities while carrying out technical cooperation with other enterprises, promoting the formation and implementation of patent alliances to strengthen their innovation advantages; they can also cooperate with universities and research institutions to transform and match resources, providing strong support for technological innovation and the transformation of achievements.

(4) In practice, the government should guide advanced nonferrous metal enterprises to increase investment in basic research on the industry's core technologies through leverage such as financial subsidies and interest subsidies, and it should encourage enterprises to improve their research ability by building laboratories or R&D centers jointly with universities and research institutes. Beyond macropolicy support, the government can also build innovation carriers that participate in collaborative innovation, for example new industry research institutes and technology incubators. A new industry research institute is an innovation platform for the deep integration of industry, education, and research, jointly sponsored under government guidance by key universities, research institutions, and key enterprises of the industry to carry out collaborative innovation, achieve breakthroughs in core technologies, and tackle key problems.
Such an institute emphasizes the interconnection of basic and applied research to realize the effective chain of "basic research-technological breakthrough-technical application-successful industrialization." The technology incubator is one of the carriers of science and technology innovation services and a training base for innovative and entrepreneurial talents: the government provides infrastructure such as premises, together with preferential policies such as seed funds and tax relief, and the incubator helps improve the conversion rate of scientific and technological achievements while reducing the risks and costs of start-up enterprises. These measures help resolve the obstacles to cross-organizational knowledge transfer caused by the mismatch of knowledge and ability between China's advanced nonferrous metal enterprises and its universities and research institutes, and they fundamentally address the "two skins" separation of science and technology from the economy.
--- *Source: 1023816-2022-11-08.xml*
2022
# Responses of Streamflow to Climate Change and Human Activities in a River Basin, Northeast China

**Authors:** Hanwen Zhang; Wei Xu; Xintong Xu; Baohong Lu
**Journal:** Advances in Meteorology (2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1023821

---

## Abstract

Many water resources stresses relate to access to water within a basin. The Yi River Basin, a typical river basin characterized by intensive agriculture, significant population growth, and active water management, has been undergoing grave water problems. In this paper, the long-term trends of precipitation and streamflow in the Yi River Basin from 1954 to 2010 were investigated via the Mann-Kendall (MK) test. The change point occurred in 1965, dividing the long-term series into two periods. A climate elasticity method and a linear regression method were implemented to quantify the impacts of precipitation and human activities on runoff, and they gave basically consistent percentage changes in annual runoff for the postchange period. The results reveal that the decline in annual runoff in the postchange period is attributable mainly to precipitation variability (53.66–58.25%) and to human activities (46.34–41.74%), as estimated by the climate elasticity method and the linear regression method, respectively. This study detected the changes in the precipitation-streamflow relationship and investigated their possible causes in the Yi River, providing a reference for the management of regional water resources.

---

## Body

## 1. Introduction

Streamflow, as a natural resource, is a measure of sustainable water availability and is of great importance for the sustainable development and utilization of water resources and for biodiversity. With socioeconomic development, however, access to water has become a serious issue in many parts of the world and, given recent United Nations estimates, the situation is not likely to improve [1, 2]. Water-related problems, especially the variability and availability of regional water resources under the influence of climate change and human activities, have recently drawn great attention from hydrologists [3, 4]. For sustainable water resources planning and management, it is therefore essential to determine the changes in water resources in both space and time and to evaluate the influence of climate change and human activities on them.

The streamflow of some rivers in the world has decreased significantly owing to climate change and intensifying human activities [2, 5–7]. Milly et al. [5] investigated the observed trends of global streamflow and reported runoff reductions in sub-Saharan Africa, southern Europe, southernmost South America, southern Australia, and western midlatitude North America, while observed streamflow increased in the La Plata basin of southern South America, southern through central North America, the southeastern quadrant of Africa, and northern Australia.

China, although its water resources rank sixth in the world in total volume, suffers severely from water shortage. To date, water stress in China remains a widely recognized concern of the general public [8]. The effects of climate variability and change, including the increasing frequency of extreme events such as droughts, coupled with water resources unevenly distributed across regions and time, create additional pressure on already scarce supplies. Zhang et al.
[9] investigated the trends of annual streamflow in the six large basins in China based on the observed runoff data of 19 main hydrological stations over the past 50 years and concluded that the annual streamflow of all investigated basins shows a downward trend, with significant decreases in northern China. Piao et al. [10] reviewed runoff changes in the Yangtze River Basin and the Yellow River Basin, the two largest basins in China, and reported a small, statistically insignificant upward trend in the annual runoff of the Yangtze River Basin since 1960, while the Yellow River Basin showed a persistent decline in its annual runoff series. Zhang et al. [4] found similar changing patterns based on monthly streamflow data from 382 hydrological stations in China covering 1960–2000: declining streamflow in northern China and in the upper reaches of the Yangtze and Pearl River Basins, with significant decreases mainly in the Yellow, Liaohe, and Haihe River Basins.

It is common knowledge that the planning, design, and operation of water resources projects are generally based on observed historical hydroclimatological data, under the assumption that the statistical characteristics of the considered time series are time-invariant; violations of this assumption inevitably trigger major problems in regional water resources management [11]. Hence, revealing the physical characteristics of a basin, the trends of streamflow change, and their influencing factors is an important scientific problem in hydrology. Moreover, the hydrologic cycle at the watershed scale is a complex process affected by climate, intensifying human activities, and other factors [12, 13]; the impacts of precipitation and human activities vary from place to place and need to be investigated at the local scale to better understand the consequences of human activities and to serve water resources planning, management, and sustainable development in that region.

The Yi River, a tributary of the Huaihe River system, has experienced grave water problems in recent years, including, as reported by Zhang et al. [14], flooding, water shortage, high regulation, serious pollution, and aquatic ecosystem degradation. Water resources have become one of the key constraints on local socioeconomic development, yet research on the streamflow of the Yi River Basin is rarely reported. In light of these facts, this paper assesses the half-century-long changes of precipitation and streamflow in the Yi River and quantifies the impacts of climate change and human activities on its hydrological processes, aiming to better understand the changes in the precipitation-streamflow relationship and how runoff responds to climate change and anthropogenic activities, which is of great importance for the rational utilization and management of water resources and for local ecological protection.

## 2. Materials and Methods

### 2.1. Study Area and Data Set

The Yi River Basin, extending from longitude 117°24′E to 119°11′E and latitude 34°22′N to 36°23′N, is located in the southeast of Shandong Province, China (Figure 1).
Originating from the Yimeng Mountains, the Yi River flows through Shandong Province and extends south into Jiangsu Province, with a total length of 333 km and a drainage area of 11,820 km².

Figure 1: Sketch map of the study area and the hydrological and meteorological stations.

Hilly terrain occupies the middle and northern parts of the Yi River watershed and accounts for 70% of the total basin area; the rest is mainly plain. The climate is north temperate monsoon, with an average annual mean temperature of 13.2°C and average annual precipitation of 830 mm. The flood season normally runs from June to September, and precipitation in this period accounts for about 74.2% of the annual total, while the main flood season, July to August, produces the highest flooding. Owing to the mountainous terrain, floods in the Yi River feature high peaks, large volumes, and flashy behavior. Affected by the distribution of rainfall and by the topography, the Yi River lacks water in the dry season while being visited by flash floods in the wet season, complicating the development and utilization of water resources.

Linyi hydrological station, at the outlet of the Yi River Basin, is selected to investigate the annual and monthly runoff variation. Streamflow data spanning 57 years (1954–2010) were obtained from the Hydrologic Yearbook, and daily precipitation records of seven meteorological stations covering the same period were acquired from the National Meteorological Information Center, China Meteorological Administration (http://data.cma.cn/site/index.html) (Figure 1). The mean annual precipitation of the river basin over the investigated time frame was interpolated by the Kriging method in ArcGIS from the annual precipitation data of the meteorological stations in the basin.
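The interpolation step can be approximated outside ArcGIS. The sketch below uses the open-source PyKrige package as a stand-in for the ArcGIS Kriging workflow, with placeholder station coordinates and precipitation values; the paper's actual station data are not reproduced here.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Placeholder longitudes, latitudes, and annual precipitation (mm) for seven
# stations; the study uses real stations in and around the Yi River Basin.
lon = np.array([117.5, 118.0, 118.4, 118.8, 117.9, 118.6, 119.0])
lat = np.array([34.5, 34.9, 35.3, 35.8, 36.1, 34.6, 35.5])
pcp = np.array([810.0, 845.0, 820.0, 790.0, 860.0, 835.0, 805.0])

ok = OrdinaryKriging(lon, lat, pcp, variogram_model="spherical")

# Interpolate onto a regular grid covering the basin extent.
grid_lon = np.arange(117.4, 119.2, 0.05)
grid_lat = np.arange(34.3, 36.4, 0.05)
z, ss = ok.execute("grid", grid_lon, grid_lat)   # z: interpolated field

# Basin-average annual precipitation (the basin mask is omitted for brevity).
print("basin mean precipitation ~", float(z.mean()), "mm")
```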
### 2.2. Methods

Quantifying the respective contributions of natural factors and human activities to streamflow changes matters not only theoretically but also for water resources management and for soil and water conservation measures. It is, however, never an easy or straightforward process: many methods have been developed in the literature to quantify the impacts of climate change and human activities, and the selection of the "best" method for such an evaluation remains an open and debated question. Saifullah et al. [7] emphasized this fact: "…several methods have been used to assess the impacts of precipitation and land surface changes on the hydrological processes, but to date, no standard model has been developed…." Two categories of methods can be identified, namely hydrological modelling and statistical modelling. In the first category, paired catchment experiments and physically based hydrological models (lumped or distributed), such as the Xinanjiang Model, the Soil and Water Assessment Tool (SWAT), the Variable Infiltration Capacity (VIC) Model, the SIMHYD Model, the HBV Model, and the SLURP (Semidistributed Land Use-Based Runoff Processes) Model, top the list for determining the impacts of climate change and human activities on hydrological processes (e.g., [15–20]). However, the experimental methods are usually time-consuming and expensive, and suitable control catchments are difficult to find. The physically based hydrological models, though physically sound, are also limited by the major effort required for calibration and validation, the heavy demand for data (e.g., high-resolution land use, soil, and groundwater data) to represent the hydrophysical processes, and the complexity and uncertainty entangled in model structure and parameter estimation [15, 17, 21, 22]. Statistical approaches provide an alternative, for instance, regression analysis [23, 24], hydrological sensitivity analysis (see, among others, [17, 22, 25–29]), and elasticity methods [30, 31]. Nonetheless, the climate elasticity method has its limitations too: the general framework for estimating the proportional contributions of climate and human impacts to streamflow assumes that human activities are independent of climate change, whereas in fact the two interact; at the catchment scale, climate change may dominate land use and land cover change (LUCC) and consequently alter the amount and course of streamflow.

The appropriate method for quantifying the response of streamflow to climate change and human activities therefore varies from case to case; ideally, a combination of both approaches should compensate for the limitations of each. Owing to limited access to the soil and groundwater data needed for a hydrological model, this paper employs the statistical approach.

#### 2.2.1. Trend and Abrupt Change Point Detection

The rank-based MK test [32–34] is a nonparametric trend detection technique for time series. Owing to its power and advantages, such as high asymptotic efficiency, it is frequently used in the literature (e.g., [35–38]), and different variants of the test exist [39]. In this study, the formulation examined by Moraes et al. [40] and Gerstengarbe and Werner [41] is adopted, briefly outlined as follows.

Given a time series $x_i$ with $n$ terms ($1 \le i \le n$), the MK rank statistic $a_k$ is

$$a_k = \sum_{i=1}^{k} R_i, \tag{1}$$

where

$$R_i = \begin{cases} +1, & x_i > x_j, \\ 0, & x_i \le x_j \end{cases} \qquad (1 \le j \le i), \tag{2}$$

that is, $R_i$ is the number of preceding terms $x_j$ that are exceeded by $x_i$. Under the null hypothesis $H_0$ of no change, the statistic $a_k$ is normally distributed with mean and variance

$$E(a_k) = \frac{k(k-1)}{4}, \qquad \operatorname{var}(a_k) = \frac{k(k-1)(2k+5)}{72}. \tag{3}$$

The standardized statistic index $u(a_k)$ is then

$$u(a_k) = \frac{a_k - E(a_k)}{\sqrt{\operatorname{var}(a_k)}}, \qquad k = 1, 2, \ldots, n. \tag{4}$$

$u(a_k)$ follows a standard normal distribution; computed forwardly in the order $x_1, x_2, \ldots, x_n$, it is named UF here. A positive UF value denotes an increasing trend and a negative UF value a decreasing trend. Given the significance level $\alpha$, the null hypothesis is rejected if $|UF| > U_{\alpha/2}$ (two-sided test), indicating a significant trend in the $x$ series. The statistic index of the corresponding rank series for the retrograde rows, UB, computed backwardly for the reversed sample $x_n, x_{n-1}, x_{n-2}, \ldots, x_1$, is obtained in the same way, with $UB = -UF$ of the reversed series denoting its trend.

The two curves UF and UB ($k = 1, 2, \ldots, n$) are then plotted, and the beginning of a change is localized at their intersection; if the intersection point falls within the confidence bounds at the given significance level, it marks the break point of the investigated time series. Here $\alpha = 0.05$, and the corresponding $U_{\alpha/2}$ is 1.96.
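A NumPy sketch of the forward/backward statistic defined by (1)–(4) follows; function and variable names are ours, and ties are handled with the strict inequality of (2).

```python
import numpy as np

def sequential_mk_uf(x):
    """Forward statistic UF of the sequential MK test, following eqs. (1)-(4)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    uf = np.zeros(n)
    a = 0.0                                    # a_k of eq. (1), accumulated
    for k in range(1, n):
        a += np.sum(x[k] > x[:k])              # R_i of eq. (2) for the new term
        m = k + 1                              # length of the prefix series
        e = m * (m - 1) / 4.0                  # E(a_k), eq. (3)
        v = m * (m - 1) * (2 * m + 5) / 72.0   # var(a_k), eq. (3)
        uf[k] = (a - e) / np.sqrt(v)           # u(a_k), eq. (4)
    return uf

def sequential_mk(x):
    """Return UF and UB; UB is the negated UF of the reversed series."""
    uf = sequential_mk_uf(x)
    ub = -sequential_mk_uf(np.asarray(x)[::-1])[::-1]
    return uf, ub

# A UF/UB intersection lying inside the +/-1.96 band (alpha = 0.05) marks a
# candidate change point, as used to locate the 1965 break in this study.
```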
#### 2.2.2. Quantifying the Impact of Climate Change and Human Activities on Runoff

Changes of the observed mean annual streamflow, $\Delta\bar{Q}_{\text{total}}$, are attributed to climate variability (represented by precipitation, $\Delta\bar{Q}_P$) and to human activities ($\Delta\bar{Q}_h$):

$$\Delta\bar{Q}_{\text{total}} = \Delta\bar{Q}_P + \Delta\bar{Q}_h. \tag{5}$$

To evaluate the impacts of precipitation and human activities separately, the investigated time frame is divided by the trend and abrupt change point analysis into a prechange period and a postchange period. The change in average annual streamflow is then

$$\Delta\bar{Q} = \bar{Q}_2 - \bar{Q}_1, \tag{6}$$

where $\Delta\bar{Q}$ denotes the change in annual mean runoff, and $\bar{Q}_1$ and $\bar{Q}_2$ represent the mean annual runoff during the prechange and postchange periods, respectively.

**Simple Linear Regression Method.** Taking the prechange period as the unimpaired reference period, a regression equation is fitted between annual streamflow ($Q_1$) and basin-averaged annual precipitation ($P_1$):

$$Q_1 = aP_1 + b, \tag{7}$$

where $a$ and $b$ are the two model parameters. The streamflow that would have occurred without the influence of human activities in the postchange period is then modelled as

$$\bar{Q}_2' = a\bar{P}_2 + b, \tag{8}$$

where $\bar{Q}_2'$ and $\bar{P}_2$ are the fitted mean streamflow and the observed mean precipitation during the postchange period, respectively. The contributions of human activities and precipitation to the runoff change are estimated as

$$\Delta Q_h = \bar{Q}_2 - \bar{Q}_2', \qquad \Delta Q_P = \Delta\bar{Q} - \Delta Q_h. \tag{9}$$

**Climate Elasticity Method.** The climate elasticity of streamflow developed by Schaake and Waggoner [30] is considered an important, efficient, and robust indicator of the sensitivity of streamflow to climate change [20, 31, 42]. The climate elasticity can be estimated in different ways; the nonparametric estimator proposed by Zheng et al. [31] is employed in this paper. The elasticity of streamflow with respect to precipitation, $\varepsilon_P$, can be expressed as

$$\varepsilon_P = \frac{\bar{P}}{\bar{Q}} \cdot \frac{\sum_i (P_i - \bar{P})(Q_i - \bar{Q})}{\sum_i (P_i - \bar{P})^2} = \rho_{P,Q} \cdot \frac{C_Q}{C_P}, \tag{10}$$

where $\rho_{P,Q}$ is the correlation coefficient of precipitation ($P$) and streamflow ($Q$), and $C_P$ and $C_Q$ are the coefficients of variation of $P$ and $Q$.
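Both attribution methods reduce to a few lines of NumPy. The sketch below assumes the inputs are annual precipitation and runoff series for each period; all names are chosen for illustration.

```python
import numpy as np

def precipitation_elasticity(P, Q):
    """Nonparametric precipitation elasticity of streamflow, eq. (10)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    slope = np.sum((P - P.mean()) * (Q - Q.mean())) / np.sum((P - P.mean()) ** 2)
    return slope * P.mean() / Q.mean()         # equals rho_PQ * C_Q / C_P

def attribute_runoff_change(P1, Q1, P2, Q2):
    """Linear-regression split of the runoff change, eqs. (6)-(9).

    P1/Q1: annual precipitation and runoff of the prechange period;
    P2/Q2: the same for the postchange period.
    """
    a, b = np.polyfit(P1, Q1, 1)               # eq. (7): Q1 = a*P1 + b
    q2_fit = a * np.mean(P2) + b               # eq. (8): climate-only runoff
    dq = np.mean(Q2) - np.mean(Q1)             # eq. (6): total change
    dq_h = np.mean(Q2) - q2_fit                # eq. (9): human-induced share
    dq_p = dq - dq_h                           # eq. (9): precipitation share
    return dq, dq_p, dq_h
```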
Streamflow data was obtained from the Hydrologic Year-book, spanning 57 years (1954–2010), in conjunction with daily precipitation records of seven meteorological stations covering the same period were acquired from National Meteorological Information Center, China Meteorological Administration (http://data.cma.cn/site/index.html) (Figure 1). The mean annual precipitation data for the river basin in the investigated time frame were interpolated by the Kriging method using ArcGIS with annual precipitation data of the meteorological stations in the river basin. ## 2.2. Methods Quantifying the respective contributions of natural factors and human activities to streamflow changes is important not only in theoretical perspective, but also in water resources management and soil and water conservation measures. However, it is never an easy and straightforward process; many methods have been developed to investigate the impacts of climate change and human activities quantitatively in the literature; the selection of the “best” method to be employed in such quantitative evaluation remains an open and debated question. For instance, Saifullah et al. [7] simply emphasized this fact by saying that “…several methods have been used to assess the impacts of precipitation and land surface changes on the hydrological processes, but to date, no standard model has been developed….” In terms of the methods, two categories can be identified, that is, hydrological modelling and statistical modelling. With reference to the first category, paired catchment experiment method and physically based hydrological model (lumped or distributed), such as Xinanjiang Model, Soil and Water Assessment Tool (SWAT), Variable Infiltration Capacity Model (VIC), SIMHYD Model, HBV Model, the SLURP (Semidistributed Land Use-Based Runoff Processes) Model, were at the top of list for determining the impact of climate change and human activities on the hydrological process (e.g., [15–20]). However, the experimental methods are usually time-consuming, expensive, and difficult to locate suitable controls. The physically based hydrological models, though physically sound, are also limited because of the involvement of major efforts on model calibration and validation, high demand of various data (e.g., high-resolution land use data, soil data, and groundwater data) to represent the hydrophysical processes, and the entanglement of complexity and uncertainty in model structure and parameter estimation [15, 17, 21, 22]. Statistical approach provides alternative choice, for instance, regression analysis method [23, 24], hydrological sensitivity analysis method (see, among others, [17, 22, 25–29]), and elasticity method [30, 31]. Nonetheless, climate elasticity method, for instance, has its limitations too; the general framework used to estimate proportional contribution of climate and human impact on streamflow is based on the assumption that human activities are independent of climate change. As a matter of fact, human activities and the climate system interact with each other. At a catchment scale, climate change may play a dominant role in land use and land cover change (LUCC) and may consequently alter the amount and process of streamflow.It is therefore varying from case to case to choose the appropriate methods to estimate the response of streamflow to climate change and human activities quantitatively. Ideally, implement of a combination of both approaches should compensate for other to some extent. 
Due to limited access to soil and groundwater data to perform a hydrological model, this paper employs statistical approach. ### 2.2.1. Trend and Abrupt Change Point Detection The rank-based MK test [32–34] is a parameter-free trend detection technique within time series. Due to its power and advantages, such as high asymptotic efficiency, it is frequently used in the literature (e.g., [35–38]). Also, different variants can be found in MK test [39]. In this study, the method examined by Moraes et al. [40] and Gerstengarbe and Werner [41] is adopted, which can be briefly outlined as follows.Given a time seriesxi with n terms (1≤i≤n), the MK rank statistic ak is given as(1)ak=∑i=1kRi,where(2)Ri=+1,xi>xj0,xi≤xj.Under the null hypothesis H0 of no change, the statistic ak is normally distributed with mean and variance given by(3)Eak=nn-14,varak=nn-12n+572.As such, the definition of the statistic index u(ak) is calculated using following:(4)uak=ak-Eakvarak1/2k=1,2,3,…,n.u(ak) is distributed as a normal distribution (it is named here UF, as it is calculated forwardly according to the order x1,x2,x3,…,xn). A positive UF value denotes an increasing trend, and a negative UF value denotes a decreasing trend. Given the significance level of α, the null hypothesis is rejected if UF>Uα/2 (two-sided statistical test), indicating that there is an obvious (or significant) trend change in the x time series. Then the statistic index of the corresponding rank series for retrograde rows, UB (as it is computed backwardly for the reverser sample, xn,xn-1,xn-2,…,x1), are similarly obtained through the method mentioned above.Also, letUB=-UF denote the trend of retrograde time series. Then the two curves, UF and UB  (k=1,2,3,…,n), are plotted to localize the beginning of the change, at the intersection point between the curves. If the intersection point is significant at given significance level, the match point would be the break point occurring in the investigated time series at that time. The considered α is 0.05 in this study, and the corresponding Uα/2 is 1.96. ### 2.2.2. Quantifying the Impact of Climate Change and Human Activities on Runoff Changes of the observed mean annual streamflowΔQ¯total are subject to climate variability (represented by precipitation, ΔQ¯P) and human activities ΔQ¯h.(5)ΔQ¯total=ΔQ¯P+ΔQ¯h.To evaluate the impacts of precipitation and human activities, respectively, the investigated time frame was divided into two periods through trend and abrupt change point analysis, that is, prechange period and postchange period. As such, a change in the average annual streamflow is calculated as(6)ΔQ¯=ΔQ¯2-ΔQ¯1,where ΔQ¯ denotes the change in annual mean runoff, ΔQ¯1 and ΔQ¯2 represent annual runoff during prechange period and postchange period, respectively.Simple Linear Regression Method. Take prechange period as unimpaired reference period, where a regression equation can be obtained between annual streamflow (Q1) and averaged annual precipitation (P1) of the basin as follows:(7)Q1=aP1+b,where a and b are two parameters of the model.Then the streamflow without the influence of human activities in the change period can be modelled as(8)Q¯2′=aP¯2+b,where Q¯2′ and P¯2 are fitted mean streamflow and observed precipitation during the change period, respectively.The contribution of runoff changes by human activities and precipitation can be estimated as(9)ΔQh=Q¯2-Q¯2′,ΔQP=ΔQ¯-ΔQh.Climate Elasticity Method. 
### 2.2.2. Quantifying the Impact of Climate Change and Human Activities on Runoff

Changes of the observed mean annual streamflow, $\Delta \bar{Q}_{\mathrm{total}}$, are attributed to climate variability (represented by precipitation, $\Delta \bar{Q}_P$) and to human activities, $\Delta \bar{Q}_h$:

$$\Delta \bar{Q}_{\mathrm{total}} = \Delta \bar{Q}_P + \Delta \bar{Q}_h. \tag{5}$$

To evaluate the impacts of precipitation and human activities separately, the investigated time frame was divided into two periods through the trend and abrupt change point analysis, namely, a prechange period and a postchange period. The change in average annual streamflow is then

$$\Delta \bar{Q} = \bar{Q}_2 - \bar{Q}_1, \tag{6}$$

where $\Delta \bar{Q}$ denotes the change in annual mean runoff and $\bar{Q}_1$ and $\bar{Q}_2$ represent the mean annual runoff during the prechange and postchange periods, respectively.

*Simple Linear Regression Method.* Taking the prechange period as the unimpaired reference period, a regression equation can be fitted between annual streamflow ($Q_1$) and basin-averaged annual precipitation ($P_1$):

$$Q_1 = aP_1 + b, \tag{7}$$

where $a$ and $b$ are the two parameters of the model. The streamflow that would have occurred without the influence of human activities in the change period can then be modelled as

$$\bar{Q}_2' = a\bar{P}_2 + b, \tag{8}$$

where $\bar{Q}_2'$ and $\bar{P}_2$ are the fitted mean streamflow and the observed mean precipitation during the change period, respectively. The contributions of human activities and precipitation to the runoff change are estimated as

$$\Delta Q_h = \bar{Q}_2 - \bar{Q}_2', \qquad \Delta Q_P = \Delta \bar{Q} - \Delta Q_h. \tag{9}$$

*Climate Elasticity Method.* The climate elasticity of streamflow developed by Schaake and Waggoner [30] is considered an important, efficient, and robust indicator of the sensitivity of streamflow to climate change [20, 31, 42]. The climate elasticity can be estimated in different ways; the nonparametric estimator proposed by Zheng et al. [31] was employed in this paper. The elasticity of streamflow with respect to precipitation ($\varepsilon_P$) can be expressed as

$$\varepsilon_P = \frac{\bar{P}}{\bar{Q}} \cdot \frac{\sum (P_i - \bar{P})(Q_i - \bar{Q})}{\sum (P_i - \bar{P})^2} = \rho_{P,Q} \cdot \frac{C_Q}{C_P}, \tag{10}$$

where $\rho_{P,Q}$ is the correlation coefficient of precipitation ($P$) and streamflow ($Q$), and $C_P$ and $C_Q$ are the coefficients of variation of $P$ and $Q$, respectively.
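The two separation schemes fit in a few lines of code. The sketch below is illustrative only: the function name and synthetic data are ours, and the translation of the elasticity into a runoff change, $\Delta Q_P \approx \varepsilon_P (\Delta P / \bar{P}) \bar{Q}$ evaluated with baseline-period means, is an assumed first-order convention consistent with the arithmetic reported in Section 3.2.

```python
# A compact sketch of both separation schemes of Section 2.2.2; `p` and `q`
# are annual basin precipitation and runoff depth (mm), `split` is the index
# of the change point. Helper name and synthetic data are illustrative.
import numpy as np

def attribute_change(p, q, split):
    p1, q1 = p[:split], q[:split]       # prechange (baseline) period
    p2, q2 = p[split:], q[split:]       # postchange period
    dq = q2.mean() - q1.mean()          # observed change, cf. (6)

    # Simple linear regression method, cf. (7)-(9)
    a, b = np.polyfit(p1, q1, 1)        # Q1 = a*P1 + b fitted on the baseline
    q2_nat = a * p2.mean() + b          # expected runoff without human impact
    dq_h = q2.mean() - q2_nat           # human-induced change
    dq_p_reg = dq - dq_h                # precipitation-induced change

    # Nonparametric climate elasticity, cf. (10)
    eps = (p.mean() / q.mean()) * (
        np.sum((p - p.mean()) * (q - q.mean())) / np.sum((p - p.mean()) ** 2)
    )
    # First-order translation into a runoff change, using baseline means
    dq_p_el = eps * (p2.mean() - p1.mean()) / p1.mean() * q1.mean()

    return dq, dq_p_reg, dq_h, eps, dq_p_el

# Usage with synthetic data shaped like the study period (57 years, split 1965)
rng = np.random.default_rng(1)
p = rng.normal(804, 190, 57)
q = 0.5 * p - 100 + rng.normal(0, 50, 57)
q[11:] -= 95                            # imposed "human" reduction after 1964
print(attribute_change(p, q, split=11))
```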
## 3. Results and Discussions

### 3.1. Change Points Analysis

In the Yi River Basin, precipitation mainly occurs during June–September; November, December, January, and February are the dry months, as is also the case for runoff (Figure 2). Compared with the dry months, the wet months appear to contain more outliers. The highest average monthly precipitation occurs in July and the lowest in January; the highest average monthly runoff likewise occurs in July, but the lowest occurs in March, illustrating that high flows respond to high precipitation almost simultaneously, whereas low flows respond with an apparent time lag.

Figure 2: Long-term monthly average precipitation and runoff depth of the Yi River Basin.

Before the trend detection analysis, serial persistence within the hydrometeorological series was examined (Figure 3). Figure 3 shows that both the annual streamflow series and the annual precipitation series consist of independent observations at the 95% confidence level, indicating that application of the MK trend detection technique is warranted in this study (a minimal sketch of this screening is given below, after Table 1).

Figure 3: Autocorrelation analysis of the hydrometeorological series of the Yi River Basin. The dashed lines denote the 95% confidence level (ACF: autocorrelation function).

The trend analysis results are presented in Figure 4 and Table 1. Note that a negative UF value denotes a downward trend and vice versa, and if |UF| exceeds the critical value (±1.96, the two dashed lines in Figure 4), the increasing or decreasing trend is significant at the 5% level. Figure 4 shows that both annual precipitation and streamflow decreased in the Yi River Basin during 1954–2010, with a significant decreasing trend found only in streamflow. The UF value of precipitation fluctuated between positive and negative during 1965–1975, while the UF value of streamflow was consistently negative from 1967 to 1975, indicating that the decrease in runoff may be attributed to anthropogenic impacts.

Table 1: MK test results for annual precipitation and streamflow during 1954–2010.

| Time series | MK value | Trend |
|---|---|---|
| Precipitation | −0.7 | Downward |
| Streamflow | −2.1 | Downward* |

Note: * denotes significance at the 5% level.
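Returning to the independence screening of Figure 3, the lag-wise check can be sketched in a few lines; the `autocorrelation` helper and the synthetic input series are illustrative assumptions, and the ±1.96/√n band is the usual large-sample 95% bound for white noise.

```python
# A minimal sketch of the serial-correlation screening of Figure 3, assuming
# `series` is one annual hydrometeorological array; helper name is ours.
import numpy as np

def autocorrelation(series, max_lag=10):
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    n = len(x)
    denom = np.sum(x * x)
    acf = np.array([np.sum(x[k:] * x[:n - k]) / denom for k in range(max_lag + 1)])
    bound = 1.96 / np.sqrt(n)           # approximate 95% confidence bound
    return acf, bound

rng = np.random.default_rng(2)
acf, bound = autocorrelation(rng.normal(804, 190, 57))
print("lag-1 ACF:", round(acf[1], 3), "| 95% bound:", round(bound, 3))
# |ACF| within the bound at all lags supports treating the series as
# independent, which is what warrants applying the MK test here.
```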
Figure 4: MK test of the hydrometeorological series of the Yi River Basin. The dashed lines denote the 95% confidence level.

Although multiple intersection points were identified for precipitation and none of them was significant at the 5% level, precipitation has generally presented a downward trend since 1965, differing only in the rate of decrease. Figure 5 presents the variations of precipitation and streamflow before and after 1965. The mean annual precipitation decreased by 153 mm from the pre-1965 period to the post-1965 period, while the average annual runoff depth decreased by 231.1 mm, indicating that the runoff production process may have changed.

Figure 5: Time series of annual precipitation and runoff depth in the Yi River Basin (the thin dashed blue line denotes the average value before 1965; the thick dashed blue line denotes the average value from 1965 to 2010).

To better understand the change characteristics, the double mass curve [43] was employed. The annual precipitation-runoff double mass curve is normally close to a straight line if the basin characteristics are stable, that is, if there are no abrupt changes in precipitation and runoff; a change in the slope of the curve may therefore indicate a change in the investigated series. The double mass curve of precipitation and runoff is shown in Figure 6. The slope of the fitted line for the pre-1965 period is more than twice that for the post-1965 period. The maximum runoff coefficient is 0.58, occurring in the period 1954–1964. After 1965 the runoff coefficient drops abruptly: the mean runoff coefficient of the pre-1965 period is 0.40, while that of the post-1965 period is 0.18 (Table 2), which is further evidence of the change point.

Table 2: Summary of annual precipitation, streamflow, and runoff coefficient during 1954–2010.

| | Prechange (1954–1964) | Postchange (1965–2010) | Full time frame (1954–2010) |
|---|---|---|---|
| Precipitation: mean (mm) | 927.70 | 774.66 | 804.20 |
| Precipitation: standard deviation | 179.12 | 181.76 | 189.71 |
| Precipitation: coefficient of variation | 0.19 | 0.23 | 0.24 |
| Streamflow: mean (mm) | 385.45 | 154.39 | 198.98 |
| Streamflow: standard deviation | 161.02 | 112.98 | 152.81 |
| Streamflow: coefficient of variation | 0.42 | 0.73 | 0.77 |
| Runoff coefficient | 0.40 | 0.18 | 0.22 |

Figure 6: Double mass curve of precipitation and runoff depth.

Comparing the results of the change point test and the double mass curve, the year 1965 can be taken as the change point separating the impacts of precipitation and human activities on runoff.
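The double mass curve and runoff-coefficient comparison above lend themselves to a short sketch. The helper below is illustrative; in particular, computing the runoff coefficient as the mean of the annual runoff-to-precipitation ratios is an assumed convention, not one stated in the paper.

```python
# Sketch of the double mass curve and runoff-coefficient comparison; `split`
# is the change-point index (11 for a 1954-start series split at 1965).
import numpy as np

def double_mass(p, q, split):
    cp, cq = np.cumsum(p), np.cumsum(q)            # coordinates of the curve
    slope_pre = np.polyfit(cp[:split], cq[:split], 1)[0]
    slope_post = np.polyfit(cp[split:], cq[split:], 1)[0]
    rc_pre = (q[:split] / p[:split]).mean()        # mean runoff coefficient
    rc_post = (q[split:] / p[split:]).mean()
    return slope_pre, slope_post, rc_pre, rc_post

# Synthetic check: runoff coefficients of 0.40 then 0.18, as in Table 2,
# should roughly halve the slope of the double mass curve after the split.
rng = np.random.default_rng(3)
p = rng.normal(804, 190, 57)
q = np.where(np.arange(57) < 11, 0.40, 0.18) * p
print(double_mass(p, q, split=11))
```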
Figure 7 shows a correlation comparison of precipitation and runoff for the two periods. The correlation between precipitation and streamflow in the prechange period ($r = 0.89$) is slightly stronger than in the postchange period ($r = 0.86$), while the regression slope decreased from 0.79 to 0.55. The lower standard deviation of runoff in the postchange period (Table 2) confirms that runoff variability has tended to stabilize, apparently under the influence of anthropogenic activities. The decrease in slope from the prechange to the postchange period also shows that the same annual precipitation produced more streamflow in the baseline period, suggesting that the streamflow decline has been driven by intensifying human activities in the study area. According to the records of the Hydrologic Yearbook, there are 5 large and 22 medium sized reservoirs in the basin, with total storage volumes ranging from 9.5 × 10⁶ to 7.49 × 10⁸ m³ (for reasons of space, only some of the reservoirs are listed in Table 3). Most of the reservoirs were built during the 1960s and 1970s; hence, water-related human activities, including agricultural irrigation, dam construction, and industrial development, should be considered responsible for the decline in runoff.

Table 3: Summary of large and medium sized reservoirs in the Yi River Basin.

| Reservoir name | Size | Build date | Total storage (10⁴ m³) |
|---|---|---|---|
| Tianzhuang | Large | 1960 | 13057 |
| Dian | Large | 1960 | 74900 |
| Bashan | Large | 1960 | 50850 |
| Xujiaya | Large | 1959 | 29290 |
| Tangcun | Large | 1959 | 14961 |
| Gaohu | Medium | 1967 | 3741 |
| Shangye | Medium | 1960 | 3638 |
| Cangli | Medium | 1971 | 6480 |
| Wujiazhuang | Medium | 1960 | 2544 |
| Shilan | Medium | 1960 | 3682 |

Figure 7: Correlation analysis of precipitation and runoff for the prechange and postchange periods.

From the discussion above, it can be concluded that runoff in this region has been affected by anthropogenic impacts since 1965.

### 3.2. Quantitative Assessment of Precipitation and Human Activities on Streamflow

The precipitation elasticity of runoff was estimated by (10) to assess the impact of precipitation change on runoff. The value of $\varepsilon_P$ is 1.95, indicating that a 10% decrease in precipitation results in an approximately 19.5% decrease in runoff. With this elasticity, the 153.04 mm decrease in mean annual precipitation during 1965–2010 may have led to a 123.99 mm decrease in streamflow, accounting for 53.66% of the total observed drop in annual runoff. The climate elasticity method measures the climatic influence on streamflow and assumes that the remaining change comes from human influences such as LUCC; human activities therefore contributed 46.34% of the decrease in streamflow.

With the simple linear regression method, the impact of precipitation change is assessed from the runoff simulated for the postchange period. The simulated annual mean runoff in the study area during 1965–2010 is 250.83 mm; hence, human activities may have caused a 96.43 mm decrease in annual runoff, accounting for 41.74% of the runoff reduction, while the remainder (58.26%) is attributed to climate change.
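As a quick arithmetic check, the elasticity-based attribution above can be reproduced from the baseline-period means of Table 2; the snippet assumes the conventional first-order relation $\Delta Q_P \approx \varepsilon_P (\Delta P / \bar{P}) \bar{Q}$ evaluated with those means.

```python
# Back-of-the-envelope check of the elasticity attribution in Section 3.2,
# assuming Delta_Q_P = eps_P * (Delta_P / P_mean) * Q_mean with the
# baseline-period means of Table 2 (P = 927.70 mm, Q = 385.45 mm).
eps_p = 1.95
delta_p = -153.04               # change in mean annual precipitation (mm)
delta_q_obs = 154.39 - 385.45   # observed change in mean annual runoff (mm)

delta_q_climate = eps_p * (delta_p / 927.70) * 385.45
print(round(delta_q_climate, 2))                       # -> -123.99 mm
print(round(100 * delta_q_climate / delta_q_obs, 2))   # -> 53.66 (% of decline)
```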
### 3.3. Discussions

In the present work, two statistical methods were selected to quantify the response of streamflow to climate change and human activities in the Yi River Basin, because access to the soil and groundwater data needed for a hydrological model was limited. Both methods have modest data requirements and are easy to implement. As Legesse et al. [44] pointed out, physically based hydrological models may be preferred, and even optimal, for hydrological effect studies; as stated previously, however, practical limitations remain for their application at the basin scale. The statistical approach, on the other hand, requires only basic meteorological data, such as precipitation, and routine hydrological data, such as runoff series. The climate elasticity method requires even less: the elasticity parameter can be estimated directly from hydroclimatic data via nonparametric estimators, without parameter calibration. Compared with parametric estimators of climate elasticity, the nonparametric estimator of $\varepsilon_P$ has been shown to be robust, to have smaller bias, and to be consistent with results estimated using rainfall-runoff models [42, 45]. In the present study only the precipitation elasticity was investigated, as streamflow responds directly to precipitation. Clearly, however, the climate elasticity method cannot provide as much information as a physically based distributed model.

It is also relevant to note that uncertainties are associated with assessing the effects of climate variability and human activities on runoff in both methods, even though the effects they estimate here are relatively consistent. The first source of uncertainty lies in the fact that both methods relate changes in runoff only to changes in mean annual precipitation, whereas in reality streamflow can also be influenced by variations in other precipitation characteristics, such as seasonality, intensity, and concentration; the occurrence of extreme runoff may likewise affect the accuracy. The second source comes from the framework used to separate the effects. Regarding the simple linear regression method, the hydroclimatic data may lack representativeness if the baseline period identified by the change point analysis is short: what the regression model actually conveys is the response of the hydrologic process to the average climatic conditions of the baseline period, and a relationship estimated from a wet baseline period would differ greatly from one estimated from a dry baseline period. Furthermore, the relationship between precipitation and runoff may itself have changed in a nonstationary environment. As for the climate elasticity method, the framework used to estimate the proportional contributions of climate variability and human activities to runoff assumes that human factors are independent of climate factors. In fact, the effects of human activities and climate interact and are not readily separable: at the basin scale, climate change may influence human activities such as land use and thus change runoff, and, vice versa, intensifying urbanization and an expanding population may raise temperatures and consequently alter the hydrological regime. Although human activities and the climate system interact with each other even in the baseline period, this interaction is not considered in the present study. Further studies should therefore aim to improve the separation of climatic and anthropogenic effects with these uncertainties in mind.
## 4. Conclusions

Global and regional climate variability is regarded as an important factor affecting hydrological processes. Concurrently, development-induced human activities have been identified as a main cause of runoff variation in the Yi River Basin.
It is therefore useful to update the understanding of the changes in the precipitation-streamflow relationship in the Yi River and to separate the effects of climate variability from those of human activities. In this study, two approaches, the simple linear regression method and the climate elasticity method, were used to investigate the impacts of precipitation and land surface changes on runoff. The variation characteristics of annual precipitation and runoff during 1954–2010 were analyzed. Downward trends were found for both precipitation and runoff using the MK test; the streamflow series showed the greater decline, which is statistically significant at the 5% level, compared with the precipitation series. A break point in runoff was identified in 1965, and a similar result was found in the precipitation series. Although multiple intersection points were found in the precipitation series, the year 1965 can be taken as the change point separating the impacts of precipitation and human activities on runoff, considering the results of the change point test and the double mass curve together.

Compared with the linear regression method, the climate elasticity method is relatively simple, can be easily implemented, and yields the natural runoff change with fewer data and parameters. Both methods involve some uncertainty, yet they gave basically consistent quantifications of the responses of streamflow to precipitation variability and human activities. Precipitation variability accounted for 53.66% and 58.26% of the decline in annual runoff between the two periods according to the two methods, indicating that precipitation variability acted as the main driving force of the runoff decrease, while the role of human activities cannot be neglected either. The results obtained in this study provide evidence and a useful reference for water resources planning and management in this region.

---
*Source: 1023821-2017-07-30.xml*
--- ## Abstract It is now common knowledge that many water resources stresses relate to access to water within a basin. Yi River Basin, a typical river basin characterized by intensive agricultural processes, significant population growth, and water management, has been undergoing grave water problems. In this paper, the long-term trend of precipitation and streamflow in Yi River Basin, from 1964 to 2010, was investigated via Mann-Kendall test. The change point occurred in the year 1965 dividing the long-term series into two periods. Climate elasticity method and linear regression method were implemented to quantify the impact of precipitation and human activities on runoff and presented basically consistent results of the percentage change in an annual runoff for the postchange period. The results reveal that the decline of annual runoff in postchange period is mainly attributed to precipitation variability of 53.66–58.25% and human activities of 46.34–41.74%, as estimated by climate elasticity method and linear regression method, respectively. This study detected the changes in the precipitation-streamflow relationship and investigated the possible causes in the Yi River, which will be helpful for providing a reference for the management of regional water resources. --- ## Body ## 1. Introduction Streamflow, in terms of natural resource, is a measure of sustainable water availability, which is of great pith and moment for sustainable development, utilization of water resources, and biodiversity. With socioeconomic development, however, access to water has already become a serious issue for people in many parts of the world and, given recent United Nations estimates, the situation is not likely to improve [1, 2]. Water-related problems, especially the variability and availability of regional water resources under influence of climate change and human activities, have been of great concern to hydrologists recently [3, 4]. For sustainable water resources planning and management, it is therefore essential to determine the changes in water resources in both space and time and evaluate the influence of climate change and human activities thereon.The streamflow of some rivers in the world has decreased significantly due to climate change and intensifying human activities [2, 5–7]. Milly et al. [5] investigated the observed trends of global streamflow and reported that runoff reduction occurred in sub-Saharan Africa, southern Europe, southernmost South America, Southern Australia, and western midlatitude North America, while the observed streamflow increases in the La Plate basin of southern South America, Southern through central North America, the Southeastern quadrant of Africa, and northern Australia.China, whose water resources rank sixth in the world in terms of total volume though, suffers a lot from water shortage. To date, water stress in China remains a widely recognized concern of the general public [8]. The effects of climate variability and change, including increasing frequency of extreme events such as droughts, coupled with unevenly distributed water resources across regions and time, create additional pressure on the already scarce water supplies. Zhang et al. 
[9] investigated the trends of annual streamflow in the six large basins in China based on the observed runoff data of 19 main hydrological stations for the past 50 years and concluded that the annual streamflow of all investigated basins shows a downward trend, with some presenting significant decreasing trend in the northern China. Piao et al. [10] reviewed runoff changes in the Yangtze River Basin and Yellow River Basin, two largest basins in China, and reported a small and statistically insignificant upward trend detected in in annual runoff of Yangtze River Basin since 1960, while the Yellow River Basin showed a persistent decline in annual runoff time series. Zhang et al. [4] also found similar changing patterns based on monthly streamflow data from 382 hydrological stations in China covering the period 1960–2000. They reported that declining streamflow was found in the northern China and the upper reaches of the Yangtze and the Pearl River Basins, and significant decreasing streamflow was found mainly in the Yellow River Basin, the Liaohe River Basin, and the Haihe River Basin.It is common knowledge that the planning, designing, and operating of water resources projects are generally based on observational and historical hydroclimatologic data. The underlying assumption of this idea is that time-invariant statistical characteristics of the considered time series in all water resources engineering work, which would inevitably trigger major problems in regional water resources management [11]. Hence, it has been an important scientific problem in hydrology to reveal the physical characteristics of the basin and trends of streamflow change and their influencing factors. Moreover, the hydrologic cycle at the watershed scale is a complex process affected by climate, intensifying human activities and so on [12, 13]; thus the impacts of precipitation and human activities generally vary from place to place and need to be investigated at local scale to better understand the consequences of human activities and serve water resources planning, management, and sustainable development in that region.Yi River, tributary of Huaihe River system, has experienced grave water problems in recent years, such as, as reported by Zhang et al. [14], flooding, water shortage, high regulation, serious pollution, and aquatic ecology degradation. Water resources have become one of the key constraints of local socioeconomic development. Yet, research on the streamflow of Yi River Basin is rarely reported. In the light of these facts, this paper attempts to assess the half-century-long changing characteristics of precipitation and streamflow of Yi River to quantify the impacts of climate change and human activities on hydrological process of Yi River, aiming to better understand the changes in precipitation-streamflow in Yi River and how the runoff responds to climate change and anthropological activities, which is of great pith and moment for rational utilization and management of water resources and the local ecological protection. ## 2. Materials and Methods ### 2.1. Study Area and Data Set Yi River Basin, extending from longitude 117°24′E to 119°11′E and latitude 34°22′N to 36°23′N, is located in southeast of Shandong Province, China (Figure1). 
Originating from Yimeng Mountain, Yi River flows through Shandong Province and extends south to Jiangsu Province, with a total length of 333 km and a drainage area of 11,820 km2.Figure 1 Sketch map of study area, hydrological and meteorological stations.The hilly area lies in the middle and north part of Yi River Watershed and accounts for 70% of total basin area, leaving the rest mainly the plain area. The climate is characterized by north temperate monsoon, with average annual mean temperature of 13.2°C and average annual precipitation 830 mm. The flood season normally occurs in June to September, and the amount of precipitation in this period accounts for about 74.2% of annual total precipitation, while the main flood season, July to August, produces the highest flooding. Due to the mountainous nature, the floods in Yi River feature high peaks, large volume, and flash floods. Affected by the changes in rainfall distribution and topography, Yi River lacks water in dry season, while being frequented with flash floods in wet season, causing difficulties in the development and utilization of water resources.Linyi hydrological station, outlet of Yi River Basin, is selected to investigate the annual and monthly runoff variation. Streamflow data was obtained from the Hydrologic Year-book, spanning 57 years (1954–2010), in conjunction with daily precipitation records of seven meteorological stations covering the same period were acquired from National Meteorological Information Center, China Meteorological Administration (http://data.cma.cn/site/index.html) (Figure 1). The mean annual precipitation data for the river basin in the investigated time frame were interpolated by the Kriging method using ArcGIS with annual precipitation data of the meteorological stations in the river basin. ### 2.2. Methods Quantifying the respective contributions of natural factors and human activities to streamflow changes is important not only in theoretical perspective, but also in water resources management and soil and water conservation measures. However, it is never an easy and straightforward process; many methods have been developed to investigate the impacts of climate change and human activities quantitatively in the literature; the selection of the “best” method to be employed in such quantitative evaluation remains an open and debated question. For instance, Saifullah et al. [7] simply emphasized this fact by saying that “…several methods have been used to assess the impacts of precipitation and land surface changes on the hydrological processes, but to date, no standard model has been developed….” In terms of the methods, two categories can be identified, that is, hydrological modelling and statistical modelling. With reference to the first category, paired catchment experiment method and physically based hydrological model (lumped or distributed), such as Xinanjiang Model, Soil and Water Assessment Tool (SWAT), Variable Infiltration Capacity Model (VIC), SIMHYD Model, HBV Model, the SLURP (Semidistributed Land Use-Based Runoff Processes) Model, were at the top of list for determining the impact of climate change and human activities on the hydrological process (e.g., [15–20]). However, the experimental methods are usually time-consuming, expensive, and difficult to locate suitable controls. 
The physically based hydrological models, though physically sound, are also limited because of the involvement of major efforts on model calibration and validation, high demand of various data (e.g., high-resolution land use data, soil data, and groundwater data) to represent the hydrophysical processes, and the entanglement of complexity and uncertainty in model structure and parameter estimation [15, 17, 21, 22]. Statistical approach provides alternative choice, for instance, regression analysis method [23, 24], hydrological sensitivity analysis method (see, among others, [17, 22, 25–29]), and elasticity method [30, 31]. Nonetheless, climate elasticity method, for instance, has its limitations too; the general framework used to estimate proportional contribution of climate and human impact on streamflow is based on the assumption that human activities are independent of climate change. As a matter of fact, human activities and the climate system interact with each other. At a catchment scale, climate change may play a dominant role in land use and land cover change (LUCC) and may consequently alter the amount and process of streamflow.It is therefore varying from case to case to choose the appropriate methods to estimate the response of streamflow to climate change and human activities quantitatively. Ideally, implement of a combination of both approaches should compensate for other to some extent. Due to limited access to soil and groundwater data to perform a hydrological model, this paper employs statistical approach. #### 2.2.1. Trend and Abrupt Change Point Detection The rank-based MK test [32–34] is a parameter-free trend detection technique within time series. Due to its power and advantages, such as high asymptotic efficiency, it is frequently used in the literature (e.g., [35–38]). Also, different variants can be found in MK test [39]. In this study, the method examined by Moraes et al. [40] and Gerstengarbe and Werner [41] is adopted, which can be briefly outlined as follows.Given a time seriesxi with n terms (1≤i≤n), the MK rank statistic ak is given as(1)ak=∑i=1kRi,where(2)Ri=+1,xi>xj0,xi≤xj.Under the null hypothesis H0 of no change, the statistic ak is normally distributed with mean and variance given by(3)Eak=nn-14,varak=nn-12n+572.As such, the definition of the statistic index u(ak) is calculated using following:(4)uak=ak-Eakvarak1/2k=1,2,3,…,n.u(ak) is distributed as a normal distribution (it is named here UF, as it is calculated forwardly according to the order x1,x2,x3,…,xn). A positive UF value denotes an increasing trend, and a negative UF value denotes a decreasing trend. Given the significance level of α, the null hypothesis is rejected if UF>Uα/2 (two-sided statistical test), indicating that there is an obvious (or significant) trend change in the x time series. Then the statistic index of the corresponding rank series for retrograde rows, UB (as it is computed backwardly for the reverser sample, xn,xn-1,xn-2,…,x1), are similarly obtained through the method mentioned above.Also, letUB=-UF denote the trend of retrograde time series. Then the two curves, UF and UB  (k=1,2,3,…,n), are plotted to localize the beginning of the change, at the intersection point between the curves. If the intersection point is significant at given significance level, the match point would be the break point occurring in the investigated time series at that time. The considered α is 0.05 in this study, and the corresponding Uα/2 is 1.96. #### 2.2.2. 
Quantifying the Impact of Climate Change and Human Activities on Runoff Changes of the observed mean annual streamflowΔQ¯total are subject to climate variability (represented by precipitation, ΔQ¯P) and human activities ΔQ¯h.(5)ΔQ¯total=ΔQ¯P+ΔQ¯h.To evaluate the impacts of precipitation and human activities, respectively, the investigated time frame was divided into two periods through trend and abrupt change point analysis, that is, prechange period and postchange period. As such, a change in the average annual streamflow is calculated as(6)ΔQ¯=ΔQ¯2-ΔQ¯1,where ΔQ¯ denotes the change in annual mean runoff, ΔQ¯1 and ΔQ¯2 represent annual runoff during prechange period and postchange period, respectively.Simple Linear Regression Method. Take prechange period as unimpaired reference period, where a regression equation can be obtained between annual streamflow (Q1) and averaged annual precipitation (P1) of the basin as follows:(7)Q1=aP1+b,where a and b are two parameters of the model.Then the streamflow without the influence of human activities in the change period can be modelled as(8)Q¯2′=aP¯2+b,where Q¯2′ and P¯2 are fitted mean streamflow and observed precipitation during the change period, respectively.The contribution of runoff changes by human activities and precipitation can be estimated as(9)ΔQh=Q¯2-Q¯2′,ΔQP=ΔQ¯-ΔQh.Climate Elasticity Method. Climate elasticity of streamflow developed by Schaake and Waggoner [30] is considered to be an important, efficient, and robust indicator assessing the sensitivity of streamflow to climate change [20, 31, 42]. The climate elasticity can be estimated in different ways, and the nonparametric estimator proposed by Zheng et al. [31] was employed in this paper. The elasticity of streamflow with respect to precipitation (εP) can be expressed as follows:(10)εP=P¯Q¯=∑Pi-P¯Qi-Q¯∑Pi-P¯2=ρP,Q·CQCP,where ρP,Q is the correlation coefficient of precipitation (P) and streamflow (Q) and CP and CQ are coefficients of variation of P and Q. ## 2.1. Study Area and Data Set Yi River Basin, extending from longitude 117°24′E to 119°11′E and latitude 34°22′N to 36°23′N, is located in southeast of Shandong Province, China (Figure1). Originating from Yimeng Mountain, Yi River flows through Shandong Province and extends south to Jiangsu Province, with a total length of 333 km and a drainage area of 11,820 km2.Figure 1 Sketch map of study area, hydrological and meteorological stations.The hilly area lies in the middle and north part of Yi River Watershed and accounts for 70% of total basin area, leaving the rest mainly the plain area. The climate is characterized by north temperate monsoon, with average annual mean temperature of 13.2°C and average annual precipitation 830 mm. The flood season normally occurs in June to September, and the amount of precipitation in this period accounts for about 74.2% of annual total precipitation, while the main flood season, July to August, produces the highest flooding. Due to the mountainous nature, the floods in Yi River feature high peaks, large volume, and flash floods. Affected by the changes in rainfall distribution and topography, Yi River lacks water in dry season, while being frequented with flash floods in wet season, causing difficulties in the development and utilization of water resources.Linyi hydrological station, outlet of Yi River Basin, is selected to investigate the annual and monthly runoff variation. 
Streamflow data was obtained from the Hydrologic Year-book, spanning 57 years (1954–2010), in conjunction with daily precipitation records of seven meteorological stations covering the same period were acquired from National Meteorological Information Center, China Meteorological Administration (http://data.cma.cn/site/index.html) (Figure 1). The mean annual precipitation data for the river basin in the investigated time frame were interpolated by the Kriging method using ArcGIS with annual precipitation data of the meteorological stations in the river basin. ## 2.2. Methods Quantifying the respective contributions of natural factors and human activities to streamflow changes is important not only in theoretical perspective, but also in water resources management and soil and water conservation measures. However, it is never an easy and straightforward process; many methods have been developed to investigate the impacts of climate change and human activities quantitatively in the literature; the selection of the “best” method to be employed in such quantitative evaluation remains an open and debated question. For instance, Saifullah et al. [7] simply emphasized this fact by saying that “…several methods have been used to assess the impacts of precipitation and land surface changes on the hydrological processes, but to date, no standard model has been developed….” In terms of the methods, two categories can be identified, that is, hydrological modelling and statistical modelling. With reference to the first category, paired catchment experiment method and physically based hydrological model (lumped or distributed), such as Xinanjiang Model, Soil and Water Assessment Tool (SWAT), Variable Infiltration Capacity Model (VIC), SIMHYD Model, HBV Model, the SLURP (Semidistributed Land Use-Based Runoff Processes) Model, were at the top of list for determining the impact of climate change and human activities on the hydrological process (e.g., [15–20]). However, the experimental methods are usually time-consuming, expensive, and difficult to locate suitable controls. The physically based hydrological models, though physically sound, are also limited because of the involvement of major efforts on model calibration and validation, high demand of various data (e.g., high-resolution land use data, soil data, and groundwater data) to represent the hydrophysical processes, and the entanglement of complexity and uncertainty in model structure and parameter estimation [15, 17, 21, 22]. Statistical approach provides alternative choice, for instance, regression analysis method [23, 24], hydrological sensitivity analysis method (see, among others, [17, 22, 25–29]), and elasticity method [30, 31]. Nonetheless, climate elasticity method, for instance, has its limitations too; the general framework used to estimate proportional contribution of climate and human impact on streamflow is based on the assumption that human activities are independent of climate change. As a matter of fact, human activities and the climate system interact with each other. At a catchment scale, climate change may play a dominant role in land use and land cover change (LUCC) and may consequently alter the amount and process of streamflow.It is therefore varying from case to case to choose the appropriate methods to estimate the response of streamflow to climate change and human activities quantitatively. Ideally, implement of a combination of both approaches should compensate for other to some extent. 
Due to limited access to soil and groundwater data to perform a hydrological model, this paper employs statistical approach. ### 2.2.1. Trend and Abrupt Change Point Detection The rank-based MK test [32–34] is a parameter-free trend detection technique within time series. Due to its power and advantages, such as high asymptotic efficiency, it is frequently used in the literature (e.g., [35–38]). Also, different variants can be found in MK test [39]. In this study, the method examined by Moraes et al. [40] and Gerstengarbe and Werner [41] is adopted, which can be briefly outlined as follows.Given a time seriesxi with n terms (1≤i≤n), the MK rank statistic ak is given as(1)ak=∑i=1kRi,where(2)Ri=+1,xi>xj0,xi≤xj.Under the null hypothesis H0 of no change, the statistic ak is normally distributed with mean and variance given by(3)Eak=nn-14,varak=nn-12n+572.As such, the definition of the statistic index u(ak) is calculated using following:(4)uak=ak-Eakvarak1/2k=1,2,3,…,n.u(ak) is distributed as a normal distribution (it is named here UF, as it is calculated forwardly according to the order x1,x2,x3,…,xn). A positive UF value denotes an increasing trend, and a negative UF value denotes a decreasing trend. Given the significance level of α, the null hypothesis is rejected if UF>Uα/2 (two-sided statistical test), indicating that there is an obvious (or significant) trend change in the x time series. Then the statistic index of the corresponding rank series for retrograde rows, UB (as it is computed backwardly for the reverser sample, xn,xn-1,xn-2,…,x1), are similarly obtained through the method mentioned above.Also, letUB=-UF denote the trend of retrograde time series. Then the two curves, UF and UB  (k=1,2,3,…,n), are plotted to localize the beginning of the change, at the intersection point between the curves. If the intersection point is significant at given significance level, the match point would be the break point occurring in the investigated time series at that time. The considered α is 0.05 in this study, and the corresponding Uα/2 is 1.96. ### 2.2.2. Quantifying the Impact of Climate Change and Human Activities on Runoff Changes of the observed mean annual streamflowΔQ¯total are subject to climate variability (represented by precipitation, ΔQ¯P) and human activities ΔQ¯h.(5)ΔQ¯total=ΔQ¯P+ΔQ¯h.To evaluate the impacts of precipitation and human activities, respectively, the investigated time frame was divided into two periods through trend and abrupt change point analysis, that is, prechange period and postchange period. As such, a change in the average annual streamflow is calculated as(6)ΔQ¯=ΔQ¯2-ΔQ¯1,where ΔQ¯ denotes the change in annual mean runoff, ΔQ¯1 and ΔQ¯2 represent annual runoff during prechange period and postchange period, respectively.Simple Linear Regression Method. Take prechange period as unimpaired reference period, where a regression equation can be obtained between annual streamflow (Q1) and averaged annual precipitation (P1) of the basin as follows:(7)Q1=aP1+b,where a and b are two parameters of the model.Then the streamflow without the influence of human activities in the change period can be modelled as(8)Q¯2′=aP¯2+b,where Q¯2′ and P¯2 are fitted mean streamflow and observed precipitation during the change period, respectively.The contribution of runoff changes by human activities and precipitation can be estimated as(9)ΔQh=Q¯2-Q¯2′,ΔQP=ΔQ¯-ΔQh.Climate Elasticity Method. 
Climate elasticity of streamflow developed by Schaake and Waggoner [30] is considered to be an important, efficient, and robust indicator assessing the sensitivity of streamflow to climate change [20, 31, 42]. The climate elasticity can be estimated in different ways, and the nonparametric estimator proposed by Zheng et al. [31] was employed in this paper. The elasticity of streamflow with respect to precipitation (εP) can be expressed as follows:(10)εP=P¯Q¯=∑Pi-P¯Qi-Q¯∑Pi-P¯2=ρP,Q·CQCP,where ρP,Q is the correlation coefficient of precipitation (P) and streamflow (Q) and CP and CQ are coefficients of variation of P and Q. ## 2.2.1. Trend and Abrupt Change Point Detection The rank-based MK test [32–34] is a parameter-free trend detection technique within time series. Due to its power and advantages, such as high asymptotic efficiency, it is frequently used in the literature (e.g., [35–38]). Also, different variants can be found in MK test [39]. In this study, the method examined by Moraes et al. [40] and Gerstengarbe and Werner [41] is adopted, which can be briefly outlined as follows.Given a time seriesxi with n terms (1≤i≤n), the MK rank statistic ak is given as(1)ak=∑i=1kRi,where(2)Ri=+1,xi>xj0,xi≤xj.Under the null hypothesis H0 of no change, the statistic ak is normally distributed with mean and variance given by(3)Eak=nn-14,varak=nn-12n+572.As such, the definition of the statistic index u(ak) is calculated using following:(4)uak=ak-Eakvarak1/2k=1,2,3,…,n.u(ak) is distributed as a normal distribution (it is named here UF, as it is calculated forwardly according to the order x1,x2,x3,…,xn). A positive UF value denotes an increasing trend, and a negative UF value denotes a decreasing trend. Given the significance level of α, the null hypothesis is rejected if UF>Uα/2 (two-sided statistical test), indicating that there is an obvious (or significant) trend change in the x time series. Then the statistic index of the corresponding rank series for retrograde rows, UB (as it is computed backwardly for the reverser sample, xn,xn-1,xn-2,…,x1), are similarly obtained through the method mentioned above.Also, letUB=-UF denote the trend of retrograde time series. Then the two curves, UF and UB  (k=1,2,3,…,n), are plotted to localize the beginning of the change, at the intersection point between the curves. If the intersection point is significant at given significance level, the match point would be the break point occurring in the investigated time series at that time. The considered α is 0.05 in this study, and the corresponding Uα/2 is 1.96. ## 2.2.2. Quantifying the Impact of Climate Change and Human Activities on Runoff Changes of the observed mean annual streamflowΔQ¯total are subject to climate variability (represented by precipitation, ΔQ¯P) and human activities ΔQ¯h.(5)ΔQ¯total=ΔQ¯P+ΔQ¯h.To evaluate the impacts of precipitation and human activities, respectively, the investigated time frame was divided into two periods through trend and abrupt change point analysis, that is, prechange period and postchange period. As such, a change in the average annual streamflow is calculated as(6)ΔQ¯=ΔQ¯2-ΔQ¯1,where ΔQ¯ denotes the change in annual mean runoff, ΔQ¯1 and ΔQ¯2 represent annual runoff during prechange period and postchange period, respectively.Simple Linear Regression Method. 
## 3. Results and Discussions

### 3.1. Change Points Analysis

In the Yi River Basin, precipitation mainly occurs during June–September; November, December, January, and February are the dry months, as is also the case for runoff (Figure 2). Compared with the dry months, the wet months appear to have more outliers. The highest average monthly precipitation occurred in July and the lowest in January; the highest average monthly runoff occurred in July and the lowest in March, illustrating that high flow responds to high precipitation simultaneously, while low flow responds with an apparent delay in time.

Figure 2: Long-term monthly average precipitation and runoff depth of the Yi River Basin.

Before further trend detection, the serial persistence of the meteorological and hydrological series was examined (Figure 3). Figure 3 shows that both the annual streamflow series and the annual precipitation series consist of independent observations at the 95% confidence level, which warrants the application of the MK trend detection technique in this study.

Figure 3: Autocorrelation analysis of the meteorological and hydrological series of the Yi River Basin. The dashed lines denote the 95% confidence level (ACF means autocorrelation function).

The trend analysis results are presented in Figure 4 and Table 1. Note that a negative UF value denotes a downward trend and vice versa, and if |UF| exceeds the critical value (±1.96, the two dashed lines in Figure 4), the increasing or decreasing trend is significant at the 5% level. Figure 4 shows that both annual precipitation and streamflow decreased in the Yi River Basin during 1954–2010, with a significant decreasing trend found only in streamflow. The UF value of precipitation fluctuated between positive and negative during 1965–1975, while the UF value of streamflow was consistently negative from 1967 to 1975, indicating that the decrease in runoff may be attributed to anthropogenic impacts.

Table 1: MK test result of annual precipitation and streamflow during 1954–2010.

| Time series | MK value | Trend |
|---|---|---|
| Precipitation | −0.7 | Downward |
| Streamflow | −2.1 | Downward∗ |

Note: ∗ denotes significance at the 5% level.
Figure 4: MK test of the meteorological and hydrological series of the Yi River Basin. The dashed lines denote the 95% confidence level.

Although multiple intersection points were identified for precipitation and none was significant at the 5% level, precipitation generally presented a downward trend from 1965 onward, differing only in the degree of decrease. Figure 5 presents the variations of precipitation and streamflow before and after 1965. The mean annual precipitation decreased by 153 mm from the pre-1965 period to the post-1965 period, while the average annual runoff depth decreased by 231.1 mm, indicating that the runoff production process may have changed.

Figure 5: Time series of annual precipitation and runoff depth in the Yi River Basin (the thin dashed blue line denotes the average value before 1965; the thick dashed blue line denotes the average value from 1965 to 2010).

To better understand the change characteristics, the double mass curve [43] was employed. The annual precipitation-runoff double mass curve is normally close to a straight line if the basin characteristics are stable, that is, if there are no abrupt changes in precipitation and runoff; a change in the slope of the curve may therefore indicate a change in the investigated series. The double mass curve of precipitation-runoff is presented in Figure 6. It shows that the slope for the pre-1965 period is more than twice that for the post-1965 period. The maximum runoff coefficient is 0.58, for the period 1954–1964. After 1965 the runoff coefficient drops abruptly: the mean runoff coefficient of the pre-1965 period is 0.40, while that of the post-1965 period is 0.18 (Table 2), which is evidence of the change point.

Table 2: Summary of annual precipitation, streamflow, and runoff coefficient during 1954–2010.

| | Prechange 1954–1964 | Postchange 1965–2010 | Full time frame 1954–2010 |
|---|---|---|---|
| Precipitation: mean (mm) | 927.70 | 774.66 | 804.20 |
| Precipitation: standard deviation | 179.12 | 181.76 | 189.71 |
| Precipitation: coefficient of variation | 0.19 | 0.23 | 0.24 |
| Streamflow: mean (mm) | 385.45 | 154.39 | 198.98 |
| Streamflow: standard deviation | 161.02 | 112.98 | 152.81 |
| Streamflow: coefficient of variation | 0.42 | 0.73 | 0.77 |
| Runoff coefficient | 0.40 | 0.18 | 0.22 |

Figure 6: Double mass curve of precipitation and runoff depth.

Comparing the results of the change point test and the double mass curve, the year 1965 can be taken as the change point separating the impacts of precipitation and human activities on runoff.
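The double mass curve check lends itself to a few lines of code. The sketch below is an illustration, not the authors' code; the arrays are hypothetical stand-ins for the Yi River series, built from the Table 2 runoff coefficients.

```python
# Double mass curve diagnostic: cumulative runoff against cumulative
# precipitation, with the slope fitted before and after a break year.
import numpy as np

def double_mass_slopes(p, q, years, last_prechange_year):
    cp, cq = np.cumsum(p), np.cumsum(q)
    pre = years <= last_prechange_year
    slope_pre = np.polyfit(cp[pre], cq[pre], 1)[0]
    slope_post = np.polyfit(cp[~pre], cq[~pre], 1)[0]
    return slope_pre, slope_post

years = np.arange(1954, 2011)
rng = np.random.default_rng(1)
p = rng.normal(800, 180, years.size)         # synthetic precipitation (mm)
q = np.where(years <= 1964, 0.40, 0.18) * p  # runoff via Table 2 coefficients
s1, s2 = double_mass_slopes(p, q, years, 1964)
print(s1, s2)  # pre-slope roughly twice the post-slope, as in Figure 6
```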
Figure 7 shows a correlation comparison of precipitation and runoff for the two periods. The correlation between precipitation and streamflow for the prechange period (r = 0.89) is slightly stronger than that for the postchange period (r = 0.86), while the regression slope decreased from 0.79 to 0.55. The lower standard deviation of runoff in the postchange period (Table 2) confirms that runoff variability tended to stabilize, apparently under the influence of anthropogenic activities. The decrease in slope from the prechange to the postchange period also shows that the same annual precipitation produced more streamflow in the baseline period than afterwards, suggesting that the streamflow decline is driven by intensifying human activities in the study area. According to the records of the Hydrologic Yearbook, there are 5 large and 22 medium sized reservoirs, with total storage volumes ranging from 9.5 × 10⁶ to 7.49 × 10⁸ m³ (owing to space constraints, only some of the reservoirs are listed in Table 3). Most of the reservoirs were built during the 1960s and 1970s; hence, water-related human activities, including agricultural irrigation, dam construction, and industrial development, should be considered responsible for the decline in runoff.

Table 3: Summary of large and medium sized reservoirs in the Yi River Basin.

| Reservoir name | Size | Build date | Total storage (10⁴ m³) |
|---|---|---|---|
| Tianzhuang | Large | 1960 | 13057 |
| Dian | Large | 1960 | 74900 |
| Bashan | Large | 1960 | 50850 |
| Xujiaya | Large | 1959 | 29290 |
| Tangcun | Large | 1959 | 14961 |
| Gaohu | Medium | 1967 | 3741 |
| Shangye | Medium | 1960 | 3638 |
| Cangli | Medium | 1971 | 6480 |
| Wujiazhuang | Medium | 1960 | 2544 |
| Shilan | Medium | 1960 | 3682 |

Figure 7: Correlation analysis of precipitation and runoff for the pre/postchange periods.

From the discussion above, it can be concluded that runoff in this region has been affected by anthropogenic impacts since the mid-1960s.

### 3.2. Quantitative Assessment of Precipitation and Human Activities on Streamflow

The precipitation elasticity of runoff was estimated by (10) to assess the impact of precipitation change on runoff. The value of εP is 1.95, indicating that a 10% decrease in precipitation should result in a 19.5% decrease in runoff. With the calculated εP, the 153.04 mm decrease in precipitation during 1965–2010 may have led to a 123.99 mm decrease in streamflow, accounting for 53.66% of the total observed drop in annual runoff. The climate elasticity method measures the climate influence on streamflow and assumes that the remaining change comes from human influence such as land use and land cover change (LUCC); therefore, human activities contributed the remaining 46.34% of the decrease in streamflow.

With regard to the simple linear regression method, the difference between the simulated and observed runoff after the change point is attributed to human influence. The simulated annual mean runoff in the study area during 1965–2010 is 250.83 mm; hence, human activities may have caused a 96.43 mm decrease in annual runoff, accounting for 41.74% of the runoff reduction, while the rest (58.26%) is attributed to climate change.
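As a quick arithmetic check of the elasticity attribution, the numbers above can be reproduced assuming the standard first-order relation ΔQ_P ≈ εP · (ΔP/P̄) · Q̄ with the prechange means from Table 2, which appears to be the relation used here:

```python
# Reproducing the Section 3.2 attribution, assuming the first-order
# relation dQ_P ~= eps_P * (dP / P_mean) * Q_mean with prechange means.
eps_p = 1.95
d_p = -153.04                    # change in mean annual precipitation (mm)
p_mean, q_mean = 927.70, 385.45  # prechange means from Table 2 (mm)
dq_total = 154.39 - 385.45       # observed change in mean annual runoff (mm)

dq_p = eps_p * (d_p / p_mean) * q_mean
print(round(dq_p, 2))             # -123.99, matching the reported value
print(round(dq_p / dq_total, 4))  # 0.5366, i.e., the 53.66% climate share
```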
### 3.3. Discussions

In the present work, two statistical methods were selected to quantify the response of streamflow to climate change and human activities in the Yi River Basin, because limited access to soil and groundwater data precluded a hydrological model. Both methods have modest data requirements and are easy to implement. However, as Legesse et al. [44] pointed out, physically based hydrological models may be preferred, and may even be the optimal choice, for hydrological impact studies, although, as stated previously, practical limitations remain when applying them at the basin scale. The statistical approach, on the other hand, requires only basic meteorological data, such as precipitation, and standard hydrological data, such as runoff series. The climate elasticity method needs even less: with a nonparametric estimator, the elasticity parameter can be estimated directly from hydroclimatic data without parameter calibration. Compared with parametric estimators of climate elasticity, the nonparametric estimator of εP has been shown to be robust, to have a smaller bias, and to be consistent with results estimated using rainfall-runoff models [42, 45]. In the present study only the precipitation elasticity was investigated, as streamflow responds directly to precipitation. Clearly, however, the climate elasticity method cannot provide as much information as a physically based distributed model.

It is also relevant to note that uncertainties are associated with assessing the effects of climate variability and human activities on runoff in both methods, even though the effects evaluated here are relatively consistent. The first source of uncertainty lies in the fact that both methods relate changes in runoff only to changes in mean annual precipitation, whereas in the real world streamflow is also influenced by other precipitation characteristics, such as seasonality, intensity, and concentration; the occurrence of extreme runoff may likewise affect the accuracy. The second source comes from the framework used to separate the effects. Regarding the simple linear regression method, the hydroclimatic data may lack representativeness if the change point analysis yields a short baseline period. What the regression model actually conveys is the response of the hydrologic process to the average climatic conditions of the baseline period; a relationship estimated from data in a wet baseline period would differ greatly from one estimated in a dry baseline period. Furthermore, the relationship between precipitation and runoff may itself have changed in a nonstationary environment. As for the climate elasticity method, the framework used to estimate the proportional contributions of climate variability and human activities to runoff assumes that human factors are independent of climate factors. In fact, the effects of human activities and climate interact and are not readily separable: at the basin scale, climate change may influence human activities such as land use and thereby change runoff, and, conversely, intensifying urbanization and an expanding population may raise temperatures and consequently alter the hydrological regime. Although human activities and the climate system interact with each other, even in the baseline period, this is not considered in the present study. Further studies should therefore aim to improve the separation of climate and anthropogenic effects with these uncertainties in mind.
## 4. Conclusions

Global and regional climate variability is regarded as an important factor affecting hydrological processes. Concurrently, development-induced human activities have been identified as a main factor causing runoff variation in the Yi River Basin.
It is therefore useful to update the understanding of the precipitation-streamflow changes in the Yi River and to separate the effects of climate variability from those of human activities. In this study, two different approaches, the simple linear regression method and the climate elasticity method, were used to investigate the impacts of precipitation and land surface changes on runoff. The variation characteristics of annual precipitation and runoff during 1954–2010 were analyzed. A downward trend was found for both precipitation and runoff using the MK test; the streamflow series presented the greater decline and was statistically significant (5% level) compared with the precipitation series. A break point in runoff was identified in 1965, and a similar result was found in the precipitation series. Although multiple intersection points were found for precipitation, the results of the change point test and the double mass curve together support 1965 as the change point separating the impacts of precipitation and human activities on runoff.

Compared with the linear regression method, the climate elasticity method is relatively simple, can be easily implemented, and yields the natural runoff change with fewer data and parameters. Both methods carry some uncertainty in their results. Nevertheless, the two methods gave basically consistent quantifications of the responses of streamflow to precipitation variability and human activities: precipitation variability accounted for 53.66% and 58.26% of the decline in annual runoff between the two periods according to the two methods, indicating that precipitation variability acted as the main driving force of the runoff decrease, while the role of human activities cannot be neglected either. The results obtained in this study provide evidence and a useful reference for water resources planning and management in this region.

---
*Source: 1023821-2017-07-30.xml*
2017
# The Development and Realization of Digital Panorama Media Technology Based on VR Technology

**Authors:** Yake Qiu; Jingyu Tao
**Journal:** Computational Intelligence and Neuroscience (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1023865

---

## Abstract

The purpose of this paper is to understand digital 3D multimedia panoramic visual communication technology based on virtual reality. Firstly, the key concepts and characteristics of virtual reality are introduced, including the development and application of digital three-dimensional panorama technology. Then, building on the theoretical research, basic knowledge of 3D panoramic image mosaicking is introduced, including camera imaging modeling, image sharing, and image exchange. Finally, in developing the virtual tour of the College of Normal University, the application addresses the hardware requirements of panoramic technology and the demands of panoramic image retrieval; the design of panoramic mosaicking, panoramic image generation, and virtual campus tour construction considers real-world issues. The innovation of this paper lies in organically unifying the geometry-based 3D virtual scene built with the SketchUp 8.0 software and the image-based 3D virtual scene built from cylindrical panoramic images, so that the panoramic image can change in real time with the seasons of the real scene, enhancing the realism of the system and the user's sense of immersion.

---

## Body

## 1. Introduction

Today, with the further development of cloud computing, the Internet of Things, GIS, and other technologies, more and more applications use these technologies to serve users, entering the era of software-as-a-service. Networked applications are the premise of this era and are widely used in the field of GIS, for example in Baidu Maps, Google Maps, and Sky Maps, to realize the release and management of geographic information. Street View services in particular are in full swing and have become a development focus of various manufacturers: their virtual three-dimensional scenes and real on-site photos give users a sense of immersion, operation, and integration. But such three-dimensionality is, after all, virtual and incalculable, a fake 3D; real 3D display, on the other hand, has shortcomings such as a large amount of data, a cumbersome modeling process, and inconvenient data updating. How to make up for these shortcomings is the key issue in current development [1]. The application of virtual reality technology in education is mainly reflected in the construction of a "digital campus." The content mainly integrates campus culture, teaching staff, and so on, showing the school's comprehensive strength in running a school. In form, it has developed from a traditional two-dimensional virtual campus to a three-dimensional virtual campus with a sense of reality, immersion, and greater interaction, which has played a positive role in the school's enrollment and publicity work as well as in the school's digital construction [2] (Figure 1). Three-dimensional panoramic virtual reality technology is a virtual reality technology for real scenes based on panoramic images.
A panorama is made by using a digital camera to shoot the real scene from a fixed viewpoint, rotating 360 degrees around the vertical axis in uniform angular steps, taking one or more sets of photos according to the specific situation, then stitching, adjusting, and integrating the images in the computer to generate seamless panoramic images according to the chosen representation, and finally realizing an all-round, interactively viewable restoration of the real scene through computer technology [1]. A three-dimensional panoramic display can give people a sense of proximity, of being on the scene. When browsing online, this sense of presence directly affects how well the audience receives information, thereby arousing the audience's interest and improving the dissemination effect; a high-definition panoramic tour gives the audience a strong sense of presence.

Figure 1: Three-dimensional animation interaction.

Through the integrated use of Flex and PV3D technology, this article builds a 3D virtual campus system based on panoramic images. The system's interactivity is good: users can roam freely within it and gain a genuinely immersive feeling. This has had a profound influence on the school's external publicity and has important application value.

## 2. Literature Review

Wang et al. used a pair of hyperboloid refractive mirrors and a camera lens to form a stereo panoramic vision system [3]. Tomoe et al. have also carried out pioneering research work in this area and proposed a design method that uses an ordinary camera to achieve omnidirectional stereo vision; the method processes a pair of hyperboloid refracting mirrors into a single refracting mirror, on a principle similar to Koyasu's, and both devices have the advantage of a compact structure [4]. Morra et al. designed a variety of point-light-source projection rangefinders, achieving depth measurement of object points through recognition of the point light source [5]; this method can quickly obtain the three-dimensional information of an object point, but the amount of three-dimensional information obtained each time is small. Shen et al. designed a line-structured-light rangefinder, in which a laser generates a line light source through a cylindrical lens and a stepping motor rotates it at uniform speed so that the beam sweeps across the surface of the object to be measured, yielding a series of images for information extraction and measurement [6]. Zhou et al. proposed rotating a point laser at high speed to form a conical laser plane, receiving the panoramic light source information through a conical mirror and then analyzing the projected light source information on the imaging plane to achieve panoramic three-dimensional measurement [7]. Ortega et al. proposed a combination of a ring laser and a conical mirror surface; this method can effectively obtain panoramic three-dimensional information, but the combination of laser source and mirror surface is demanding. Because its sensor design adopts parabolic catadioptric imaging, it has the advantages of a simple imaging algorithm and no special geometric correspondence requirements during installation [8].

Through the comprehensive application of Flex and PV3D technologies, the paper constructs a three-dimensional virtual campus system based on panoramic images.
The system has good interactivity, and users can roam freely in it, so that they have a more realistic and immersive feeling. It has had a profound impact on the school's external publicity and has important promotion and application value.

## 3. Digital 3D Multimedia Panoramic Visual Communication Technology Based on Virtual Reality

Virtual reality is a three-dimensional virtual environment created by modern computer technology. In a virtual reality environment, users can participate in the virtual environment through their actions and better understand what they see and hear. Computer virtual reality technology provides users with an interactive environment in which they can act autonomously and interact with objects, forming a combined experience of vision, hearing, and touch. In applications, virtual reality requires a deep understanding of cognition, autonomy, and interaction; their combination gives users the defining qualities of virtual reality, namely its three key concepts: interaction, understanding, and imagination, as shown in Figure 2. Improving these aspects improves immersion and the ability to intervene autonomously. Instructions for improving virtual reality are given in the study by [9].

Figure 2: The main characteristics of virtual reality.

### 3.1. Immersion

Immersion is the immersive feeling that the virtual environment in virtual reality technology brings to people. It is the main criterion for measuring the construction of a virtual reality environment and the main basis on which people obtain a "sense of conception" when experiencing virtual reality technology [10].

### 3.2. Interactivity

Interactivity is the main feature that distinguishes virtual reality technology from traditional 3D animation. When watching a 3D animation, the viewer is in a passive state of acceptance [11], whereas the interactivity of virtual reality allows people to interact with the virtual environment and its objects. In the virtual reality environment, the viewer is no longer a bystander who passively receives information but can participate in it.

### 3.3. Autonomy

Autonomy refers to the main characteristic of virtual reality browsing: in the virtual reality environment, users can browse or change the environment according to their own intentions.

### 3.4. Panoramic Visual Communication Technology

In summary, the color of an object recognized by an observer is determined by the spectral composition of the light source, the reflection characteristics of the object's surface, and the observer's sensitivity to the spectrum. Based on the above analysis, this paper further proposes an active vision system model based on the framework of the active vision system [12].

First, each photosensitive unit (that is, pixel) of the CCD generates a photoelectric effect under the action of the incident light and stores a quantity of photoelectrons corresponding to the light intensity. Generally speaking, this process is characterized by a coefficient expressing how many photoelectrons a photon generates. In fact, however, the CCD may work in the fat zero mode, in which each unit holds a small number of photoelectrons equivalent to a bias.
There are also noise effects, so the process is generally not completely linear.

The camera model describes the process of photoelectric conversion. The CCD (charge-coupled device) is the key device that performs this conversion: it is photosensitive and can store charges, generating corresponding charges when its surface receives light. First, external light is concentrated through a tiny lens to improve light gathering. A color separation filter then filters the light, allowing only the desired wavelength range to pass through; finally, the photosensitive layer converts the light signal that has passed the color separation filter into an electrical signal. The response of the digital camera CCD to the incident light can be expressed by the following formulas:

$$R = \int_0^\infty f_R(\lambda)\, I'(\lambda)\, d\lambda, \tag{1}$$

$$G = \int_0^\infty f_G(\lambda)\, I'(\lambda)\, d\lambda, \tag{2}$$

$$B = \int_0^\infty f_B(\lambda)\, I'(\lambda)\, d\lambda. \tag{3}$$

Here, $f_R$, $f_G$, and $f_B$ represent the spectral sensitivities of the camera's three color-filter channels, and $I'(\lambda)$ represents the intensity of light of wavelength $\lambda$ reaching the camera. The spectral sensitivity of a digital camera is generally provided by the CCD/CMOS manufacturer; it is the product of the filter function of the color filter and the responsivity of the photosensitive layer. Figure 3 shows the spectral response curve of the CCD of a certain digital camera.

Figure 3: The response curve of the camera spectrum.

Theoretically, the response of a CCD to light energy is believed to be linear; that is, there is a linear relationship between the amount of exposure and the gray value of the digital image. In fact, neither CCD nor CMOS is completely linear in converting optical signals into electrical signals [12]. The literature points out that in an environment where a color camera is used as a precision measuring instrument, this problem must be solved through linear correction of the photoelectric response.

The luminous flux received by different areas of a CCD/CMOS under a uniform light field is not the same: the flux received at the center of the lens is greater than that received around it. The literature points out that for different lenses and corresponding focal lengths, an integrating sphere should be used for flat-field correction. These factors are not considered in the active vision system model of this article [13–16].

Assuming that there is only diffuse reflection on the surface of the object, the relationship between the reflected light $I'(\lambda)$ and the incident light $I(\lambda)$ can be written as

$$I'(\lambda) = I(\lambda) \cdot k(\lambda), \tag{4}$$

where $I(\lambda)$ represents the intensity of incident light at wavelength $\lambda$ and $k(\lambda)$ is the reflection coefficient of the material at wavelength $\lambda$. On this basis, to take specular reflection into account, the specular and diffuse reflection components must each be given their own weights; formula (4) is then rewritten as

$$I'(\lambda) = m_d\, I(\lambda)\, k(\lambda) + m_s\, I(\lambda), \tag{5}$$

where $m_d$ and $m_s$ weight the diffuse and specular components, respectively.
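To make equations (1)–(5) concrete, here is a minimal numeric sketch; the Gaussian channel sensitivities and the toy reflectance are hypothetical stand-ins for the vendor curves of Figure 3, not data from the paper.

```python
# Numeric illustration of equations (1)-(5). All spectral curves are
# hypothetical stand-ins (the paper's measured curves are not given).
import numpy as np

lam = np.linspace(380.0, 780.0, 401)      # wavelength grid (nm)
d_lam = lam[1] - lam[0]

def gaussian(center, width):
    """Toy channel sensitivity f(lambda)."""
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

f_R, f_G, f_B = gaussian(600, 40), gaussian(540, 40), gaussian(460, 40)

I_inc = np.ones_like(lam)                 # incident light I(lambda), flat illuminant
k = 0.5 + 0.4 * np.sin(lam / 60.0)        # toy reflectance k(lambda)
m_d, m_s = 0.8, 0.2                       # diffuse/specular weights of eq. (5)
I_ref = m_d * I_inc * k + m_s * I_inc     # eq. (5); eq. (4) is the m_s = 0 case

# Equations (1)-(3): channel responses as integrals over wavelength.
R = np.sum(f_R * I_ref) * d_lam
G = np.sum(f_G * I_ref) * d_lam
B = np.sum(f_B * I_ref) * d_lam
print(R, G, B)
```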
For the different TVs and the different primary colors used by the projector (such as phosphors), the coefficients in standard chromaticity coordinates must first be known in order to obtain the conversion matrix. Thus, after a signal defined in standard chromaticity coordinates is received, it must be transformed so that the three primary colors reproduce the corresponding light as the intended color.

The projection equipment used in this paper is a panoramic color structured light generator (PCSLG) with an emitting point. The color characteristics of its illumination are related to the tilt angle, as shown in Figure 4.

Figure 4: Correspondence between PCSLG's hue value and projection angle.

In practice, owing to the manufacturing process, there is a difference between the theoretical projection and the real projection at a given projection angle. To overcome this difference, the panoramic color structured light generator needs to undergo chromaticity calibration before the actual measurement [17]. The chromaticity calibration is introduced in the light source color correction algorithm later.
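The matrix conversion mentioned above can be illustrated as follows; the sketch assumes linear sRGB primaries with a D65 white point as a stand-in, since the projector's actual phosphor coefficients are not published here.

```python
# Illustration of the primary-to-standard-chromaticity matrix conversion.
# The matrix assumes linear sRGB primaries with a D65 white point, a
# stand-in for the projector's actual (unpublished) phosphor coefficients.
import numpy as np

SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def rgb_to_xyz(rgb):
    """Map a linear RGB triple (0..1) to CIE XYZ coordinates."""
    return SRGB_TO_XYZ @ np.asarray(rgb, dtype=float)

print(rgb_to_xyz([1.0, 0.0, 0.0]))  # XYZ coordinates of the red primary
```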
## 4. Realization of the Virtual Campus Roaming System

The virtual campus presents the campus landscape and facilities in the most intuitive form, facilitates users' access to campus information, and promotes the construction of universities and the development of distance teaching. It is based on high technologies such as geographic information technology, virtual reality technology, and computer network technology, combining campus geographic information with other campus information. Browsing and querying the campus landscape and information can be realized through the virtual reality scene interface, and the scenes can be uploaded to the computer network and provided for remote access.

### 4.1. Demand Analysis and Design of the Virtual Campus Roaming

Taking into account the actual roaming needs of users, the basic functions provided by the system include the following parts: viewpoint options, interaction methods, roaming methods, scene selection, and campus introduction, as shown in Table 1.

Table 1: Function summary table of the virtual campus roaming system.

| Function | Description |
|---|---|
| Viewpoint control | Best viewpoint: move the viewpoint to the most suitable observation position when switching the panorama. |
| Interactive operation | Mouse: use the computer mouse as the main input device to complete panoramic image viewing and interactive roaming. |
| Scene switching | Map navigation: switch to the corresponding panoramic scene view through the hotspot connections of each place. |
| Campus profile | Campus introduction: a brief introduction to the whole campus, giving the viewer an overall understanding of the school. |

The roaming panoramic campus is organized by the relative spatial relationships of the panoramic image nodes in the entire scene. Therefore, the design and production of the main interface of the panoramic campus roaming takes the campus three-dimensional navigation map as its basis [18], on which the corresponding reference objects are placed. Command buttons on important campus buildings link to the secondary pages, and the campus is finally roamed through navigation and hotspot connections. The basic framework of campus roaming is shown in Figure 5.

Figure 5: Schematic diagram of the interactive roaming framework structure.

The realization of panoramic roaming interactive technology usually includes three aspects: first, the collection and creation of panoramic images of each element of the site [19]; second, the processing of the panoramic roaming space; and third, the design and building of the panoramic browsing interface. The acquisition and creation of panoramic images is a key technology in the development of the panoramic roaming campus; the previous section discussed the key concepts of panoramic design and the application of modern technology. The transformation of the panoramic roaming site is likewise an important part of the design of the panoramic roaming system: by modifying the panoramic roaming space, 3D panoramic images of various locations can be integrated into the virtual roaming space for viewers to walk through freely.
The design and production of the panorama browsing interface [20] determine what the audience uses when viewing the panorama website; the design and construction quality of the browsing interface therefore affects the completeness of the panoramic roaming interaction. The best panorama browsers are designed to make it easy for viewers to view their favorite virtual scenes.

Table 2: Experimental result data (detection: number of detected feature points; match: number of matched feature point pairs; hfov: camera horizontal angle of view; f: camera pixel focal length; values as reported).

| Images per group | Algorithm | Detection/Match | hfov | f |
|---|---|---|---|---|
| 5 | SIFT | 1575/1506 | 71.234 | 42.13 |
| 5 | Harris | 902/632 | 62.314 | 40.23 |
| 6 | SIFT | 1647/1653 | 61.235 | 53.6 |
| 6 | Harris | 988/632 | 61.562 | 53.12 |
| 7 | SIFT | 1802/1653 | 90.936 | 65.23 |
| 7 | Harris | 956/231 | 50.85 | 63.59 |
| 8 | SIFT | 1758/12364 | 90.867 | 75.36 |
| 8 | Harris | 953/856 | 60.896 | 52.39 |

### 4.2. Experimental Steps

(1) Use an ordinary digital camera to acquire material images; a total of 20 groups are acquired, with the number of material images per group increasing sequentially: 5, 6, ..., 26.

(2) Carry out cylindrical projection on the acquired material images, and calculate the camera horizontal angle of view (hfov) and camera pixel focal length (f) for the 20 groups of material images.

(3) For two adjacent images of the same scene, use the SIFT-based and Harris-based image matching algorithms to detect and match feature points, recording the number of detected feature points and the number of matched feature point pairs [21].

(4) Perform a coordinate transformation on the images to be stitched using the matched feature point pairs to complete the image stitching. The data obtained through the above steps are shown in Table 2, where detection represents the number of detected feature points, match the number of matched feature point pairs, hfov the camera's horizontal viewing angle, and f the camera pixel focal length [22].

### 4.3. Experiment Analysis

The experimental data above show that many feature point pairs can be matched across the images of the same scene. Under the same conditions, the SIFT algorithm yields more matched feature points than the Harris algorithm, as shown in Figure 6. The feature points obtained by the SIFT extraction algorithm include dozens of error points, as does the Harris corner extraction algorithm [23], which also picks up spurious elements in the image.

Figure 6: Feature point pairs matched by the SIFT algorithm and the Harris algorithm.

Estimation of the surface spectral reflectance of objects: because it is very difficult to calculate the surface spectral reflectance directly, and in consideration of the running time of the active vision system, this paper uses the three-channel reflectance instead. Experiments show that using the three-channel reflectance to correct the color can reduce the error from 44% to 11% and achieve better results. However, when evaluating the three channels under colored or standard white light, slight differences may arise from differences in the intensity of the projected light; how to develop an environmental model for the three-channel scheme is therefore a necessary topic for further research [24].

In addition, material surfaces with especially strong specular reflection will cause light saturation in the digital camera, which affects the separation of the specular reflection components and the localization of diffusely reflecting objects. Therefore, in wide-ranging applications, attention should be paid to adjusting the aperture size and exposure time of the digital camera, or wide dynamic range camera technology should be adopted.
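As an illustration of steps (3) and (4), the following sketch (not the authors' code) runs SIFT and Harris on two adjacent images and estimates the stitching transform from the matched pairs; it assumes opencv-python >= 4.4 and two hypothetical files left.jpg and right.jpg.

```python
# SIFT vs. Harris feature detection and SIFT-based matching/stitching.
import cv2
import numpy as np

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical files
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# --- SIFT: detect keypoints, compute descriptors, match with ratio test ---
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn if m.distance < 0.75 * n.distance]  # Lowe ratio test
print(f"SIFT: {len(kp1)}/{len(kp2)} detections, {len(good)} matches")

# --- Harris: corner response map; count strong corner pixels per image ---
def harris_corners(img, thresh=0.01):
    r = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
    return int((r > thresh * r.max()).sum())

print(f"Harris: {harris_corners(img1)}/{harris_corners(img2)} corner pixels")

# Step (4): estimate the coordinate transformation (homography) of the
# image to be stitched from the good SIFT matches.
if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print("homography:\n", H)
```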
Experimental Steps (1) Use an ordinary digital camera to acquire material images; a total of 20 groups are acquired, and the number of material images in each group increases sequentially, which are 5, 6, ..., 26.(2) Carry out cylindrical projection on the acquired material images, and calculate the camera horizontal angle of view (hfov) and camera pixel focal length (f) of 20 groups of material images.(3) For two adjacent images in the same scene, use the SIFT-based image matching algorithm and the Harris-based image matching algorithm to detect and match feature points and record the number of detected feature points and the matched feature point pair quantity [21].(4) Perform coordinate transformation on the image to be spliced through the matching feature point pairs to complete the image splicing. Through the above steps, the data obtained is shown in Table2. Among them, detection represents the number of detected feature points, match represents the number of matched feature point pairs, hfov represents the camera's horizontal viewing angle, and f represents the camera pixel focal length [22]. ## 4.3. Experiment Analysis Through the analysis of the above experimental data, we can know that multiple feature point pairs can be matched together in multiple data images in the same scene. In the same case, the SIFT algorithm has more special integration points than the Harris algorithm, as shown in Figure6. The special points obtained by the SIFT feature extraction algorithm include dozens of error points from Harris corner. The extraction algorithm [23] also includes noncolor elements in the graph.Figure 6 Feature point pairs matched by the SIFT algorithm and Harris algorithm.Estimation of surface spectral reflectance of objects: because it is very difficult to directly calculate the surface spectral reflectance of objects, in consideration of the time of the active vision system, this paper uses three-channel reflectance instead. Solutions: experiments show that using three-channel interference to adjust the color of the product can reduce the accuracy by 44% to 11% and achieve better results. However, when calculating the effects of the three methods in light or standard white light, there may be slight differences due to differences in the intensity of the projected light. Therefore, how to develop an environmental model to solve the three-channel concept is a necessary condition for further research [24].In addition, the material surface with special strong specular reflection will cause the light saturation phenomenon of the digital camera, which will affect the separation of specular reflection components and the location of diffuse reflection objects. Therefore, in a wide range of applications, attention should be paid to adjust the aperture size and exposure time of digital cameras or adopt wide dynamic camera technology. ## 5. Conclusion The application system of this article simply takes a school as an example, and the amount of data is relatively small. However, in some commercial applications, the amount of image data in a three-dimensional virtual environment is often very large. Image storage and indexing technology is a key point of virtual roaming. Combining database technology for image storage and retrieval is a direction worthy of further research. Interactive roaming is a complete roaming process. An exploration of the interaction and interactive nature of 3d virtual travel systems is also worth listening to. 
How to create an interactive way that is related to the audience's works and does not affect the audience's sense of reality is a suitable research direction. In the display of high-resolution images, how to use a variety of display technologies to produce beautiful and fast images is also an undetermined problem of the system, which should be further studied in the future. In short, the research of 3D virtual roaming technology has important theoretical significance and application value. The research and discussion on it in this paper are far from enough. In the follow-up research, on the basis of the existing research, we should continuously absorb various new related technologies and methods, further deepen the research, and achieve better results for the panoramic display and apply it to a broader field.In a word, the research of 3D virtual tourism technology has important theoretical value and application value. The research and discussion in this paper are far from enough. In future research, on the basis of existing research, many new technologies related to the process should be further extended to in-depth research, to achieve the best effect of panoramic images and apply them to wider fields. --- *Source: 1023865-2022-04-28.xml*
# The Development and Realization of Digital Panorama Media Technology Based on VR Technology

**Authors:** Yake Qiu; Jingyu Tao
**Journal:** Computational Intelligence and Neuroscience (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1023865
---

## Abstract

The purpose of this paper is to study digital 3D multimedia panoramic visual communication technology based on virtual reality. First, the key concepts and characteristics of virtual reality are introduced, together with the development and application of digital three-dimensional panorama technology. Then, building on the theoretical research, basic knowledge of 3D panoramic image mosaicking is introduced, including camera imaging models, image projection, and image matching. Finally, in developing a virtual tour of the college of a normal university, the application considers the hardware requirements of panoramic technology and the demands of panoramic image retrieval, along with the practical issues of panoramic mosaic design, panoramic image generation, and virtual campus tour construction. The innovation of this paper is that the geometric 3D virtual scene built with the SketchUp 8.0 software and the image-based cylindrical panoramic 3D virtual scene are organically unified, so that the panoramic images can change in real time with the seasons of the real scene, enhancing the realism of the system and the user's sense of immersion.

---

## Body

## 1. Introduction

Today, with the further development of cloud computing, the Internet of Things, GIS, and other technologies, more and more applications use these technologies to serve users, entering the era of software-as-a-service. Networked applications are the premise of this era and are widely used in the GIS field, for example Baidu Maps, Google Maps, and Sky Maps, to realize the release and management of geographic information. Street View services in particular are in full swing and have become a development focus for various vendors: their virtual three-dimensional scenes and real on-site photos give users a sense of immersion, operability, and integration. But such three-dimensionality is, after all, virtual and cannot be measured, a "fake 3D," while true 3D display suffers from shortcomings such as large data volumes, a cumbersome modeling process, and inconvenient data updating. How to make up for these shortcomings is the key question of current development [1]. The application of virtual reality technology in education is mainly reflected in the construction of the "digital campus," whose content integrates campus culture, teaching staff, and so on, showing a school's comprehensive strength in running a school. In form it has developed from the traditional two-dimensional virtual campus to a three-dimensional virtual campus with realism, immersion, and stronger interaction, which has played a positive role in the school's enrollment and publicity work as well as in its digital construction [2] (Figure 1). Three-dimensional panoramic virtual reality technology is a virtual reality technology for real scenes based on panoramic images. A panorama is obtained by shooting the real scene with a digital camera from a fixed viewpoint, rotating 360 degrees around the vertical axis in uniform angular steps, taking one or more sets of photos as the situation requires, then stitching, adjusting, and integrating the images in the computer to generate seamless panoramic images, and finally realizing an all-round, interactively viewable restoration of the real scene through computer technology [1].
The three-dimensional panoramic display gives people a sense of proximity, of being on the scene. When browsing online, this sense of presence directly affects how well the audience receives information, arousing their interest and improving the dissemination effect; a high-definition panoramic tour gives the audience a strong sense of presence.

Figure 1 Three-dimensional animation interaction.

Through the integrated use of Flex and PV3D technology, this article builds a 3D virtual campus system based on panoramic images. The system's interactivity is good: users can roam freely and get a genuinely immersive feeling, which has a profound influence on the school's external publicity and important application value.

## 2. Literature Review

Wang et al. used a pair of hyperboloid refractive mirrors and a camera lens to form a stereo panoramic vision system [3]. Tomoe et al. also carried out pioneering research in this area and proposed a design that achieves omnidirectional stereo vision with an ordinary camera by machining a pair of hyperboloid refracting mirrors into a single refracting mirror; the principle is similar to Koyasu's, and both devices have the advantage of a compact structure [4]. Morra et al. designed several point-light-source projection rangefinders that measure the depth of an object point by recognizing the point light source [5]; this method obtains three-dimensional information quickly, but the amount obtained each time is small. Shen et al. designed a line-structured-light rangefinder in which a laser passes through a cylindrical lens to generate a line light source and a stepping motor rotates it at uniform speed so that the beam sweeps the surface of the object under measurement, yielding a series of images for information extraction and measurement [6]. Zhou et al. proposed rotating a point laser at high speed to form a conical laser plane, receiving the panoramic light-source information through a conical mirror, and then analyzing the projected light-source information on the imaging plane to achieve panoramic three-dimensional measurement [7]. Ortega et al. proposed combining a ring laser with a conical mirror surface; this method can effectively obtain panoramic three-dimensional information, but the alignment of the laser source and the mirror surface is demanding. Because its sensor design adopts parabolic catadioptric imaging, it enjoys a simple imaging algorithm and imposes no special geometric correspondence requirements during installation [8].

Through the comprehensive application of Flex and PV3D technologies, the paper constructs a three-dimensional virtual campus system based on panoramic images. The system has good interactivity, and users can roam freely in it with a realistic, immersive feeling. It has had a profound impact on the school's external publicity and has important promotion and application value.

## 3. Digital 3D Multimedia Panoramic Visual Communication Technology Based on Virtual Reality

Virtual reality is a three-dimensional virtual environment created by modern computer technology. In a virtual reality environment, users can take part in the virtual environment through interaction and better understand what they see and hear.
Computer virtual reality technology provides users with an interactive virtual environment in which they can act autonomously and interact with objects, forming a combined experience of vision, hearing, and touch. In practice, virtual reality applications require a deep treatment of immersion, autonomy, and interaction; their combination gives users the experience summarized by the three key concepts of virtual reality: immersion, interaction, and imagination, as shown in Figure 2. These properties make it possible to improve immersion and the user's capacity for autonomous intervention; guidance on improving virtual reality is given in [9].

Figure 2 The main characteristics of virtual reality.

### 3.1. Immersion

Immersion is the feeling of being inside the scene that the virtual environment brings to people. It is the main criterion for judging the construction of a virtual reality environment and the main basis for the sense of presence people obtain when experiencing virtual reality technology [10].

### 3.2. Interactivity

Interactivity is the main feature distinguishing virtual reality technology from traditional 3D animation. When watching 3D animation, the viewer is in a passive state of acceptance [11], whereas the interactivity of virtual reality allows people to interact with the virtual environment and its objects. The viewer is no longer a bystander who passively receives information but can take part in the environment.

### 3.3. Autonomy

Autonomy refers to the way virtual reality is browsed: in the virtual reality environment, users can browse or change the environment according to their own intentions.

### 3.4. Panoramic Visual Communication Technology

In summary, the color of an object as recognized by an observer is determined by the spectral composition of the light source, the reflection characteristics of the object's surface, and the observer's spectral sensitivity. Based on this analysis, this paper further proposes an active vision system model built on the active vision system framework [12].

First, each photosensitive unit (that is, pixel) of the CCD produces a photoelectric effect under the incident light and stores a quantity of photoelectrons corresponding to the light intensity. Generally this process is characterized by a coefficient stating how many photoelectrons one photon generates. In fact, however, the CCD may work in "fat zero" mode, in which each unit holds a small number of photoelectrons equivalent to a bias, and there are also noise effects, so the process is generally not completely linear.
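To make the pixel model above concrete, here is a minimal numerical sketch in Python. It assumes a linear gain, a small "fat zero" bias, additive read noise, and full-well saturation; all parameter names and values are illustrative placeholders, not figures from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def pixel_response(photons, gain=0.6, fat_zero=50.0, read_noise=5.0, full_well=40_000.0):
    """Toy CCD pixel: electrons = gain * photons + bias + noise, clipped at the full well."""
    electrons = gain * np.asarray(photons, dtype=float) + fat_zero        # conversion + fat-zero bias
    electrons = electrons + rng.normal(0.0, read_noise, electrons.shape)  # read noise
    return np.clip(electrons, 0.0, full_well)                             # saturation -> nonlinearity

# The response is nearly linear at low flux and flattens once the full well saturates.
flux = np.array([0.0, 1e3, 1e4, 5e4, 1e5])
print(pixel_response(flux))
```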
The camera model describes the process of photoelectric conversion. The CCD (charge-coupled device) is the key device performing this conversion: it is photosensitive, can store charge, and generates charge in proportion to the light falling on its surface. First, external light is concentrated through a tiny lens to improve light gathering; the color separation filter then passes only the desired wavelength range; finally, the photosensitive layer converts the filtered light signal into an electrical signal. The response of the digital camera CCD to the incident light can be expressed by the following formulas:

$$R = \int_0^\infty f_R(\lambda)\,I(\lambda)\,d\lambda, \tag{1}$$
$$G = \int_0^\infty f_G(\lambda)\,I(\lambda)\,d\lambda, \tag{2}$$
$$B = \int_0^\infty f_B(\lambda)\,I(\lambda)\,d\lambda. \tag{3}$$

Here, $f_R$, $f_G$, and $f_B$ represent the spectral sensitivities of the camera's three color channels, and $I(\lambda)$ represents the intensity of light of wavelength $\lambda$ reaching the camera. The spectral sensitivity of a digital camera is generally provided by the CCD/CMOS manufacturer; it is the product of the filter function of the color filter and the responsivity of the photosensitive layer. Figure 3 shows the spectral response curve of the CCD of one digital camera.

Figure 3 The response curve of the camera spectrum.

Theoretically, the response of the CCD to light energy is linear; that is, there is a linear relationship between the exposure and the gray value of the digital image. In fact, neither CCD nor CMOS is completely linear in converting optical signals into electrical signals [12]. The literature points out that when a color camera is used as a precision measuring instrument, this must be addressed through linear correction of the photoelectric response. Moreover, the luminous flux received by different areas of the CCD/CMOS under a uniform light field is not the same: the flux received at the center of the lens is greater than at its periphery, and the literature points out that for different lenses and corresponding focal lengths an integrating sphere should be used for flat-field correction. The active vision system model of this article does not consider these factors [13–16].

Assuming only diffuse reflection on the object surface, the relationship between the reflected light $I'(\lambda)$ and the incident light $I(\lambda)$ is

$$I'(\lambda) = I(\lambda)\,k(\lambda), \tag{4}$$

where $I(\lambda)$ is the intensity of the incident light at wavelength $\lambda$ at time $t$ and $k(\lambda)$ is the reflection coefficient of the material at wavelength $\lambda$. To take specular reflection into account as well, the diffuse and specular components must each be weighted, and formula (4) becomes

$$I'(\lambda) = m_d\,I(\lambda)\,k(\lambda) + m_s\,I(\lambda), \tag{5}$$

where $m_d$ and $m_s$ are the weights of the diffuse and specular reflection components, respectively.
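A short numerical sketch of formulas (1)–(5): Gaussian curves stand in for the channel sensitivities and a flat illuminant for $I(\lambda)$, so every curve and constant below is an illustrative assumption rather than the paper's measured data.

```python
import numpy as np

wavelengths = np.linspace(380.0, 780.0, 401)  # visible range, nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Stand-in channel sensitivities f_R, f_G, f_B (real curves come from the sensor vendor).
f = {"R": gaussian(600, 40), "G": gaussian(540, 40), "B": gaussian(460, 40)}

I_incident = np.ones_like(wavelengths)      # flat illuminant I(lambda)
k = 0.3 + 0.4 * gaussian(620, 60)           # toy surface reflectance k(lambda)

m_d, m_s = 0.8, 0.2                         # diffuse/specular weights of formula (5)
I_reflected = m_d * I_incident * k + m_s * I_incident

# Formulas (1)-(3): each channel integrates sensitivity times light over wavelength.
rgb = {c: np.trapz(f[c] * I_reflected, wavelengths) for c in "RGB"}
print(rgb)
```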
For the different TV systems and the different primary colors used by the projector (such as its phosphors), the coefficients in standard chromaticity coordinates must first be known in order to obtain the conversion matrix; a received signal defined in standard chromaticity coordinates can then be transformed into the corresponding mixture of the three primary colors.

The projection equipment used in this paper is a panoramic color structured light generator (PCSLG) with a single emitting point. The color characteristics of the illumination are related to the projection angle, as shown in Figure 4.

Figure 4 Correspondence between PCSLG's hue value and projection angle.

In practice, the theoretical and actual projections at a given projection angle differ because of the manufacturing process. To overcome this difference, the panoramic color structured light generator must undergo chromaticity calibration before the actual measurement [17]; the chromaticity calibration is introduced later with the light source color correction algorithm.

## 4. Realization of the Virtual Campus Roaming System

The virtual campus presents the campus landscape and facilities in the most intuitive form, facilitates users' access to campus information, and promotes university construction and the development of distance teaching.
The virtual campus is based on technologies such as geographic information, virtual reality, and computer networking, combining campus geographic information with other campus information. Browsing and querying the campus landscape and information can be realized through the virtual reality scene interface, and the scenes can be published on the computer network for remote access.

### 4.1. Demand Analysis and Design of the Virtual Campus Roaming

Taking into account users' actual roaming needs, the basic functions provided by the system include viewpoint options, interaction methods, roaming methods, scene selection, and a campus introduction, as shown in Table 1.

Table 1 Function summary table of the virtual campus roaming system.

| Function | Design |
|---|---|
| Viewpoint control | Best viewpoint: move the viewpoint to the most suitable observation position when switching panoramas. |
| Interactive operation | Mouse: use the computer mouse as the main input device to complete panoramic image viewing and interactive roaming. |
| Scene switching | Map navigation: switch to the corresponding panoramic scene view through hotspot connections placed on the map. |
| Campus profile | Campus introduction: a brief introduction to the whole campus, giving the viewer an overall understanding of the school. |

The roaming panoramic campus is organized by the relative spatial relationships of the panoramic image nodes in the entire scene (a minimal data-structure sketch of this node-and-hotspot organization is given after this section). Therefore, the design and production of the main interface of the panoramic campus roaming takes the campus three-dimensional navigation map as its basis [18], places the corresponding reference objects on it, and builds command buttons on important campus buildings that connect to secondary pages; the campus is finally roamed through navigation and hotspot connections. The basic framework of campus roaming is shown in Figure 5.

Figure 5 Schematic diagram of the interactive roaming framework structure.

The realization of panoramic roaming interaction usually involves three aspects: first, the collection and creation of panoramic images of each element of the site [19]; second, the construction of the panoramic roaming space; and third, the design and building of the panoramic browsing interface. The acquisition and creation of panoramic images is a key technology in developing the panoramic roaming campus; the previous section discussed the key concepts of panoramic design and the application of modern technology. The construction of the panoramic roaming space is likewise an important part of the design: by assembling the roaming space, 3D panoramic images of the various locations are integrated into the virtual roaming space for viewers to walk through freely. The panorama browsing interface [20] is what the audience uses when viewing the panorama website, so the design and construction quality of the browsing interface affects the completion of the three-dimensional panoramic roaming interaction; the best panorama browsers make it easy for viewers to view their favorite virtual scenes.
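To make the node-and-hotspot organization concrete, here is a minimal Python sketch of a panorama scene graph; the class and method names are invented for illustration and do not come from the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class PanoNode:
    """One panoramic image node in the roaming space."""
    name: str
    image_path: str
    hotspots: dict = field(default_factory=dict)  # hotspot label -> neighbouring node name

class RoamingSpace:
    """Panorama nodes organized by their relative spatial relationships."""

    def __init__(self):
        self.nodes = {}
        self.current = None

    def add(self, node):
        self.nodes[node.name] = node

    def link(self, a, label, b):
        self.nodes[a].hotspots[label] = b  # one-way hotspot from scene a to scene b

    def jump(self, label):
        """Follow a hotspot from the current scene, as map navigation does."""
        self.current = self.nodes[self.current].hotspots[label]
        return self.nodes[self.current]

campus = RoamingSpace()
campus.add(PanoNode("gate", "pano/gate.jpg"))
campus.add(PanoNode("library", "pano/library.jpg"))
campus.link("gate", "to library", "library")
campus.current = "gate"
print(campus.jump("to library").image_path)  # -> pano/library.jpg
```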
Table 2 Experimental result data.

| Images per group | Algorithm | Detection/match | hfov (°) | f (pixels) |
|---|---|---|---|---|
| 5 | SIFT | 1575/1506 | 71.23 | 442.13 |
| 5 | Harris | 902/632 | 62.31 | 440.23 |
| 6 | SIFT | 1647/1653 | 61.23 | 553.6 |
| 6 | Harris | 988/632 | 61.56 | 253.12 |
| 7 | SIFT | 1802/1653 | 90.93 | 665.23 |
| 7 | Harris | 956/231 | 50.85 | 63.59 |
| 8 | SIFT | 1758/1236 | 90.86 | 775.36 |
| 8 | Harris | 953/856 | 60.89 | 652.39 |

### 4.2. Experimental Steps

(1) Use an ordinary digital camera to acquire material images; 20 groups are acquired in total, with the number of images per group increasing sequentially: 5, 6, ..., 26. (2) Carry out cylindrical projection on the acquired images, and calculate the camera horizontal angle of view (hfov) and the camera pixel focal length (f) for the 20 groups. (3) For two adjacent images of the same scene, use the SIFT-based and the Harris-based image matching algorithms to detect and match feature points, and record the number of detected feature points and the number of matched feature point pairs [21]. (4) Perform a coordinate transformation on the images to be stitched using the matched feature point pairs to complete the stitching. The data obtained through these steps are shown in Table 2, where detection is the number of detected feature points, match is the number of matched feature point pairs, hfov is the camera's horizontal viewing angle, and f is the camera pixel focal length [22]. A compact code sketch of steps (2) and (3) is given after Section 4.3.

### 4.3. Experiment Analysis

The experimental data above show that many feature point pairs can be matched across images of the same scene. Under the same conditions, the SIFT algorithm yields more matched feature points than the Harris algorithm, as shown in Figure 6; the feature points obtained by the SIFT extraction algorithm avoid the dozens of erroneous points produced by the Harris corner extraction algorithm [23], which also responds to non-corner elements in the image.

Figure 6 Feature point pairs matched by the SIFT algorithm and Harris algorithm.

Estimation of the surface spectral reflectance of objects: because directly measuring surface spectral reflectance is very difficult, and in consideration of the real-time requirements of the active vision system, this paper uses the three-channel reflectance instead. Experiments show that adjusting color with this three-channel approximation reduces the error from 44% to 11% and achieves good results. However, when evaluating the three methods under colored or standard white light, slight differences may appear owing to differences in the intensity of the projected light; developing an illumination model that resolves the three-channel approximation is therefore a necessary direction for further research [24].

In addition, material surfaces with especially strong specular reflection cause light saturation in the digital camera, which hampers the separation of the specular reflection component and the localization of diffusely reflecting objects. In wider applications, therefore, attention should be paid to adjusting the aperture size and exposure time of the camera, or wide dynamic range camera technology should be adopted.
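As referenced in Section 4.2, here is a compact sketch of the projection and matching steps using OpenCV (the `opencv-python` package is assumed). The image paths, the example hfov, and the ratio-test threshold are placeholders; and since Harris provides no descriptor of its own, only corner counts are shown for it.

```python
import cv2
import numpy as np

img1 = cv2.imread("group5_01.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("group5_02.jpg", cv2.IMREAD_GRAYSCALE)

# Step (2): pixel focal length from the horizontal angle of view (cylindrical projection).
def pixel_focal_length(width_px, hfov_deg):
    return (width_px / 2.0) / np.tan(np.radians(hfov_deg) / 2.0)

f = pixel_focal_length(img1.shape[1], hfov_deg=62.0)  # hfov value is illustrative

# Step (3a): SIFT detection plus ratio-test matching.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
pairs = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
print(f"SIFT: detected {len(kp1)}/{len(kp2)}, matched {len(good)} pairs, f = {f:.1f} px")

# Step (3b): Harris corner detection (corner responses only; no descriptors).
harris = cv2.cornerHarris(np.float32(img1), blockSize=2, ksize=3, k=0.04)
print("Harris: corners in image 1 =", int((harris > 0.01 * harris.max()).sum()))
```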
## 5. Conclusion

The application system of this article takes a single school as its example, so the amount of data is relatively small. In commercial applications, however, the amount of image data in a three-dimensional virtual environment is often very large, and image storage and indexing become a key point of virtual roaming; combining database technology for image storage and retrieval is a direction worthy of further research. Interactive roaming is a complete roaming process, and exploring the interaction and interactive nature of 3D virtual roaming systems is also worthwhile.
How to create interaction that relates to the audience's actions without disturbing the audience's sense of reality is a suitable research direction. For the display of high-resolution images, how to use a variety of display technologies to render attractive images quickly is also an open problem of the system that should be studied further. In short, research on 3D virtual roaming technology has important theoretical significance and application value; the research and discussion in this paper are far from sufficient, and follow-up work should build on the existing research, continuously absorb new related technologies and methods, deepen the research, achieve better panoramic display results, and apply them to broader fields.

---

*Source: 1023865-2022-04-28.xml*
# Study of Knocking Effect in Compression Ignition Engine with Hydrogen as a Secondary Fuel

**Authors:** R. Sivabalakrishnan; C. Jegadheesan
**Journal:** Chinese Journal of Engineering (2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/102390

---

## Abstract

The aim of this project is to detect knock during combustion of biodiesel-hydrogen fuel and to suppress it by timed injection of diethyl ether (DEE) with the biodiesel-hydrogen fuel at different loads. Hydrogen is an effective alternative fuel for a pollution-free environment with higher efficiency, but its use in a compression ignition engine can produce knocking or detonation because of its lower ignition energy, wider flammability range, and shorter quenching distance. Knocking combustion causes major engine damage and also reduces efficiency. The method uses the measurement and analysis of the cylinder pressure signal at various loads; converting the pressure signal into the frequency domain reveals knocking combustion of the fuel mixtures accurately. During normal combustion the pressure signal rises gradually and falls smoothly to its minimum, whereas during knocking combustion the pressure rises rapidly. The experimental setup was mainly used for evaluating the feasibility of normal combustion by comparing the signals from both fuel mixtures in a compression ignition engine. This method gives good results in predicting the knocking behavior of biodiesel-hydrogen fuel, and the use of DEE provides complete combustion with higher performance and lower emissions.

---

## Body

## 1. Introduction

The demand for fossil fuels keeps increasing with the growth of transportation and the automobile sector. Burning fossil fuels emits pollutants such as HC, CO, CO2, and NOx and harms the environment. The best solution to this problem is to move to alternative fuels. Hydrogen is the most effective alternative fuel: it reduces emissions and fuel consumption and provides better performance, though it has limitations such as backfire and preignition. Saravanan et al. [1] tested the performance and emissions of a direct injection (DI) diesel engine with hydrogen injected at the intake port and diesel used as the ignition source. While efficiency improved, knocking combustion emerged as a major problem owing to properties of hydrogen such as its wide flammability range and short quenching distance. Biodiesel can be used as the ignition source instead of diesel, which reduces particulate matter emissions and limits autoignition; a minimum of NOx emissions is possible at higher load conditions.

Zhen et al. [2] surveyed several types of knock detection methods: in-cylinder pressure analysis, heat transfer analysis, light radiation, cylinder block vibration analysis, analysis of intermediate radicals and species, ion current analysis, and exhaust gas temperature analysis. The most suitable methods are in-cylinder pressure analysis and heat transfer analysis; knock intensity is the maximum amplitude of the cylinder pressure fluctuation, and a rapid increase of the pressure signal and heat release rate provides the information about abnormal combustion. Wannatong et al.
[3] determined that knocking in engines damages the engine and limits its performance. The combustion and knock characteristics were determined for diesel and dual fuel (diesel and natural gas) by varying the intake mixture temperature and increasing the amount of natural gas in the diesel-natural gas mixture. Engine knock was noted for every increase of intake mixture temperature and of natural gas quantity: the higher intake temperature accelerated combustion and caused autoignition of the fuel before flame arrival, and the rapid rise of cylinder pressure marked the onset of knock.

Knock detection can be based on the cylinder pressure, block vibration, and sound pressure signals in a spark ignited (SI) engine. Lee et al. [4] estimated three knock harmonic frequencies by analyzing the cylinder pressure signal under various operating conditions in an SI engine and determined the knock windows and knock frequencies; the filtered pressure signal can be used to predict knock intensity and also helps remove background noise.

Brunt et al. [5] compared calculated peak pressures at crank-angle resolution for constant speed and determined the peak knock pressure for all cycles. Measurement and analysis of the cylinder pressure yields an accurate picture of knocking combustion; knock intensity is determined from the maximum variability of the peak pressure and its filtered data.

## 2. Fundamentals

### 2.1. Hydrogen Fuel

Hydrogen has clean burning characteristics that allow efficient operation in a CI engine, and it can be used as a secondary fuel in an internal combustion engine. Burning hydrogen combines it with oxygen to form water and no other combustion products (except small amounts of NOx). Hydrogen cannot be ignited by compression alone because its autoignition temperature (585°C) is much higher than that of diesel fuel (180°C); biodiesel is therefore used as the ignition source for hydrogen during combustion in the compression ignition engine (Table 1).

Table 1 Fuel properties.

| Property | Biodiesel | Hydrogen | Diethyl ether |
|---|---|---|---|
| Chemical formula | — | H2 | C2H5OC2H5 |
| Autoignition temperature (K) | 535 | 858 | 433 |
| Calorific value (MJ/kg) | 38.5 | 119.9 | 33.9 |
| Density (kg/m³) | 885 | 0.0837 | 713 |
| Viscosity at 15.5°C (centipoise) | — | — | 0.023 |

### 2.2. Knock Fundamentals

Owing to some constituents of the fuel, the rate of oxidation can become so great that the last portion of the fuel-air mixture ignites instantaneously, producing an explosive violence known as knocking. Explosive ignition of the fuel-air mixture ahead of the propagating flame drives successive cylinder pressure oscillations. External mixing of hydrogen with the intake air is well known to cause backfire and knock, especially at higher engine loads. Abnormal combustion of hydrogen in a CI engine produces an increased chemical heat release rate, resulting in a rapid pressure rise and higher heat rejection. The maximum amplitude of the pressure oscillation, together with the exhaust temperature, is a good indicator of knock severity.
## 3. Experimental Setup

In this study, a single-cylinder, four-stroke, water-cooled direct injection diesel engine was operated as a dual fuel engine using hydrogen and biodiesel, as shown in Figure 1; the engine details are given in Table 2. Hydrogen is stored in a storage cylinder, and a pressure regulator passes regulated hydrogen to the flame arrester through a flow control valve and a check valve. The check valve admits hydrogen in the forward direction only and closes if any gas returns from the CI engine. The flame arrester, an enclosed tank filled three-quarters with water, stops any backfire from reaching the hydrogen cylinder during combustion. Hydrogen fuel is fed at the inlet manifold of the diesel engine, and DEE is fed at the inlet port upstream of the hydrogen port. A pressure transducer picks up the peak pressure oscillation during combustion, and the pressure signal is acquired by a PC data acquisition system.

Table 2 Engine specification.

| Name | Specification |
|---|---|
| Type | 4-stroke, single cylinder diesel engine |
| Make | Kirloskar |
| Power | 5.2 kW |
| Speed | 1500 rpm |
| Stroke | 110 mm |
| Bore | 87.5 mm |
| Capacity | 661 cc |

Figure 1 Experimental setup.

## 4. Frequency Analysis of Pressure Signal

The pressure transducer records the in-cylinder pressure signal with respect to crank angle. The signal is acquired by the PC data acquisition system, and the crank angle is obtained from a rotary encoder coupled to the crankshaft. The signal is passed to the power spectral analysis tool in the LabVIEW software, which converts it into the frequency domain; the conversion of the pressure signal into the frequency domain is shown in Figure 2. The frequency-domain signal is used to detect knocking combustion of the engine under abnormal conditions.

Figure 2 Program for FFT conversion.
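The paper performs this conversion with LabVIEW's power spectral analysis tool; the following is an equivalent minimal sketch in Python using NumPy, with a synthetic pressure trace standing in for the measured signal (the sampling rate, amplitudes, and knock frequency here are illustrative assumptions).

```python
import numpy as np

fs = 50_000                      # sampling rate in Hz (illustrative)
t = np.arange(0, 0.05, 1 / fs)   # 50 ms window around combustion

# Synthetic in-cylinder pressure: a smooth compression/expansion bump plus a
# decaying 1.65 kHz oscillation standing in for knock-induced resonance.
pressure = 40.0 * np.exp(-((t - 0.025) / 0.01) ** 2)  # bar, smooth part
pressure += 2.0 * np.exp(-(t - 0.025).clip(0) / 0.005) * np.sin(2 * np.pi * 1650 * t)

# Power spectrum of the pressure signal (the FFT step of Figure 2).
spectrum = np.abs(np.fft.rfft(pressure)) ** 2
freqs = np.fft.rfftfreq(len(pressure), d=1 / fs)

offset = np.count_nonzero(freqs <= 500)  # skip the slow combustion bump
peak = freqs[np.argmax(spectrum[freqs > 500]) + offset]
print(f"dominant component above 500 Hz: {peak:.0f} Hz")
```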
## 5. Result and Discussion

Experimental tests were carried out for biodiesel-hydrogen mixtures, with and without DEE, at various loads. The pressure signal variation and its power spectrum are shown in Figures 3 and 4. The engine was run on biodiesel-hydrogen mixtures from no load to full load. In normal combustion, the pressure signal gradually reaches its peak value just after top dead centre of the piston (crank angle greater than 360°) and then smoothly decreases to the minimum pressure.

Figure 3 (a) In-cylinder pressure signal for biodiesel and hydrogen at various loads; (b) power spectrum of the pressure signal.

Figure 4 (a) In-cylinder pressure signal for biodiesel and hydrogen with DEE at various loads; (b) power spectrum of the pressure signal.

In knocking combustion, the peak pressure signal oscillates rapidly with crank angle. Beyond about 52% load there is maximum oscillation of the peak pressure compared with light load, along with a clear signature in the power spectrum of the pressure signal. From the power spectrum, the first harmonic knock frequency is 1.65 kHz at 70% and 80% load, and the second harmonic frequency is 2.4 kHz and 2.3 kHz at 70% and 80% load, respectively. No harmonic frequencies were found for biodiesel-hydrogen with diethyl ether. The engine was then run on the biodiesel-hydrogen mixture with diethyl ether injected at the intake valve opening moment, different loads were applied, and the signals were recorded; the results show complete combustion even at higher loads. Along with the pressure signal analysis, the exhaust gas temperature and brake specific fuel consumption are considered to establish the knocking behavior of the engine.

### 5.1. Combustion Characteristics

The cylinder peak pressure variation and its power spectrum are given in Figures 3 and 4. The peak pressure and pressure oscillation are higher for the biodiesel-hydrogen fuel mixture than with diethyl ether added. Biodiesel acts as the main fuel, injected at the direct injection port, while hydrogen is supplied at the intake manifold at a fixed flow rate of 0.5 lpm. With biodiesel-hydrogen, the properties of hydrogen lead to abnormal combustion in the compression ignition engine, as seen in the pressure signal and its power spectrum: the pressure signal from the PC data acquisition system is processed in the LabVIEW software and converted into the frequency domain. At part load there is no rapid rise or oscillation of the pressure signal during the combustion phase, showing that complete combustion of the fuel mixture takes place at low load. After injecting diethyl ether with the biodiesel-hydrogen, the pressure signal is unchanged at low (<60%) load. The diethyl ether flow rate was optimized at 0.25 g/min according to the signal obtained from the running engine. The diethyl ether reduces the abnormal combustion occurring at high (>60%) load: it lowers the peak pressure during combustion through its effect on ignition timing and acts as an ignition improver, and autoignition is prevented by supplying it as an additive. Knocking combustion is found at higher load, and after applying diethyl ether, smooth combustion of the fuel mixture takes place inside the engine.
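A small sketch of how the harmonic peaks quoted above could be identified automatically from the power spectrum, assuming SciPy is available; the band edges and the signal-to-noise threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def knock_harmonics(pressure, fs, band=(1000.0, 3000.0), snr=10.0):
    """Return spectral peak frequencies inside the knock band.

    A peak counts only if it stands `snr` times above the median in-band
    power, so broadband noise without a resonance returns no peaks.
    """
    spectrum = np.abs(np.fft.rfft(pressure)) ** 2
    freqs = np.fft.rfftfreq(len(pressure), d=1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    floor = np.median(spectrum[mask])
    idx, _ = find_peaks(spectrum[mask], height=snr * floor)
    return freqs[mask][idx]

# Usage with the synthetic trace from the previous sketch:
# knock_harmonics(pressure, fs) -> knocking cycles report peaks near 1.65 kHz,
# while smooth cycles return an empty array.
```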
### 5.2. Performance Characteristics

The performance of the biodiesel-hydrogen fuel, with and without DEE, is shown in Tables 3 and 4, respectively, recorded at various loads up to 80%. The exhaust temperature is taken from a thermocouple sensor. The performance of the engine during knocking and nonknocking operation can be evaluated using the equations below.

Table 3 Performance of biodiesel and hydrogen.

| Sl. no. | Load (kg) | Exhaust temperature T3 (°C) | Indicated power, IP (kW) | Brake power, BP (kW) | BSFC = FC/BP (kg/kW-hr) | Mech. efficiency = BP/IP (%) |
|---|---|---|---|---|---|---|
| 1 | 0 | 198 | 2.68715 | 0.2835 | 1.84282 | 10.55151 |
| 2 | 2 | 229 | 3.084022 | 0.8506 | 0.69387 | 27.58103 |
| 3 | 4 | 247 | 3.472626 | 1.4176 | 0.46834 | 40.8243 |
| 4 | 6 | 273 | 3.753743 | 1.9847 | 0.37341 | 52.87376 |
| 5 | 8 | 338 | 4.274637 | 2.5518 | 0.32021 | 59.69665 |
| 6 | 10 | 375 | 4.812067 | 3.1188 | 0.30051 | 64.81384 |
| 7 | 12 | 410 | 5.275084 | 3.6859 | 0.28817 | 69.81384 |
| 8 | 14 | 505 | 5.820782 | 4.2530 | 0.27749 | 73.06622 |
| 9 | 16 | 564 | 6.449162 | 4.8200 | 0.27545 | 74.73987 |

kW: kilowatt; °C: degree Celsius; kg: kilogram; hr: hour.

Table 4 Performance of biodiesel and hydrogen with DEE.

| Sl. no. | Load (kg) | Exhaust temperature T3 (°C) | Indicated power, IP (kW) | Brake power, BP (kW) | BSFC = FC/BP (kg/kW-hr) | Mech. efficiency = BP/IP (%) |
|---|---|---|---|---|---|---|
| 10 | 0 | 194 | 2.48045 | 0.283 | 2.14154 | 11.4308 |
| 11 | 2 | 224 | 2.70369 | 0.850 | 0.80882 | 31.46093 |
| 12 | 4 | 255 | 3.47263 | 1.417 | 0.55646 | 40.8243 |
| 13 | 6 | 291 | 3.81162 | 1.984 | 0.42659 | 52.07091 |
| 14 | 8 | 327 | 4.23330 | 2.551 | 0.38290 | 60.27963 |
| 15 | 10 | 361 | 4.72112 | 3.118 | 0.34491 | 66.06244 |
| 16 | 12 | 404 | 5.21721 | 3.685 | 0.30774 | 70.64998 |
| 17 | 14 | 451 | 5.69676 | 4.253 | 0.30015 | 74.65692 |
| 18 | 16 | 520 | 6.18458 | 4.820 | 0.29316 | 77.9373 |

kW: kilowatt; °C: degree Celsius; kg: kilogram; hr: hour.

The power and efficiency are calculated from the following formulas.

(i) Indicated power:
$$\mathrm{IP} = \frac{n\,P_{mi}\,L\,A\,N\,k \times 10^{5}}{60 \times 1000}\ \mathrm{kW}, \tag{1}$$
where $P_{mi}$ is the indicated mean effective pressure in bar, $n$ the number of cylinders, $L$ the stroke length in m, $A$ the piston area in m², $N$ the speed in rpm, and $k = 1/2$ for a four-stroke engine.

(ii) Brake power:
$$\mathrm{BP} = \frac{2\pi N T}{60 \times 1000}\ \mathrm{kW}, \tag{2}$$
where $N$ is the speed in rpm and $T$ is the torque in Nm.

(iii) Mechanical efficiency:
$$\eta_{\mathrm{mech}} = \frac{\mathrm{BP}}{\mathrm{IP}}. \tag{3}$$
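A small sketch of formulas (1)–(3) in Python, using the engine geometry from Table 2; the torque and mean effective pressure inputs are illustrative placeholders, not measured values from the paper.

```python
import math

# Engine geometry from Table 2.
BORE = 0.0875      # m
STROKE = 0.110     # m
N_RPM = 1500
K = 0.5            # four-stroke engine
AREA = math.pi * BORE ** 2 / 4.0   # piston area, m^2

def indicated_power_kw(p_mi_bar, n_cyl=1):
    """Eq. (1): IP = n * P_mi * L * A * N * k * 1e5 / (60 * 1000) kW, with P_mi in bar."""
    return n_cyl * p_mi_bar * STROKE * AREA * N_RPM * K * 1e5 / (60 * 1000)

def brake_power_kw(torque_nm):
    """Eq. (2): BP = 2*pi*N*T / (60 * 1000) kW."""
    return 2 * math.pi * N_RPM * torque_nm / (60 * 1000)

p_mi = 6.5      # bar, illustrative
torque = 25.0   # Nm, illustrative
ip = indicated_power_kw(p_mi)
bp = brake_power_kw(torque)
print(f"IP = {ip:.3f} kW, BP = {bp:.3f} kW, eta_mech = {bp / ip:.1%}")  # eq. (3)
```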
Figure 5 shows the variation of exhaust gas temperature with load. Below 60% load, the exhaust gas temperature of biodiesel-hydrogen is similar to that of the same fuel mixture with DEE. As the load was increased further, the engine experienced knocking owing to improper combustion of the fuel (the fuel mixture was kept the same in order to establish the knocking level). Above 70% load the exhaust gas temperature rises, because late combustion of the fuel heats the exhaust and hydrogen accumulates during full-throttle, high-load running. Injecting diethyl ether restores normal combustion, and complete combustion of the fuel takes place owing to the timed injection of DEE at the inlet port.

Figure 5: Exhaust temperature variation with load.

Figure 6 shows the variation of brake specific fuel consumption with load for the various fuel mixtures. The brake specific fuel consumption depends mainly on the torque delivered by the engine relative to the mass flow rate of fuel supplied to it.

Figure 6: Brake specific fuel consumption variation with load.

The brake specific fuel consumption of biodiesel-hydrogen with DEE decreases as the load rises towards maximum, whereas that of hydrogen-biodiesel alone increases because of knocking combustion. A decrease in brake specific fuel consumption corresponds to an increase in the brake thermal efficiency of the engine. When diethyl ether is applied during combustion of the fuel mixture, the brake specific fuel consumption at minimum load is markedly reduced compared with that at higher load.

Figure 7 shows the variation of mechanical efficiency with load for the various fuel mixtures. The mechanical efficiency is defined as the ratio of brake power to indicated power. The mechanical efficiency of biodiesel-hydrogen with DEE increases for loads above 50%, with a slight increase already at 10% load. The gain in mechanical efficiency under hydrogen-biodiesel with DEE operation is mainly due to the higher charge intake, which leads to complete combustion, and to the higher energy release with DEE; diethyl ether promotes complete burning of the fuel during combustion at higher load.

Figure 7: Mechanical efficiency variation with load.
## 6. Conclusions

An experimental model of knock detection for biodiesel-hydrogen fuel and for biodiesel-hydrogen fuel mixtures with diethyl ether has been developed, and the most suitable knock-detection techniques have been applied to the compression ignition engine. (i) Knock measurement and analysis were carried out for biodiesel-hydrogen fuel and for biodiesel-hydrogen fuel with DEE. (ii) The pressure signal was obtained from a pressure transducer and converted into the frequency domain for knock analysis. (iii) The exhaust temperature can also be used to identify knocking combustion for the same fuel mixture (biodiesel-hydrogen fuel at 10 lpm) at higher loads. (iv) The performance and knock-limited operation of the engine were improved by using DEE as an additive fuel. (v) Diethyl ether suppresses the knocking behaviour of the compression ignition engine during combustion of the hydrogen-biodiesel fuel mixture.

The performance characteristics of both hydrogen-biodiesel fuel and hydrogen-biodiesel fuel with DEE were computed for various applied loads.

---

*Source: 102390-2014-02-24.xml*
# Experimental Investigation and Theoretical Modeling of Nanosilica Activity in Concrete

**Authors:** Han-Seung Lee; Hyeong-Kyu Cho; Xiao-Yong Wang

**Journal:** Journal of Nanomaterials (2014)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2014/102392

---

## Abstract

This paper presents experimental investigations and theoretical modeling of the hydration reaction of nanosilica blended concrete with different water-to-binder ratios and different nanosilica replacement ratios. The developments of chemically bound water contents, calcium hydroxide contents, and compressive strength of Portland cement control specimens and nanosilica blended specimens were measured at different ages: 1 day, 3 days, 7 days, 14 days, and 28 days. Due to the pozzolanic reaction of nanosilica, the contents of calcium hydroxide in nanosilica blended pastes are considerably lower than those in the control specimens. Compared with the control specimens, the extent of compressive strength enhancement in the nanosilica blended specimens is much higher at early ages. Additionally, a blended cement hydration model that considers both the hydration reaction of cement and the pozzolanic reaction of nanosilica is proposed. The properties of nanosilica blended concrete during hardening were evaluated using the degree of hydration of cement and the reaction degree of nanosilica. The calculated chemically bound water contents, calcium hydroxide contents, and compressive strength were generally consistent with the experimental results.

---

## Body

## 1. Introduction

Scientists have considered nanomaterials to be the most promising materials of the 21st century. In recent years, considerable attention has been focused on civil engineering applications for nanomaterials because nanoparticles possess many unique properties due to their small size, such as large specific surface areas and high activity [1].

Many experimental studies have investigated the influence of adding nanosilica on the properties of concrete. Li et al. [2] showed that the compressive and flexural strengths of cement mortars mixed with nanoparticles, measured on the 7th day and the 28th day, were higher than those of a plain cement mortar. Zhang and Li [3] found that the addition of nanoparticles refines the pore structure of concrete and enhances its resistance to chloride penetration. Ji [4] found that the microstructure of concrete containing nano-SiO2 is more uniform and compact than that of normal concrete and that the incorporation of nano-SiO2 can improve the resistance of concrete to water penetration. Jo et al. [5] found that nanoparticles are more effective at enhancing strength than silica fume and that nanoscale SiO2 functions not only as a filler to improve the microstructure but also as an activator to promote pozzolanic reactions.

Although numerous experimental studies have investigated the physical and chemical properties of nanosilica blended concrete, research on the modeling of its hydration is very limited. Some existing hydration models [6–10] are only valid for traditional mineral admixture blended concretes, such as slag blended concrete, fly ash blended concrete, and silica fume blended concrete. To fill this knowledge gap, this paper proposes a blended cement hydration model that considers both the hydration reaction of cement and the pozzolanic reaction of nanosilica.
The properties of nanosilica blended concrete during hardening, such as the content of chemically bound water, the content of calcium hydroxide, and the development of compressive strength, were evaluated using the hydration degree of cement and the reaction degree of nanosilica.

## 2. Hydration Model of Portland Cement

Tomosawa et al. [11, 12] developed a shrinking-core model to simulate the development of cement hydration. The model is expressed as a single equation with three coefficients: $k_d$, the reaction coefficient in the induction period; $D_e$, the effective diffusion coefficient of water through the C–S–H gel; and $k_r$, the reaction rate coefficient of cement, as shown in (1) below. These coefficients determine the rate of mass transport through the initial shell layer, the rate of the phase-boundary reaction process, and the rate of the diffusion-controlled process. The cement particles are modeled as spheres surrounded by hydration product. On this basis, the rate of cement hydration is derived as

$$\frac{d\alpha}{dt} = \frac{3\,(S_w/S_0)\,\rho_w\,C_{w\text{-}free}}{(v + w_g)\,r_0\,\rho_c} \left[\left(\frac{1}{k_d} - \frac{r_0}{D_e}\right) + \frac{r_0}{D_e}\,(1-\alpha)^{-1/3} + \frac{1}{k_r}\,(1-\alpha)^{-2/3}\right]^{-1} \quad (1)$$

where $\alpha$ is the degree of cement hydration; $v$ is the stoichiometric mass ratio of water to cement (= 0.25); $w_g$ is the physically bound water in the C–S–H gel (= 0.15); $\rho_w$ is the density of water; $C_{w\text{-}free}$ is the amount of water at the exterior of the C–S–H gel; $r_0$ is the radius of the unhydrated cement particles ($r_0 = 3/(S\rho_c)$, where $S$ and $\rho_c$ are the Blaine surface area and density of the cement, resp.); $S_w$ is the effective surface area of the cement particles in contact with water; and $S_0$ is the total surface area if the surface area develops unconstrained.

The reaction coefficient $k_d$ is assumed to be a function of the degree of hydration, as shown in (2), where $B$ and $C$ are coefficients: $B$ controls the rate of initial shell formation and $C$ the rate of initial shell decay:

$$k_d = \frac{B}{\alpha^{1.5}} + C\,\alpha^{3} \quad (2)$$

The effective diffusion coefficient of water is affected by the tortuosity of the gel pores and by their radii, and is described as a function of the degree of hydration:

$$D_e = D_{e0}\,\ln\!\left(\frac{1}{\alpha}\right) \quad (3)$$

In addition, free water in the capillary pores is depleted as the hydration of the cement minerals progresses, and some water is bound in the gel pores and is not available for further hydration; this must be taken into account at every step of the hydration process. The amount of water in the capillary pores, $C_{w\text{-}free}$, is therefore expressed as a function of the degree of hydration in the previous step:

$$C_{w\text{-}free} = \frac{W_0 - 0.4\,\alpha\,C_0}{W_0} \quad (4)$$

where $C_0$ and $W_0$ are the mass fractions of cement and water in the mix proportion.

The effect of temperature on the reaction coefficients is assumed to follow Arrhenius's law:

$$B = B_{20}\,\exp\!\left(-\beta_1\left(\frac{1}{T} - \frac{1}{293}\right)\right),\qquad C = C_{20}\,\exp\!\left(-\beta_2\left(\frac{1}{T} - \frac{1}{293}\right)\right),$$
$$k_r = k_{r20}\,\exp\!\left(-\frac{E}{R}\left(\frac{1}{T} - \frac{1}{293}\right)\right),\qquad D_e = D_{e20}\,\exp\!\left(-\beta_3\left(\frac{1}{T} - \frac{1}{293}\right)\right) \quad (5)$$

where $\beta_1$, $\beta_2$, $E/R$, and $\beta_3$ are temperature sensitivity coefficients and $B_{20}$, $C_{20}$, $k_{r20}$, and $D_{e20}$ are the values of $B$, $C$, $k_r$, and $D_e$ at 20°C, respectively.

Using this Portland cement hydration model, Tomosawa et al. [11, 12] evaluated the heat evolution rate, chemically bound water content, and compressive strength of hardening concrete.
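For readers who wish to trace the model numerically, a minimal Euler integration of equations (1)-(4) at 20°C is sketched below. The coefficients B, C, kr, and De0 are the calibrated values from Table 4 of this paper; the Blaine fineness, densities, mix fractions, surface ratio Sw/S0 = 1, and time step are assumed illustrative choices, so the resulting curve is qualitative only.

```python
import math

# Minimal Euler integration of the shrinking-core hydration model,
# equations (1)-(4), at 20°C (all Arrhenius factors in (5) equal 1).
# B, C, kr, De0 come from Table 4; fineness, densities, mix fractions,
# and Sw/S0 = 1 are assumed values, so the output is qualitative only.

B, C_kd = 4.310e-7, 0.035        # coefficients of k_d in eq. (2) [cm/h]
kr, De0 = 1.073e-5, 5.474e-8     # eq. (1) and (3) coefficients [cm/h, cm^2/h]
v, wg = 0.25, 0.15               # stoichiometric and gel-water ratios
rho_w, rho_c = 1.0, 3.15         # densities [g/cm^3]
S_blaine = 3400.0                # assumed Blaine surface area [cm^2/g]
r0 = 3.0 / (S_blaine * rho_c)    # particle radius [cm]
W0, C0 = 0.5, 1.0                # w/c = 0.5, masses per 1 g of cement

alpha, dt = 1e-4, 0.01           # initial hydration degree, time step [h]
for step in range(int(28 * 24 / dt)):            # integrate to 28 days
    kd = B / alpha**1.5 + C_kd * alpha**3        # eq. (2)
    De = De0 * math.log(1.0 / alpha)             # eq. (3)
    Cw_free = max(W0 - 0.4 * alpha * C0, 0.0) / W0   # eq. (4)
    resistance = (1/kd - r0/De) + (r0/De) * (1 - alpha)**(-1/3) \
                 + (1/kr) * (1 - alpha)**(-2/3)
    dadt = 3 * rho_w * Cw_free / ((v + wg) * r0 * rho_c) / resistance  # eq. (1)
    alpha = min(alpha + dadt * dt, 0.999)

print(f"hydration degree after 28 days: {alpha:.3f}")
```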
However, Tomosawa's model is valid only for Portland cement. To model the hydration of nanosilica blended concrete, a reaction model for nanosilica must be constructed and the mutual interactions between cement hydration and the nanosilica reaction must be clarified.

## 3. Hydration Model for Cement Blended with Nanosilica

### 3.1. The Amount of Calcium Hydroxide (CH) during the Hydration Process

During the hydration of Portland cement paste, the amount of calcium hydroxide is directly proportional to the degree of hydration of cement [11]. Furthermore, Papadakis [10] proposed that the amount of calcium hydroxide in cement-nanosilica blends during hydration can be determined with the following equation:

$$\mathrm{CH} = R_{CHCE}\,C_0\,\alpha - R_{CHSF}\,\alpha_{silica}\,m_{silica0} \quad (6)$$

In (6), $R_{CHCE}$ is the mass of calcium hydroxide produced per gram of hydrated cement; $R_{CHSF}$ is the mass of calcium hydroxide consumed per gram of reacted nanosilica; $\alpha_{silica}$ is the degree of hydration of the glass (active) phase of nanosilica; and $m_{silica0}$ is the mass of nanosilica in the mixing proportion. The term $R_{CHCE}\,C_0\,\alpha$ accounts for the production of calcium hydroxide by cement hydration, and the term $-R_{CHSF}\,\alpha_{silica}\,m_{silica0}$ for its consumption by the pozzolanic reaction. The value of $R_{CHCE}$ is related to the chemical composition of the Portland cement and can be obtained from the experimentally determined calcium hydroxide and chemically bound water contents of Portland cement paste. The value of $R_{CHSF}$ can be derived from the experimentally determined calcium hydroxide contents of Portland cement paste and nanosilica blended cement paste.

As in the hydration of cement, water is physically adsorbed in the nanosilica hydration products as the pozzolanic reaction proceeds. Lura et al. [13] proposed that, in the silica fume pozzolanic reaction, the reaction of 1 g of silica fume consumes 0.5 g of gel water and no chemical water. Hence, the masses of capillary water and chemically bound water during the hydration of cement-nanosilica blends can be written as

$$W_{cap} = W_0 - 0.4\,C_0\,\alpha - 0.5\,\alpha_{silica}\,m_{silica0} \quad (7a)$$

$$W_{chem} = 0.25\,C_0\,\alpha \quad (7b)$$

where $W_{cap}$ and $W_{chem}$ are the masses of capillary water and chemically bound water, respectively. In (7a), the term $0.4\,C_0\,\alpha$ accounts for the reduction of capillary water due to cement hydration, and the term $0.5\,\alpha_{silica}\,m_{silica0}$ for the reduction due to the pozzolanic reaction [13].
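The bookkeeping of equations (6), (7a), and (7b) translates directly into code. The sketch below uses the stoichiometric masses R_CHCE = 0.23 and R_CHSF = 2.2 determined later in Section 4.1 and the w/b = 0.5, 3% nanosilica mix of Table 3; the two reaction degrees are arbitrary illustrative inputs.

```python
def blend_state(alpha, alpha_si, C0, W0, m_si0,
                R_CHCE=0.23, R_CHSF=2.2):
    """Calcium hydroxide and water balances of a cement-nanosilica blend.

    Implements equations (6), (7a), (7b): CH produced by cement hydration
    minus CH consumed by the pozzolanic reaction, plus the capillary and
    chemically bound water contents. Masses are per unit volume of paste.
    """
    CH = R_CHCE * C0 * alpha - R_CHSF * alpha_si * m_si0        # eq. (6)
    W_cap = W0 - 0.4 * C0 * alpha - 0.5 * alpha_si * m_si0      # eq. (7a)
    W_chem = 0.25 * C0 * alpha                                  # eq. (7b)
    return CH, W_cap, W_chem

# Example: w/b = 0.5 paste with 3% nanosilica (mix of Table 3), assuming
# 60% cement hydration and 50% nanosilica reaction at some age.
CH, W_cap, W_chem = blend_state(alpha=0.6, alpha_si=0.5,
                                C0=1187, W0=612, m_si0=37)  # kg/m^3
print(f"CH = {CH:.0f}, capillary water = {W_cap:.0f}, "
      f"bound water = {W_chem:.0f} kg/m^3")
```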
### 3.2. Simulation of the Pozzolanic Reaction in Cement-Nanosilica Blends

Because of the high specific surface area of nanosilica and its considerable pozzolanic activity, the hydration of nanosilica is assumed here to comprise two processes: a phase-boundary reaction process and a diffusion process. On this basis, following the method proposed by Saeki and Monteiro [14], the nanosilica hydration equation can be written as

$$\frac{d\alpha_{silica}}{dt} = \frac{m_{CH}(t)}{m_{silica0}}\,\frac{3\rho_w}{v_{si}\,r_{si0}\,\rho_{si}} \left[\frac{r_{si0}}{D_{esi}}\left((1-\alpha_{silica})^{-1/3} - 1\right) + \frac{1}{k_{rsi}}\,(1-\alpha_{silica})^{-2/3}\right]^{-1} \quad (8a)$$

$$D_{esi} = D_{esi0}\,\ln\!\left(\frac{1}{\alpha_{silica}}\right) \quad (8b)$$

where $m_{CH}(t)$ is the mass of calcium hydroxide in a unit volume of the hydrating cement-nanosilica blend, obtained from (6); $v_{si}$ is the stoichiometric mass ratio of CH to nanosilica; $r_{si0}$ is the radius of the nanosilica particles; $\rho_{si}$ is the density of nanosilica; $D_{esi0}$ is the initial diffusion coefficient; and $k_{rsi}$ is the reaction rate coefficient.

The influence of temperature on hydration is considered using the Arrhenius law:

$$k_{rsi} = k_{rsi20}\,\exp\!\left(-\frac{E_{si}}{R}\left(\frac{1}{T} - \frac{1}{293}\right)\right) \quad (8c)$$

$$D_{esi0} = D_{esi20}\,\exp\!\left(-\beta_{esi}\left(\frac{1}{T} - \frac{1}{293}\right)\right) \quad (8d)$$

where $k_{rsi20}$ and $D_{esi20}$ are the reaction coefficients at 20°C and $E_{si}/R$ and $\beta_{esi}$ are activation energies.

When nanosilica is incorporated in concrete, two effects explain the change in the hydration process: the pozzolanic reaction of the amorphous phases of nanosilica, and the influence of nanosilica on the hydration of cement. The present model describes the pozzolanic reaction between calcium hydroxide and nanosilica, while the influence of nanosilica on cement hydration enters through the amount of capillary water (7a) and the dilution effect (4). With the proposed model, the reaction degrees of cement and nanosilica can be determined as functions of curing time, and the properties of hardening nanosilica blended concrete can then be evaluated from these reaction degrees.

In this study, equations (1) through (8d) are used to model the hydration of nanosilica blended concrete, in which the hydration of Portland cement and the pozzolanic reaction of nanosilica proceed simultaneously. The hydration of Portland cement is simulated using (1) to (5), proposed by Tomosawa et al. [11, 12]; the pozzolanic reaction of nanosilica is described by (8a) to (8d), which are the authors' original work; and the interactions between the two reactions are considered through (6) to (7b), also original work of the authors. Tomosawa's model alone is not valid for nanosilica blended cement, where both reactions coexist; the new nanosilica reaction model and these interaction terms fill that gap.

An outline of the modeling process is shown in Figure 1. The proposed procedure considers the effects of the nanosilica replacement ratio, the water-to-binder ratio, and the curing temperature on the hydration of nanosilica blended concrete. At each time step, the calcium hydroxide content, capillary water content, chemically bound water content, gel-space ratio, and compressive strength of the hardening nanosilica blended concrete are determined from the cement hydration degree and the nanosilica reaction degree.

Figure 1: The outline of the modeling process.
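Following the outline in Figure 1, the two reaction degrees can be advanced together, coupled through the calcium hydroxide balance of equation (6). In the sketch below the nanosilica rate implements equations (8a) and (8b) with the Table 4 coefficients, while the cement kinetics are replaced by a simple first-order placeholder (the full expression is equation (1), sketched in Section 2); the particle radius follows the 15 nm size of Table 2, and the skeletal density of nanosilica is an assumed value.

```python
import math

# Coupled update outlined in Figure 1: cement hydration and the nanosilica
# pozzolanic reaction advance together, linked by the CH balance of eq. (6).
# The cement rate is a first-order placeholder standing in for eq. (1);
# the nanosilica rate implements eqs. (8a)/(8b) with Table 4 coefficients.

k_rsi, D_esi20 = 1.00e-6, 2.50e-13    # Table 4 values at 20°C [cm/h, cm^2/h]
r_si0 = 7.5e-7                        # radius of a 15 nm particle [cm]
rho_w, rho_si = 1.0, 2.2              # g/cm^3; rho_si assumed (amorphous silica)
v_si = 2.2                            # g CH per g nanosilica (Section 4.1)
R_CHCE, R_CHSF = 0.23, 2.2            # CH produced/consumed per g reacted
C0, W0, m_si0 = 1187.0, 612.0, 37.0   # Table 3 mix (w/b 0.5, 3% NS) [kg/m^3]

def silica_rate(a_si, m_CH):
    """Pozzolanic reaction rate of nanosilica, eqs. (8a) and (8b)."""
    D_esi = D_esi20 * math.log(1.0 / a_si)                    # eq. (8b)
    res = (r_si0 / D_esi) * ((1 - a_si)**(-1/3) - 1) \
          + (1 / k_rsi) * (1 - a_si)**(-2/3)
    return (m_CH / m_si0) * 3 * rho_w / (v_si * r_si0 * rho_si) / res

alpha, alpha_si, dt = 1e-4, 1e-4, 0.01      # reaction degrees, step [h]
for step in range(int(28 * 24 / dt)):
    CH = max(R_CHCE * C0 * alpha - R_CHSF * alpha_si * m_si0, 0.0)  # eq. (6)
    alpha = min(alpha + 0.01 * (1 - alpha) * dt, 0.999)    # placeholder kinetics
    alpha_si = min(alpha_si + silica_rate(alpha_si, CH) * dt, 0.999)

print(f"cement: {alpha:.2f}, nanosilica: {alpha_si:.2f} after 28 days")
```

Note how the `max(..., 0.0)` guard makes the pozzolanic reaction stall whenever the available calcium hydroxide is exhausted, which is exactly the coupling that equation (6) imposes on equation (8a).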
## 4. Experimental Investigation and Theoretical Verification of the Hydration Model

To verify the proposed model, a series of experimental investigations were conducted on nanosilica blended paste. The cementitious materials used were ordinary Portland cement (OPC) and nano-SiO2. The chemical composition of the Portland cement is shown in Table 1, and the physical and chemical properties of the nanosilica are shown in Table 2. The SiO2 purity of the nanosilica is 99.9%, and its mean particle size is 15 nm. The mixing proportions of the cement-nanosilica paste specimens are shown in Table 3. The influences of the water-to-binder ratios (0.3 and 0.5) and nanosilica replacement ratios (0.03 and 0.06) on the hydration of Portland cement were investigated.

Table 1: Compositions of Portland cement (% by mass): mineral (C3S to C4AF) and chemical (SiO2 to Na2O).

| C3S | C2S | C3A | C4AF | SiO2 | Al2O3 | Fe2O3 | CaO | MgO | SO3 | K2O | Na2O |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 58.48 | 11.16 | 8.83 | 8.74 | 19.29 | 5.16 | 2.87 | 61.68 | 4.17 | 2.53 | 0.92 | 0.21 |

Table 2: Properties of nanosilica.

| Particle size (nm) | Specific surface area (m²/g) | Density (g/cm³) | SiO2 purity (%) |
|---|---|---|---|
| 15 | 250 | 0.05 | 99.9 |
Table 3: Mixing proportions of cement-nanosilica paste specimens (kg/m³).

| Mix | Water-to-binder ratio (%) | Water | Cement | Nanosilica | AE water-reducing agent |
|---|---|---|---|---|---|
| OPC | 30 | 486 | 1619 | — | — |
| OPC | 50 | 612 | 1224 | — | — |
| OPC + NS 3 | 30 | 486 | 1570 | 49 | binder × 0.8% |
| OPC + NS 3 | 50 | 612 | 1187 | 37 | binder × 0.5% |
| OPC + NS 6 | 30 | 486 | 1522 | 97 | binder × 1.6% |
| OPC + NS 6 | 50 | 612 | 1151 | 73 | binder × 1.0% |

The specimens were prepared according to the Korean Agency for Technology and Standard KSL 5109 (testing method for mechanical mixing of hydraulic cement paste and mortars of plastic consistency), using a rotary mortar mixer. To disperse the nanosilica particles uniformly, the AE agent was first dissolved in water, after which the nanosilica was added and mixed at high speed for approximately 2 minutes; the cement was then added and mixed for approximately 1 minute. The paste specimens (10 cm × 10 cm × 40 cm) were sealed and cured in a chamber at 20°C until the testing age.

The compressive strength, chemically bound water content, and calcium hydroxide content were measured at 1, 3, 7, 14, and 28 days.

The compressive strength tests were performed according to the Korean Agency for Technology and Standard KSL 2426 (standard test method for compressive strength of mortar grout). The dimensions of the test specimens were 10 cm × 10 cm × 40 cm, and the compressive strength was measured with a universal testing machine at a loading rate of 0.4 N/mm²/s. For each mixing proportion, three specimens were tested, and the compressive strength was taken as their average.

The fractured pieces from the compression tests were preserved for the calcium hydroxide and chemically bound water tests. To stop the hydration reactions, the samples were soaked in acetone for 7 days and then placed in a vacuum desiccator overnight to remove the acetone. The samples were further dried at 60°C in an oven for 24 hours and ground to pass through a 150 μm sieve.

To determine the chemically bound water content, 1 g of the hydrated sample was dried in an oven at 105°C for 3 hours and then ignited at 1050°C in an electric furnace for 1 hour. For pastes, the chemically bound water content was calculated as the difference between the weight on ignition at 1050°C and the weight after oven drying at 105°C.

The amount of calcium hydroxide (CH) in the samples was determined by TG/DTA (thermogravimetry/differential thermal analysis). Analyses were conducted at a heating rate of 10°C/min from 20°C to 1100°C under flowing nitrogen. The mass loss corresponding to the decomposition of Ca(OH)2 occurs between 440°C and 520°C.
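Because Ca(OH)2 decomposes to CaO and H2O in this temperature window, the CH content follows from the measured mass loss by simple stoichiometry: each gram of water driven off corresponds to 74/18 ≈ 4.1 g of Ca(OH)2. A minimal sketch, with an assumed mass-loss reading:

```python
# Calcium hydroxide content from the TG mass loss between 440 and 520 °C.
# Ca(OH)2 -> CaO + H2O, so each gram of water driven off corresponds to
# 74/18 grams of Ca(OH)2. The mass-loss value below is an assumed reading.

M_CH, M_H2O = 74.09, 18.02          # molar masses [g/mol], Ca(OH)2 and H2O

mass_loss_440_520 = 0.035           # g per g of dried sample (assumed)
CH_content = mass_loss_440_520 * (M_CH / M_H2O)
print(f"CH content: {CH_content:.3f} g per g of sample")   # ~0.144 g/g
```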
### 4.1. The Amount of Calcium Hydroxide during the Hydration Process

During the hydration of ordinary Portland cement, the amount of calcium hydroxide increases until it reaches a steady state. During the hydration of cement-nanosilica blends, the evolution of the amount of CH depends on two factors: the Portland cement hydration that produces CH and the pozzolanic reaction that consumes CH. In the initial period, the production of CH is the dominant process; thereafter, the consumption of CH becomes dominant. In the experimental range, the amount of CH initially increases, reaches a maximum value, and then decreases. Using the experimentally determined calcium hydroxide content and chemically bound water content of Portland cement paste, the value of RCHCE in (6) was determined to be 0.23, which means that 1 g of hydrated cement produces 0.23 g of calcium hydroxide. Papadakis [10] likewise reported that when 1 g of cement hydrates, approximately 0.2 to 0.3 g of calcium hydroxide is produced. Additionally, based on the experimentally determined calcium hydroxide contents in Portland cement paste and nanosilica blended cement paste, the value of RCHSF in (6) was determined to be 2.2, which means that, in the pozzolanic reaction of nanosilica, 1 g of reacted nanosilica consumes 2.2 g of calcium hydroxide. Similarly, Maekawa et al. [9] proposed that when 1 g of silica fume reacts, 2 g of calcium hydroxide is consumed during the pozzolanic reaction of silica fume.

A total of 12 series of experimental results regarding the calcium hydroxide contents and chemically bound water contents are presented in Figure 2 (calcium hydroxide contents) and Figure 3 (chemically bound water contents). The w/b ratios of the specimens are 0.5 and 0.3, and the nanosilica replacement ratios are 0%, 3%, and 6%.

Figure 2: Evaluation of calcium hydroxide contents. Figure 3: Evaluation of chemically bound water contents. In each figure, panels (a) and (b) show OPC paste with w/c = 0.5 and 0.3, and panels (c) to (f) show OPC-nanosilica paste with w/b = 0.5 and 0.3 at 3% and 6% nanosilica replacement.

Calibration process: the reaction coefficients are calibrated using the calcium hydroxide contents of the w/b = 0.5 control specimen (Figure 2(a)) and the w/b = 0.5 specimen with 3% nanosilica (Figure 2(c)). In this calibration process, 2 experimental results are used. The calibrated reaction coefficients of the proposed hydration model are shown in Table 4. In this table, B, C, kr, and De0 are the reaction coefficients of cement, and krsi and Desi0 are the reaction coefficients of nanosilica.

Table 4: Reaction coefficients of the proposed hydration model.

| B (cm/h) | C (cm/h) | kr (cm/h) | De0 (cm²/h) | krsi (cm/h) | Desi0 (cm²/h) |
| --- | --- | --- | --- | --- | --- |
| 4.310 × 10⁻⁷ | 0.035 | 1.073 × 10⁻⁵ | 5.474 × 10⁻⁸ | 1.00 × 10⁻⁶ | 2.50 × 10⁻¹³ |

Validation process: with the calibrated coefficients, the remaining experimental results, namely the calcium hydroxide contents and chemically bound water contents shown in Figures 2(b), 2(d) to 2(f), and 3(a) to 3(f), can be predicted. In this validation process, 10 experimental results are used. In the proposed model, the reaction coefficients do not vary with the nanosilica replacement ratio or the water-to-binder ratio.

In summary, only a small number of experimental results are required to calibrate the reaction coefficients of the proposed theoretical model, after which a large number of experimental results can be predicted. Time-consuming and expensive experimental investigations can thus be significantly reduced.
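With the coefficients calibrated, the CH balance of (6) reduces to one line once the reaction degrees are known. The following sketch, which is not the authors' implementation, shows how the calibrated values RCHCE = 0.23 and RCHSF = 2.2 enter the prediction; the mix values and reaction degrees in the example are hypothetical, with the degrees assumed to be supplied by the hydration model.

```python
# Minimal sketch (not the authors' code): calcium hydroxide balance of (6)
# with the calibrated coefficients reported in Section 4.1.

R_CH_CE = 0.23  # g of CH produced per g of hydrated cement (calibrated)
R_CH_SF = 2.2   # g of CH consumed per g of reacted nanosilica (calibrated)

def calcium_hydroxide(alpha: float, alpha_silica: float,
                      C0: float, m_silica0: float) -> float:
    """CH content from (6): production by cement hydration minus
    consumption by the pozzolanic reaction of nanosilica.

    alpha        : degree of cement hydration (0..1)
    alpha_silica : degree of nanosilica reaction (0..1)
    C0           : cement content of the mix (kg/m^3)
    m_silica0    : nanosilica content of the mix (kg/m^3)
    """
    return R_CH_CE * C0 * alpha - R_CH_SF * alpha_silica * m_silica0

# Hypothetical example: w/b = 0.5 mix with 3% nanosilica (Table 3),
# with assumed reaction degrees at some age.
print(calcium_hydroxide(alpha=0.6, alpha_silica=0.5, C0=1187.0, m_silica0=37.0))
# 0.23*1187*0.6 - 2.2*0.5*37 = 163.8 - 40.7 ≈ 123.1 kg/m^3
```

The measured rise-then-fall of the CH curves follows directly from this balance: early on the production term dominates, while the consumption term grows as the nanosilica reaction degree increases.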
The evolution of the CH amount is shown as a function of the hydration time in Figure 2. As shown in Figure 2, the simulation results overall agree well with the experimental results. When the water-to-binder ratio changes from 0.5 (Figures 2(a), 2(c), and 2(e)) to 0.3 (Figures 2(b), 2(d), and 2(f)), the amount of calcium hydroxide produced per 1 g of binder decreases due to the reduction in capillary water and in deposition space for hydration products. When the nanosilica replacement ratio increases from 0.03 (Figures 2(c) and 2(d)) to 0.06 (Figures 2(e) and 2(f)), the calcium hydroxide content correspondingly decreases due to the enhanced pozzolanic reaction of nanosilica. The proposed model can thus simulate the effects of the water-to-binder ratio and the nanosilica replacement ratio on the hydration of Portland cement.

### 4.2. The Mass of Chemically Bound Water during the Hydration Process

As proposed in the hydration model and in reference [10], the chemically bound water comes mainly from the hydration of Portland cement; the pozzolanic reaction between calcium hydroxide and nanosilica binds no additional water beyond that already contained in the CH molecules. The evolution of the mass of bound water is shown as a function of the hydration time in Figure 3. As shown in Figure 3, the simulated results overall agree well with the experimental results. When the water-to-binder ratio changes from 0.5 (Figures 3(a), 3(c), and 3(e)) to 0.3 (Figures 3(b), 3(d), and 3(f)), the chemically bound water produced per 1 g of binder decreases due to the reduction in capillary water and in deposition space for hydration products.
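This division of labor is exactly what the water balance of (7a) and (7b) encodes: chemically bound water comes only from cement hydration, while the pozzolanic reaction consumes capillary water as gel water. Below is a minimal sketch using the coefficients from the paper's equations (0.4 g of water per g of hydrated cement, 0.5 g of gel water per g of reacted nanosilica, and 0.25 g of chemically bound water per g of hydrated cement); the mix values and reaction degrees in the example are hypothetical.

```python
# Minimal sketch (not the authors' code) of the water balance in (7a)-(7b).

def capillary_water(W0: float, C0: float, m_silica0: float,
                    alpha: float, alpha_silica: float) -> float:
    """Wcap from (7a): mixing water minus water consumed by cement
    hydration (0.4 g/g) and gel water adsorbed in the nanosilica
    hydration products (0.5 g/g)."""
    return W0 - 0.4 * C0 * alpha - 0.5 * alpha_silica * m_silica0

def chemically_bound_water(C0: float, alpha: float) -> float:
    """Wchem from (7b): only cement hydration binds water chemically
    (0.25 g per g of hydrated cement); the pozzolanic reaction adds none."""
    return 0.25 * C0 * alpha

# Hypothetical example: w/b = 0.5 mix with 3% nanosilica (Table 3),
# with assumed reaction degrees at some age.
W0, C0, m_si = 612.0, 1187.0, 37.0
print(capillary_water(W0, C0, m_si, alpha=0.6, alpha_silica=0.5))  # ≈ 317.9
print(chemically_bound_water(C0, alpha=0.6))                       # ≈ 178.1
```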
As shown in Figures 2 and 3, the analysis results generally reproduce the experimental results. However, in some cases (Figures 2(e), 2(f), 3(b), 3(d), and 3(f)), the modeling results disagree with the experimental results. These differences arise for the following reasons.

When calculating the calcium hydroxide contents in pastes with the higher nanosilica replacement level (Figures 2(e) and 2(f), 6% replacement), the increased nanosilica content greatly enlarges the surface area of the cementitious materials. At this replacement level, adding enough superplasticizer to lubricate the nanosilica particles and break up their agglomerates is difficult because of the very large specific surface area of nanosilica. The nanosilica therefore agglomerates slightly, and the pozzolanic reaction is delayed. The agglomeration of nanosilica is not considered in the proposed model; consequently, for pastes with higher nanosilica contents, the peaks in the analysis results occur earlier than in the experimental results.

When calculating the chemically bound water contents in pastes with the lower water-to-binder ratio (Figures 3(b), 3(d), and 3(f), water-to-binder ratio 0.3), the lower water-to-binder ratio raises the degree of supersaturation of the pore solution and shortens the initial dormant period of the paste, so the hydration reaction is correspondingly accelerated. The proposed model does not take into account the influences of ion concentrations or the degree of supersaturation of the pore solution on the hydration kinetics of cement-based materials. Therefore, for pastes with lower water-to-binder ratios, the early-age analysis results are slightly lower than the experimental results.

Hence, at the current stage the model is incomplete because some variables are not accounted for; it will be refined in future work.

### 4.3. The Evaluation of Compressive Strength

It is well known that the compressive strength of concrete depends on the gel/space ratio, which is determined from the degree of cement hydration and the w/c ratio. The gel/space ratio is defined as the ratio of the volume of the hydrated cement to the sum of the volumes of the hydrated cement and of the capillary pores [15]. For Portland cement pastes, it is approximately assumed that 1 mL of hydrated cement occupies 2.06 mL, and, for the nanosilica pozzolanic reaction, 1 mL of reacted nanosilica is considered to occupy 2.52 mL of space [15]. Therefore, the gel/space ratio of cement-nanosilica paste is given by

$$x_{fc} = \frac{2.06\,(1/\rho)\,\alpha C_0 + 2.52\,(1/\rho_{si})\,\alpha_{silica}\,m_{silica0}}{(1/\rho)\,\alpha C_0 + (1/\rho_{si})\,\alpha_{silica}\,m_{silica0} + W_0}, \tag{9}$$

where $x_{fc}$ is the gel/space ratio of the blended cement paste and $\rho$ and $\rho_{si}$ are the densities of cement and nanosilica, respectively. Note that the volume change of nanosilica is larger than that of the anhydrous cement (2.52 versus 2.06). This may partly be due to the lower density of the pozzolanic hydration products and may indicate that pozzolanic reaction products are more effective in filling pores [15].

Furthermore, the development of compressive strength in blended concrete can be evaluated through Powers' strength theory as

$$f_c = A\,x_{fc}^{\,n}, \tag{10a}$$

$$A = a\,\frac{C_0}{C_0 + m_{silica0}} + b\,\frac{m_{silica0}}{C_0 + m_{silica0}}, \tag{10b}$$

where $f_c$ is the compressive strength of the blended concrete and $A$ is the intrinsic strength of the material, expressed as a function of the weight fractions of cement and mineral admixture in the mixing proportion. The coefficients $a$ and $b$ in (10b) represent the contributions of cement and mineral admixture, respectively, to the intrinsic strength and have units of MPa; $n$ is an exponent.

Based on the compressive strength of the nanosilica blended pastes, the coefficients are a = 128 and b = 160, with n = 3.0 (water/binder = 0.5) and n = 2.3 (water/binder = 0.3). The decrease in n may be attributed to the homogenization of hydration products in specimens with lower water-to-binder ratios [15]. Comparisons between the experimental and predicted results are shown in Figure 4. Compared with the control specimens (Figures 4(a) and 4(b)), at the early age of 1 day the compressive strength of the nanosilica blended pastes (Figures 4(c) to 4(f)) is significantly increased. In contrast, at the late age of 28 days the compressive strength of the nanosilica blended pastes is only slightly higher than that of the control specimens. Li et al. [2] obtained similar results: for nanosilica blended paste, the compressive strength enhancement is considerably more significant at an early age.

Figure 4: Evaluation of compressive strength developments. Panels (a) and (b): OPC paste with w/c = 0.5 and 0.3; panels (c) to (f): OPC-nanosilica paste with w/b = 0.5 and 0.3 at 3% and 6% nanosilica replacement.
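As a worked illustration of (9) through (10b), the sketch below chains the gel/space ratio into Powers' strength relation with the fitted coefficients a = 128, b = 160, and n = 3.0 for w/b = 0.5. It is not the authors' code: the densities of cement and nanosilica and the reaction degrees are assumed values chosen only to make the example run, and treating W0 as a water mass converted to volume at 1000 kg/m³ is our reading of (9).

```python
# Minimal sketch (not the authors' code): gel/space ratio (9) and
# Powers' strength relation (10a)-(10b) with the fitted coefficients.

RHO_W = 1000.0  # assumed density of water, kg/m^3

def gel_space_ratio(alpha, alpha_si, C0, m_si0, W0, rho_c, rho_si):
    """x_fc from (9): 1 mL of hydrated cement occupies 2.06 mL and
    1 mL of reacted nanosilica occupies 2.52 mL; the denominator is the
    space available to gel (reacted solids plus mixing water volume)."""
    v_cem = alpha * C0 / rho_c        # volume of reacted cement
    v_si = alpha_si * m_si0 / rho_si  # volume of reacted nanosilica
    return (2.06 * v_cem + 2.52 * v_si) / (v_cem + v_si + W0 / RHO_W)

def powers_strength(x_fc, C0, m_si0, a=128.0, b=160.0, n=3.0):
    """f_c = A * x_fc**n from (10a); A from (10b) is the intrinsic strength
    weighted by the cement and admixture mass fractions (a, b in MPa)."""
    A = (a * C0 + b * m_si0) / (C0 + m_si0)
    return A * x_fc ** n

# Hypothetical example: w/b = 0.5 mix with 3% nanosilica (Table 3).
# Densities (kg/m^3) and reaction degrees are assumed illustration values.
x = gel_space_ratio(alpha=0.6, alpha_si=0.5, C0=1187.0, m_si0=37.0,
                    W0=612.0, rho_c=3150.0, rho_si=2200.0)
print(x)                                           # ≈ 0.58
print(powers_strength(x, C0=1187.0, m_si0=37.0))   # ≈ 25 MPa
```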
## 5. Conclusions

This paper presents the results of experimental investigations and theoretical modeling of the hydration reaction of nanosilica blended concrete. The contents of chemically bound water in nanosilica blended paste are similar to those in the control specimens, which means that the pozzolanic reaction of nanosilica does not produce chemically bound water. Due to the pozzolanic reaction of nanosilica, the contents of calcium hydroxide in nanosilica blended paste are much lower than those in the control specimens. Compared with the control specimens, the extent of compressive strength enhancement in the nanosilica blended specimens is considerably higher at early ages.

A numerical model is proposed to simulate the hydration of concrete containing nanosilica. The reaction coefficients of nanosilica are obtained from the experimentally determined calcium hydroxide contents in nanosilica blended concrete. The degree of cement hydration and the degree of nanosilica reaction are calculated with the proposed hydration model. The development of compressive strength in nanosilica blended paste is evaluated using the gel/space ratio, which accounts for the contributions of both cement hydration and the nanosilica reaction. The calculated results for chemically bound water, calcium hydroxide, and compressive strength generally agree with the experimental results.

---
*Source: 102392-2014-08-11.xml*
102392-2014-08-11_102392-2014-08-11.md
46,341
Experimental Investigation and Theoretical Modeling of Nanosilica Activity in Concrete
Han-Seung Lee; Hyeong-Kyu Cho; Xiao-Yong Wang
Journal of Nanomaterials (2014)
Engineering & Technology
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2014/102392
102392-2014-08-11.xml
--- ## Abstract This paper presents experimental investigations and theoretical modeling of the hydration reaction of nanosilica blended concrete with different water-to-binder ratios and different nanosilica replacement ratios. The developments of chemically bound water contents, calcium hydroxide contents, and compressive strength of Portland cement control specimens and nanosilica blended specimens were measured at different ages: 1 day, 3 days, 7 days, 14 days, and 28 days. Due to the pozzolanic reaction of nanosilica, the contents of calcium hydroxide in nanosilica blended pastes are considerably lower than those in the control specimens. Compared with the control specimens, the extent of compressive strength enhancement in the nanosilica blended specimens is much higher at early ages. Additionally, a blended cement hydration model that considers both the hydration reaction of cement and the pozzolanic reaction of nanosilica is proposed. The properties of nanosilica blended concrete during hardening were evaluated using the degree of hydration of cement and the reaction degree of nanosilica. The calculated chemically bound water contents, calcium hydroxide contents, and compressive strength were generally consistent with the experimental results. --- ## Body ## 1. Introduction Scientists have considered nanomaterials to be the most promising materials of the 21st century. In recent years, considerable attention has been focused on civil engineering applications for nanomaterials because nanoparticles possess many unique properties due to their small size, such as large specific surface areas and high activity [1].Many experimental studies have investigated the influence of adding nanosilica on the properties of concrete. Li et al. [2] showed that the compressive and flexural strengths of cement mortars mixed with nanoparticles measured on the 7th day and the 28th day were higher than those of a plain cement mortar. Zhang and Li [3] found that the addition of nanoparticles refines the pore structure of concrete and enhances the resistance of concrete to chloride penetration. Ji [4] found that the microstructure of concrete containing nano-SiO2 is more uniform and compact than that of normal concrete and that the incorporation of nano-SiO2 can improve the resistance of concrete to water penetration. Jo et al. [5] found that nanoparticles are more effective at enhancing strength than silica fume and that nanoscale SiO2 functions not only as a filler to improve the microstructure but also as an activator to promote pozzolanic reactions.Although numerous experimental studies have investigated the physical and chemical properties of nanosilica blended concrete, research on the modeling of hydration of nanosilica blended concrete is very limited. Some existing hydration models [6–10] are only valid for traditional mineral admixture blended concretes, such as slag blended concrete, fly ash blended concrete, and silica fume blended concrete. To fill this knowledge gap, this paper proposes a blended cement hydration model that considers both the hydration reaction of cement and the pozzolanic reaction of nanosilica. The properties of nanosilica blended concrete during hardening, such as the content of chemically bound water, the content of calcium hydroxide, and the development of compressive strength, were evaluated using the hydration degree of cement and the reaction degree of nanosilica. ## 2. Hydration Model of Portland Cement Tomosawa et al. 
[11, 12] developed a shrinking-core model to simulate the development of cement hydration. This model is expressed as a single equation that consists of three coefficients: kd is the reaction coefficient in the induction period; De is the effective diffusion coefficient of water through the C–S–H gel; and kr is a coefficient of the reaction rate of cement, as shown in (1) below. These coefficients determine the rate of mass transport through the initial shell layer, the rate of the phase boundary reaction process, and the rate of the diffusion-controlled process. The modeled cement particles are assumed to be spheres surrounded by hydration product. Based on this theory, the rate of cement hydration is derived as follows: (1)dαdt=3(Sw/S0)ρwCw-free(v+wg)r0ρc×((1kd-r0De)(1)((1kd-r0De)+r0De(1-α)-1/3+1kr(1-α)-2/3(1kd-r0De))-1), where α is the degree of cement hydration; ν is the stoichiometric ratio by mass of water to cement (=0.25); wg is the physically bound water in C–S–H gel (=0.15); ρw is the density of water; Cw-free is the amount of water at the exterior of the C–S–H gel; r0 is the radius of unhydrated cement particles (r0=3/(Sρc), where S and ρc represent the Blaine surface area and density of the cement, resp.); Sw is the effective surface area of the cement particles in contact with water; and S0 is the total surface area if the surface area develops unconstrained.The reaction coefficientkd is assumed to be a function of the degree of hydration, as shown in (2), where B and C are the coefficients that determine this factor; B controls the rate of the initial shell formation and C controls the rate of the initial shell decay. Consider (2)kd=Bα1.5+Cα3.The effective diffusion coefficient of water is affected by the tortuosity of the gel pores and by the radii of the gel pores in the hydrate. This phenomenon can be described as a function of the degree of hydration and is expressed as follows:(3)De=De0ln⁡(1α).In addition, free water in the capillary pores is depleted as the hydration of cement minerals progresses. Some water is bound in the gel pores, and this water is not available for further hydration, which is an effect that must be taken into consideration during every step of the hydration process. Therefore, the amount of water in the capillary poresCw-free is expressed as a function of the degree of hydration in the previous step, as shown in (4)Cw-free=W0-0.4*α*C0W0, where C0 and W0 are the mass fractions of cement and water in the mix proportion.The effect of temperature on these reaction coefficients is assumed to follow Arrhenius’s law, as shown in(5)B=B20exp⁡(-β1(1T-1293)),C=C20exp⁡(-β2(1T-1293)),kr=kr20exp⁡(-ER(1T-1293)),De=De20exp⁡(-β3(1T-1293)), where β1, β2, E/R, and β3 are temperature sensitivity coefficients and B20, C20, kr20, and De20 are the values of B, C, kr, and De at 20°C, respectively.Using the proposed Portland cement hydration model, Tomosawa et al. [11, 12] evaluated the heat evolution rate, chemically bound water content, and compressive strength of hardening concrete. However, note that Tomosawa’s model is only valid for Portland cement. To model the hydration of nanosilica blended concrete, a reaction model for nanosilica should be constructed and the mutual interactions between cement hydration and the nanosilica reaction should be clarified. ## 3. Hydration Model for Cement Blended with Nanosilica ### 3.1. 
The Amount of Calcium Hydroxide (CH) during the Hydration Process During the hydration of Portland cement paste, the amount of calcium hydroxide is directly proportional to the degree of hydration of cement [11]. Furthermore, Papadakis [10] proposed that the amount of calcium hydroxide in cement-nanosilica blends during hydration can be determined with the following equation: (6)CH=RCHCE*C0*α-RCHSF*αsilica*msilica0. In (6), RCHCE is the mass of calcium hydroxide produced from the hydration of cement; RCHSF is the mass of calcium hydroxide consumed during the pozzolanic reaction of nanosilica; αsilica is the degree of hydration of the glass (active) phase of nanosilica; and msilica0 is the mass of nanosilica in the mixing proportion. In this equation, the term RCHCE*C0*α considers the production of calcium hydroxide from the hydration of cement and the term -RCHSF*αsilica*msilica0 considers the consumption of calcium hydroxide during the pozzolanic reaction. The value of RCHCE is related to the chemical compositions of Portland cement and can be obtained from the experimentally determined calcium hydroxide content and chemically bound water content in Portland cement paste. The value of RCHSF can be derived from the experimentally determined calcium hydroxide contents in Portland cement paste and nanosilica blended cement paste.Similar to the hydration of cement, as the pozzolanic reaction proceeds, water will be physically adsorbed in the nanosilica hydration products. Lura et al. [13] proposed that, for the silica fume pozzolanic reaction, when 1 g of silica fume reacts, 0.5 g of gel water and 0 g of chemical water will be consumed. Hence, the masses of capillary water and chemically bound water during the hydration of cement-nanosilica blends can be rewritten as in the following equations: (7a)Wcap=W0-0.4*C0*α-0.5*αsilica*msilica0,(7b)Wchem=0.25*C0*α,where Wcap and Wchem are the masses of capillary water and chemically bound water, respectively. In (7a) and (7b), the term 0.4*C0*α considers the reduction of capillary water due to cement hydration. The term 0.5*αsilica*msilica0 considers the reduction of capillary water due to the pozzolanic reaction [13]. ### 3.2. Simulation of the Pozzolanic Reaction in Cement-Nanosilica Blends Because of the high specific surface area of nanosilica and its considerable pozzolanic activity, in this paper, it is assumed that the hydration of nanosilica includes two processes: phase-boundary reaction process and diffusion process. Considering these points, based on the method proposed by Saeki and Monteiro [14], the nanosilica hydration equation can be written as (8a)dαsilicadt=mCH(t)msilica03ρwvsirsi0ρsi×((1krsi)(1)(rsioDesi(1-αsilica)-1/3-rsi0Desi+1krsi(1-αsilica)-2/3)-1),(8b)Desi=Desi0*ln⁡(1αsilica), where mCH(t) is the mass of calcium hydroxide in a unit volume of hydrating cement-nanosilica blends and can be obtained from (6). vsi is the stoichiometric ratio of CH to nanosilica by mass. rsi0 is the radius of the nanosilica particles. ρsi is the density of nanosilica. 
Desi0 is the initial diffusion coefficient, and krsi is the reaction rate coefficient.The influence of temperature on hydration is considered using the Arrhenius law, as in the following equations:(8c)krsi=krsi20exp⁡[-EsiR(1/T-1/293)],(8d)Desi0=Desi20exp⁡[-βesi(1T-1293)],where krsi20 and Desi20 are reaction coefficients at 20°C and Esi/R and βesi are activation energies.When nanosilica is incorporated in concrete, two possible reasons may be used to explain the change in the hydration process. The first is the pozzolanic reaction of amorphous phases in nanosilica, and the second is the influence of nanosilica on the hydration of cement. In the current paper, a new model is proposed that can describe the pozzolanic reaction between calcium hydroxide and nanosilica. In addition, the influence of nanosilica on the hydration of cement is considered through the amount of capillary water (7a) and the dilution effect (4). By using the proposed model, the reaction degrees of cement and nanosilica can be determined as functions of the curing time. Furthermore, the properties of nanosilica blended concrete during hardening can be evaluated from the reaction degrees of cement and nanosilica.In this study, (1) through (8a), (8b), (8c), and (8d) are used to model the hydration of nanosilica blended concrete. For nanosilica blended concrete, the hydration of Portland cement and the pozzolanic reaction of nanosilica occur simultaneously. The hydration of Portland cement is simulated using (1) to (5), which were proposed by Tomosawa et al. [11, 12]. The pozzolanic reaction of nanosilica is described using (8a) to (8d), which are the authors’ original work. The interactions between the hydration of cement and the pozzolanic reaction of nanosilica are considered using (6) to (7a) and (7b), which are also original work of the authors.Note that Tomosawa’s model is only valid for Portland cement; this model is not valid for nanosilica blended cement because of the coexistence of Portland cement hydration and the pozzolanic reaction of nanosilica. To model the hydration of nanosilica blended concrete, a new reaction model of nanosilica is constructed, and the mutual interactions between cement hydration and the nanosilica reaction are clarified.An outline of the modeling process is shown in Figure1. The proposed procedure considered the effects of nanosilica replacement ratios, water-to-binder ratios, and curing temperatures on the hydration of nanosilica blended concrete. At each time step, the calcium hydroxide contents, capillary water contents, chemically bound water contents, gel-space ratio, and compressive strength of hardening nanosilica blended concrete are determined using the cement hydration degree and the nanosilica reaction degree.Figure 1 The outline of modeling. ## 3.1. The Amount of Calcium Hydroxide (CH) during the Hydration Process During the hydration of Portland cement paste, the amount of calcium hydroxide is directly proportional to the degree of hydration of cement [11]. Furthermore, Papadakis [10] proposed that the amount of calcium hydroxide in cement-nanosilica blends during hydration can be determined with the following equation: (6)CH=RCHCE*C0*α-RCHSF*αsilica*msilica0. 
In (6), RCHCE is the mass of calcium hydroxide produced from the hydration of cement; RCHSF is the mass of calcium hydroxide consumed during the pozzolanic reaction of nanosilica; αsilica is the degree of hydration of the glass (active) phase of nanosilica; and msilica0 is the mass of nanosilica in the mixing proportion. In this equation, the term RCHCE*C0*α considers the production of calcium hydroxide from the hydration of cement and the term -RCHSF*αsilica*msilica0 considers the consumption of calcium hydroxide during the pozzolanic reaction. The value of RCHCE is related to the chemical compositions of Portland cement and can be obtained from the experimentally determined calcium hydroxide content and chemically bound water content in Portland cement paste. The value of RCHSF can be derived from the experimentally determined calcium hydroxide contents in Portland cement paste and nanosilica blended cement paste.Similar to the hydration of cement, as the pozzolanic reaction proceeds, water will be physically adsorbed in the nanosilica hydration products. Lura et al. [13] proposed that, for the silica fume pozzolanic reaction, when 1 g of silica fume reacts, 0.5 g of gel water and 0 g of chemical water will be consumed. Hence, the masses of capillary water and chemically bound water during the hydration of cement-nanosilica blends can be rewritten as in the following equations: (7a)Wcap=W0-0.4*C0*α-0.5*αsilica*msilica0,(7b)Wchem=0.25*C0*α,where Wcap and Wchem are the masses of capillary water and chemically bound water, respectively. In (7a) and (7b), the term 0.4*C0*α considers the reduction of capillary water due to cement hydration. The term 0.5*αsilica*msilica0 considers the reduction of capillary water due to the pozzolanic reaction [13]. ## 3.2. Simulation of the Pozzolanic Reaction in Cement-Nanosilica Blends Because of the high specific surface area of nanosilica and its considerable pozzolanic activity, in this paper, it is assumed that the hydration of nanosilica includes two processes: phase-boundary reaction process and diffusion process. Considering these points, based on the method proposed by Saeki and Monteiro [14], the nanosilica hydration equation can be written as (8a)dαsilicadt=mCH(t)msilica03ρwvsirsi0ρsi×((1krsi)(1)(rsioDesi(1-αsilica)-1/3-rsi0Desi+1krsi(1-αsilica)-2/3)-1),(8b)Desi=Desi0*ln⁡(1αsilica), where mCH(t) is the mass of calcium hydroxide in a unit volume of hydrating cement-nanosilica blends and can be obtained from (6). vsi is the stoichiometric ratio of CH to nanosilica by mass. rsi0 is the radius of the nanosilica particles. ρsi is the density of nanosilica. Desi0 is the initial diffusion coefficient, and krsi is the reaction rate coefficient.The influence of temperature on hydration is considered using the Arrhenius law, as in the following equations:(8c)krsi=krsi20exp⁡[-EsiR(1/T-1/293)],(8d)Desi0=Desi20exp⁡[-βesi(1T-1293)],where krsi20 and Desi20 are reaction coefficients at 20°C and Esi/R and βesi are activation energies.When nanosilica is incorporated in concrete, two possible reasons may be used to explain the change in the hydration process. The first is the pozzolanic reaction of amorphous phases in nanosilica, and the second is the influence of nanosilica on the hydration of cement. In the current paper, a new model is proposed that can describe the pozzolanic reaction between calcium hydroxide and nanosilica. 
In addition, the influence of nanosilica on the hydration of cement is considered through the amount of capillary water (7a) and the dilution effect (4). By using the proposed model, the reaction degrees of cement and nanosilica can be determined as functions of the curing time. Furthermore, the properties of nanosilica blended concrete during hardening can be evaluated from the reaction degrees of cement and nanosilica.In this study, (1) through (8a), (8b), (8c), and (8d) are used to model the hydration of nanosilica blended concrete. For nanosilica blended concrete, the hydration of Portland cement and the pozzolanic reaction of nanosilica occur simultaneously. The hydration of Portland cement is simulated using (1) to (5), which were proposed by Tomosawa et al. [11, 12]. The pozzolanic reaction of nanosilica is described using (8a) to (8d), which are the authors’ original work. The interactions between the hydration of cement and the pozzolanic reaction of nanosilica are considered using (6) to (7a) and (7b), which are also original work of the authors.Note that Tomosawa’s model is only valid for Portland cement; this model is not valid for nanosilica blended cement because of the coexistence of Portland cement hydration and the pozzolanic reaction of nanosilica. To model the hydration of nanosilica blended concrete, a new reaction model of nanosilica is constructed, and the mutual interactions between cement hydration and the nanosilica reaction are clarified.An outline of the modeling process is shown in Figure1. The proposed procedure considered the effects of nanosilica replacement ratios, water-to-binder ratios, and curing temperatures on the hydration of nanosilica blended concrete. At each time step, the calcium hydroxide contents, capillary water contents, chemically bound water contents, gel-space ratio, and compressive strength of hardening nanosilica blended concrete are determined using the cement hydration degree and the nanosilica reaction degree.Figure 1 The outline of modeling. ## 4. Experimental Investigation and Theoretical Verification of the Hydration Model To verify the proposed model, a series of experimental investigations were conducted to investigate the properties of nanosilica blended paste. The cementitious materials used were ordinary Portland cement (OPC) and nano-SiO2. The chemical composition of the Portland cement is shown in Table 1, and the physical and chemical properties of the nanosilica are shown in Table 2. The SiO2 purity of the nanosilica is 0.999, and the mean particle size of the nanosilica is 15 nanometers. The mixing proportions of the cement-nanosilica paste specimens are shown in Table 3. The influences of the water-to-binder ratios (0.3 and 0.5) and nanosilica replacement ratios (0.03 and 0.06) on the hydration of Portland cement were investigated.Table 1 Mineral compositions of Portland cement. Mineral composition (% by mass) Chemical composition (% by mass) C3S C2S C3A C4AF SiO2 Al2O3 Fe2O3 CaO MgO SO3 K2O Na2O 58.48 11.16 8.83 8.74 19.29 5.16 2.87 61.68 4.17 2.53 0.92 0.21Table 2 Properties of nanosilica. Particle size (nm) Specific surface area (m2/g) Density (g/cm3) SiO2 purity (%) 15 250 0.05 99.9Table 3 Mixing proportions of cement-nanosilica paste specimens. 
Water-to-binder ratio (%) Mixing proportions (kg/m3) Water Cement Nanosilica AE water reducing agent OPC 30 486 1619 — 50 612 1224 — OPC + NS 3 30 486 1570 49 Binder * 0.8% 50 612 1187 37 Binder * 0.5% OPC + NS 6 30 486 1522 97 Binder * 1.6% 50 612 1151 73 Binder * 1.0%The specimens were prepared according to the Korean Agency for Technology and Standard KSL 5109 (testing method for mechanical mixing of hydraulic cement paste and mortars of plastic consistency). A rotary mortar mixer was used to prepare specimens. To uniformly disperse nanosilica particles, after dissolving the AE agent in water, nanosilica was added to water and mixed at a high speed for approximately 2 minutes. Then, the cement was added into the mixer and mixed for approximately 1 minute. After preparing the paste specimens (10 cm × 10 cm × 40 cm), the specimens were sealed and cured in a chamber at 20°C until they reached the testing age.The compressive strength, chemically bound water content, and calcium hydroxide content were measured at different ages: 1 day and 3, 7, 14, and 28 days.The compressive strength tests were performed according to the Korean Agency for Technology and Standard KSL 2426 (standard test method for compressive strength of mortar grout). The dimensions of the test specimens were 10 cm × 10 cm × 40 cm. The compressive strength was measured using a universal testing machine with a loading rate of 0.4 N/mm2/s. For each mixing proportion, three specimens were tested, and the compressive strength was determined from the average value of three specimens.The fractured pieces after the compression test were preserved for the calcium hydroxide tests and chemically bound water tests. To stop the hydration reactions, the samples were soaked in acetone for 7 days and were then placed in a vacuum desiccator overnight to remove the acetone. These samples were further dried at 60°C in an oven for 24 hours and were ground to pass through a 150μm sieve.To determine the content of chemically bound water, 1 g of the hydrated sample was dried in an oven at 105°C for 3 hours and was then ignited at 1050°C in an electric furnace for 1 hour. For pastes, the chemically bound water content was calculated as the difference between the weight on ignition at 1050°C and the weight after oven drying at 105°C.The amount of calcium hydroxide (CH) in the samples was determined by TG/DTA (thermogravimetry/differential thermal analysis). Analyses were conducted at a heating rate of 10°C/min from 20°C to 1100°C under flowing nitrogen. The mass loss corresponding to the decomposition of Ca(OH)2 occurs between 440°C and 520°C. ### 4.1. The Amount of Calcium Hydroxide during the Hydration Process During the hydration of ordinary Portland cement, the amount of calcium hydroxide will increase until it reaches a steady state. During the hydration of cement-nanosilica blends, the evolution of the amount of CH depends on two factors: the Portland cement hydration that produces CH and the pozzolanic reaction that consumes CH. In the initial period, the production of CH is the dominant process, and then the consumption of CH becomes the dominant process. In the experimental range, the amount of CH initially increases, reaches a maximum value, and then decreases. Using the experimentally determined calcium hydroxide content and chemically bound water content of Portland cement paste, the value ofRCHCE in (6) was determined to be 0.23, which means that 1 g of hydrated cement will produce 0.23 g of calcium hydroxide. 
Papadakis [10] also reported that when 1 g of cement hydrates, approximately 0.2~0.3 g of calcium hydroxide will be produced. Additionally, based on the experimentally determined calcium hydroxide contents in Portland cement paste and nanosilica blended cement paste, the value of RCHSF in (6) was determined to be 2.2, which means that, for the pozzolanic reaction of nanosilica, 1 g of reacted nanosilica will consume 2.2 g of calcium hydroxide. Similarly, Maekawa et al. [9] proposed that when 1 g of silica fume reacts, 2 g of calcium hydroxide will be consumed during the pozzolanic reaction of silica fume.A total of 12 series of experimental results regarding the calcium hydroxide contents and chemically bound water contents are presented in Figure2 (calcium hydroxide contents) and Figure 3 (chemically bound water contents). Thew/b ratios of the specimens are 0.5 and 0.3, and the nanosilica replacement ratios are 0%, 3%, and 6%.Evaluation of calcium hydroxide contents. (a) OPC paste with water-to-cement ratio 0.5 (b) OPC paste with water-to-cement ratio 0.3 (c) OPC-nanosilica paste with water-to-binder ratio 0.5 and 3% nanosilica (d) OPC-nanosilica paste with water-to-binder ratio 0.3 and 3% nanosilica (e) OPC-nanosilica paste with water-to-binder ratio 0.5 and 6% nanosilica (f) OPC-nanosilica paste with water-to-binder ratio 0.3 and 6% nanosilicaEvaluation of chemically bound water contents. (a) OPC paste with water-to-cement ratio 0.5 (b) OPC paste with water-to-cement ratio 0.3 (c) OPC-nanosilica paste with water-to-binder ratio 0.5 and 3% nanosilica (d) OPC-nanosilica paste with water-to-binder ratio 0.3 and 3% nanosilica (e) OPC-nanosilica paste with water-to-binder ratio 0.5 and 6% nanosilica (f) OPC-nanosilica paste with water-to-binder ratio 0.3 and 6% nanosilicaCalibration process: in the calibration process, using calcium hydroxide contents forw/b = 0.5 control specimen (shown in Figure 2(a)) andw/b = 0.5 for the 3% nanosilica specimen (shown in Figure 2(c)), the reaction coefficients are calibrated. In this calibration process, 2 experimental results are used. The calibrated reaction coefficients of the proposed hydration model are shown in Table 4. In this table, B, C, kr, and De0 are the reaction coefficients of cement and krsi and Desi0 are the reaction coefficients of nanosilica.Table 4 Reaction coefficients of the proposed hydration model. B (cm/h) C (cm/h) k r (cm/h) D e 0 (cm2/h) k r s i (cm/h) D e s i 0 (cm2/h) 4.310 * 10−7 0.035 1.073 * 10−5 5.474 * 10−8 1.00 * 10−6 2.50 * 10−13Validation process: in the validation process, the remaining experimental results, such as the calcium hydroxide contents and chemically bound water contents shown in Figures2(b), 2(d) to 2(f), and 3(a) to 3(f), can be predicted. In this validation process, 10 experimental results are used. In the proposed model, the reaction coefficients do not vary with the nanosilica replacement ratios and water-to-binder ratios.In summary, to use the proposed theoretical model, a small number of experimental results are required to calibrate the reaction coefficients. Furthermore, a large number of experimental results can be predicted. The time-consuming and expensive experimental investigations can be significantly reduced.The evolution of the CH amount is shown as a function of the hydration time in Figure2. As shown in Figure 2, the simulation results overall agree well with experimental results. 
When the water-to- binder ratios change from 0.5 (Figures 2(a), 2(c), and 2(e)) to 0.3 (Figures 2(b), 2(d), and 2(f)), the amount of calcium hydroxide produced for 1 g of binder will decrease due to the reduction in capillary water and deposition space for hydration products. When the nanosilica replacement ratios increase from 0.03 (Figures 2(c) and 2(d)) to 0.06 (Figures 2(e) and 2(f)), the calcium hydroxide contents will correspondingly decrease due to the enhancement of the pozzolanic reaction of nanosilica. The proposed model can simulate the effects of the water-to-binder ratio and nanosilica replacement ratios on the hydration of Portland cement. ### 4.2. The Mass of Chemically Bound Water during Hydration Process As proposed in the hydration model and reference [10], the chemically bound water mainly comes from the hydration of Portland cement. The pozzolanic reaction between calcium hydroxide and nanosilica occurs without additional water binding more than that contained in the CH molecules. The evolution of the mass of bound water is shown as a function of the hydration time in Figure 3. As shown in Figure 3, the simulated results overall agree well with the experimental results. When the water-to-binder ratios change from 0.5 (Figures 3(a), 3(c), and 3(e)) to 0.3 (Figures 3(b), 3(d), and 3(f)), the chemically bound water produced for 1 g of binder will decrease due to the reduction in capillary water and deposition space for hydration products.As shown in Figures2 and 3, the analysis results can generally reproduce the experimental results. However, for some cases (Figures 2(e), 2(f), 3(b), 3(d), and 3(f)), the modeling results show disagreements with the experimental results. These differences arise from the following reasons.When calculating the calcium hydroxide contents in pastes with higher nanosilica replacement levels (Figures2(e) and 2(f), nanosilica replacement 6%), due to the increase in nanosilica contents, the surface area of cement-based materials will increase significantly. In this case, increasing the superplasticizer content to lubricate the nanosilica particles and to break the agglomerates, that is, 6% nanosilica, is difficult because of the very large specific surface area of nanosilica. Nanosilica will slightly agglomerate, and the pozzolanic reaction will be delayed. The agglomeration of nanosilica is not considered in the proposed model. Therefore, for pastes with higher nanosilica contents, the peaks in the analysis results occur earlier than in the experimental results.When calculating the chemically bound water contents in pastes with lower water-to-binder ratios (Figures3(b), 3(d), and 3(f), water-to-binder ratio 0.3), due to the decrease in water-to-binder ratios, the degree of supersaturation of the pore solution will increase and the initial dormant period of the paste will be shortened. The hydration reaction will correspondingly be accelerated. The proposed model does not take into account the influences of ion concentrations or the degree of supersaturation of the pore solution on the hydration kinetics of cement-based materials. Therefore, for pastes with lower water-to-binder ratios, the analysis results for the early age are slightly lower than the experimental results.Hence, at the current stage, the model is not perfect because some variables are not accounted for; thus, the model will be subjected to further improvements in the future. ### 4.3. 
The Evaluation of Compressive Strength It is well known that the compressive strength of concrete depends on the gel/space ratio determined from the degree of cement hydration and thew/c ratio. The gel/space ratio is defined as the ratio of the volumes of the hydrated cement to the sum of the volumes of the hydrated cement and of the capillary pores [15]. For Portland cement pastes, it is approximately assumed that 1 mL of hydrated cement occupies 2.06 mL, and, for the nanosilica pozzolanic reaction, 1 mL of reacted nanosilica is considered to occupy 2.52 mL of space [15]. Therefore, the gel/space ratio of cement-nanosilica paste is given by (9)xfc=2.06(1/ρ)αC0+2.52(1/ρsi)αsilicamsilica0(1/ρ)αC0+(1/ρsi)αsilicamsilica0+W0, where xfc is the gel/space ratio of blended cement pastes. Note that the volume change of nanosilica is larger than that of the anhydrous cement (2.52 versus 2.06). This result may partially be due to the lower density of the pozzolanic hydration products and may indicate that pozzolanic reaction products are more effective in filling pores [15].Furthermore, the development of compressive strength in blended concrete can be evaluated through Powers’ strength theory as(10a)fc=Axfcn,(10b)A=aC0C0+msilica0+bmsilica0C0+msilica0,where fc is the compressive strength of blended concrete. A is the intrinsic strength of the material and can be expressed as a function of the weight fractions of cement and mineral admixture in the mixing proportion. The coefficients a and b in (10b) represent the contributions of cement and mineral admixture to the intrinsic strength of materials, respectively, and the units of a and b are MPa. n is the exponent.Based on the compressive strength of nanosilica blended paste, the values of the coefficients ofa and b are given as a=128 and b=160, and n=3.0 (water/binder = 0.5) and n=2.3 (water/binder = 0.3). The decrease inn may be attributed to the homogenization of hydration products for specimens with lower water-to-binder ratios [15]. Comparisons between the experimental results and the prediction results are shown in Figure 4. Compared with the control specimens (Figures 4(a) and 4(b)), at an early age of 1 day, the compressive strength of nanosilica blended paste (Figures 4(c) to 4(f)) is significantly increased. In contrast, at the late age of 28 days, the compressive strength of nanosilica blended paste is only slightly higher than that of the control specimens. Li et al. [2] also obtained similar results: for nanosilica blended paste, the extent of the compressive strength enhancement is considerably more significant at an early age.Evaluation of compressive strength developments. (a) OPC paste with water-to-cement ratio 0.5 (b) OPC paste with water-to-cement ratio 0.3 (c) OPC-nanosilica paste with water-to-binder ratio 0.5 and 3% nanosilica (d) OPC-nanosilica paste with water-to-binder ratio 0.3 and 3% nanosilica (e) OPC-nanosilica paste with water-to-binder ratio 0.5 and 6% nanosilica (f) OPC-nanosilica paste with water-to-binder ratio 0.3 and 6% nanosilica ## 4.1. The Amount of Calcium Hydroxide during the Hydration Process During the hydration of ordinary Portland cement, the amount of calcium hydroxide will increase until it reaches a steady state. During the hydration of cement-nanosilica blends, the evolution of the amount of CH depends on two factors: the Portland cement hydration that produces CH and the pozzolanic reaction that consumes CH. 
In the initial period, the production of CH is the dominant process, and then the consumption of CH becomes the dominant process. In the experimental range, the amount of CH initially increases, reaches a maximum value, and then decreases. Using the experimentally determined calcium hydroxide content and chemically bound water content of Portland cement paste, the value ofRCHCE in (6) was determined to be 0.23, which means that 1 g of hydrated cement will produce 0.23 g of calcium hydroxide. Papadakis [10] also reported that when 1 g of cement hydrates, approximately 0.2~0.3 g of calcium hydroxide will be produced. Additionally, based on the experimentally determined calcium hydroxide contents in Portland cement paste and nanosilica blended cement paste, the value of RCHSF in (6) was determined to be 2.2, which means that, for the pozzolanic reaction of nanosilica, 1 g of reacted nanosilica will consume 2.2 g of calcium hydroxide. Similarly, Maekawa et al. [9] proposed that when 1 g of silica fume reacts, 2 g of calcium hydroxide will be consumed during the pozzolanic reaction of silica fume.A total of 12 series of experimental results regarding the calcium hydroxide contents and chemically bound water contents are presented in Figure2 (calcium hydroxide contents) and Figure 3 (chemically bound water contents). Thew/b ratios of the specimens are 0.5 and 0.3, and the nanosilica replacement ratios are 0%, 3%, and 6%.Evaluation of calcium hydroxide contents. (a) OPC paste with water-to-cement ratio 0.5 (b) OPC paste with water-to-cement ratio 0.3 (c) OPC-nanosilica paste with water-to-binder ratio 0.5 and 3% nanosilica (d) OPC-nanosilica paste with water-to-binder ratio 0.3 and 3% nanosilica (e) OPC-nanosilica paste with water-to-binder ratio 0.5 and 6% nanosilica (f) OPC-nanosilica paste with water-to-binder ratio 0.3 and 6% nanosilicaEvaluation of chemically bound water contents. (a) OPC paste with water-to-cement ratio 0.5 (b) OPC paste with water-to-cement ratio 0.3 (c) OPC-nanosilica paste with water-to-binder ratio 0.5 and 3% nanosilica (d) OPC-nanosilica paste with water-to-binder ratio 0.3 and 3% nanosilica (e) OPC-nanosilica paste with water-to-binder ratio 0.5 and 6% nanosilica (f) OPC-nanosilica paste with water-to-binder ratio 0.3 and 6% nanosilicaCalibration process: in the calibration process, using calcium hydroxide contents forw/b = 0.5 control specimen (shown in Figure 2(a)) andw/b = 0.5 for the 3% nanosilica specimen (shown in Figure 2(c)), the reaction coefficients are calibrated. In this calibration process, 2 experimental results are used. The calibrated reaction coefficients of the proposed hydration model are shown in Table 4. In this table, B, C, kr, and De0 are the reaction coefficients of cement and krsi and Desi0 are the reaction coefficients of nanosilica.Table 4 Reaction coefficients of the proposed hydration model. B (cm/h) C (cm/h) k r (cm/h) D e 0 (cm2/h) k r s i (cm/h) D e s i 0 (cm2/h) 4.310 * 10−7 0.035 1.073 * 10−5 5.474 * 10−8 1.00 * 10−6 2.50 * 10−13Validation process: in the validation process, the remaining experimental results, such as the calcium hydroxide contents and chemically bound water contents shown in Figures2(b), 2(d) to 2(f), and 3(a) to 3(f), can be predicted. In this validation process, 10 experimental results are used. 
In the proposed model, the reaction coefficients do not vary with the nanosilica replacement ratios or the water-to-binder ratios. In summary, only a small number of experimental results are required to calibrate the reaction coefficients of the proposed theoretical model; a large number of experimental results can then be predicted, so time-consuming and expensive experimental investigations can be significantly reduced.

The evolution of the CH amount is shown as a function of the hydration time in Figure 2. As shown in Figure 2, the simulation results overall agree well with the experimental results. When the water-to-binder ratio changes from 0.5 (Figures 2(a), 2(c), and 2(e)) to 0.3 (Figures 2(b), 2(d), and 2(f)), the amount of calcium hydroxide produced per gram of binder decreases due to the reduction in capillary water and deposition space for hydration products. When the nanosilica replacement ratio increases from 0.03 (Figures 2(c) and 2(d)) to 0.06 (Figures 2(e) and 2(f)), the calcium hydroxide contents correspondingly decrease due to the enhanced pozzolanic reaction of nanosilica. The proposed model can thus simulate the effects of the water-to-binder ratio and the nanosilica replacement ratio on the hydration of Portland cement.

## 4.2. The Mass of Chemically Bound Water during the Hydration Process

As proposed in the hydration model and in reference [10], the chemically bound water mainly comes from the hydration of Portland cement; the pozzolanic reaction between calcium hydroxide and nanosilica binds no water beyond that already contained in the CH molecules. The evolution of the mass of bound water is shown as a function of the hydration time in Figure 3. As shown in Figure 3, the simulated results overall agree well with the experimental results. When the water-to-binder ratio changes from 0.5 (Figures 3(a), 3(c), and 3(e)) to 0.3 (Figures 3(b), 3(d), and 3(f)), the chemically bound water produced per gram of binder decreases due to the reduction in capillary water and deposition space for hydration products.

As shown in Figures 2 and 3, the analysis results generally reproduce the experimental results. However, for some cases (Figures 2(e), 2(f), 3(b), 3(d), and 3(f)), the modeling results disagree with the experimental results. These differences arise for the following reasons.

When calculating the calcium hydroxide contents in pastes with higher nanosilica replacement levels (Figures 2(e) and 2(f), nanosilica replacement 6%), the surface area of the binder increases significantly with the nanosilica content. Because of the very large specific surface area of nanosilica, it is difficult to add enough superplasticizer to lubricate the nanosilica particles and break up the agglomerates at 6% replacement. The nanosilica therefore agglomerates slightly, and the pozzolanic reaction is delayed. The agglomeration of nanosilica is not considered in the proposed model, so, for pastes with higher nanosilica contents, the peaks in the analysis results occur earlier than in the experimental results.

When calculating the chemically bound water contents in pastes with lower water-to-binder ratios (Figures 3(b), 3(d), and 3(f), water-to-binder ratio 0.3), the degree of supersaturation of the pore solution increases as the water-to-binder ratio decreases, the initial dormant period of the paste is shortened, and the hydration reaction is correspondingly accelerated.
The proposed model does not take into account the influence of ion concentrations or of the degree of supersaturation of the pore solution on the hydration kinetics of cement-based materials. Therefore, for pastes with lower water-to-binder ratios, the analysis results at early ages are slightly lower than the experimental results. Hence, at the current stage, the model is incomplete because some variables are not accounted for, and it will be further improved in future work.

## 4.3. The Evaluation of Compressive Strength

It is well known that the compressive strength of concrete depends on the gel/space ratio, which is determined from the degree of cement hydration and the w/c ratio. The gel/space ratio is defined as the ratio of the volume of the hydrated cement to the sum of the volumes of the hydrated cement and of the capillary pores [15]. For Portland cement pastes, it is approximately assumed that 1 mL of hydrated cement occupies 2.06 mL, and, for the nanosilica pozzolanic reaction, 1 mL of reacted nanosilica is considered to occupy 2.52 mL of space [15]. Therefore, the gel/space ratio of cement-nanosilica paste is given by

$$x_{fc} = \frac{2.06\,(1/\rho)\,\alpha C_0 + 2.52\,(1/\rho_{si})\,\alpha_{silica}\, m_{silica0}}{(1/\rho)\,\alpha C_0 + (1/\rho_{si})\,\alpha_{silica}\, m_{silica0} + W_0} \quad (9)$$

where $x_{fc}$ is the gel/space ratio of the blended cement paste. Note that the volume change of nanosilica is larger than that of the anhydrous cement (2.52 versus 2.06). This may partially be due to the lower density of the pozzolanic hydration products and may indicate that pozzolanic reaction products are more effective in filling pores [15].

Furthermore, the development of compressive strength in blended concrete can be evaluated through Powers' strength theory as

$$f_c = A\, x_{fc}^{\,n} \quad (10a)$$

$$A = a\,\frac{C_0}{C_0 + m_{silica0}} + b\,\frac{m_{silica0}}{C_0 + m_{silica0}} \quad (10b)$$

where $f_c$ is the compressive strength of the blended concrete and $A$ is the intrinsic strength of the material, expressed as a function of the weight fractions of cement and mineral admixture in the mix proportion. The coefficients $a$ and $b$ in (10b) represent the contributions of cement and mineral admixture, respectively, to the intrinsic strength and have units of MPa; $n$ is an exponent.

Based on the compressive strength of nanosilica blended paste, the coefficients are $a = 128$ and $b = 160$, with $n = 3.0$ (water/binder = 0.5) and $n = 2.3$ (water/binder = 0.3). The decrease in $n$ may be attributed to the homogenization of hydration products in specimens with lower water-to-binder ratios [15]. Comparisons between the experimental results and the predictions are shown in Figure 4. Compared with the control specimens (Figures 4(a) and 4(b)), at the early age of 1 day, the compressive strength of nanosilica blended paste (Figures 4(c) to 4(f)) is significantly increased. In contrast, at the late age of 28 days, the compressive strength of nanosilica blended paste is only slightly higher than that of the control specimens. Li et al. [2] obtained similar results: for nanosilica blended paste, the compressive strength enhancement is considerably more significant at an early age.

Figure 4: Evaluation of compressive strength developments. (a) OPC paste with water-to-cement ratio 0.5; (b) OPC paste with water-to-cement ratio 0.3; (c) OPC-nanosilica paste with water-to-binder ratio 0.5 and 3% nanosilica; (d) OPC-nanosilica paste with water-to-binder ratio 0.3 and 3% nanosilica; (e) OPC-nanosilica paste with water-to-binder ratio 0.5 and 6% nanosilica; (f) OPC-nanosilica paste with water-to-binder ratio 0.3 and 6% nanosilica.
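As a worked illustration of (9)–(10b), the following minimal Python sketch evaluates the gel/space ratio and the resulting strength. The assumed densities and the example inputs are illustrative assumptions, not values taken from the paper's mix designs.

```python
# Minimal sketch of the gel/space-ratio strength evaluation, eqs. (9)-(10b).
# Density values and example inputs are illustrative assumptions.

def gel_space_ratio(alpha, alpha_si, C0, m_si0, W0, rho=3.15, rho_si=2.2):
    """Gel/space ratio x_fc of a cement-nanosilica paste, eq. (9).

    alpha, alpha_si -- degrees of cement hydration and nanosilica reaction
    C0, m_si0       -- masses of cement and nanosilica (g)
    W0              -- initial water volume (mL)
    rho, rho_si     -- assumed densities of cement and nanosilica (g/mL)
    """
    gel = 2.06 * alpha * C0 / rho + 2.52 * alpha_si * m_si0 / rho_si
    space = alpha * C0 / rho + alpha_si * m_si0 / rho_si + W0
    return gel / space

def compressive_strength(x_fc, C0, m_si0, n, a=128.0, b=160.0):
    """Powers' strength relation, eqs. (10a)-(10b); a and b are in MPa."""
    A = a * C0 / (C0 + m_si0) + b * m_si0 / (C0 + m_si0)  # intrinsic strength
    return A * x_fc ** n

# Example: w/b = 0.5 paste with 6% nanosilica (n = 3.0 for w/b = 0.5)
x = gel_space_ratio(alpha=0.8, alpha_si=0.9, C0=94.0, m_si0=6.0, W0=50.0)
print(compressive_strength(x, C0=94.0, m_si0=6.0, n=3.0))
```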
## 5. Conclusions

This paper presents the results of experimental investigations and theoretical modeling of the hydration reaction of nanosilica blended concrete. The contents of chemically bound water in nanosilica blended paste are similar to those in the control specimens, which means that the pozzolanic reaction of nanosilica does not produce chemically bound water. Owing to the pozzolanic reaction of nanosilica, the contents of calcium hydroxide in nanosilica blended paste are much lower than those in the control specimens. Compared with the control specimens, the compressive strength enhancement in nanosilica blended specimens is considerably higher at early ages.

A numerical model is proposed to simulate the hydration of concrete containing nanosilica. The reaction coefficients of nanosilica are obtained from the experimentally determined calcium hydroxide contents in nanosilica blended concrete. The degree of cement hydration and the degree of nanosilica reaction are computed with the proposed hydration model. The development of compressive strength in nanosilica blended paste is evaluated using the gel/space ratio, which considers the contributions of both cement hydration and the nanosilica reaction. The calculated results for chemically bound water, calcium hydroxide, and compressive strength generally agree with the experimental results.

---

*Source: 102392-2014-08-11.xml*
# Parachute-Like Mitral Valve Tuberculoma: A Rare Presentation

**Authors:** Arslan Masood; Gul Zaman Khan; Irfan Bashir; Zubair Akram
**Journal:** Case Reports in Cardiology (2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1023924

---

## Abstract

There have been anecdotal reports of tuberculous cardiac involvement, mainly in cases of miliary tuberculosis or in immune-deficient individuals. The spectrum of clinical presentations of tuberculous cardiac involvement includes incidental detection of single and multiple well-circumscribed tuberculomas, symptomatic obstructive lesions, AV conduction abnormalities, and even sudden death. We present a case of cardiac tuberculoma in an immunocompetent person who presented with worsening dyspnea. The unique morphology of this mass posed an imaging challenge that required 4-dimensional (4D) echocardiography and cardiac magnetic resonance (CMR) imaging to differentiate the mass from an anterior mitral leaflet (AML) aneurysm. Histological examination after surgical resection confirmed its tuberculous etiology.

---

## Body

## 1. Introduction

Tuberculosis is one of the leading causes of infectious disease and death in developing countries [1]. There have been anecdotal reports of tuberculous cardiac involvement, including valvular and endocardial lesions, myocarditis, pericarditis, and adjacent extracardiac masses. Tuberculous endocarditis has largely been reported in patients with miliary tuberculosis or after valve replacement in whom the implanted valves were already contaminated [2–6]. Here we present a case of cardiac tuberculosis in an immunocompetent patient with isolated cardiac involvement in the form of a growth involving the mitral valve. Its unique appearance made it an imaging challenge until a postoperative histological diagnosis could be made.

## 2. Case Presentation

A 38-year-old male presented with worsening dyspnea and gradual weight loss over 9 months. Physical examination revealed a pansystolic apical murmur radiating to the axilla with expiratory accentuation. Transthoracic echocardiography revealed a rounded, hollow, parachute-shaped mass in the left atrium (LA) near the mitral valve leaflets. On initial impression, the mass looked like an aneurysmal deformation of a myxomatous AML. However, careful examination on 4D echocardiography delineated the structure as an entity separate from the AML. It was hollow and parachute-shaped, mainly tethered to the AML and partially to the posterior leaflet and the adjoining posterior wall of the LA (Figure 1). The mass interfered with the physiological functioning of the mitral valve, leading to significant mitral regurgitation (MR). Color Doppler examination revealed a systolic jet both into the cavity of this mass and into the LA (Figure 2). CMR was carried out to further delineate the mass and to assess the possibility of an anterior mitral leaflet aneurysm, which was ruled out, supporting the 4D echocardiographic morphological description (Figures 3 and 4).

Figure 1: Transthoracic 2D (a) and 4D (b) echocardiographic views in parasternal long axis orientation. Parachute-shaped mass (arrows) in the LA attached to the atrial side of the AML and the inferoposterior region of the left atrial endocardium. LA: left atrium; LV: left ventricle.

Figure 2: Apical 4-chamber view showing the mass (arrows) mimicking an anterior leaflet aneurysm. Color Doppler (b) shows systolic flow into the mass in addition to eccentric MR.
LV: left ventricle; LA: left atrium; RV: right ventricle; RA: right atrium; MR: mitral regurgitation.

Figure 3: CMR cine run in the sagittal plane reconfirming the thick-walled parachute-shaped mass (arrows) tethered to the tip of the AML and the inferoposterior region of the LA. Systolic flow into the mass and significant MR are also evident. LA: left atrium; LV: left ventricle; AML: anterior mitral leaflet; MR: mitral regurgitation.

Figure 4: CMR transverse plane image of the LA mass showing its circular shape in cross-section. LA: left atrium; RA: right atrium.

Surgical resection of the mass was decided on because of the significant MR and persistent symptoms. A left atrial approach was taken to expose the mass, and its perioperative appearance matched the consensus morphology from the preoperative imaging assessments (Figure 5). Following complete resection of the growth, the mitral valve could not be adequately repaired and was replaced with a metallic prosthesis. The further postsurgical course was uneventful, and a significant improvement in functional capacity was noted. Histological evaluation of a tissue sample from the growth revealed areas of caseous necrosis with Langhans giant cells (Figure 6). A standard antituberculous regimen was initiated and continued.

Figure 5: Perioperative image from left atrial access looking at the atrial side of the mass. (A) Parachute-shaped tuberculoma as viewed from above (exposed through the LA approach); (B) AML.

Figure 6: Histological pictures from biopsy specimens of the resected mass. (a) Tuberculous granuloma surrounded by palisaded epithelioid histiocytes (broad arrow) with central caseous necrosis (long arrow); (b) magnified view showing a Langhans giant cell (arrow); (c) Langhans giant cell (arrow) with ZN staining.

## 3. Discussion

The scientific evidence for cardiac tuberculosis dates back to reports by Maurocordat (1664) and Morgagni (1761) [7]. Postmortem series have documented low frequencies of cardiac tuberculomas, around 0.3% among all tuberculosis patients [8, 9]. Isolated cardiac tuberculomas are even rarer [10]. A variety of patterns have been described for cardiac tuberculosis, ranging from single to multiple well-circumscribed tuberculomas, most commonly in the right-sided chambers [11]. There have been sporadic reports of large tuberculous masses, including a large right atrial mass leading to tricuspid stenosis in the recent past [12]. The spectrum of clinical presentations of tuberculous cardiac masses ranges from asymptomatic incidental findings to symptomatic AV conduction abnormalities [10, 13, 14], right ventricular outflow obstruction [15], caval obstruction [2], and even sudden death [16].

Pericardial involvement, the commonest manifestation of cardiac tuberculosis in developing nations, is mostly diagnosed from transthoracic echocardiographic findings in the context of the clinical picture; specific histopathological or culture diagnosis is only occasionally required. In contrast, not only are myocardial and endocardial involvements uncommon, but they frequently require histological or culture evidence in addition to imaging because of the lack of specific recognized morphological patterns.
Although both the histological pattern and Ziehl-Neelsen (ZN) staining are specific for diagnosis, the latter often fails to reveal acid-fast bacilli, and a definitive diagnosis is then made on the basis of the typical histological pattern [17].

Given the lack of specific morphological patterns of cardiac tuberculomas, echocardiographic diagnosis is often challenging. The morphological patterns may bear close resemblance to other echogenic intracardiac masses such as vegetations, tumors, and aneurysms. A similar diagnostic dilemma was experienced in our case, where an initial impression of an AML aneurysm required clarification with 4D echocardiography and CMR imaging. A conclusive diagnosis could only be made after histological evidence of typical tuberculous morphology following surgical resection. The combination of challenging anatomical detail and the rarity of tuberculous etiology made this case unique.

---

*Source: 1023924-2017-10-08.xml*
# A Universal Trajectory Planning Method for Automated Lane-Changing and Overtaking Maneuvers

**Authors:** Ying Wang; Chong Wei
**Journal:** Mathematical Problems in Engineering (2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1023975

---

## Abstract

Lane-changing and overtaking are common maneuvers on roads, and a reference trajectory is one of the prerequisites for executing them. This study proposes a universal trajectory planning method for automated lane-changing and overtaking maneuvers, in which the trajectory is regarded as the combination of a path and its traffic state profiles. The two-dimensional path is represented by a suitable curve connecting the initial and final positions of the ego vehicle. Based on the planned path, its traffic state profiles are generated by solving a nonlinear mathematical optimization model. Moreover, the study discretizes the time horizon into several time intervals and determines the parameters that yield continuous and smooth profiles, which guarantees the safety and comfort of the ego vehicle. Finally, a series of simulation experiments performed on the MATLAB platform shows the feasibility and effectiveness of the proposed universal trajectory planning method.

---

## Body

## 1. Introduction

Automated driving systems can significantly alleviate traffic congestion, ensure traffic safety, and improve road capacity [1–4]. To bring intelligent vehicles into practice as soon as possible, automated driving technologies are being tested on roads, and these tests include lane-changing and overtaking maneuvers [5–8]. To execute these maneuvers automatically, a feasible reference trajectory is needed: the ego vehicle drives along the planned trajectory to complete the relevant maneuver. This paper proposes a novel universal trajectory planning method for automated lane-changing and overtaking maneuvers. With the development of advanced sensors and vehicle-to-vehicle (V2V) communications, real-time traffic information on vehicles, such as position, speed, and acceleration, can be obtained conveniently and accurately [9–12], and the trajectory is planned based on this traffic state information.

Different types of curves have different performance characteristics in terms of continuity and smoothness. Therefore, selecting a suitable curve function to represent the trajectory according to the traffic condition is important. In the relevant trajectory planning studies, the commonly used geometric curves include the spline curve [13–17], the trapezoidal acceleration curve [18–22], the Bezier curve [23–26], and the polynomial curve [27–33].

Ziegler et al. [13] employed a quintic spline curve to generate the trajectory. They divided the space into multiple geometric graphs and adopted a shortest-path algorithm to search for a feasible trajectory in each graph. Rousseau et al. [17] utilized a trajectory planning method based on the B-spline curve, in which the parameters of the curve function were determined by minimizing the driving time. Yin et al. [18] described a trapezoidal acceleration motion planning model; the planned trajectory could avoid potential collisions with obstacles by analyzing the variation law of the actuator. Chen et al.
[22] presented a trajectory planning method using the 3D Bezier curve, which enhanced the flexibility of the trajectory and guaranteed conformity with realistic driving maneuvers.

Among the commonly used geometric curves, the polynomial curve is the most widely used in trajectory planning. This type of curve can produce a smooth trajectory at low computational cost, and the order of the polynomial can be tuned to obtain the desired trajectory performance. Nelson et al. [27] were the first to propose the polynomial method; they used the polynomial curve instead of the arc curve to plan a continuous-curvature trajectory, which guaranteed that the trajectory could be tracked. Nilsson et al. [29] presented a trajectory planning model using a discrete quadratic polynomial curve. They divided the planning area into three regions (preregion, periregion, and postregion) and solved for the optimal positions of the ego vehicle in each region to determine the trajectory function. Zhang et al. [30] adopted two time-dependent cubic polynomial functions to describe the longitudinal and lateral motions, respectively, and concluded that the continuity of the polynomial curve could ensure the robustness of the trajectory. Yang et al. [32] proposed a trajectory planning model based on the cubic polynomial curve, focusing on the trajectory replanning problem in dynamic driving environments. Wei et al. [33] increased the polynomial function to the fifth order and solved for the optimal trajectory by treating the driving time and movement distance as free variables.

However, several common shortcomings remain in the previous studies. First, these trajectory planning models usually focus only on the automated lane-changing or the overtaking maneuver, neglecting the universality of the model across both maneuvers. Second, the traffic states of the ego vehicle solved by previous models were usually discrete, which can make tracking difficult or even cause crashes. Third, most of the studies designed the lane-changing or overtaking scenarios simplistically, assuming that the surrounding environment was static, that the speeds of the other vehicles were constant, or that only one or two vehicles were around the ego vehicle. These assumptions are inconsistent with real-world traffic characteristics.

Aiming at these problems, a trajectory planning method based on a mathematical optimization framework is proposed in this paper. The contributions can be summarized as follows:

(1) A universal framework of trajectory planning for automated lane-changing and overtaking maneuvers is proposed. The study considers that the combination of a path and its traffic state profiles determines a complete spatiotemporal trajectory. A suitable curve is employed to represent the two-dimensional path, and a nonlinear optimization model is then established to generate the traffic state profiles, whose computational cost is low since the path has already been planned.

(2) The traffic state profiles are continuous and smooth. In previous studies, the traffic states of the ego vehicle solved by the nonlinear model were usually discrete. Our proposed model discretizes the time horizon into several time intervals and solves for the unknown parameters of the profile function to generate continuous and smooth traffic state profiles.

The rest of the paper is organized as follows. Section 2 introduces the framework of the universal trajectory planning method.
Section 3 describes the technical details of the proposed method, including the generation of the path and the traffic state profiles. Section 4 presents the simulation results and several typical numerical examples. Section 5 concludes the study and discusses future work.

## 2. Framework of the Universal Trajectory Planning Method

Figure 1 presents the framework of the proposed universal trajectory planning method, which plans the trajectory in four stages. In stage 1, the current traffic states of the vehicles are collected via GPS, digital maps, and sensors. In stage 2, based on the known boundary conditions, a suitable curve is employed to generate the two-dimensional path connecting the initial position and the final position. In stage 3, an objective function and the relevant constraints are established to solve for the traffic state profiles along the path, such as the speed, acceleration, and jerk profiles. The combination of the path and its traffic state profiles determines a complete spatiotemporal trajectory. In stage 4, the generated trajectory parameters are used as inputs for the actuators to execute the desired control.

Figure 1: Framework of the trajectory planning method.

## 3. The Universal Trajectory Planning Method

Planning a reference trajectory before changing lanes or overtaking is necessary and important, since the quality of the trajectory has a direct impact on the performance of the automated driving behaviors. This study considers that the combination of a path and its traffic state profiles determines a complete spatiotemporal trajectory. Therefore, the following sections describe the universal generation method for the path and the traffic state profiles in detail.

### 3.1. Path Generation

A suitable curve should be selected to plan the lane-changing or overtaking path. Here, we choose the quintic polynomial, which has the advantages of a continuous third derivative and smooth curvature:

$$y(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + a_5 x^5 \quad (1)$$

where $x$ and $y$ are the longitudinal and lateral positions of the ego vehicle, respectively, and $a_0, \ldots, a_5$ are unknown parameters that need to be calculated.

For the lane-changing maneuver, the boundary traffic information can be collected with the help of sensors, GPS, and digital maps. Based on such information, the following equations are obtained:

$$y_0 = a_0 + a_1 x_0 + a_2 x_0^2 + a_3 x_0^3 + a_4 x_0^4 + a_5 x_0^5 \quad (2)$$

$$y_f = a_0 + a_1 x_f + a_2 x_f^2 + a_3 x_f^3 + a_4 x_f^4 + a_5 x_f^5 \quad (3)$$

$$c_0 = a_1 + 2a_2 x_0 + 3a_3 x_0^2 + 4a_4 x_0^3 + 5a_5 x_0^4 \quad (4)$$

$$c_f = a_1 + 2a_2 x_f + 3a_3 x_f^2 + 4a_4 x_f^3 + 5a_5 x_f^4 \quad (5)$$

$$k_0 = 2a_2 + 6a_3 x_0 + 12a_4 x_0^2 + 20a_5 x_0^3 \quad (6)$$

$$k_f = 2a_2 + 6a_3 x_f + 12a_4 x_f^2 + 20a_5 x_f^3 \quad (7)$$

where $x_0$, $y_0$, $c_0$, and $k_0$ represent the longitudinal position, the lateral position, the derivative, and the curvature of the lane curve at the current moment, respectively, and $x_f$, $y_f$, $c_f$, and $k_f$ represent the same quantities at the final moment. Solving equations (2)–(7) yields the values of $a_0, \ldots, a_5$, from which the lane-changing path is generated.

For the overtaking maneuver, the same method is adopted to generate the path: a complete overtaking path is composed of two lane-changing paths, as shown in Figure 2, where vehicle 0 is the ego vehicle and vehicles 1, 2, and 3 are the surrounding vehicles.

Figure 2: Overtaking path of the ego vehicle.

After collecting the boundary traffic information of the ego vehicle for lane-changing or overtaking, the unknown parameters of the path function can be solved and the path generated.
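Boundary conditions (2)–(7) form a linear system in $a_0, \ldots, a_5$. The following minimal Python sketch solves it numerically; the function name, the NumPy-based approach, and the example boundary values are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def quintic_path_coeffs(x0, y0, c0, k0, xf, yf, cf, kf):
    """Solve boundary conditions (2)-(7) for the quintic coefficients a0..a5.

    For each of x0 and xf, the three rows encode the lateral position y(x),
    the slope y'(x), and the second derivative y''(x) of the quintic (1).
    """
    def rows(x):
        return [
            [1.0, x, x**2, x**3, x**4, x**5],         # y(x)
            [0.0, 1.0, 2*x, 3*x**2, 4*x**3, 5*x**4],  # y'(x)
            [0.0, 0.0, 2.0, 6*x, 12*x**2, 20*x**3],   # y''(x)
        ]
    M = np.array(rows(x0) + rows(xf))
    rhs = np.array([y0, c0, k0, yf, cf, kf])
    return np.linalg.solve(M, rhs)

# Example: lane change over 60 m into a lane 3.75 m to the left, starting
# and ending parallel to the lane (zero slope and zero curvature).
a = quintic_path_coeffs(0.0, 0.0, 0.0, 0.0, 60.0, 3.75, 0.0, 0.0)
print(a)  # a0..a5
```

Because the system matrix depends only on $x_0$ and $x_f$, the same routine can serve both halves of an overtaking path.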
The study chooses the polynomial curve to represent the path because of its continuous derivative and smooth curvature; other types of curves can also be employed according to the traffic scenario, while the framework of path generation remains unchanged.

### 3.2. Traffic State Profile Generation

In this section, a nonlinear programming model is constructed to generate the traffic state profiles (speed, acceleration, and jerk profiles) for the automated lane-changing or overtaking maneuver. Since discrete traffic states of the ego vehicle may result in crashes, our model discretizes the time horizon and solves for the unknown parameters of the profile function to generate continuous and smooth traffic state profiles. The driving distance profile is

$$s(t) = b_0 + b_1 t + b_2 t^2 + b_3 t^3 + b_4 t^4 + b_5 t^5 \quad (8)$$

The corresponding speed, acceleration, and jerk profiles are

$$v(t) = \dot{s}(t) = b_1 + 2b_2 t + 3b_3 t^2 + 4b_4 t^3 + 5b_5 t^4 \quad (9)$$

$$a(t) = \ddot{s}(t) = 2b_2 + 6b_3 t + 12b_4 t^2 + 20b_5 t^3 \quad (10)$$

$$j(t) = \dddot{s}(t) = 6b_3 + 24b_4 t + 60b_5 t^2 \quad (11)$$

Six unknown parameters $b_0, \ldots, b_5$ need to be determined. Based on the known initial traffic states of the ego vehicle, three equations are derived:

$$s(0) = s_{0,0}, \quad v(0) = v_{0,0}, \quad a(0) = a_{0,0} \quad (12)$$

where $s_{0,0}$, $v_{0,0}$, and $a_{0,0}$ are the current driving distance, speed, and acceleration of the ego vehicle, respectively. The values of $b_0$, $b_1$, and $b_2$ are obtained by solving equation (12). However, the unknown parameters $b_3$, $b_4$, and $b_5$ and the total driving time $t_f$ are not easy to determine, and arbitrary values may affect the solution efficiency of the model [29, 34]. Therefore, we treat $b_3$, $b_4$, $b_5$, and $t_f$ as free variables and establish a nonlinear model to search for the optimal traffic state profiles.

The study divides $t_f$ into $I$ time intervals and uses $t_i$ ($i = 1, \ldots, I$) to denote the end moment of each time interval, with $t_0 = 0$ and the constraint $t_i > t_{i-1}$. As shown in Figure 3, $s_i$ is the driving distance of the ego vehicle at $t_i$, and $x_{0,i}$ and $y_{0,i}$ are the longitudinal and lateral positions of the ego vehicle at $t_i$, respectively. $s_0 = 0$, and $s_i$ is calculated as $\int_0^{x_{0,i}} \sqrt{1 + (dy/dx)^2}\, dx$. Furthermore, the $t_i$ for $i = 1, \ldots, I$ are treated as free variables, so the unknown parameters $b_3$, $b_4$, and $b_5$ and the driving time $t_f$ can be represented as functions of the $t_i$. The traffic state profile planning problem is thereby converted into an optimization problem.

Figure 3: Generation of the traffic state profiles: (a) lane-changing maneuver; (b) overtaking maneuver.

#### 3.2.1. Optimal Traffic State Profile Planning

The objective function of the constrained nonlinear optimization model is given in equation (13), where $a_{0,i}$ and $j_{0,i}$ denote the acceleration and jerk of the ego vehicle at $t_i$, respectively, and $t_f$ denotes the total driving time:

$$\min J = \frac{1}{I}\sum_{i=1}^{I} a_{0,i}^2 + \frac{1}{I}\sum_{i=1}^{I} j_{0,i}^2 + t_f^2 \quad (13)$$

During the lane-changing or overtaking process, the passengers feel comfortable when the accelerations and jerks of the ego vehicle are small, so $a_{0,i}$ and $j_{0,i}$ are used to reflect comfort in equation (13). The total driving time $t_f$ represents efficiency: a short time indicates high efficiency.

#### 3.2.2. Collision-Avoidance Constraint

To avoid potential collisions, the ego vehicle should maintain a sufficient distance from the surrounding vehicles during the lane-changing or overtaking maneuver. For illustrative purposes, the study assumes that the vehicles drive along straight roads; this limitation can easily be removed by introducing a curved road function. As shown in Figure 4, there are three vehicles around the ego vehicle (vehicle 0).
The leading vehicle in the current lane is indexed as vehicle 1, the following vehicle in the target lane as vehicle 2, and the leading vehicle in the target lane as vehicle 3.

Figure 4: Relative positions of the vehicles.

The safe distance between the ego vehicle and vehicle 1 is discussed first. As shown in Figure 5, the ego vehicle is simplified as a geometry of several circles with diameter $m$. Following Ziegler et al. [35], the Euclidean distance between the center of each circle of the ego vehicle and the center of the first circle of vehicle 1 should be greater than the diameter $m$. Therefore, the distance constraint between the ego vehicle (vehicle 0) and vehicle 1 is

$$m^2 \le \left(x_{0,i}^k - x_{1,i}^1\right)^2 + \left(y_{0,i}^k - y_{1,i}^1\right)^2, \quad k = 1, \ldots, 5, \; i = 1, \ldots, I \quad (14)$$

where $k$ denotes the index of the circles; $x_{0,i}^k$ and $y_{0,i}^k$ represent the position of the $k$-th circle of the ego vehicle at $t_i$; and $x_{1,i}^1$ and $y_{1,i}^1$ represent the position of the first circle of vehicle 1 at $t_i$, whose values can be derived from $(x_{1,i}, y_{1,i})$. Here $x_{1,i} = x_{1,0} + v_{1,0} t_i + \frac{1}{2} a_{1,0} t_i^2$, and $y_{1,i} = 0$ m on the straight lane.

Figure 5: Mechanism of the collision-avoidance constraint.

Next, the collision-avoidance constraints between the ego vehicle and the other two vehicles are considered. As shown in equations (15) and (16), the Euclidean distance between the ego vehicle and vehicles 2 and 3 should be greater than the diagonal length $r$ of the vehicle:

$$r^2 \le (x_{0,i} - x_{2,i})^2 + (y_{0,i} - y_{2,i})^2, \quad i = 1, \ldots, I \quad (15)$$

$$r^2 \le (x_{0,i} - x_{3,i})^2 + (y_{0,i} - y_{3,i})^2, \quad i = 1, \ldots, I \quad (16)$$

where $x_{0,i}$, $x_{2,i}$, and $x_{3,i}$ are the longitudinal central positions of the ego vehicle, vehicle 2, and vehicle 3 at $t_i$, respectively, and $y_{0,i}$, $y_{2,i}$, and $y_{3,i}$ are the corresponding lateral central positions.

$x_{0,i}$ and $y_{0,i}$ are the variables that need to be solved in the model. $x_{2,i}$ and $x_{3,i}$ are calculated as $x_{2,0} + v_{2,0} t_i + \frac{1}{2} a_{2,0} t_i^2$ and $x_{3,0} + v_{3,0} t_i + \frac{1}{2} a_{3,0} t_i^2$, respectively, and $y_{2,i}$ and $y_{3,i}$ equal the lane width of 3.75 m. The initial traffic states of the relevant vehicles are obtained directly from the sensors.

#### 3.2.3. Acceleration Constraint Based on the Car-Following Maneuver

In a real-world traffic environment, the ego vehicle usually shifts to a car-following maneuver at the end of the lane-changing or overtaking process. To guarantee comfortable driving behavior, the planned final acceleration $a_{0,I}$ of the ego vehicle should be smaller than the car-following acceleration:

$$a_{0,I} \le A(v_{0,I}, v_{3,I}, d_{0,3}) \quad (17)$$

where $A(v_{0,I}, v_{3,I}, d_{0,3})$ denotes the car-following acceleration of the ego vehicle at $t_I$; $v_{0,I}$ and $v_{3,I}$ denote the speeds of the ego vehicle and vehicle 3 at $t_I$, respectively; and $d_{0,3}$ denotes the distance between the ego vehicle and vehicle 3 at $t_I$.

Here, $v_{0,I}$ is solved by the proposed nonlinear model, while $v_{3,I}$ and $d_{0,3}$ are calculated as follows, where $l$ is the length of the vehicle:

$$v_{3,I} = v_{3,0} + a_{3,0} t_I, \quad d_{0,3} = x_{3,I} - x_{0,I} - l \quad (18)$$

When $a_{0,I}$ is smaller than the car-following acceleration, the ego vehicle shifts from the lane-changing or overtaking maneuver to a car-following maneuver comfortably, i.e., without accelerating or decelerating abruptly.

We choose a linear dynamic model [36] as the underlying car-following model.
The car-following acceleration of the ego vehicle, $A(v_{0,I}, v_{3,I}, d_{0,3})$, is given by

$$A(v_{0,I}, v_{3,I}, d_{0,3}) = w_1 (d_{0,3} - \Delta s^*) + w_2 (v_{3,I} - v_{0,I}) \quad (19)$$

where $w_1$ and $w_2$ denote the variation rate of the distance gap and the variation rate of the speed gap between the ego vehicle and vehicle 3, respectively, and $\Delta s^*$ denotes the ideal distance gap between the ego vehicle and vehicle 3. These three parameters help the ego vehicle reach the equilibrium condition as quickly as possible by adjusting its acceleration. Following Pariota et al. [36], the parameters are set as $w_1 = 0.0343$, $w_2 = 0.948$, and $\Delta s^* = 30$.

#### 3.2.4. Traffic State Constraint

The speed $v_{0,i}$, acceleration $a_{0,i}$, and jerk $j_{0,i}$ of the ego vehicle at $t_i$ are obtained from equations (9)–(11). To ensure driving safety and comfort, the total lane-changing or overtaking time and the traffic states of the ego vehicle are constrained simultaneously:

$$0 < t_f < t_{f,\max}, \quad v_{\min} < v_{0,i} < v_{\max}, \quad a_{\min} < a_{0,i} < a_{\max}, \quad j_{\min} < j_{0,i} < j_{\max}, \quad i = 1, \ldots, I \quad (20)$$

where $t_{f,\max}$, $v_{\max}$, $a_{\max}$, and $j_{\max}$ denote the allowable maximum total time, speed, acceleration, and jerk, respectively, and $v_{\min}$, $a_{\min}$, and $j_{\min}$ denote the allowable minimum speed, acceleration, and jerk, respectively.

Under these constraints, the speed of the ego vehicle is always positive and does not exceed the maximum allowable speed, i.e., there is no stopping or backward motion. Moreover, the smaller the jerk, the smaller the acceleration variation and the higher the driving comfort [37].

### 3.3. Trajectory Planning Based on the Nonlinear Model

After a lane-changing or overtaking decision is received, the proposed trajectory planning model proceeds in the following steps:

(1) Traffic information sensing: collect the current traffic information of the relevant vehicles through advanced sensors, GPS, and digital maps.

(2) Path generation: based on the collected traffic information, use the quintic polynomial curve to represent the lane-changing or overtaking path; the unknown parameters of the path function are solved from the boundary conditions.

(3) Traffic state profile generation: considering the safety and comfort constraints, establish a nonlinear optimization model to solve for the optimal traffic state profiles of the ego vehicle along the path.

(4) Lane-changing or overtaking trajectory planning: if a feasible solution of the nonlinear optimization model is found, the lane-changing or overtaking trajectory of the ego vehicle can be planned. If there is no feasible solution, the ego vehicle stays in the current lane, and the planner iterates the above three steps until a feasible trajectory is found.

The sequential quadratic programming (SQP) algorithm is selected to solve the nonlinear optimization model; previous studies [19, 33, 34] have shown that this algorithm handles such nonlinear problems well. Through multiple iterations, the optimal traffic state profiles of the ego vehicle are obtained, and the optimal lane-changing or overtaking trajectory can be planned.
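As an illustration of how this optimization could be set up with an off-the-shelf SQP solver, the following Python sketch minimizes (13) subject to simplified versions of the traffic state constraints (20). The uniform sampling of the $t_i$, the omission of the collision and car-following constraints (14)–(19), and all names and example values are assumptions made for brevity, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Decision variables z = [b3, b4, b5, tf]; b0, b1, b2 follow from eq. (12).
I = 20
s00, v00, a00 = 0.0, 20.0, 0.0        # assumed initial distance, speed, accel.
b0, b1, b2 = s00, v00, a00 / 2.0      # s(0)=b0, v(0)=b1, a(0)=2*b2

def profiles(z):
    """Speed, acceleration, and jerk at the end moments t_1..t_I, eqs. (9)-(11)."""
    b3, b4, b5, tf = z
    t = np.linspace(tf / I, tf, I)    # uniform t_i (simplification)
    v = b1 + 2*b2*t + 3*b3*t**2 + 4*b4*t**3 + 5*b5*t**4
    a = 2*b2 + 6*b3*t + 12*b4*t**2 + 20*b5*t**3
    j = 6*b3 + 24*b4*t + 60*b5*t**2
    return v, a, j

def objective(z):                      # eq. (13)
    _, a, j = profiles(z)
    return np.mean(a**2) + np.mean(j**2) + z[3]**2

vmax, amax, jmax = 30.0, 3.0, 3.0     # bounds as in Table 1

cons = [  # SLSQP inequality constraints require fun(z) >= 0
    {"type": "ineq", "fun": lambda z: profiles(z)[0]},                # v > 0
    {"type": "ineq", "fun": lambda z: vmax - profiles(z)[0]},         # v < vmax
    {"type": "ineq", "fun": lambda z: amax - np.abs(profiles(z)[1])}, # |a| < amax
    {"type": "ineq", "fun": lambda z: jmax - np.abs(profiles(z)[2])}, # |j| < jmax
]

z0 = np.array([0.0, 0.0, 0.0, 8.0])   # initial guess with tf = 8 s
res = minimize(objective, z0, method="SLSQP", constraints=cons,
               bounds=[(-1.0, 1.0)] * 3 + [(1.0, 13.0)])
print(res.success, res.x)
```

In a full implementation, the collision-avoidance inequalities (14)–(16) and the car-following bound (17) would be appended to `cons` in the same `{"type": "ineq"}` form.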
a0, …, a5 are unknown parameters that need to be calculated.As for the lane-changing maneuver, the boundary traffic information can be collected with the help of sensors, GPS, and digital maps. Based on such information, the following equations can be obtained:(2)y0=a0+a1x0+a2x02+a3x03+a4x04+a5x05,(3)yf=a0+a1xf+a2xf2+a3xf3+a4xf4+a5xf5,(4)c0=a1+2a2x0+3a3x02+4a4x03+5a5x04,(5)cf=a1+2a2xf+3a3xf2+4a4xf3+5a5xf4,(6)k0=2a2+6a3x0+12a4x02+20a5x03,(7)kf=2a2+6a3xf+12a4xf2+20a5xf3,where x0, y0, c0, and k0 represent the longitudinal position, the lateral position, the derivative, and the curvature of the lane curve at current moment, respectively. xf, yf, cf, and kf represent the longitudinal position, the lateral position, the derivative, and the curvature of the lane curve at final moment, respectively. By solving equations (2)–(7), the values of a0, …, a5 are obtained, and then the lane-changing path can be generated.As for overtaking maneuver, the same method as lane-changing is adopted to generate the path. A complete overtaking path is composed of two lane-changing paths, as shown in Figure2. In the figure, vehicle 0 is the ego vehicle and vehicles 1, 2, and 3 are the surrounding vehicles.Figure 2 Overtaking path of the ego vehicle.After collecting the boundary traffic information of the ego vehicle in lane-changing or overtaking, the unknown parameters of the path function can be solved and then the suitable path is generated. The study chooses the polynomial curve to represent the path, which has the advantages of a continuous derivative and smooth curvature. Other types of curve can also be employed according to the traffic scenario, while the framework of the path generation is unchanged. ## 3.2. Traffic State Profile Generation In this section, a nonlinear programming model is constructed to generate the traffic state profiles (speed, acceleration, and jerk profiles) for the automated lane-changing or overtaking maneuver. Since the discrete traffic states of the ego vehicle may result in crashes, our model discretizes the time horizon and solves the unknown parameters of the function to generate the continuous and smooth traffic state profiles. The driving distance profile is shown in the following equation:(8)st=b0+b1t+b2t2+b3t3+b4t4+b5t5.The corresponding speed, acceleration, and jerk profiles are shown in the following equations:(9)vt=s˙t=b1+2b2t+3b3t2+4b4t3+5b5t4,(10)at=s¨t=2b2+6b3t+12b4t2+20b5t3,(11)jt=s…t=6b3+24b4t+60b5t2.Six unknown parametersb0, …, b5 need to be determined. Based on the known initial traffic states of the ego vehicle, three equations are derived as follows:(12)s0=s0,0,v0=v0,0,a0=a0,0,where s0,0, v0,0, and a0,0 are the current driving distance, speed, and acceleration of the ego vehicle, respectively. The values of b0, b1, and b2 can be obtained by solving equation (12). However, the unknown parameters b3, b4, and b5 and total driving time tf are not easy to determine, and the arbitrary values may affect the solution efficiency of the model [29, 34]. Therefore, we treat b3, b4, b5, and tf as free variables and establish a nonlinear model to search for the optimal traffic state profiles.The study dividestf into I time intervals and uses tii=1,…,I to denote the end moment of each time interval. Let t0=0 and set constraint ti>ti−1. As shown in Figure 3, si is the driving distance of the ego vehicle at ti. x0,i and y0,i are the longitudinal and lateral positions of the ego vehicle at ti, respectively. s0=0 and si is calculated as ∫0x0,i1+dy/dy2dx. 
Furthermore, ti for i=1,…,I are treated as free variables. The unknown parameters b3, b4, and b5 and driving time tf can be represented as functions of ti. Therefore, the planning traffic state profile problem can be converted into the optimization problem.Figure 3 Generation of the traffic state profiles: (a) lane-changing maneuver; (b) overtaking maneuver. (a) (b) ### 3.2.1. Optimal Traffic State Profile Planning The objective function of the constrained nonlinear optimization model is shown in equation (13). a0,i and j0,i denote the acceleration and jerk of the ego vehicle at ti, respectively, and tf denotes the total driving time:(13)minJ=1I∑i=1Ia0,i2+1I∑i=1Ij0,i2+tf2.During the lane-changing or overtaking process, the passengers will feel comfortable when the accelerations and jerks of the ego vehicle are small, soa0,i and j0,i are used to reflect the comfort in equation (13). The total driving time tf can represent the efficiency. Short time indicates high efficiency. ### 3.2.2. Collision-Avoidance Constraint To avoid the potential collisions, the ego vehicle should maintain sufficient distance from the surrounding vehicles during lane-changing or overtaking maneuver. For illustrative purposes, the study assumes that the vehicles drive along the straight roads. This limitation can be easily removed by introducing a curved road function. As shown in Figure4, there are three vehicles around the ego vehicle (vehicle 0). The leading vehicle on the current lane is indexed as vehicle 1, the following vehicle on the target lane is indexed as vehicle 2, and the leading vehicle on the target lane is indexed as vehicle 3.Figure 4 Relative positions of the vehicles.The safe distance between the ego vehicle and vehicle 1 is discussed first. As shown in Figure5, the ego vehicle is simplified a geometry of several circles with diameter m. Following Ziegler et al. [35], the Euclidean distance between the center of each circle of the ego vehicle and the center of the first circle of vehicle 1 should be greater than diameter m of the circle. Therefore, the distance constraint between the ego vehicle (vehicle 0) and vehicle 1 can be shown in the following equation:(14)m2≤x0,ik−x1,i12+y0,ik−y1,i12,fork=1,…,5,fori=1,…,I,where k is used to denote the index of circles; x0,ik and y0,ik represent the positions of the ego vehicle of the k-th circle at ti; and x1,i1 and y1,i1 represent the positions of the first circle of vehicle 1 at ti, whose values can be solved from (x1,i,y1,i). x1,i is calculated as x1,0+v1,0ti+1/2∗a1,0ti2, and y1,i is 0m on the straight lane.Figure 5 Mechanism of the collision-avoidance constraint.Then, the constraints of the collision avoidance between the ego vehicle and the other two vehicles are discussed. As shown in equations (15)-(16), the Euclidean distance between the ego vehicle and vehicles 2 and 3 should be greater than the diagonal length r of the vehicle:(15)r2≤x0,i−x2,i2+y0,i−y2,i2,fori=1,…,I,(16)r2≤x0,i−x3,i2+y0,i−y3,i2,fori=1,…,I,where x0,i, x2,i, and x3,i are the longitudinal central positions of the ego vehicle, vehicle 2, and vehicle 3 at ti, respectively. y0,i, y2,i, and y3,i are the lateral central positions of the ego vehicle, vehicle 2, and vehicle 3 at ti, respectively.x 0 , i and y0,i are the variables that need to be solved in the model. x2,i and x3,i can be calculated as x2,0+v2,0ti+1/2∗a2,0ti2 and x3,0+v3,0ti+1/2∗a3,0ti2, respectively; y2,i and y3,i are the lane width 3.75m. 
Here, the initial traffic states of relevant vehicles are directly derived from the sensors. ### 3.2.3. Acceleration Constraint Based on Car-following Maneuver In real-world traffic environment, the ego vehicle usually shifts to a car-following maneuver at the end of the lane-changing or overtaking process. To guarantee a comfort driving behavior, the planned final accelerationa0,I of the ego vehicle should be smaller than the car-following acceleration, as shown in the following equation:(17)a0,I≤Av0,I,v3,I,d0,3,where Av0,I,v3,I,d0,3 denotes the car-following acceleration of the ego vehicle at tI; v0,I and v3,I denote the speeds of the ego vehicle and vehicle 3 at tI, respectively; and d0,3 denotes the distance between the ego vehicle and vehicle 3 at tI.Here,v0,I is solved by the proposed nonlinear model; v3,I and d0,3 are calculated as follows, where l is the length of the vehicle:(18)v3,I=v3,0+a3,0ti,d0,3=x3,I−x0,I−l.Whena0,I is smaller than the car-following acceleration, the ego vehicle will shift from a lane-changing or overtaking maneuver to a car-following maneuver comfortably, i.e., no accelerate or decrease immediately.We choose a linear dynamic model [36] as the underlying car-following model. The car-following acceleration of the ego vehicle Av0,I,v3,I,d0,3 can be given as follows:(19)Av0,I,v3,I,d0,3=w1d0,3−Δs∗+w2v3,I−v0,I,where w1 and w2 denote the variation rate of the distance gap and the variation rate of the speed gap between the ego vehicle and vehicle 3, respectively; Δs∗ denotes the ideal distance gap between the ego vehicle and vehicle 3. The function of the three parameters is to help the ego vehicle reach the equilibrium condition as soon as possible by adjusting its acceleration. Following Pariota et al. [36], the parameters in the model are set as w1=0.0343,w2=0.948, and Δs∗=30. ### 3.2.4. Traffic State Constraint The speedv0,i, acceleration a0,i, and jerk j0,i of the ego vehicle at ti can be obtained according to equations (9)–(11). To ensure the driving safety and comfort, the total lane-changing or overtaking time and traffic states of the ego vehicle should be simultaneously constrained as follows:(20)0<tf<tf,max,vmin<v0,i<vmax,fori=1,…,I,amin<a0,i<amax,fori=1,…,I,jmin<j0,i<jmax,fori=1,…,I,where tf,max, vmax, amax, and jmax denote the allowable maximum total time, speed, acceleration, and jerk, respectively. vmin, amin, and jmin denote the allowable minimum speed, acceleration, and jerk, respectively.In the above constraints, the speed of the ego vehicle should always be positive and not exceed the maximum allowable speed, i.e., no stop or backward motion. Moreover, the smaller the jerk, the smaller the acceleration variation, and the higher the driving comfort degree [37]. ## 3.2.1. Optimal Traffic State Profile Planning The objective function of the constrained nonlinear optimization model is shown in equation (13). a0,i and j0,i denote the acceleration and jerk of the ego vehicle at ti, respectively, and tf denotes the total driving time:(13)minJ=1I∑i=1Ia0,i2+1I∑i=1Ij0,i2+tf2.During the lane-changing or overtaking process, the passengers will feel comfortable when the accelerations and jerks of the ego vehicle are small, soa0,i and j0,i are used to reflect the comfort in equation (13). The total driving time tf can represent the efficiency. Short time indicates high efficiency. ## 3.2.2. 
Collision-Avoidance Constraint To avoid the potential collisions, the ego vehicle should maintain sufficient distance from the surrounding vehicles during lane-changing or overtaking maneuver. For illustrative purposes, the study assumes that the vehicles drive along the straight roads. This limitation can be easily removed by introducing a curved road function. As shown in Figure4, there are three vehicles around the ego vehicle (vehicle 0). The leading vehicle on the current lane is indexed as vehicle 1, the following vehicle on the target lane is indexed as vehicle 2, and the leading vehicle on the target lane is indexed as vehicle 3.Figure 4 Relative positions of the vehicles.The safe distance between the ego vehicle and vehicle 1 is discussed first. As shown in Figure5, the ego vehicle is simplified a geometry of several circles with diameter m. Following Ziegler et al. [35], the Euclidean distance between the center of each circle of the ego vehicle and the center of the first circle of vehicle 1 should be greater than diameter m of the circle. Therefore, the distance constraint between the ego vehicle (vehicle 0) and vehicle 1 can be shown in the following equation:(14)m2≤x0,ik−x1,i12+y0,ik−y1,i12,fork=1,…,5,fori=1,…,I,where k is used to denote the index of circles; x0,ik and y0,ik represent the positions of the ego vehicle of the k-th circle at ti; and x1,i1 and y1,i1 represent the positions of the first circle of vehicle 1 at ti, whose values can be solved from (x1,i,y1,i). x1,i is calculated as x1,0+v1,0ti+1/2∗a1,0ti2, and y1,i is 0m on the straight lane.Figure 5 Mechanism of the collision-avoidance constraint.Then, the constraints of the collision avoidance between the ego vehicle and the other two vehicles are discussed. As shown in equations (15)-(16), the Euclidean distance between the ego vehicle and vehicles 2 and 3 should be greater than the diagonal length r of the vehicle:(15)r2≤x0,i−x2,i2+y0,i−y2,i2,fori=1,…,I,(16)r2≤x0,i−x3,i2+y0,i−y3,i2,fori=1,…,I,where x0,i, x2,i, and x3,i are the longitudinal central positions of the ego vehicle, vehicle 2, and vehicle 3 at ti, respectively. y0,i, y2,i, and y3,i are the lateral central positions of the ego vehicle, vehicle 2, and vehicle 3 at ti, respectively.x 0 , i and y0,i are the variables that need to be solved in the model. x2,i and x3,i can be calculated as x2,0+v2,0ti+1/2∗a2,0ti2 and x3,0+v3,0ti+1/2∗a3,0ti2, respectively; y2,i and y3,i are the lane width 3.75m. Here, the initial traffic states of relevant vehicles are directly derived from the sensors. ## 3.2.3. Acceleration Constraint Based on Car-following Maneuver In real-world traffic environment, the ego vehicle usually shifts to a car-following maneuver at the end of the lane-changing or overtaking process. 
### 3.3. Trajectory Planning Based on the Nonlinear Model

After receiving the lane-changing or overtaking decision, the proposed trajectory planning model is invoked. The steps are as follows:

(1) Traffic information sensing: collect the current traffic information of the relevant vehicles through the advanced sensors, GPS, and digital maps.

(2) Path generation: based on the collected traffic information, represent the lane-changing or overtaking path by the quintic polynomial curve and solve the unknown parameters of the path function from the boundary conditions.

(3) Traffic state profile generation: considering the safety and comfort constraints, establish a nonlinear optimization model to solve the optimal traffic state profiles of the ego vehicle along the path.

(4) Lane-changing or overtaking trajectory planning: if a feasible solution of the nonlinear optimization model is found, the lane-changing or overtaking trajectory of the ego vehicle can be planned. If there is no feasible solution, the ego vehicle stays in the current lane, and the planner iterates the above three steps until a feasible trajectory is found.

The sequential quadratic programming (SQP) algorithm is selected to solve the nonlinear optimization model. Previous studies [19, 33, 34] show that this algorithm reliably finds feasible solutions for nonlinear problems of this kind. Through multiple iterations, the SQP algorithm yields the optimal traffic state profiles of the ego vehicle, from which the optimal lane-changing or overtaking trajectory is obtained.
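The paper's implementation is in MATLAB; purely as an illustration of the optimization step, the sketch below poses a simplified version of objective (13) under the bound constraints (20) in Python and solves it with SciPy's SLSQP routine, an SQP-type method. Following Section 3.2, $b_0$, $b_1$, $b_2$ are fixed by the initial conditions (equation (12)) and $b_3$, $b_4$, $b_5$, $t_f$ are treated as free variables; the sample moments are taken uniformly, the terminal distance condition and the collision and car-following constraints are omitted for brevity, and the initial states are assumed values.

```python
import numpy as np
from scipy.optimize import minimize

I = 20                          # number of time intervals (Table 1)
s0, v0, a0 = 0.0, 14.5, -0.6    # assumed initial distance, speed, acceleration
b0, b1, b2 = s0, v0, a0 / 2.0   # fixed by the initial conditions, equation (12)

def states(z):
    """Sample speed, acceleration, and jerk (equations (9)-(11)) at t_i = i*tf/I."""
    b3, b4, b5, tf = z
    t = np.linspace(tf / I, tf, I)
    v = b1 + 2*b2*t + 3*b3*t**2 + 4*b4*t**3 + 5*b5*t**4
    a = 2*b2 + 6*b3*t + 12*b4*t**2 + 20*b5*t**3
    j = 6*b3 + 24*b4*t + 60*b5*t**2
    return v, a, j

def objective(z):
    """Comfort-plus-efficiency cost of equation (13)."""
    v, a, j = states(z)
    return np.mean(a**2) + np.mean(j**2) + z[3]**2

def ineq(z):
    """Bound constraints (20), stacked as g(z) >= 0 for SLSQP."""
    v, a, j = states(z)
    return np.concatenate([v, 30.0 - v, a + 3.0, 3.0 - a, j + 3.0, 3.0 - j])

res = minimize(objective, x0=np.array([0.0, 0.0, 0.0, 8.0]), method="SLSQP",
               constraints=[{"type": "ineq", "fun": ineq}],
               bounds=[(None, None)] * 3 + [(1.0, 13.0)])   # tf below tf,max
print(res.success, res.x)
```

If the solver reports no feasible solution, step (4) above keeps the ego vehicle in the current lane and the loop repeats with fresh sensor data.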
## 4. Simulation and Discussion

In this section, the types of test scenarios are described first; then, the performance characteristics of the simulation results are shown; finally, three detailed trajectory planning cases are demonstrated.

### 4.1. Scenario Description and Simulation Result Analysis

Various test scenarios should be considered to evaluate the proposed model. Yang et al. [32] have proposed that lane-changing maneuvers in real-world traffic can be roughly categorized into two types: the ego vehicle is located either in front of vehicle 2 or behind vehicle 2 at the initial moment. Similar to Yang et al. [32], we also distinguish two types of overtaking maneuver according to the initial relative positions of the ego vehicle and vehicle 2 (see Figures 6(a) and 6(b)). In the figure, the solid vehicles indicate current positions and the dashed vehicle indicates the final position of the ego vehicle.

Figure 6: Overtaking positions of vehicles: (a) first type of scenario; (b) second type of scenario.

As shown in Figure 6(a), under the first type of scenario, the ego vehicle is located in front of vehicle 2 at the initial moment and can change lane directly into the target gap between vehicle 2 and vehicle 3. Under the second type of scenario (see Figure 6(b)), the ego vehicle is located behind vehicle 2 at the initial moment and needs to overtake vehicle 2 to reach the target gap between vehicle 2 and vehicle 3.

Furthermore, 100 random overtaking test cases for each type are generated to verify the feasibility and effectiveness of the proposed trajectory planning method, with MATLAB as the simulation platform. Table 1 presents the relevant parameters of the model, including the lower and upper boundaries of the traffic states and the constant quantities. Given the current traffic information as input, the planner automatically produces a comfortable and safe trajectory for the ego vehicle.

Table 1: Design parameters of the model.

| Symbol | Explanation | Value |
| --- | --- | --- |
| $I$ | Number of time intervals | 20 |
| $t_{f,\max}$ | Maximum total driving time | 13 s |
| $v_{\min}$ | Minimum speed | 0 m/s |
| $v_{\max}$ | Maximum speed | 30 m/s |
| $a_{\min}$ | Minimum acceleration | −3 m/s² |
| $a_{\max}$ | Maximum acceleration | 3 m/s² |
| $j_{\min}$ | Minimum jerk | −3 m/s³ |
| $j_{\max}$ | Maximum jerk | 3 m/s³ |
| $m$ | Diameter of the covering circles | 2.50 m |
| $r$ | Diagonal length of the vehicle | 5.12 m |
| $l$ | Length of the vehicle | 4.80 m |
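A test harness for the random cases could look like the following sketch; the sampling ranges and the `plan_trajectory` interface are hypothetical stand-ins for illustration and are not specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_case(second_type=False):
    """Draw one random initial condition; the ranges are illustrative only.
    In second-type cases the ego vehicle starts behind vehicle 2."""
    x2 = 100.0
    dx = rng.uniform(5.0, 20.0)
    x0 = x2 - dx if second_type else x2 + dx
    return {
        "x": [x0, x0 + rng.uniform(5.0, 15.0), x2, x2 + rng.uniform(15.0, 30.0)],
        "v": rng.uniform(10.0, 20.0, size=4).tolist(),  # speeds of vehicles 0..3
        "a": rng.uniform(-2.0, 1.0, size=4).tolist(),   # accelerations of vehicles 0..3
    }

def feasibility_rate(plan_trajectory, n=100, second_type=False):
    """Fraction of n random cases for which the planner finds a trajectory;
    plan_trajectory is assumed to return None when the NLP is infeasible."""
    return sum(plan_trajectory(sample_case(second_type)) is not None
               for _ in range(n)) / n
```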
With the proposed model, 82 of the 100 test cases yield a feasible overtaking trajectory under the first type of scenario and 76 of the 100 under the second type, which shows that the method applies to most traffic conditions. The distribution of the overtaking time is shown in Figure 7. For the first type of overtaking scenario, the driving time of most cases lies between 6 and 7 s; the shortest time is 4.72 s, the longest is 10.24 s, and the average is approximately 6.36 s. For the second type, most driving times lie between 7 and 8 s; the shortest time is 4.68 s, the longest is 11.56 s, and the average is approximately 7.71 s.

Figure 7: Distribution of the driving time.

A comprehensive summary of the simulation results is given in Table 2. Using the current traffic states of the vehicles as input, the overtaking time $t_f$ is determined automatically by the proposed model. Since the ego vehicle first needs to overtake vehicle 2 to reach the target gap under the second type of scenario, the mean value of $t_f$ there (7.71 s) is larger than under the first type (6.36 s). Moreover, the variations in the trajectory performance parameters (speed, acceleration, and jerk) in Table 2 are small, which supports ride comfort.

Table 2: Simulation results under the test scenarios.

| Variable | First type of scenario | Second type of scenario |
| --- | --- | --- |
| $t_{f,\mathrm{mean}}$ (s) | 6.36 | 7.71 |
| $t_{f,\max}$ (s) | 10.24 | 11.56 |
| $\lvert v_{\max} - v_0 \rvert$ (m/s) | 3.63 | 4.24 |
| $a_{\max}$ (m/s²) | 1.52 | 2.89 |
| $j_{\max}$ (m/s³) | 1.96 | 2.73 |
| $\Delta x_{f,\mathrm{mean}}$ (m) | 106.85 | 135.46 |
| $\Delta s_{f,\mathrm{mean}}$ (m) | 106.98 | 136.18 |

### 4.2. Numerical Examples

In this section, two specific cases are demonstrated in detail. Table 3 shows the initial traffic states of the vehicles under the two scenarios. In scenario 1, the ego vehicle (111.59 m) is located in front of vehicle 2 (100.00 m), which belongs to the first type of scenario. In scenario 2, the ego vehicle (89.17 m) is located behind vehicle 2 (100.00 m), which belongs to the second type. The initial speed gap between the ego vehicle and vehicle 2 under scenario 1 (4.87 m/s) is larger than under scenario 2 (0.95 m/s).

Table 3: Initial traffic states of the vehicles under the two detailed scenarios.

| Scenario | Vehicle | $x_{\cdot,0}$ (m) | $v_{\cdot,0}$ (m/s) | $a_{\cdot,0}$ (m/s²) |
| --- | --- | --- | --- | --- |
| 1 | 0 (ego) | 111.59 | 14.52 | −0.61 |
| 1 | 1 | 119.95 | 12.08 | 0.45 |
| 1 | 2 | 100.00 | 19.39 | −1.33 |
| 1 | 3 | 123.47 | 18.62 | −0.29 |
| 2 | 0 (ego) | 89.17 | 14.56 | −2.11 |
| 2 | 1 | 103.80 | 12.08 | 0.19 |
| 2 | 2 | 100.00 | 13.61 | −0.23 |
| 2 | 3 | 123.60 | 14.31 | −0.80 |
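To relate Table 3 to the model inputs, the sketch below propagates the scenario-1 surrounding vehicles with the constant-acceleration kinematics used by the collision constraints ($x_{j,i} = x_{j,0} + v_{j,0} t_i + \tfrac{1}{2} a_{j,0} t_i^2$), evaluated at the scenario-1 end time of 7.14 s reported below; the dictionary layout is our own convention.

```python
# Scenario 1 initial states from Table 3: (position m, speed m/s, acceleration m/s^2).
scenario1 = {
    0: (111.59, 14.52, -0.61),  # ego vehicle
    1: (119.95, 12.08,  0.45),  # leader on the current lane
    2: (100.00, 19.39, -1.33),  # follower on the target lane
    3: (123.47, 18.62, -0.29),  # leader on the target lane
}

def predict(state, t):
    """Constant-acceleration prediction of position and speed after t seconds."""
    x, v, a = state
    return x + v * t + 0.5 * a * t**2, v + a * t

# Predicted states of the surrounding vehicles at the end time tf = 7.14 s.
for vid in (1, 2, 3):
    x, v = predict(scenario1[vid], 7.14)
    print(f"vehicle {vid}: x = {x:.2f} m, v = {v:.2f} m/s")
```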
Figures 8 and 9 present the overtaking trajectories and the speed variations of the vehicles under scenario 1, respectively. Figure 8 shows that the overtaking longitudinal distance $x_f$ is 115 m and the total driving time $t_f$ is 7.14 s. Figure 9 indicates that the ego vehicle first decelerates slightly to avoid collisions with the surrounding vehicles, then accelerates to a maximum value, and finally reduces its speed slowly to a stable state. The ego vehicle thus dynamically adjusts its speed to adapt to the speed changes of the surrounding vehicles.

Figure 8: Trajectories under scenario 1: (a) trajectories of the ego vehicle, vehicle 1, and vehicle 2; (b) trajectories of the ego vehicle, vehicle 1, and vehicle 3.

Figure 9: Speeds of vehicles under scenario 1.

Figures 10 and 11 display the overtaking trajectories and the speed variations of the vehicles under scenario 2, respectively. The overtaking longitudinal distance $x_f$ is 147 m, and the total driving time $t_f$ is 9.02 s. Figure 11 shows that the ego vehicle reduces its speed from 0 s to 1.69 s, changes its speed only slightly from 1.69 s to 7.35 s, and after 7.35 s accelerates to a value larger than the final speed of vehicle 1.

Figure 10: Trajectories under scenario 2: (a) trajectories of the ego vehicle, vehicle 1, and vehicle 2; (b) trajectories of the ego vehicle, vehicle 1, and vehicle 3.

Figure 11: Speeds of vehicles under scenario 2.

Figure 12 compares the speed of the ego vehicle under scenarios 1 and 2. Under scenario 1, the ego vehicle reduces its speed only slightly at the beginning because the large initial speed gap (4.87 m/s) gives it enough room to adjust its position. Under scenario 2, by contrast, the initial speed gap between the ego vehicle and vehicle 2 is small (0.95 m/s), so the ego vehicle must decelerate quickly at the beginning to avoid a collision with vehicle 2.

Figure 12: Speed comparison of the ego vehicle.

Figures 13 and 14 show the planned accelerations and jerks of the ego vehicle under the two scenarios, respectively. The figures indicate that different initial traffic states have a significant impact on the acceleration and jerk profiles. These trajectory performance variables remain small under both scenarios, which ensures passenger comfort during the overtaking process.

Figure 13: Acceleration comparison of the ego vehicle.

Figure 14: Jerk comparison of the ego vehicle.

### 4.3. Trajectory Correction Scenario

If the traffic states of the surrounding vehicles change suddenly, the planned trajectory of the ego vehicle must be corrected to prevent a potential collision. The corrected trajectory can still be generated with the proposed model. A trajectory correction scenario is described in detail below; Table 4 shows the initial traffic states of the vehicles.

Table 4: Initial traffic states of the vehicles under the correction scenario.

| Vehicle | $x_{\cdot,0}$ (m) | $v_{\cdot,0}$ (m/s) | $a_{\cdot,0}$ (m/s²) |
| --- | --- | --- | --- |
| 0 (ego) | 100.00 | 15.26 | 0.62 |
| 1 | 108.47 | 13.01 | 0.53 |
| 2 | 97.59 | 13.27 | −0.16 |
| 3 | 110.22 | 14.15 | 0.24 |

In this scenario, the ego vehicle originally intends to change lanes. However, vehicle 3 suddenly decelerates at 2.62 s. The ego vehicle perceives that the distance between itself and vehicle 3 will become very short and that a collision may occur. Therefore, the lane-changing trajectory is corrected, and the ego vehicle returns to the original lane. Figure 15 presents the trajectories of the relevant vehicles in this scenario. The simulation results indicate that the total longitudinal distance of the ego vehicle is 91.73 m and the driving time is 6.64 s. During the trajectory correction process, we assume that vehicle 2 follows its leading vehicle cooperatively, so the motion of vehicle 2 after 2.62 s is not discussed.

Figure 15: Trajectories under the correction scenario: (a) trajectories of the ego vehicle, vehicle 1, and vehicle 2; (b) trajectories of the ego vehicle, vehicle 1, and vehicle 3.

In Figure 15, the ego vehicle successfully returns to the original lane by tracking the corrected trajectory and does not collide with the surrounding vehicles, which shows that the proposed model is effective in emergent traffic conditions.

Furthermore, the real-time distances of the ego vehicle and the surrounding vehicles are shown in Figure 16, where distance denotes the driving distance from the origin of the lane to the current position of each vehicle. In Figure 16, vehicle 3 decelerates at 2.62 s, and its speed then decreases rapidly.
If the ego vehicle did not correct its trajectory, it would be likely to collide with vehicle 3.

Figure 16: Driving distance of the vehicles under the correction scenario.
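The replanning trigger in this correction scenario can be sketched as follows; the safety margin and the planner hooks are hypothetical, since the paper reports only that the predicted spacing to vehicle 3 becomes too short.

```python
def min_predicted_gap(x0, v0, x3, v3, a3, horizon=4.0, steps=20, length=4.8):
    """Smallest predicted bumper-to-bumper gap to vehicle 3 over the horizon,
    assuming constant ego speed and constant (possibly braking) acceleration
    for vehicle 3."""
    gaps = []
    for k in range(1, steps + 1):
        t = horizon * k / steps
        x3_t = x3 + v3 * t + 0.5 * a3 * t**2
        gaps.append(x3_t - (x0 + v0 * t) - length)
    return min(gaps)

def correction_step(planner, ego, veh3, margin=5.0):
    """Abort the lane change and replan back to the original lane whenever the
    predicted gap to the decelerating vehicle 3 falls below the safety margin."""
    x0, v0 = ego
    x3, v3, a3 = veh3
    if min_predicted_gap(x0, v0, x3, v3, a3) < margin:
        return planner.replan_to_original_lane()   # hypothetical planner hook
    return planner.keep_current_trajectory()       # hypothetical planner hook
```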
## 5. Conclusion

This paper proposes a universal trajectory planning method for automated lane-changing and overtaking maneuvers. The method first generates a two-dimensional path in the Cartesian coordinate system to connect the initial position with the final position, and then a nonlinear mathematical optimization model is established to obtain the traffic state profiles along the path.
The combination of a path and its traffic state profiles determines a spatiotemporal trajectory. Moreover, to ensure the safety and comfort of the ego vehicle during the driving process, the study discretizes the total time horizon into several time intervals and solves the unknown parameters of the profile function, yielding continuous and smooth traffic state profiles. Furthermore, a series of numerical examples is conducted under typical scenarios. The simulation results demonstrate that the intelligent vehicle can effectively avoid potential collisions when tracking the planned trajectory. Future work will concentrate on trajectory replanning and tracking for a more realistic lane-changing and overtaking model.

---
*Source: 1023975-2020-04-25.xml*
--- ## Abstract Lane-changing and overtaking are conventional maneuvers on roads, and the reference trajectory is one of the prerequisites to execute these maneuvers. This study proposes a universal trajectory planning method for automated lane-changing and overtaking maneuvers, in which the trajectory is regarded as the combination of a path and its traffic state profiles. The two-dimensional path is represented by a suitable curve to connect the initial position with final position of the ego vehicle. Based on the planned path, its traffic state profiles are generated by solving a nonlinear mathematical optimization model. Moreover, the study discretizes the time horizon into several time intervals and determines the parameters to obtain the continuous and smooth profiles, which guarantees the safety and comfort of the ego vehicle. Finally, a series of simulation experiments are performed in the MATLAB platform and the results show the feasibility and effectiveness of the proposed universal trajectory planning method. --- ## Body ## 1. Introduction Automated driving system can significantly alleviate traffic congestion, ensure traffic safety, and improve road capacity [1–4]. In order to realize the intelligent vehicle as soon as possible, some automated driving technologies are being tested on roads. Among the test items, the lane-changing and overtaking maneuvers are included [5–8]. To execute these maneuvers automatically, a feasible reference trajectory is needed. The ego vehicle will drive along the planned trajectory to finish the relevant maneuvers. This paper proposes a novel universal trajectory planning method for the automated lane-changing and overtaking maneuvers. With the development of advanced sensors and vehicle-to-vehicle (V2V) communications, real-time traffic information of vehicles, such as position, speed, and acceleration, can be obtained conveniently and accurately [9–12], and the trajectory is planned based on these traffic state information.Different types of curves have different performance characteristics in terms of continuity and smoothness. Therefore, selecting a suitable curve function to represent the trajectory according to the traffic condition is important. In the relevant trajectory planning studies, the commonly used geometric curves include the spline curve [13–17], trapezoidal acceleration curve [18–22], Bezier curve [23–26], and polynomial curve [27–33].Ziegler et al. [13] employed a quantic spline curve to generate the trajectory. They divided the space into multiple geometric graphs and adopted the shortest path algorithm to search for the feasible trajectory for each graph. Rousseau et al. [17] utilized the trajectory planning method based on the B-spline curve. The parameters of curve function were determined by minimizing the driving time. Yin et al. [18] described a trapezoidal acceleration motion planning model. The planned trajectory could avoid the potential collisions with obstacles by analyzing the variation law of the actuator. Chen et al. [22] presented a trajectory planning method using the 3D Bezier curve, which enhanced the flexibility of trajectory and guaranteed the conformity to the realistic driving maneuver.Among the commonly used geometric curves, the polynomial curve is the most widely used in trajectory planning. This type of curve can plan the smooth trajectory with a low computational cost, and the order of polynomial can be tuned for obtaining the desired trajectory performance. Nelson et al. 
[27] were the first to propose the polynomial method. They used the polynomial curve to replace the arc curve for planning the continuous-curvature trajectory, which guaranteed the traceability of the trajectory. Nilsson et al. [29] presented a trajectory planning model using the discrete quadratic polynomial curve. They divided the planning area into three regions (i.e., preregion, periregion, and postregion) and solved the optimal positions of the ego vehicle in each region to determine the trajectory function. Zhang et al. [30] adopted two time-dependent cubic polynomial functions to describe the longitudinal and lateral motions, respectively. They concluded that the continuity of the polynomial curve could ensure the robustness of trajectory. Yang et al. [32] proposed a trajectory planning model based on the cubic polynomial curve. They focused on the trajectory replanning problem in dynamic driving environments. Wei et al. [33] increased the polynomial function to the fifth order and solved the optimal trajectory by treating the driving time and movement distance as free variables.However, there are still several common disadvantages existing in the previous studies. First, these trajectory planning models usually only focus on the automated lane-changing or overtaking maneuver, while neglecting the universality of the model for lane-changing and overtaking maneuvers. Second, the traffic states of the ego vehicle solved by the previous model usually were discretized, which might result in hard tracking or even crashes. Third, most of the studies designed the lane-changing or overtaking scenarios simply, in which they assumed that the surrounding environment was static, the speeds of other vehicles were constants, or only one or two vehicles were around the ego vehicle. These assumptions were inconsistent with the real-world traffic characteristics.Aiming at these problems, a trajectory planning method based on the mathematical optimization framework is proposed in this paper. The contributions can be summarized as follows:(1) A universal framework of trajectory planning for the automated lane-changing and overtaking maneuvers is proposed. The study considers that the combination of a path and its traffic state profiles determines a complete spatiotemporal trajectory. The suitable curve is employed to represent the two-dimensional path. And then a nonlinear optimization model is established to generate the traffic state profiles, whose computational cost is low since the path has been planned.(2) The traffic state profiles are continuous and smooth. In previous studies, the traffic states of the ego vehicle solved by the nonlinear model usually were discretized. Our proposed model discretizes the time horizon into several time intervals and solves the unknown parameters of the function to generate the continuous and smooth traffic state profiles.The rest of the paper is organized as follows. Section2 introduces the framework of the universal trajectory planning method. Section 3 describes the technical details of the proposed method, including the generation process of the path and traffic state profiles. Section 4 presents the simulation results and several typical numerical examples. Section 5 concludes the study and discusses the future work. ## 2. Framework of the Universal Trajectory Planning Method Figure1 presents the framework of the proposed universal trajectory planning method, which plans the trajectory through four stages. 
In stage 1, the current traffic states of the vehicles are collected via GPS, digital maps, and sensors. In stage 2, based on the known boundary conditions, a suitable curve is employed to generate the two-dimensional path connecting the initial position and the final position. In stage 3, the objective function and relevant constraints are established to solve the traffic state profiles along the path, such as speed, acceleration, and jerk profiles. The combination of the path and its traffic state profiles determines a complete spatiotemporal trajectory. In stage 4, the generated trajectory parameters are used as inputs for the actuators to execute the desired control.Figure 1 Framework of the trajectory planning method. ## 3. The Universal Trajectory Planning Method Planning a reference trajectory before changing lane or overtaking is necessary and important. The quality of the trajectory has a direct impact on the performance of the automated driving behaviors. This study considers that the combination of a path and its traffic state profiles determines a complete spatiotemporal trajectory. Therefore, in the following sections, we will describe the universal generation method of path and traffic state profiles in detail. ### 3.1. Path Generation A suitable curve should be selected to plan the lane-changing or overtaking path. Here, we choose the quintic polynomial, which has the advantages of a continuous third derivative and smooth curvature, as shown in the following equation:(1)yx=a0+a1x+a2x2+a3x3+a4x4+a5x5,where x and y are the longitudinal and lateral positions of the ego vehicle, respectively. a0, …, a5 are unknown parameters that need to be calculated.As for the lane-changing maneuver, the boundary traffic information can be collected with the help of sensors, GPS, and digital maps. Based on such information, the following equations can be obtained:(2)y0=a0+a1x0+a2x02+a3x03+a4x04+a5x05,(3)yf=a0+a1xf+a2xf2+a3xf3+a4xf4+a5xf5,(4)c0=a1+2a2x0+3a3x02+4a4x03+5a5x04,(5)cf=a1+2a2xf+3a3xf2+4a4xf3+5a5xf4,(6)k0=2a2+6a3x0+12a4x02+20a5x03,(7)kf=2a2+6a3xf+12a4xf2+20a5xf3,where x0, y0, c0, and k0 represent the longitudinal position, the lateral position, the derivative, and the curvature of the lane curve at current moment, respectively. xf, yf, cf, and kf represent the longitudinal position, the lateral position, the derivative, and the curvature of the lane curve at final moment, respectively. By solving equations (2)–(7), the values of a0, …, a5 are obtained, and then the lane-changing path can be generated.As for overtaking maneuver, the same method as lane-changing is adopted to generate the path. A complete overtaking path is composed of two lane-changing paths, as shown in Figure2. In the figure, vehicle 0 is the ego vehicle and vehicles 1, 2, and 3 are the surrounding vehicles.Figure 2 Overtaking path of the ego vehicle.After collecting the boundary traffic information of the ego vehicle in lane-changing or overtaking, the unknown parameters of the path function can be solved and then the suitable path is generated. The study chooses the polynomial curve to represent the path, which has the advantages of a continuous derivative and smooth curvature. Other types of curve can also be employed according to the traffic scenario, while the framework of the path generation is unchanged. ### 3.2. 
Traffic State Profile Generation In this section, a nonlinear programming model is constructed to generate the traffic state profiles (speed, acceleration, and jerk profiles) for the automated lane-changing or overtaking maneuver. Since the discrete traffic states of the ego vehicle may result in crashes, our model discretizes the time horizon and solves the unknown parameters of the function to generate the continuous and smooth traffic state profiles. The driving distance profile is shown in the following equation:(8)st=b0+b1t+b2t2+b3t3+b4t4+b5t5.The corresponding speed, acceleration, and jerk profiles are shown in the following equations:(9)vt=s˙t=b1+2b2t+3b3t2+4b4t3+5b5t4,(10)at=s¨t=2b2+6b3t+12b4t2+20b5t3,(11)jt=s…t=6b3+24b4t+60b5t2.Six unknown parametersb0, …, b5 need to be determined. Based on the known initial traffic states of the ego vehicle, three equations are derived as follows:(12)s0=s0,0,v0=v0,0,a0=a0,0,where s0,0, v0,0, and a0,0 are the current driving distance, speed, and acceleration of the ego vehicle, respectively. The values of b0, b1, and b2 can be obtained by solving equation (12). However, the unknown parameters b3, b4, and b5 and total driving time tf are not easy to determine, and the arbitrary values may affect the solution efficiency of the model [29, 34]. Therefore, we treat b3, b4, b5, and tf as free variables and establish a nonlinear model to search for the optimal traffic state profiles.The study dividestf into I time intervals and uses tii=1,…,I to denote the end moment of each time interval. Let t0=0 and set constraint ti>ti−1. As shown in Figure 3, si is the driving distance of the ego vehicle at ti. x0,i and y0,i are the longitudinal and lateral positions of the ego vehicle at ti, respectively. s0=0 and si is calculated as ∫0x0,i1+dy/dy2dx. Furthermore, ti for i=1,…,I are treated as free variables. The unknown parameters b3, b4, and b5 and driving time tf can be represented as functions of ti. Therefore, the planning traffic state profile problem can be converted into the optimization problem.Figure 3 Generation of the traffic state profiles: (a) lane-changing maneuver; (b) overtaking maneuver. (a) (b) #### 3.2.1. Optimal Traffic State Profile Planning The objective function of the constrained nonlinear optimization model is shown in equation (13). a0,i and j0,i denote the acceleration and jerk of the ego vehicle at ti, respectively, and tf denotes the total driving time:(13)minJ=1I∑i=1Ia0,i2+1I∑i=1Ij0,i2+tf2.During the lane-changing or overtaking process, the passengers will feel comfortable when the accelerations and jerks of the ego vehicle are small, soa0,i and j0,i are used to reflect the comfort in equation (13). The total driving time tf can represent the efficiency. Short time indicates high efficiency. #### 3.2.2. Collision-Avoidance Constraint To avoid the potential collisions, the ego vehicle should maintain sufficient distance from the surrounding vehicles during lane-changing or overtaking maneuver. For illustrative purposes, the study assumes that the vehicles drive along the straight roads. This limitation can be easily removed by introducing a curved road function. As shown in Figure4, there are three vehicles around the ego vehicle (vehicle 0). 
The leading vehicle on the current lane is indexed as vehicle 1, the following vehicle on the target lane is indexed as vehicle 2, and the leading vehicle on the target lane is indexed as vehicle 3.Figure 4 Relative positions of the vehicles.The safe distance between the ego vehicle and vehicle 1 is discussed first. As shown in Figure5, the ego vehicle is simplified a geometry of several circles with diameter m. Following Ziegler et al. [35], the Euclidean distance between the center of each circle of the ego vehicle and the center of the first circle of vehicle 1 should be greater than diameter m of the circle. Therefore, the distance constraint between the ego vehicle (vehicle 0) and vehicle 1 can be shown in the following equation:(14)m2≤x0,ik−x1,i12+y0,ik−y1,i12,fork=1,…,5,fori=1,…,I,where k is used to denote the index of circles; x0,ik and y0,ik represent the positions of the ego vehicle of the k-th circle at ti; and x1,i1 and y1,i1 represent the positions of the first circle of vehicle 1 at ti, whose values can be solved from (x1,i,y1,i). x1,i is calculated as x1,0+v1,0ti+1/2∗a1,0ti2, and y1,i is 0m on the straight lane.Figure 5 Mechanism of the collision-avoidance constraint.Then, the constraints of the collision avoidance between the ego vehicle and the other two vehicles are discussed. As shown in equations (15)-(16), the Euclidean distance between the ego vehicle and vehicles 2 and 3 should be greater than the diagonal length r of the vehicle:(15)r2≤x0,i−x2,i2+y0,i−y2,i2,fori=1,…,I,(16)r2≤x0,i−x3,i2+y0,i−y3,i2,fori=1,…,I,where x0,i, x2,i, and x3,i are the longitudinal central positions of the ego vehicle, vehicle 2, and vehicle 3 at ti, respectively. y0,i, y2,i, and y3,i are the lateral central positions of the ego vehicle, vehicle 2, and vehicle 3 at ti, respectively.x 0 , i and y0,i are the variables that need to be solved in the model. x2,i and x3,i can be calculated as x2,0+v2,0ti+1/2∗a2,0ti2 and x3,0+v3,0ti+1/2∗a3,0ti2, respectively; y2,i and y3,i are the lane width 3.75m. Here, the initial traffic states of relevant vehicles are directly derived from the sensors. #### 3.2.3. Acceleration Constraint Based on Car-following Maneuver In real-world traffic environment, the ego vehicle usually shifts to a car-following maneuver at the end of the lane-changing or overtaking process. To guarantee a comfort driving behavior, the planned final accelerationa0,I of the ego vehicle should be smaller than the car-following acceleration, as shown in the following equation:(17)a0,I≤Av0,I,v3,I,d0,3,where Av0,I,v3,I,d0,3 denotes the car-following acceleration of the ego vehicle at tI; v0,I and v3,I denote the speeds of the ego vehicle and vehicle 3 at tI, respectively; and d0,3 denotes the distance between the ego vehicle and vehicle 3 at tI.Here,v0,I is solved by the proposed nonlinear model; v3,I and d0,3 are calculated as follows, where l is the length of the vehicle:(18)v3,I=v3,0+a3,0ti,d0,3=x3,I−x0,I−l.Whena0,I is smaller than the car-following acceleration, the ego vehicle will shift from a lane-changing or overtaking maneuver to a car-following maneuver comfortably, i.e., no accelerate or decrease immediately.We choose a linear dynamic model [36] as the underlying car-following model. 
The car-following acceleration of the ego vehicle Av0,I,v3,I,d0,3 can be given as follows:(19)Av0,I,v3,I,d0,3=w1d0,3−Δs∗+w2v3,I−v0,I,where w1 and w2 denote the variation rate of the distance gap and the variation rate of the speed gap between the ego vehicle and vehicle 3, respectively; Δs∗ denotes the ideal distance gap between the ego vehicle and vehicle 3. The function of the three parameters is to help the ego vehicle reach the equilibrium condition as soon as possible by adjusting its acceleration. Following Pariota et al. [36], the parameters in the model are set as w1=0.0343,w2=0.948, and Δs∗=30. #### 3.2.4. Traffic State Constraint The speedv0,i, acceleration a0,i, and jerk j0,i of the ego vehicle at ti can be obtained according to equations (9)–(11). To ensure the driving safety and comfort, the total lane-changing or overtaking time and traffic states of the ego vehicle should be simultaneously constrained as follows:(20)0<tf<tf,max,vmin<v0,i<vmax,fori=1,…,I,amin<a0,i<amax,fori=1,…,I,jmin<j0,i<jmax,fori=1,…,I,where tf,max, vmax, amax, and jmax denote the allowable maximum total time, speed, acceleration, and jerk, respectively. vmin, amin, and jmin denote the allowable minimum speed, acceleration, and jerk, respectively.In the above constraints, the speed of the ego vehicle should always be positive and not exceed the maximum allowable speed, i.e., no stop or backward motion. Moreover, the smaller the jerk, the smaller the acceleration variation, and the higher the driving comfort degree [37]. ### 3.3. Trajectory Planning Based on the Nonlinear Model After receiving the lane-changing or overtaking decision, our proposed trajectory planning model will work. The steps are as follows:(1) Traffic information sensingCollecting the current traffic information of the relevant vehicles through the advanced sensors, GPS, and digital maps.(2) Path generationBased on the collected traffic information, using the quintic polynomial curve to represent the lane-changing or overtaking path, the unknown parameters of the path function can be solved according to the boundary conditions.(3) Traffic state profile generationConsidering the safety and comfort constraints, establishing a nonlinear optimization model to solve the optimal traffic state profiles of the ego vehicle along the path.(4) Lane-changing or overtaking trajectory planningIf a feasible solution is found for the proposed nonlinear optimization model, the lane-changing or overtaking trajectory of the ego vehicle can be planned. However, if there is no feasible solution, the current lane will continue to be used, and the planner will iterate the above three steps until a feasible trajectory is planned.The sequence quadratic programming (SQP) algorithm is selected to solve the nonlinear optimization model. Previous studies [19, 33, 34] prove that this algorithm can find the feasible solution for the nonlinear problem well. Through multiple iterations, the optimal traffic state profiles of the ego vehicle are acquired based on the SQP algorithm. Therefore, the optimal lane-changing or overtaking trajectory can be planned. ## 3.1. Path Generation A suitable curve should be selected to plan the lane-changing or overtaking path. Here, we choose the quintic polynomial, which has the advantages of a continuous third derivative and smooth curvature, as shown in the following equation:(1)yx=a0+a1x+a2x2+a3x3+a4x4+a5x5,where x and y are the longitudinal and lateral positions of the ego vehicle, respectively. 
a0, …, a5 are unknown parameters that need to be calculated.As for the lane-changing maneuver, the boundary traffic information can be collected with the help of sensors, GPS, and digital maps. Based on such information, the following equations can be obtained:(2)y0=a0+a1x0+a2x02+a3x03+a4x04+a5x05,(3)yf=a0+a1xf+a2xf2+a3xf3+a4xf4+a5xf5,(4)c0=a1+2a2x0+3a3x02+4a4x03+5a5x04,(5)cf=a1+2a2xf+3a3xf2+4a4xf3+5a5xf4,(6)k0=2a2+6a3x0+12a4x02+20a5x03,(7)kf=2a2+6a3xf+12a4xf2+20a5xf3,where x0, y0, c0, and k0 represent the longitudinal position, the lateral position, the derivative, and the curvature of the lane curve at current moment, respectively. xf, yf, cf, and kf represent the longitudinal position, the lateral position, the derivative, and the curvature of the lane curve at final moment, respectively. By solving equations (2)–(7), the values of a0, …, a5 are obtained, and then the lane-changing path can be generated.As for overtaking maneuver, the same method as lane-changing is adopted to generate the path. A complete overtaking path is composed of two lane-changing paths, as shown in Figure2. In the figure, vehicle 0 is the ego vehicle and vehicles 1, 2, and 3 are the surrounding vehicles.Figure 2 Overtaking path of the ego vehicle.After collecting the boundary traffic information of the ego vehicle in lane-changing or overtaking, the unknown parameters of the path function can be solved and then the suitable path is generated. The study chooses the polynomial curve to represent the path, which has the advantages of a continuous derivative and smooth curvature. Other types of curve can also be employed according to the traffic scenario, while the framework of the path generation is unchanged. ## 3.2. Traffic State Profile Generation In this section, a nonlinear programming model is constructed to generate the traffic state profiles (speed, acceleration, and jerk profiles) for the automated lane-changing or overtaking maneuver. Since the discrete traffic states of the ego vehicle may result in crashes, our model discretizes the time horizon and solves the unknown parameters of the function to generate the continuous and smooth traffic state profiles. The driving distance profile is shown in the following equation:(8)st=b0+b1t+b2t2+b3t3+b4t4+b5t5.The corresponding speed, acceleration, and jerk profiles are shown in the following equations:(9)vt=s˙t=b1+2b2t+3b3t2+4b4t3+5b5t4,(10)at=s¨t=2b2+6b3t+12b4t2+20b5t3,(11)jt=s…t=6b3+24b4t+60b5t2.Six unknown parametersb0, …, b5 need to be determined. Based on the known initial traffic states of the ego vehicle, three equations are derived as follows:(12)s0=s0,0,v0=v0,0,a0=a0,0,where s0,0, v0,0, and a0,0 are the current driving distance, speed, and acceleration of the ego vehicle, respectively. The values of b0, b1, and b2 can be obtained by solving equation (12). However, the unknown parameters b3, b4, and b5 and total driving time tf are not easy to determine, and the arbitrary values may affect the solution efficiency of the model [29, 34]. Therefore, we treat b3, b4, b5, and tf as free variables and establish a nonlinear model to search for the optimal traffic state profiles.The study dividestf into I time intervals and uses tii=1,…,I to denote the end moment of each time interval. Let t0=0 and set constraint ti>ti−1. As shown in Figure 3, si is the driving distance of the ego vehicle at ti. x0,i and y0,i are the longitudinal and lateral positions of the ego vehicle at ti, respectively. s0=0 and si is calculated as ∫0x0,i1+dy/dy2dx. 
Furthermore, ti for i=1,…,I are treated as free variables. The unknown parameters b3, b4, and b5 and driving time tf can be represented as functions of ti. Therefore, the planning traffic state profile problem can be converted into the optimization problem.Figure 3 Generation of the traffic state profiles: (a) lane-changing maneuver; (b) overtaking maneuver. (a) (b) ### 3.2.1. Optimal Traffic State Profile Planning The objective function of the constrained nonlinear optimization model is shown in equation (13). a0,i and j0,i denote the acceleration and jerk of the ego vehicle at ti, respectively, and tf denotes the total driving time:(13)minJ=1I∑i=1Ia0,i2+1I∑i=1Ij0,i2+tf2.During the lane-changing or overtaking process, the passengers will feel comfortable when the accelerations and jerks of the ego vehicle are small, soa0,i and j0,i are used to reflect the comfort in equation (13). The total driving time tf can represent the efficiency. Short time indicates high efficiency. ### 3.2.2. Collision-Avoidance Constraint To avoid the potential collisions, the ego vehicle should maintain sufficient distance from the surrounding vehicles during lane-changing or overtaking maneuver. For illustrative purposes, the study assumes that the vehicles drive along the straight roads. This limitation can be easily removed by introducing a curved road function. As shown in Figure4, there are three vehicles around the ego vehicle (vehicle 0). The leading vehicle on the current lane is indexed as vehicle 1, the following vehicle on the target lane is indexed as vehicle 2, and the leading vehicle on the target lane is indexed as vehicle 3.Figure 4 Relative positions of the vehicles.The safe distance between the ego vehicle and vehicle 1 is discussed first. As shown in Figure5, the ego vehicle is simplified a geometry of several circles with diameter m. Following Ziegler et al. [35], the Euclidean distance between the center of each circle of the ego vehicle and the center of the first circle of vehicle 1 should be greater than diameter m of the circle. Therefore, the distance constraint between the ego vehicle (vehicle 0) and vehicle 1 can be shown in the following equation:(14)m2≤x0,ik−x1,i12+y0,ik−y1,i12,fork=1,…,5,fori=1,…,I,where k is used to denote the index of circles; x0,ik and y0,ik represent the positions of the ego vehicle of the k-th circle at ti; and x1,i1 and y1,i1 represent the positions of the first circle of vehicle 1 at ti, whose values can be solved from (x1,i,y1,i). x1,i is calculated as x1,0+v1,0ti+1/2∗a1,0ti2, and y1,i is 0m on the straight lane.Figure 5 Mechanism of the collision-avoidance constraint.Then, the constraints of the collision avoidance between the ego vehicle and the other two vehicles are discussed. As shown in equations (15)-(16), the Euclidean distance between the ego vehicle and vehicles 2 and 3 should be greater than the diagonal length r of the vehicle:(15)r2≤x0,i−x2,i2+y0,i−y2,i2,fori=1,…,I,(16)r2≤x0,i−x3,i2+y0,i−y3,i2,fori=1,…,I,where x0,i, x2,i, and x3,i are the longitudinal central positions of the ego vehicle, vehicle 2, and vehicle 3 at ti, respectively. y0,i, y2,i, and y3,i are the lateral central positions of the ego vehicle, vehicle 2, and vehicle 3 at ti, respectively.x 0 , i and y0,i are the variables that need to be solved in the model. x2,i and x3,i can be calculated as x2,0+v2,0ti+1/2∗a2,0ti2 and x3,0+v3,0ti+1/2∗a3,0ti2, respectively; y2,i and y3,i are the lane width 3.75m. 
Here, the initial traffic states of relevant vehicles are directly derived from the sensors. ### 3.2.3. Acceleration Constraint Based on Car-following Maneuver In real-world traffic environment, the ego vehicle usually shifts to a car-following maneuver at the end of the lane-changing or overtaking process. To guarantee a comfort driving behavior, the planned final accelerationa0,I of the ego vehicle should be smaller than the car-following acceleration, as shown in the following equation:(17)a0,I≤Av0,I,v3,I,d0,3,where Av0,I,v3,I,d0,3 denotes the car-following acceleration of the ego vehicle at tI; v0,I and v3,I denote the speeds of the ego vehicle and vehicle 3 at tI, respectively; and d0,3 denotes the distance between the ego vehicle and vehicle 3 at tI.Here,v0,I is solved by the proposed nonlinear model; v3,I and d0,3 are calculated as follows, where l is the length of the vehicle:(18)v3,I=v3,0+a3,0ti,d0,3=x3,I−x0,I−l.Whena0,I is smaller than the car-following acceleration, the ego vehicle will shift from a lane-changing or overtaking maneuver to a car-following maneuver comfortably, i.e., no accelerate or decrease immediately.We choose a linear dynamic model [36] as the underlying car-following model. The car-following acceleration of the ego vehicle Av0,I,v3,I,d0,3 can be given as follows:(19)Av0,I,v3,I,d0,3=w1d0,3−Δs∗+w2v3,I−v0,I,where w1 and w2 denote the variation rate of the distance gap and the variation rate of the speed gap between the ego vehicle and vehicle 3, respectively; Δs∗ denotes the ideal distance gap between the ego vehicle and vehicle 3. The function of the three parameters is to help the ego vehicle reach the equilibrium condition as soon as possible by adjusting its acceleration. Following Pariota et al. [36], the parameters in the model are set as w1=0.0343,w2=0.948, and Δs∗=30. ### 3.2.4. Traffic State Constraint The speedv0,i, acceleration a0,i, and jerk j0,i of the ego vehicle at ti can be obtained according to equations (9)–(11). To ensure the driving safety and comfort, the total lane-changing or overtaking time and traffic states of the ego vehicle should be simultaneously constrained as follows:(20)0<tf<tf,max,vmin<v0,i<vmax,fori=1,…,I,amin<a0,i<amax,fori=1,…,I,jmin<j0,i<jmax,fori=1,…,I,where tf,max, vmax, amax, and jmax denote the allowable maximum total time, speed, acceleration, and jerk, respectively. vmin, amin, and jmin denote the allowable minimum speed, acceleration, and jerk, respectively.In the above constraints, the speed of the ego vehicle should always be positive and not exceed the maximum allowable speed, i.e., no stop or backward motion. Moreover, the smaller the jerk, the smaller the acceleration variation, and the higher the driving comfort degree [37]. ## 3.2.1. Optimal Traffic State Profile Planning The objective function of the constrained nonlinear optimization model is shown in equation (13). a0,i and j0,i denote the acceleration and jerk of the ego vehicle at ti, respectively, and tf denotes the total driving time:(13)minJ=1I∑i=1Ia0,i2+1I∑i=1Ij0,i2+tf2.During the lane-changing or overtaking process, the passengers will feel comfortable when the accelerations and jerks of the ego vehicle are small, soa0,i and j0,i are used to reflect the comfort in equation (13). The total driving time tf can represent the efficiency. Short time indicates high efficiency. ## 3.2.2. 
Collision-Avoidance Constraint To avoid the potential collisions, the ego vehicle should maintain sufficient distance from the surrounding vehicles during lane-changing or overtaking maneuver. For illustrative purposes, the study assumes that the vehicles drive along the straight roads. This limitation can be easily removed by introducing a curved road function. As shown in Figure4, there are three vehicles around the ego vehicle (vehicle 0). The leading vehicle on the current lane is indexed as vehicle 1, the following vehicle on the target lane is indexed as vehicle 2, and the leading vehicle on the target lane is indexed as vehicle 3.Figure 4 Relative positions of the vehicles.The safe distance between the ego vehicle and vehicle 1 is discussed first. As shown in Figure5, the ego vehicle is simplified a geometry of several circles with diameter m. Following Ziegler et al. [35], the Euclidean distance between the center of each circle of the ego vehicle and the center of the first circle of vehicle 1 should be greater than diameter m of the circle. Therefore, the distance constraint between the ego vehicle (vehicle 0) and vehicle 1 can be shown in the following equation:(14)m2≤x0,ik−x1,i12+y0,ik−y1,i12,fork=1,…,5,fori=1,…,I,where k is used to denote the index of circles; x0,ik and y0,ik represent the positions of the ego vehicle of the k-th circle at ti; and x1,i1 and y1,i1 represent the positions of the first circle of vehicle 1 at ti, whose values can be solved from (x1,i,y1,i). x1,i is calculated as x1,0+v1,0ti+1/2∗a1,0ti2, and y1,i is 0m on the straight lane.Figure 5 Mechanism of the collision-avoidance constraint.Then, the constraints of the collision avoidance between the ego vehicle and the other two vehicles are discussed. As shown in equations (15)-(16), the Euclidean distance between the ego vehicle and vehicles 2 and 3 should be greater than the diagonal length r of the vehicle:(15)r2≤x0,i−x2,i2+y0,i−y2,i2,fori=1,…,I,(16)r2≤x0,i−x3,i2+y0,i−y3,i2,fori=1,…,I,where x0,i, x2,i, and x3,i are the longitudinal central positions of the ego vehicle, vehicle 2, and vehicle 3 at ti, respectively. y0,i, y2,i, and y3,i are the lateral central positions of the ego vehicle, vehicle 2, and vehicle 3 at ti, respectively.x 0 , i and y0,i are the variables that need to be solved in the model. x2,i and x3,i can be calculated as x2,0+v2,0ti+1/2∗a2,0ti2 and x3,0+v3,0ti+1/2∗a3,0ti2, respectively; y2,i and y3,i are the lane width 3.75m. Here, the initial traffic states of relevant vehicles are directly derived from the sensors. ## 3.2.3. Acceleration Constraint Based on Car-following Maneuver In real-world traffic environment, the ego vehicle usually shifts to a car-following maneuver at the end of the lane-changing or overtaking process. 
## 3.3. Trajectory Planning Based on the Nonlinear Model

After receiving the lane-changing or overtaking decision, the proposed trajectory planning model starts to work. The steps are as follows:

(1) Traffic information sensing: collect the current traffic information of the relevant vehicles through advanced sensors, GPS, and digital maps.

(2) Path generation: based on the collected traffic information, represent the lane-changing or overtaking path with a quintic polynomial curve; the unknown parameters of the path function are solved from the boundary conditions.

(3) Traffic state profile generation: considering the safety and comfort constraints, establish a nonlinear optimization model to solve for the optimal traffic state profiles of the ego vehicle along the path.

(4) Lane-changing or overtaking trajectory planning: if a feasible solution of the proposed nonlinear optimization model is found, the lane-changing or overtaking trajectory of the ego vehicle can be planned. If there is no feasible solution, the ego vehicle stays in the current lane, and the planner iterates the above three steps until a feasible trajectory is planned.

The sequential quadratic programming (SQP) algorithm is selected to solve the nonlinear optimization model. Previous studies [19, 33, 34] have shown that this algorithm finds feasible solutions of such nonlinear problems reliably. Through multiple iterations, the SQP algorithm yields the optimal traffic state profiles of the ego vehicle, from which the optimal lane-changing or overtaking trajectory is planned.
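The paper solves the model with SQP in MATLAB. As an illustration of steps (3) and (4), the sketch below poses a simplified version of the problem in Python with SciPy's SLSQP solver: the decision vector holds the total time $t_f$ and the per-interval accelerations, the jerk is approximated by finite differences, the initial state values are assumptions, and the collision-avoidance and car-following constraints (14)–(17) are omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize

I = 20                                  # number of time intervals (Table 1)
V0, A0 = 14.5, -0.6                     # assumed initial speed/acceleration of the ego vehicle
VMAX, AMAX, JMAX, TFMAX = 30.0, 3.0, 3.0, 13.0   # bounds from Table 1

def unpack(z):
    tf, acc = z[0], z[1:]
    dt = tf / I
    jerk = np.diff(np.concatenate(([A0], acc))) / dt   # j_{0,i} ~ (a_i - a_{i-1}) / dt
    v = V0 + np.cumsum(acc) * dt                       # v_{0,i} by forward integration
    return tf, acc, jerk, v

def cost(z):                            # objective J of eq. (13)
    tf, acc, jerk, _ = unpack(z)
    return np.mean(acc**2) + np.mean(jerk**2) + tf**2

constraints = [                         # state bounds of eq. (20) as g(z) >= 0
    {"type": "ineq", "fun": lambda z: unpack(z)[3]},                  # speed above vmin = 0
    {"type": "ineq", "fun": lambda z: VMAX - unpack(z)[3]},           # speed below vmax
    {"type": "ineq", "fun": lambda z: JMAX - np.abs(unpack(z)[2])},   # |jerk| below jmax
]
bounds = [(0.5, TFMAX)] + [(-AMAX, AMAX)] * I          # 0 < tf < tf,max; |a| < amax
z0 = np.concatenate(([7.0], np.zeros(I)))              # initial guess

res = minimize(cost, z0, method="SLSQP", bounds=bounds, constraints=constraints)
print(res.success, res.x[0])            # feasibility flag and the planned total time tf
```

When `res.success` is false, the planner would keep the current lane and retry, as described in step (4).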
## 4. Simulation and Discussion

In this section, the types of test scenarios are described first; then, the overall characteristics of the simulation results are presented; finally, three detailed trajectory planning cases are demonstrated.

### 4.1. Scenario Description and Simulation Result Analysis

Various test scenarios should be considered to evaluate the proposed model. Yang et al. [32] proposed that lane-changing maneuvers in real-world traffic can be roughly categorized into two types: the ego vehicle is located either in front of vehicle 2 or behind vehicle 2 at the initial moment. Similar to Yang et al. [32], we also consider that the overtaking maneuver includes two types according to the initial relative positions of the ego vehicle and vehicle 2 (see Figures 6(a) and 6(b)). In the figure, the solid vehicles indicate the current positions and the dashed vehicle indicates the final position of the ego vehicle.

Figure 6 Overtaking positions of vehicles: (a) first type of scenario; (b) second type of scenario.

As shown in Figure 6(a), under the first type of scenario, the ego vehicle is located in front of vehicle 2 at the initial moment and can change lanes to reach the target gap between vehicle 2 and vehicle 3 directly. Under the second type of scenario (see Figure 6(b)), the ego vehicle is located behind vehicle 2 at the initial moment and needs to overtake vehicle 2 to reach the target gap between vehicle 2 and vehicle 3.

Furthermore, 100 random overtaking test cases for each type are generated to verify the feasibility and effectiveness of the proposed trajectory planning method. MATLAB is employed as the simulation platform. Table 1 presents the relevant parameters of the model, including the lower and upper bounds of the traffic states and the constant quantities. Given the current traffic information as input, the planner automatically plans a comfortable and safe trajectory for the ego vehicle.

Table 1 Design parameters of the model.

| Symbol | Explanation | Value |
| --- | --- | --- |
| $I$ | Number of time intervals | 20 |
| $t_{f,\max}$ | Maximum total driving time | 13 s |
| $v_{\min}$ | Minimum speed | 0 m/s |
| $v_{\max}$ | Maximum speed | 30 m/s |
| $a_{\min}$ | Minimum acceleration | −3 m/s² |
| $a_{\max}$ | Maximum acceleration | 3 m/s² |
| $j_{\min}$ | Minimum jerk | −3 m/s³ |
| $j_{\max}$ | Maximum jerk | 3 m/s³ |
| $m$ | Diameter of the circle | 2.50 m |
| $r$ | Diagonal length of the vehicle | 5.12 m |
| $l$ | Length of the vehicle | 4.80 m |

With the proposed model, 82 of the 100 test cases yield a feasible overtaking trajectory under the first type of scenario, and 76 of the 100 test cases yield a feasible overtaking trajectory under the second type, indicating that the method applies to most traffic conditions. The distribution of the overtaking time is shown in Figure 7. For the first type of overtaking scenario, the driving time of most cases lies between 6 and 7 s; the shortest time is 4.72 s, the longest time is 10.24 s, and the average time is approximately 6.36 s. For the second type of overtaking scenario, most driving times lie between 7 and 8 s; the shortest time is 4.68 s, the longest time is 11.56 s, and the average time is approximately 7.71 s.

Figure 7 Distribution of the driving time.
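The feasibility rates quoted above (82/100 and 76/100) come from batch-testing randomly generated initial states. The sketch below illustrates such an evaluation loop; `plan_trajectory` is a hypothetical stand-in for the SQP planner of Section 3.3, and the sampling ranges are our assumptions, not the paper's test distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def plan_trajectory(scenario) -> bool:
    """Hypothetical stand-in for the SQP planner of Section 3.3.

    A real implementation would build and solve the nonlinear model for the
    given initial states; here it is stubbed out so the loop is runnable.
    """
    return rng.random() < 0.8  # placeholder feasibility outcome

def sample_scenario(ego_in_front: bool):
    """Draw random initial states; the sampling ranges are assumptions."""
    x2 = 100.0                                   # vehicle 2 anchored at 100 m
    offset = rng.uniform(5.0, 15.0)
    x0 = x2 + offset if ego_in_front else x2 - offset
    speeds = rng.uniform(12.0, 20.0, size=4)     # ego and vehicles 1-3 (m/s)
    accels = rng.uniform(-2.0, 1.0, size=4)      # m/s^2
    return x0, speeds, accels

for ego_in_front, label in ((True, "first"), (False, "second")):
    feasible = sum(plan_trajectory(sample_scenario(ego_in_front)) for _ in range(100))
    print(f"{label} type of scenario: {feasible}/100 feasible")
```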
A comprehensive summary of the simulation results is presented in Table 2. Using the current traffic states of the vehicles as input variables, the overtaking time $t_f$ is determined automatically by the proposed model. Since the ego vehicle needs to overtake vehicle 2 first to reach the target gap under the second type of scenario, the mean value of $t_f$ (7.71 s) is larger than that under the first type of scenario (6.36 s). Moreover, the variations in the trajectory performance parameters (speed, acceleration, and jerk) in Table 2 are small, which guarantees ride comfort.

Table 2 Simulation results under the test scenarios.

| Variable | First type of scenario | Second type of scenario |
| --- | --- | --- |
| $t_{f,\text{mean}}$ (s) | 6.36 | 7.71 |
| $t_{f,\max}$ (s) | 10.24 | 11.56 |
| $\lvert v_{\max} - v_0 \rvert$ (m/s) | 3.63 | 4.24 |
| $a_{\max}$ (m/s²) | 1.52 | 2.89 |
| $j_{\max}$ (m/s³) | 1.96 | 2.73 |
| $\Delta x_{f,\text{mean}}$ (m) | 106.85 | 135.46 |
| $\Delta s_{f,\text{mean}}$ (m) | 106.98 | 136.18 |

### 4.2. Numerical Examples

In this section, two representative cases are demonstrated in detail. Table 3 shows the initial traffic states of the vehicles under the two detailed scenarios. In scenario 1, the ego vehicle (111.59 m) is located in front of vehicle 2 (100.00 m), which belongs to the first type of scenario. In scenario 2, the ego vehicle (89.17 m) is located behind vehicle 2 (100.00 m), which belongs to the second type of scenario. The speed gap between the ego vehicle and vehicle 2 is larger under scenario 1 (4.87 m/s) than under scenario 2 (0.95 m/s).

Table 3 Initial traffic states of vehicles under the two detailed scenarios.

| Scenario | Vehicle | $x_{k,0}$ (m) | $v_{k,0}$ (m/s) | $a_{k,0}$ (m/s²) |
| --- | --- | --- | --- | --- |
| 1 | Ego (0) | 111.59 | 14.52 | −0.61 |
| 1 | 1 | 119.95 | 12.08 | 0.45 |
| 1 | 2 | 100.00 | 19.39 | −1.33 |
| 1 | 3 | 123.47 | 18.62 | −0.29 |
| 2 | Ego (0) | 89.17 | 14.56 | −2.11 |
| 2 | 1 | 103.80 | 12.08 | 0.19 |
| 2 | 2 | 100.00 | 13.61 | −0.23 |
| 2 | 3 | 123.60 | 14.31 | −0.80 |

Figures 8 and 9 present the overtaking trajectories and the variations in the speeds of the vehicles under scenario 1, respectively. Figure 8 shows that the overtaking longitudinal distance $x_f$ is 115 m and the total driving time $t_f$ is 7.14 s. Figure 9 indicates that the ego vehicle first tends to decelerate slightly to avoid collisions with the surrounding vehicles, then accelerates to a maximum speed, and finally decreases its speed slowly to a stable state. It can be concluded that the ego vehicle dynamically adjusts its speed to adapt to the speed changes of the surrounding vehicles.

Figure 8 Trajectories under scenario 1: (a) trajectories of the ego vehicle, vehicle 1, and vehicle 2; (b) trajectories of the ego vehicle, vehicle 1, and vehicle 3.

Figure 9 Speeds of vehicles under scenario 1.

Figures 10 and 11 display the overtaking trajectories and the variations in the speeds of the vehicles under scenario 2, respectively. The overtaking longitudinal distance $x_f$ is 147 m, and the total driving time $t_f$ is 9.02 s. Figure 11 shows that the ego vehicle tends to decrease its speed from 0 s to 1.69 s and then changes its speed only slightly from 1.69 s to 7.35 s.
After 7.35 s, the ego vehicle accelerates to a speed that is larger than the final speed of vehicle 1.

Figure 10 Trajectories under scenario 2: (a) trajectories of the ego vehicle, vehicle 1, and vehicle 2; (b) trajectories of the ego vehicle, vehicle 1, and vehicle 3.

Figure 11 Speeds of vehicles under scenario 2.

Figure 12 shows the speed comparison of the ego vehicle under scenarios 1 and 2. Under scenario 1, the ego vehicle decreases its speed only slightly at the beginning because it has a large enough initial speed gap (4.87 m/s) to adjust its position. However, since the initial speed gap between the ego vehicle and vehicle 2 is small under scenario 2 (0.95 m/s), the ego vehicle needs to decelerate quickly at the beginning to avoid a collision with vehicle 2.

Figure 12 Speed comparison of the ego vehicle.

Figures 13 and 14 show the planned accelerations and jerks of the ego vehicle under the two scenarios, respectively. The figures indicate that different initial traffic states have a significant impact on the acceleration and jerk profiles. These trajectory performance variables of the ego vehicle are small under both scenarios, which ensures the comfort of the passengers during the overtaking process.

Figure 13 Acceleration comparison of the ego vehicle.

Figure 14 Jerk comparison of the ego vehicle.

### 4.3. Trajectory Correction Scenario

If the traffic states of the surrounding vehicles change suddenly, the planned trajectory of the ego vehicle needs to be corrected to prevent a potential collision. The corrected trajectory can still be generated using the proposed model. Next, we describe a trajectory correction scenario in detail. Table 4 shows the initial traffic states of the vehicles.

Table 4 Initial traffic states of vehicles under the correction scenario.

| Vehicle | $x_{k,0}$ (m) | $v_{k,0}$ (m/s) | $a_{k,0}$ (m/s²) |
| --- | --- | --- | --- |
| Ego (0) | 100.00 | 15.26 | 0.62 |
| 1 | 108.47 | 13.01 | 0.53 |
| 2 | 97.59 | 13.27 | −0.16 |
| 3 | 110.22 | 14.15 | 0.24 |

In this scenario, the ego vehicle originally intends to change lanes. However, vehicle 3 suddenly decelerates at 2.62 s. The ego vehicle perceives that the distance between itself and vehicle 3 will become very short and that a collision may occur. Therefore, the lane-changing trajectory is corrected, and the ego vehicle returns to the original lane. Figure 15 presents the trajectories of the relevant vehicles in this scenario. The simulation result indicates that the total longitudinal distance of the ego vehicle is 91.73 m and the driving time is 6.64 s. During the trajectory correction process, we assume that vehicle 2 follows the leading vehicle politely, so the motion of vehicle 2 after 2.62 s is not discussed.

Figure 15 Trajectories under the correction scenario: (a) trajectories of the ego vehicle, vehicle 1, and vehicle 2; (b) trajectories of the ego vehicle, vehicle 1, and vehicle 3.

In Figure 15, the ego vehicle successfully returns to the original lane by tracking the corrected trajectory and does not collide with the surrounding vehicles. It can be concluded that the proposed model is effective under emergency traffic conditions.

Furthermore, the real-time distances of the ego vehicle and the surrounding vehicles are shown in Figure 16. Here, the distance denotes the driving distance from the origin of the lane to the current position of each vehicle. In Figure 16, vehicle 3 decelerates at 2.62 s, and its speed then decreases rapidly. If the ego vehicle did not correct the trajectory, it would likely collide with vehicle 3.

Figure 16 Driving distance of the vehicles under the correction scenario.
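The correction behavior can be reduced to a predicted-gap check: if constant-acceleration kinematics predict that the gap to vehicle 3 will fall below a safety margin, the lane change is aborted and the trajectory is replanned. The sketch below illustrates only this trigger heuristic; the braking value assumed for vehicle 3 and the 2 m margin are illustrative assumptions, and the actual correction in the paper is replanned with the full optimization model.

```python
import numpy as np

def predicted_gap(x0, v0, a0, x3, v3, a3, horizon=5.0, steps=50, l=4.80):
    """Predicted gap d_{0,3}(t) to vehicle 3 under constant-acceleration kinematics."""
    t = np.linspace(0.0, horizon, steps)
    x_ego = x0 + v0 * t + 0.5 * a0 * t**2
    x_lead = x3 + v3 * t + 0.5 * a3 * t**2
    return x_lead - x_ego - l                    # bumper-to-bumper distance

# Correction scenario of Table 4, with vehicle 3 braking hard (assumed -3 m/s^2)
gap = predicted_gap(100.00, 15.26, 0.62, 110.22, 14.15, -3.0)
SAFETY_MARGIN = 2.0                              # assumed minimum acceptable gap (m)
if gap.min() < SAFETY_MARGIN:
    print(f"min predicted gap {gap.min():.2f} m -> abort lane change and replan")
```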
## 5. Conclusion

This paper proposes a universal trajectory planning method suitable for automated lane-changing and overtaking maneuvers. The proposed method first generates a two-dimensional path in the Cartesian coordinate system to connect the initial position with the final position; a nonlinear mathematical optimization model is then established to obtain the traffic state profiles along the path.
The combination of a path and its traffic state profiles determines a spatiotemporal trajectory. Moreover, considering the safety and comfort of the ego vehicle during the driving process, the study discretizes the total time horizon into several time intervals and solves the unknown parameters of the profile functions to generate continuous and smooth traffic state profiles. Furthermore, a series of numerical examples is conducted under typical scenarios. The simulation results demonstrate that the intelligent vehicle can effectively avoid potential collisions when tracking the planned trajectory. Future work will concentrate on trajectory replanning and tracking issues to develop a more realistic lane-changing and overtaking model.

---
# Silicon Nanofabrication by Atomic Force Microscopy-Based Mechanical Processing

**Authors:** Shojiro Miyake; Mei Wang; Jongduk Kim
**Journal:** Journal of Nanotechnology (2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/102404

---

## Abstract

This paper reviews silicon nanofabrication processes using atomic force microscopy (AFM). In particular, it summarizes recent results obtained in our research group regarding AFM-based silicon nanofabrication through mechanochemical local oxidation by diamond tip sliding, as well as mechanical, electrical, and electromechanical processing using an electrically conductive diamond tip. Microscopic three-dimensional manufacturing mainly relies on etching, deposition, and lithography. Therefore, a special emphasis is placed on nanomechanical processes, mechanochemical reaction combined with potassium hydroxide solution etching, and mechanical and electrical approaches. Several important surface characterization techniques, namely scanning probe microscopy methods such as scanning tunneling microscopy and AFM, are also discussed.

---

## Body

## 1. Introduction

Nanoelectronic devices and nanomachines could soon be manufactured by manipulating atoms and molecules [1]. Microfabrication is essential for the development of these nanotechnologies but remains challenging. Conventional microfabrication relies on lithography, the most common semiconductor manufacturing technology, which is usually combined with deposition and dry and wet etching processes. More recently, new nanoprocessing methods, such as nanoprinting and molecular self-organization, have emerged with the development of nanotechnology, but these methods are limited by the shape and dimensions of the workpiece materials.

Mechanical processing methods that transcribe a tool locus can produce three-dimensional microshapes with high precision by exploiting the tribological interaction between the tool geometry and the workpiece [2]. For example, a 1 μm wide knife edge can be formed by fine abrasive grinding [3]. By exploiting microtribological action and the degrees of freedom of materials, the range of shapes and sizes of processed objects can be extended if shape processing is achievable at the micron and nanometer scales [4, 5]. Therefore, microfabrication applications can be significantly extended through such novel processes [2–10].

Meanwhile, processing methods combining chemical and mechanical actions have been widely used to machine high-quality surfaces with high precision [3]. Mechanochemical processing (MCP) is a machining method that utilizes mechanical energy to activate chemical reactions and structural changes [11], providing highly flat surfaces with few defects. Recently, chemical-mechanical polishing (CMP) has been applied in the fine processing of semiconductor devices. Furthermore, a complex chemical grinding approach that combines chemical potassium hydroxide (KOH) solution etching and mechanical action has been studied, thereby improving the processing properties of CMP [3].

In recent years, maskless and friction-induced processing techniques have been utilized for nanofabrication [12–21]. The elevated internal stress and plastic deformation of masks formed at high load resulted in highly damaged mask surfaces and oxidation layers [17–21].
Mask patterns processed mechanically from plastically deformed, damaged layers withstood the selective wet etching used for pattern transfer, an approach called maskless patterning or friction-induced fabrication [17–23]. The mechanistic evaluation of these processes showed that the plastic deformation and oxidation layers may block the reactive KOH solution, so the damage remained superficial [18–21]. In contrast, the direct patterning of etching masks was proposed to reduce damage [8–11].

Few studies have used MCP at the atomic scale. For example, marking has been assessed as a physical-thermal excitation machining approach for producing storage devices, which requires complex mechanochemical processes at the nanometer scale [9]. However, severe adhesion during silicon machining made high precision difficult to achieve. Therefore, the potential application of this approach in a liquid chemical environment was investigated [3, 24]. Furthermore, a rigid silicon cantilever bearing diamond particles was produced experimentally, clarifying the processing characteristics of silicon [25].

New processing methods can achieve high-precision and high-quality machining through combined mechanical-chemical nanofabrication approaches. During mechanical processing, tool shape machining is mostly applied to material removal, and nanosized convex shapes are very difficult to obtain with high precision on surfaces. A mechanochemical approach can circumvent this issue by mechanically promoting the reaction between the atmospheric gas and the surface adsorption layer.

Micromachining mainly utilizes three-dimensional microscopic manufacturing processes comprising etching, deposition, and lithography. Scanning probe microscopy (SPM) techniques, which include the most common atomic force microscopy (AFM) and scanning tunneling microscopy (STM) [26, 27], are useful for surface property evaluation. These techniques involve scanning surfaces with a tip driven by a piezoelectric element. SPM is a promising tool for the nanofabrication of functional nanometer-size materials and devices [28] because of its ability to produce such nanostructures at the atomic scale. Several researchers have also attempted to use SPM techniques for local surface deposition and modification [5, 28–30]. In particular, so-called local oxidation has proven highly promising for the fabrication of electronic devices at the nanometer scale [31–33]. In this method, under room temperature conditions, oxidizing agents contained in the surface-adsorbed water drift across the silicon oxide layer under the influence of a high electric field produced by a voltage applied to the SPM probe [2, 33]. This SPM-generated oxide can function as a mask for the etching step and as an insulating barrier.

Mechanical friction has also been exploited using AFM in contact mode in air to fabricate silicon nanostructures on a H-passivated Si(100) substrate [34]. An AFM diamond tip sliding on a silicon surface formed protuberances on the substrate under ambient conditions [35–37]. A proper mechanical action of the sliding tip on the silicon surface resulted in local mechanochemical oxidation [35, 36]. Conversely, CMP has achieved damage-free and high-accuracy processing of silicon wafers.
If diamond tip sliding results in a suitable mechanochemical action on the silicon surface, local oxidation can be performed without damage, and the resulting oxide can serve as a mask during the selective wet etching process used for pattern transfer [35–38].

A similar AFM nanoprocessing method has previously been developed and evaluated in our research group on a Si(100) surface without bias voltage [35–40]. The mechanochemical reaction used a conductive diamond tip with superior wear resistance and produced nanoprotuberances and grooves under ambient conditions. A KOH solution selectively etched the unprocessed silicon area, leaving the processed surfaces mostly intact [35–38]. An AFM investigation of the KOH etching of silicon specimens that were initially processed by diamond tip sliding at low and high scanning densities and at different rates confirmed this observation [38]. These results suggested that an approach combining mechanical and electrical processes, such as an AFM technique that simultaneously applies a mechanical load and a bias voltage, could be developed. Previous electrical nanoprocessing systems [41] indicated that this complex approach should use a conductive diamond tip. The mechanism of this complex approach was compared with those of the purely mechanical and electrical processes [41]. Investigations of fundamental nanostructure characteristics are expected to provide a better understanding of the morphological and electric properties of processed areas, which is crucial to process improvement. Conductive atomic force microscopy (CAFM) has been shown to directly determine the local conductivity distribution, independently of topography [38, 42]. However, the local structure and electric properties of these nanostructures remain poorly understood because of their complexity.

This paper reviews the most recent developments in (1) silicon nanoprocessing through mechanochemical reaction by diamond tip sliding, followed by (2) KOH solution etching using the processed area as a mask, and (3) nanofabrication through mechanical and electrical processes using a conductive diamond tip.

## 2. Silicon Nanoprocessing through Mechanochemical Reaction and Etching

Several mechanochemical techniques have been developed for silicon nanoprocessing using natural pyramidal AFM diamond tips of various radii [40, 43]. The diamond tips were fabricated by polishing, and their radii were estimated by tracing a standard sample. MCP by diamond tip sliding and the resulting silicon protuberances and grooves were evaluated by AFM in situ under atmospheric conditions. Sharp tips produced grooves, whereas wide tips produced protuberances on the silicon surfaces. KOH etching of the mechanochemically processed surfaces revealed that protuberance-covered areas were barely etched. In other words, processed areas that displayed plastic deformation also acted as etching masks against the KOH solution. Three-dimensional nanoprofiles, which consisted of squares (1000 × 1000 nm²), lines interspaced by 200 nm intervals, and a two-step table, were fabricated on silicon using similarly oxidized areas as etching masks. Using thick masks that were mechanochemically oxidized at high load, together with mechanical removal of the natural oxide layer at low load, three-dimensional nanometer-sized profiles were processed by additional KOH solution etching.

To clarify the mechanism of this approach, the contact stress was analyzed using the boundary element method [40].
### 2.1. Processing Methods

The effects of load and diamond tip radius on the height and depth of features obtained through MCP by diamond tip sliding were studied [36]. N-type Si(100) samples were not cleaned prior to AFM processing, so their surface was covered by a natural oxide layer less than 2 nm thick. First, the surfaces were mechanochemically processed under atmospheric conditions at room temperature. During this process, the specimens were driven using a piezoelectric element. Changes in the processed area profiles were examined by applying loads of 10–40 μN and expanding the scanning areas. The scanning procedure (256 scan lines) is shown in Figure 1 [40]. The diamond tip radii (50, 100, and 200 nm) and applied loads were varied to control the oxidation level.

Figure 1 Mechanochemical processing of silicon by AFM tip sliding.

The processed areas were expected to act as etching masks; therefore, the etching properties of KOH were evaluated after AFM processing. The processed wafers were etched in a 10% KOH solution at room temperature. Differences in the etching rates of processed and unprocessed areas were determined by AFM measurements at low load using the processing tip.

### 2.2. Dependence of Silicon Nanoprocessing on Tip Radius and Load

The dependence of silicon nanoprocessing on tip radius and load was evaluated to determine the protuberance processing properties and the deformation characteristics of the processed areas. The contact pressure represents the static stress; additional stress is generated when the tip moves and slides during processing. Therefore, changes in the processed protuberance heights and the plastic deformation characteristics of the processed areas were investigated by evaluating the principal contact and shear stresses. The principal contact and shear stresses of 50 and 200 nm radius diamond tips were analyzed through the boundary element method. Load effects on the maximum principal and shear stresses were also studied.

Figure 2(a) shows a square silicon area (1 μm²) processed using a tip with a 50 nm radius under an applied load of 10–40 μN, along with its section profiles (Figure 2(a) (I–III)) [40]. The AFM profile suggests that plastic deformation or cutting debris accumulated at the periphery of the processed square. The 50 nm tip produced grooves whose depth reached 20 nm and increased with increasing load on the silicon surface. AFM imaging showed that small-radius tips could remove silicon from the surface at loads as low as 10 μN.

Figure 2 AFM profiles of silicon square areas processed by diamond tip sliding for different tip radii: (a) groove removal with a 50 nm radius tip; (b) protuberance profile with a 200 nm radius tip; (c) removal and protuberance profile with a 100 nm radius tip.

In contrast, a 200 nm radius diamond tip produced 0–2 nm high silicon protuberances, as shown by the profiles of the processed silicon square area (1 μm²) (Figure 2(b)). The section profiles (Figure 2(b) (I–III)) also suggested that the protuberance height increased with increasing applied load (50–80 μN). For example, an 80 μN load formed 1.5 nm high protuberances.

Diamond tips with radii of 100 nm simultaneously produced protuberances and grooves, as shown in Figure 2(c). Section profiles (Figure 2(c) (I–III)) revealed that protuberances appeared at loads as low as 50 μN, and grooves formed at loads as high as 70–80 μN.

Figure 3 shows the effects of tip radius and load on protuberance height and groove depth for radii of 50, 100, and 200 nm.
The maximum height was obtained at a load of 40 μN using the 100 nm radius tip. In contrast, the groove depth increased significantly from a load of 40 μN and reached a maximum of 7 nm at a load of 80 μN with the 50 nm radius tip.

Figure 3 Tip radius and load effects on protuberance height and groove depth.

These AFM measurements suggest that the protuberances originate from the mechanochemical action of tip sliding on the sample. Protuberances have not been observed under vacuum or dry nitrogen conditions [44]. The mechanochemical reaction of silicon with carbon appeared to be enhanced with rising temperature under vacuum, but protuberances did not form. Protuberances have been produced using other tip materials, such as silicon nitride or silicon, but the mechanochemical reaction between silicon and diamond was negligible. Protuberances were difficult to detect when the diamond tip was used under dry conditions (less than 40% humidity), suggesting that the presence of water on the silicon surface contributes to the mechanochemical reaction.

Mechanistic models of protuberance and groove processing by diamond tip sliding were proposed based on these processing characteristics. Models for the height-increasing and height-decreasing processes [37] are shown in Figures 4(a) and 4(b), respectively. These protuberance processing phenomena may stem from the local destruction of atomic bonds concomitant with concentrated stress. Sliding enhances the reaction of the damaged silicon with the oxygen and water present on the surface, owing to the destruction of silicon–silicon bonds. This reaction forms silicon oxide (SiOx) or silicon hydroxide (Si(OH)x), increasing the height of the processed parts. The mechanochemically processed layer is thicker than the protuberance height.

Figure 4 Mechanistic models for (a) protuberance and (b) groove processing of silicon by diamond tip sliding.

When the radius equaled 50 or 100 nm at higher applied loads, the maximum surface shearing stress exceeded the strength of silicon, leading to plastic deformation followed by sliding-induced silicon removal from the sides and front. However, during plastic deformation, oxygen and water also reacted mechanochemically with silicon through a reaction similar to that occurring during the height-increasing process (Figure 2(b)). The nanoindentation hardness of the processed protuberances and grooves was approximately 10% higher than that of the unprocessed parts. The reaction of silicon with surface water and oxygen produces Si(OH)x or SiOx; SiOx exhibits roughly the same hardness as silicon and is harder than Si(OH)x. Therefore, a silicon oxide layer, rather than soft silicon hydroxide, is speculated to form on the processed protuberance and groove surfaces.

To clarify the mechanism of mechanochemical protuberance and groove processing, the surface contact stress was evaluated using the boundary element method [40]. The contact model and material parameters are presented in Figure 5. Figures 6 and 7 show the contact principal stress and shearing stress, respectively, of 50 and 200 nm radius diamond tips analyzed by the boundary element method [40]. The dependences of the maximum principal stress and shearing stress on load are shown in Figure 8. Hard and brittle materials such as silicon are easily fractured under tensile stress. In maximum tensile stress areas, silicon bonds appear to break under the tensile stress caused by the friction of the diamond tip. Therefore, the reaction of silicon may occur at the rear edge of the sliding contact area, where the elongation stress is the highest. This protuberance processing phenomenon accompanies a mechanochemical reaction during which the tensile stress increases. Its mechanism may involve adsorbates, such as surface water and oxygen, which mechanochemically react with silicon [40].
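The paper evaluates these stresses numerically with the boundary element method. As a rough plausibility check, a closed-form Hertzian sphere-on-flat estimate already reproduces the order of magnitude; the sketch below is a purely elastic estimate (the real contact is elastoplastic), and the elastic constants are assumed textbook values for silicon and diamond, not parameters taken from the paper.

```python
import numpy as np

# Assumed textbook elastic constants (not from the paper)
E_SI, NU_SI = 130e9, 0.28        # Si(100) Young's modulus (Pa), Poisson ratio
E_DIA, NU_DIA = 1140e9, 0.07     # diamond

# Effective contact modulus E* for sphere-on-flat Hertz contact
E_STAR = 1.0 / ((1 - NU_SI**2) / E_SI + (1 - NU_DIA**2) / E_DIA)

def hertz_peak_pressure(load_n: float, radius_m: float) -> float:
    """Maximum Hertzian contact pressure p0 = (6 W E*^2 / (pi^3 R^2))^(1/3)."""
    return (6 * load_n * E_STAR**2 / (np.pi**3 * radius_m**2)) ** (1 / 3)

for r_nm in (50, 100, 200):
    p0 = hertz_peak_pressure(50e-6, r_nm * 1e-9)        # 50 uN load
    # For Poisson ratio ~0.3, the maximum shear stress is about 0.31 * p0
    print(f"R = {r_nm:3d} nm: p0 ~ {p0 / 1e9:5.1f} GPa, tau_max ~ {0.31 * p0 / 1e9:4.1f} GPa")
```

For the 100 nm tip at 50 μN this yields a maximum shear stress of roughly 8 GPa, which is consistent with the boundary-element values discussed above.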
Figure 5 Contact stress analysis using the boundary element method.

Figure 6 Principal stress of the silicon-diamond contact area at 50 μN load for tip radii of (a) 50 and (b) 200 nm.

Figure 7 Shear stress of the silicon-diamond contact area at 50 μN load for tip radii of (a) 50 and (b) 200 nm.

Figure 8 Load effects on maximum principal and shear stresses.

The shearing stress was evaluated to estimate the plastic deformation of silicon. The effect of the evaluated contact stress on protuberance heights and groove depths is shown in Figure 9 [40]. Protuberance heights increased until the tensile stress reached 4.5 GPa and then decreased. At this peak height, the maximum shear stress attained more than 8 GPa. This suggests that MCP using a 100 nm radius diamond tip is load dependent: when the shear stress exceeds the strength of silicon, plastic deformation of several nanometers occurs. At lower loads, the 100 nm radius tip produced protuberances through silicon oxidation. However, the maximum shear stress increased beyond the yield criterion with increasing load, resulting in silicon removal from the side and front surfaces and a subsequent decrease in protuberance height. Conversely, the tensile stress at the rear edge increased with increasing load during this removal process, indicating that water and oxygen reacted with silicon and that a mechanochemical reaction similar to the protuberance height-increasing process occurred. A silicon oxide layer is therefore also assumed to form on the processed groove surfaces, as shown in Figure 4(b).

Figure 9 Contact stress effects on protuberance heights and groove depths: (a) tensile stress effect; (b) maximum shear stress effect.

### 2.3. Additional KOH Solution Etching of Processed Areas

#### 2.3.1. Load Dependence of KOH Solution Etching on Mechanochemically Processed Areas

The KOH solution etching of a mechanochemically processed area is illustrated in Figure 10 [40]. Thick oxidized layers were mechanochemically formed on the processed areas, and these locally oxidized parts were expected to work as etching masks against the KOH solution [37, 40, 43]. For short KOH etching times, the areas processed at high scanning density and the unprocessed areas covered with a natural oxide layer acted as masks and were not etched, enabling various types of nanofabrication processes via mechanochemical action. Nanoscale local oxidation based on mechanochemical action may find application in future nanodevice processes and nanolithography technology.

Figure 10 Profile etching model of silicon using a mechanochemically oxidized mask.
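The masking behavior sketched in Figure 10 can be pictured with a toy one-dimensional etch simulation in which masked columns etch much more slowly than bare silicon, so processed features end up protruding after etching. The etch rates below are illustrative assumptions chosen only to match the nanometer-per-minute scale implied by the results in this section; they are not measured values from the paper.

```python
import numpy as np

WIDTH_NM, MINUTES = 1000, 30
RATE_SI, RATE_MASK = 1.7, 0.02   # assumed etch rates in nm/min (bare Si vs. oxidized mask)

x = np.arange(WIDTH_NM)                       # 1 nm columns across the surface
mask = (x > 300) & (x < 700)                  # mechanochemically oxidized square
height = np.where(mask, 1.0, 0.0)             # ~1 nm initial protuberance on processed area

rate = np.where(mask, RATE_MASK, RATE_SI)     # local etch rate per column
height -= rate * MINUTES                      # etch for 30 min

step = height[mask].mean() - height[~mask].mean()
print(f"processed area protrudes by ~{step:.0f} nm after {MINUTES} min")
```

With these assumed rates, a roughly 1 nm protuberance grows into a step of about 50 nm after 30 min, which is the qualitative behavior reported below for Figure 12.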
The effects of KOH etching on protuberance and groove processing on silicon were investigated at room temperature. Silicon protuberance and groove processing were conducted in air using a 140 nm radius diamond tip on a 1 μm² square area at loads ranging from 10 to 90 μN [43]. As shown in Figure 11(a), the protuberance heights remained below 1 nm for 10–30 μN loads but increased to 1–2 nm for loads of 40, 50, and 60 μN, indicating that the protuberance height increased with increasing applied load. However, loads of 70, 80, and 90 μN generated grooves on the processed areas as well as plastic deformation. The maximum depth of the grooves was 4.5 nm. These results lie between those obtained using the 100 and 200 nm radius tips.

Figure 11 Load effects on protuberance and groove processing using a 140 nm radius tip (a) before and (b) after KOH solution etching.

Silicon samples processed at loads ranging from 0 to 90 μN were etched using the KOH solution for 10 min. Figure 11(b) shows that the unprocessed areas were etched, whereas all processed areas remained almost intact. The processed features protruded above the surface because of the KOH etching, reaching a maximum height of 8 nm. Areas processed by tip sliding clearly showed erosion resistance to the KOH solution. For high loads of 70, 80, and 90 μN, the shape of the plastically deformed areas did not change after etching in the KOH solution. The etching of areas with inherent defects is normally accelerated [37, 38], but here the processed regions could not be etched, even where plastic deformation occurred. This may result from the formation of a dense oxide layer by tip sliding.

Figure 12(a) shows plastically deformed protuberances produced using a 100 nm radius diamond tip. The KOH solution selectively etched the unprocessed Si area, leaving the protuberances and grooves processed by tip sliding unchanged. The depth of the grooves processed at 80 μN was approximately 5 nm. The protuberances processed at 40 μN reached heights of nearly 1 nm. Figure 12(b) shows the surface profile after KOH solution etching for 30 min, revealing that the processed protuberance and groove areas were barely etched. Height differences between the processed and unprocessed areas amounted to approximately 50 nm after KOH solution etching. These results suggest that the protuberances and grooves formed by diamond tip sliding consist of silicon oxide (Figure 4), which is chemically resistant to KOH. The protuberances exhibited similar upper surface profiles before and after KOH etching. Plastically deformed areas visible at the start and end of the tip sliding process remained intact after KOH solution etching, suggesting the formation of silicon oxide in these areas.

Figure 12 AFM profiles of groove and protuberance processing areas (a) before and (b) after KOH solution etching.

Figure 13 shows the profiles of three square areas (a) processed by AFM and (b) etched using KOH [40]. Mean protuberance heights of 2.3, 2.1, and 1.2 nm were obtained by diamond tip sliding at loads of 50, 40, and 30 μN, respectively. At higher loads, plastically deformed debris accumulated at the end of the sliding. Three squares with mean heights of 82, 80, and 78 nm were clearly observed after etching. The lower profile of the KOH-etched surface (Figure 13(b)) matched the upper profile of the AFM tip-processed surface (Figure 13(a)), indicating that the three processed squares were barely etched.

Figure 13 AFM profiles of three square areas processed at different loads (a) before and (b) after KOH solution etching.

Figure 14 shows the profiles of lines and spaces produced by tip sliding at 50 μN before and after KOH solution etching [40]. The 2 μm long lines were interspaced by intervals of (a) 500, (b) 400, (c) 300, and (d) 200 nm.
The sliding scars of the processed lines were barely visible by AFM. After KOH solution etching, the four lines were clearly observed and displayed rounded ends because of side etching. Line heights ranged from 40 to 42 nm, while the line spacing narrowed to 200 nm. Even without changing the profiles, tip sliding forms a silicon oxide layer that protects the processed surface from KOH etching.

Figure 14 AFM profiles of evenly interspaced lines mechanochemically processed by tip sliding on silicon before and after etching with 10% KOH solution; line pitches of (a) 500, (b) 400, (c) 300, and (d) 200 nm.

#### 2.3.2. Load and Scanning Density Effects on the KOH Solution Etching Rates of Mechanochemically Processed Areas

Figure 15 shows the process used to control the etching rates by changing the tip loads [37, 38, 40]. A uniform natural oxide layer covered the silicon surface before processing (Figure 15(a)). The mechanochemically processed area exhibited a thick oxidized layer after high-load, high-density scanning (Figure 15(b)), while low-load, low-density scanning mechanically removed the natural oxide layer (Figure 15(c)).

Figure 15 Model of a nanofabrication process using mechanochemically and naturally oxidized areas as etching masks: (a) before processing; (b) during the first processing step at high load; (c) during the second processing step at low load.

Figure 16 shows a two-step table obtained from mechanochemically processed areas etched using the KOH solution at different rates [40]. To change the degree of oxidation of the processed area, the atomic force between the Si(100) surface and the tip was changed. First, a 0.6 nm high square protuberance (2 × 2 μm²) was processed at 25 μN (Figure 16(a)). Next, a second area (5 × 5 μm²) was processed at low load and low scanning density. Profiles obtained after additional KOH solution etching revealed that the low-load, low-density scanning resulted in a thin oxidized area (5 × 5 μm²), and the high-load, high-density scanning resulted in a dense oxidized area (2 × 2 μm²) on the same surface (10 × 10 μm², Figure 16(b)). Load and scanning density changes during MCP can thus influence the reaction rates. The height difference between the processed areas amounted to approximately 18 nm, and the total height was 33 nm, demonstrating the potential fabrication of a nanometer-size two-step table.

Figure 16 AFM profiles of a two-step table fabricated using differences in the etching rates of processed areas (a) before and (b) after KOH solution etching.

The profile of a processed sample was obtained as shown in Figure 17. First, a 1 × 1 μm² area was mechanochemically oxidized by diamond tip sliding at 40 μN and was assumed to undergo partial plastic deformation. Another 1 × 1 μm² area processed at 10 μN displayed a 1.5 nm high protuberance. A 6 × 6 μm² area was then processed by tip sliding at 1.5 μN to remove the natural oxide layer, and the sample was etched using KOH for 25 min. Figure 17 shows the resulting three-dimensional profile of the processed areas. The profile suggests that tip sliding at 1.5 μN over a wide area removed the natural oxide layer, promoting a deep etching of 119 nm. In contrast, the surfaces of the areas processed at 10 and 40 μN exhibited little difference from the unprocessed basal plane covered with a natural oxide layer.
The natural oxide layer worked as an etching mask for the unprocessed surface for 25 min but was thereafter removed by the low-load, wide-range processing at 1.5 μN, increasing the etching rate.

Figure 17 AFM profile obtained using mechanochemically and naturally oxidized layers as etching masks for the KOH solution.

To demonstrate this mechanism, nine squares (1 × 1 μm²) were mechanochemically oxidized by tip sliding at 15 μN, and the processed area was then expanded to a 10 × 10 μm² square at 1 μN to remove the natural oxide layer. Subsequent KOH solution etching was performed for 30 min. As shown in Figure 18, the nine square areas and the unprocessed natural oxide layer were negligibly etched, indicating that they acted as etching masks.

Figure 18 AFM profiles obtained using mechanochemically oxidized (at 15 μN) and naturally oxidized layers as etching masks for the KOH solution.
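Taken together, the step heights and etch times reported in Section 2.3 imply the rough rates and mask selectivity computed below. This is illustrative arithmetic over the quoted numbers, under the simplifying assumption that the masked areas receded by at most about 1 nm ("barely etched").

```python
# Implied KOH (10%, room temperature) etch rates from the reported data
step_nm, minutes = 50.0, 30.0        # ~50 nm step after 30 min (Figure 12)
rate_unprocessed = step_nm / minutes # ~1.7 nm/min for unprocessed silicon

deep_nm, deep_min = 119.0, 25.0      # 119 nm deep etch in 25 min after oxide removal (Figure 17)
rate_bare = deep_nm / deep_min       # ~4.8 nm/min once the natural oxide is stripped

mask_loss_nm = 1.0                   # assumed upper bound on mask recession (barely etched)
selectivity = step_nm / mask_loss_nm # >= ~50:1 selectivity of Si over the oxidized mask
print(f"unprocessed Si: {rate_unprocessed:.1f} nm/min; oxide-stripped Si: {rate_bare:.1f} nm/min; "
      f"mask selectivity >= {selectivity:.0f}:1")
```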
The reaction of silicon with surface water and oxygen produces Si ( OH ) x or SiO x, which exhibits the same hardness as silicon and is harder than Si ( OH ) x. Therefore, instead of soft silicon hydroxide, a silicon oxide layer is speculated to form on processed protuberance and groove surfaces.To clarify the mechanism of mechanochemical protuberance and groove processing, the surface contact stress was evaluated using the boundary element method [40]. Contact model and material parameters are presented in Figure 5. Figures 6 and 7 show the contact principal stress and shearing stress, respectively, of 50 and 200 nm radius diamond tips analyzed by the boundary element method [40]. Maximum principal stress and shearing stress dependences on load are shown in Figure 8. Hard and brittle materials such as silicon are easily fractured under tensile stress. In maximum tensile stress areas, silicon bonds appear to break under tensile stress caused by the friction of the diamond tip. Therefore, the reaction of silicon may occur at the rear edge of the sliding contact area where the elongation stress is the highest. This protuberance processing phenomenon accompanies mechanochemical reaction during which tensile stress increases. Its mechanism may involve adsorbates, such as surface water and oxygen, which mechanochemically react with silicon [40].Figure 5 Contact stress analysis using the boundary element method.Principal stress of the silicon-diamond contact area at 50μN load for tip radii of (a) 50 and (b) 200 nm. (a) Diamond radius: 50 nm; load: 50μN (b) Diamond radius: 200 nm; load: 50μNShear stress of the silicon-diamond contact area at 50μN load for tip radii of (a) 50 and (b) 200 nm. (a) Diamond radius: 50 nm; load: 50μN (b) Diamond radius: 200 nm; load: 50μNFigure 8 Load effects on maximum principle and shear stresses.Shearing stress was evaluated to estimate the plastic deformation of silicon. The effect of the evaluated contact stress on protuberance heights and groove depths is shown in Figure9 [40]. Protuberance heights increased until the tensile stress reached 4.5 GPa and decreased. At this peak height, the maximum shear stress attained more than 8 GPa. This suggests that MCP using a 100 nm radius diamond tip is load dependent when the shear stress exceeds the strength of silicon, including a plastic deformation of several nanometers. At lower loads, the 100 nm radius tip provided protuberances through silicon oxidation. However, the maximum shear stress increased beyond the yield criterion with increasing load, resulting in silicon removal from the side and front surfaces and a subsequent decrease in protuberances height. Conversely, the tensile stress at the rear edge increased with increasing load during this removal process, indicating that water and oxygen reacted with silicon, and a mechanochemical reaction similar to the protuberance height-increasing process occurred. The silicon oxide layer is also assumed to form on processed groove surfaces, as shown in Figure 4(b).Contact stress effects on protuberance heights and groove depths. (a) Tensile stress effect and (b) maximum shear stress effect. (a) Tensile stress (b) Maximum shearing stress ## 2.3. Additional KOH Solution Etching of Processed Areas ### 2.3.1. Load Dependence of KOH Solution Etching on Mechanochemically Processed Areas The KOH solution etching of a mechanically processed area is shown in Figure10 [40]. 
Thick oxidized layers were mechanochemically formed on the processed areas, and these locally oxidized parts were expected to work as etching masks for KOH solution [37, 40, 43]. For short KOH etching times, the area processed at a high scanning density and the unprocessed area covered with a natural oxidative layer acted as masks and were not etched, enabling various types of nanofabrication processes via mechanochemical action. Nanoscale local oxidation based on mechanochemical action may find application in future nanodevice processes and nanolithography technology.Figure 10 Profile etching model of silicon using a mechanochemically oxidized mask.The effects of KOH etching on protuberance and groove processing on silicon were investigated at room temperature. Silicon protuberance and groove processing were conducted in air using a 140 nm radius diamond tip on a 1μm2 square area at loads ranging from 10 to 90 μN [43]. As shown in Figure 11(a), protuberance heights remained below 1 nm for 10–30 μN loads, but they increased to 1-2 nm for loads of 40, 50, and 60 μN, indicating that protuberance heights increased with increasing applied load. However, loads of 70, 80, and 90 μN generated grooves on the processed areas as well as plastic deformation. The maximum depth of the grooves was 4.5 nm. These results are in between those obtained using the 100 and 200 nm radius tips.Load effects on protuberance and groove processing using a 140 nm radius tip (a) before and (b) after KOH solution etching. (a) Before etching (b) After etchingSilicon samples processed at loads ranging from 0 to 90μN were etched using KOH solution for 10 min. Figure 11(b) shows that unprocessed areas were etched, whereas all processed areas remained almost intact. Processed features protruded over the surface because of KOH etching, reaching a maximum height of 8 nm. Areas processed by tip sliding clearly showed an erosion resistance to KOH solution. For high loads of 70, 80, and 90 μN, the shape of plastically deformed areas did not change after etching in KOH solution. The etching speed of processed areas with inherent defects was accelerated under normal processing conditions [37, 38], but the processed regions could not be etched, even with plastic deformation. This may result from the formation of a dense oxide layer by tip sliding.Figure12(a) shows plastically deformed protuberances produced using a 100 nm radius diamond tip. The KOH solution selectively etched the unprocessed Si area, leaving protuberances and grooves processed by tip sliding unchanged. The depth of grooves processed at 80 μN was approximately 5 nm. Protuberances processed at 40 μN reached heights of nearly 1 nm. Figure 12(b) shows the surface profile after KOH solution etching for 30 min, revealing that processed protuberance and groove areas were barely etched. Height differences between processed and unprocessed areas amounted to approximately 50 nm after KOH solution etching. These results suggest that protuberances and grooves formed by diamond tip sliding consist of silicon oxide (Figure 4), which is chemically resistant to KOH. The protuberances exhibited similar upper surface profiles before and after KOH etching. 
Plastically deformed areas shown at the start and end of tip sliding process remained intact after KOH solution etching, suggesting the formation of silicon oxide on these areas.Figure 12 AFM profiles of groove and protuberance processing areas (a) before and (b) after KOH solution etching.Figure13 shows the profiles of three square areas (a) processed by AFM and (b) etched using KOH [40]. Mean protuberance heights of 2.3, 2.1, and 1.2 nm were obtained by diamond tip sliding for loads of 50, 40, and 30 μN, respectively. At higher loads, the plastically deformed debris accumulated at the end of the sliding. Three squares with mean heights of 82, 80, and 78 nm were clearly observed after etching. The lower profile of the KOH etched surface (Figure 13(b)) matched that of the upper profile for AFM tip processed surface (Figure 13(a)), indicating that the three processed squares were barely etched.Figure 13 AFM profiles of three square areas processed at different loads (a) before and (b) after KOH solution etching.Figure14 shows the profiles of lines and spaces produced by tip sliding at 50 μN before and after KOH solution etching [40]. The 2 μm long lines were interspaced by intervals of (a) 500, (b) 400, (c) 300, and (d) 200 nm. The sliding scar of the processed lines was barely visible by AFM. After KOH solution etching, four lines were clearly observed and displayed rounded ends because of side etching. Line heights ranged from 40 to 42 nm, while line spacing narrowed to 200 nm. Even without changing the profiles, tip sliding forms a silicon oxide layer that protects the processed surface from KOH etching.AFM profiles of evenly interspaced lines mechanochemically processed by tip sliding on silicon before and after etching with 10% KOH solution. (a) Line pitch 500 nm (b) Line pitch 400 nm (c) Line pitch 300 nm (d) Line pitch 200 nm ### 2.3.2. Load and Scanning Density Effects on the KOH Solution Etching Rates of Mechanochemically Processed Areas Figure15(a) shows the process used to control etching rates by changing the tip loads [37, 38, 40]. A uniform natural oxide layer was formed on the silicon surface before processing (Figure 15(a)). The mechanochemically processed area exhibited a thick oxidized layer at a high load and high-density scanning (Figure 15(b)), while a low load and low-density scanning mechanically removed the natural oxide layer (Figure 15(c)).Model of a nanofabrication process using mechanochemically and naturally oxidized areas as etching masks. (a) Before processing, (b) during the first processing step at high load, and (c) during the second processing step at low load. (a) Before processing (b) First processing at higher load (c) Second processing at lower loadFigure16 shows a two-step table obtained from mechanochemically processed areas etched using KOH solution at different rates [40]. To change the degree of oxidation of the processed area, the atomic force between the Si(100) surface and tip was changed. First, a high square protuberance of 0.6 nm (2 × 2 μm2) was processed at 25 μN (Figure 16(a)). Next, a second area (5 × 5 μm2) was processed at a low load and low-density scanning. Profiles obtained after additional KOH solution etching revealed that a low load and low-density scanning resulted in a thin, 5 × 5 μm2 oxidized area, and a high load and high-density scanning resulted in a dense, 2 × 2 μm2 oxidized area on the same surface (10 × 10 μm2, Figure 16(b)). Load and scanning density changes during MCP can influence reaction rates. 
The height difference between the processed areas amounted to approximately 18 nm and total height was 33 nm, evidence for the potential fabrication of a nanometer-size two-step table.Figure 16 AFM profiles of a two-step table fabricated using differences in etching rates between processed areas (a) before and (b) after KOH solution etching.The profile of a processed sample was obtained, as shown in Figure17. First, a 1 × 1 μm2 area was mechanically oxidized by diamond tip sliding at 40 μN and was assumed to undergo partial plastic deformation. Another 1 × 1 μm2 area processed at 10 μN displayed a 1.5 nm high protuberance. A 6 × 6 μm2 area was processed by tip sliding at 1.5 μN to remove the natural oxide layer, and the sample was etched using KOH for 25 min. Figure 17 shows the resulting three-dimensional profile of the processed areas. The profile suggests that tip sliding at 1.5 μN and over a wide area removed the natural oxide layer, promoting a deep etching of 119 nm. In contrast, the surfaces of areas processed at 10 and 40 μN exhibited little differences from the unprocessed basal plane covered with a natural oxide layer. The natural oxide layer worked as an etching mask for the unprocessed surface for 25 min but, thereafter, was removed by low load and wide-range processing at 1.5 μN, increasing the etching rate.Figure 17 AFM profile obtained using mechanochemically and naturally oxidized layers as etching masks for the KOH solution.To demonstrate this mechanism, nine squares (1 × 1μm2) were mechanically oxidized by tip sliding at 15 μN, and the processed area was expanded to a 10 × 10 μm2 square to remove the natural oxide layer at 1 μN. Subsequent KOH solution etching was performed for 30 min. As shown in Figure 18, the nine square areas and the unprocessed natural oxide layer were negligibly etched, indicating that they acted as etching masks.Figure 18 AFM profiles obtained using mechanochemically and naturally oxidized layers as etching masks for the KOH solution at 15μN. ## 2.3.1. Load Dependence of KOH Solution Etching on Mechanochemically Processed Areas The KOH solution etching of a mechanically processed area is shown in Figure10 [40]. Thick oxidized layers were mechanochemically formed on the processed areas, and these locally oxidized parts were expected to work as etching masks for KOH solution [37, 40, 43]. For short KOH etching times, the area processed at a high scanning density and the unprocessed area covered with a natural oxidative layer acted as masks and were not etched, enabling various types of nanofabrication processes via mechanochemical action. Nanoscale local oxidation based on mechanochemical action may find application in future nanodevice processes and nanolithography technology.Figure 10 Profile etching model of silicon using a mechanochemically oxidized mask.The effects of KOH etching on protuberance and groove processing on silicon were investigated at room temperature. Silicon protuberance and groove processing were conducted in air using a 140 nm radius diamond tip on a 1μm2 square area at loads ranging from 10 to 90 μN [43]. As shown in Figure 11(a), protuberance heights remained below 1 nm for 10–30 μN loads, but they increased to 1-2 nm for loads of 40, 50, and 60 μN, indicating that protuberance heights increased with increasing applied load. However, loads of 70, 80, and 90 μN generated grooves on the processed areas as well as plastic deformation. The maximum depth of the grooves was 4.5 nm. 
These results are in between those obtained using the 100 and 200 nm radius tips.Load effects on protuberance and groove processing using a 140 nm radius tip (a) before and (b) after KOH solution etching. (a) Before etching (b) After etchingSilicon samples processed at loads ranging from 0 to 90μN were etched using KOH solution for 10 min. Figure 11(b) shows that unprocessed areas were etched, whereas all processed areas remained almost intact. Processed features protruded over the surface because of KOH etching, reaching a maximum height of 8 nm. Areas processed by tip sliding clearly showed an erosion resistance to KOH solution. For high loads of 70, 80, and 90 μN, the shape of plastically deformed areas did not change after etching in KOH solution. The etching speed of processed areas with inherent defects was accelerated under normal processing conditions [37, 38], but the processed regions could not be etched, even with plastic deformation. This may result from the formation of a dense oxide layer by tip sliding.Figure12(a) shows plastically deformed protuberances produced using a 100 nm radius diamond tip. The KOH solution selectively etched the unprocessed Si area, leaving protuberances and grooves processed by tip sliding unchanged. The depth of grooves processed at 80 μN was approximately 5 nm. Protuberances processed at 40 μN reached heights of nearly 1 nm. Figure 12(b) shows the surface profile after KOH solution etching for 30 min, revealing that processed protuberance and groove areas were barely etched. Height differences between processed and unprocessed areas amounted to approximately 50 nm after KOH solution etching. These results suggest that protuberances and grooves formed by diamond tip sliding consist of silicon oxide (Figure 4), which is chemically resistant to KOH. The protuberances exhibited similar upper surface profiles before and after KOH etching. Plastically deformed areas shown at the start and end of tip sliding process remained intact after KOH solution etching, suggesting the formation of silicon oxide on these areas.Figure 12 AFM profiles of groove and protuberance processing areas (a) before and (b) after KOH solution etching.Figure13 shows the profiles of three square areas (a) processed by AFM and (b) etched using KOH [40]. Mean protuberance heights of 2.3, 2.1, and 1.2 nm were obtained by diamond tip sliding for loads of 50, 40, and 30 μN, respectively. At higher loads, the plastically deformed debris accumulated at the end of the sliding. Three squares with mean heights of 82, 80, and 78 nm were clearly observed after etching. The lower profile of the KOH etched surface (Figure 13(b)) matched that of the upper profile for AFM tip processed surface (Figure 13(a)), indicating that the three processed squares were barely etched.Figure 13 AFM profiles of three square areas processed at different loads (a) before and (b) after KOH solution etching.Figure14 shows the profiles of lines and spaces produced by tip sliding at 50 μN before and after KOH solution etching [40]. The 2 μm long lines were interspaced by intervals of (a) 500, (b) 400, (c) 300, and (d) 200 nm. The sliding scar of the processed lines was barely visible by AFM. After KOH solution etching, four lines were clearly observed and displayed rounded ends because of side etching. Line heights ranged from 40 to 42 nm, while line spacing narrowed to 200 nm. 
Even without changing the profiles, tip sliding forms a silicon oxide layer that protects the processed surface from KOH etching.AFM profiles of evenly interspaced lines mechanochemically processed by tip sliding on silicon before and after etching with 10% KOH solution. (a) Line pitch 500 nm (b) Line pitch 400 nm (c) Line pitch 300 nm (d) Line pitch 200 nm ## 2.3.2. Load and Scanning Density Effects on the KOH Solution Etching Rates of Mechanochemically Processed Areas Figure15(a) shows the process used to control etching rates by changing the tip loads [37, 38, 40]. A uniform natural oxide layer was formed on the silicon surface before processing (Figure 15(a)). The mechanochemically processed area exhibited a thick oxidized layer at a high load and high-density scanning (Figure 15(b)), while a low load and low-density scanning mechanically removed the natural oxide layer (Figure 15(c)).Model of a nanofabrication process using mechanochemically and naturally oxidized areas as etching masks. (a) Before processing, (b) during the first processing step at high load, and (c) during the second processing step at low load. (a) Before processing (b) First processing at higher load (c) Second processing at lower loadFigure16 shows a two-step table obtained from mechanochemically processed areas etched using KOH solution at different rates [40]. To change the degree of oxidation of the processed area, the atomic force between the Si(100) surface and tip was changed. First, a high square protuberance of 0.6 nm (2 × 2 μm2) was processed at 25 μN (Figure 16(a)). Next, a second area (5 × 5 μm2) was processed at a low load and low-density scanning. Profiles obtained after additional KOH solution etching revealed that a low load and low-density scanning resulted in a thin, 5 × 5 μm2 oxidized area, and a high load and high-density scanning resulted in a dense, 2 × 2 μm2 oxidized area on the same surface (10 × 10 μm2, Figure 16(b)). Load and scanning density changes during MCP can influence reaction rates. The height difference between the processed areas amounted to approximately 18 nm and total height was 33 nm, evidence for the potential fabrication of a nanometer-size two-step table.Figure 16 AFM profiles of a two-step table fabricated using differences in etching rates between processed areas (a) before and (b) after KOH solution etching.The profile of a processed sample was obtained, as shown in Figure17. First, a 1 × 1 μm2 area was mechanically oxidized by diamond tip sliding at 40 μN and was assumed to undergo partial plastic deformation. Another 1 × 1 μm2 area processed at 10 μN displayed a 1.5 nm high protuberance. A 6 × 6 μm2 area was processed by tip sliding at 1.5 μN to remove the natural oxide layer, and the sample was etched using KOH for 25 min. Figure 17 shows the resulting three-dimensional profile of the processed areas. The profile suggests that tip sliding at 1.5 μN and over a wide area removed the natural oxide layer, promoting a deep etching of 119 nm. In contrast, the surfaces of areas processed at 10 and 40 μN exhibited little differences from the unprocessed basal plane covered with a natural oxide layer. 
## 3. Nanofabrication through Mechanical and Electrical Processes Using an Electrically Conductive Diamond Tip

Several mechanical and electrical methods, including complex combinations of both approaches, have been developed for silicon nanoprocessing using an electrically conductive diamond tip [41]. In these approaches, nanostructures were formed on naturally oxidized silicon wafers through mechanochemical and/or electrochemical reactions, which were driven by applying a load and/or a bias voltage with a conductive diamond tip of approximately 45 nm radius. The surface morphology and electrical properties of the processed areas were examined by AFM, which provides topographic and current images as well as current-voltage (I-V) curves [41]. These investigations showed that nanostructure height and morphology depended on the applied load and voltage. The electrical properties of the processed regions were identified from Schottky I-V curves, which reflected the extent of the local mechanochemical and electrochemical reactions generated by tip sliding during processing. In particular, complex approaches that combined mechanical and electrical processes enhanced local oxidation at high load and voltage.

### 3.1. Experimental Method

Experiments were conducted using an AFM apparatus with a cantilever equipped with a conductive diamond tip. Process evaluation consisted of two steps. First, sample surfaces were scanned with the tip at a relatively high contact load over a 500 × 500 nm² area under the conditions shown in Table 1. Second, surface topography and electrical current were examined using the same tip at a low contact load of 1 nN over an enlarged scanning area (5 × 5 μm²). Current images and I-V curves were generated during scanning by applying a bias voltage (0.5 V) between the sample and the conductive tip and measuring the electrical current flowing through the tip. The reaction occurred over the ultrasmall contact area, and the processing time was the same for the mechanical and electrical processes. The oxidation reaction was assumed to occur during processing and to last as long as the processing time. Each test was repeated three further times; the measured values differed but exhibited the same trend.

Table 1. Processing conditions.

| Process | Load (nN) | Voltage (V) |
| --- | --- | --- |
| Mechanical processing | 40, 80, 120 | ⋯ |
| Electrical processing | ⋯ | 1, 2, 3 |
| Mechanical + electrical processing | 40 | 1 |
| Mechanical + electrical processing | 80 | 2 |
| Mechanical + electrical processing | 120 | 3 |
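For bookkeeping, the nine processed areas can be expressed as a small condition matrix combining Table 1 with the track labels used later in Figure 19 and Table 2. The Python sketch below is an illustrative data structure of ours (the `Track` record and the mode labels are hypothetical conveniences), not part of the original experimental software.

```python
from dataclasses import dataclass

@dataclass
class Track:
    name: str       # track label used in Figure 19 and Table 2
    load_nN: float  # contact load during tip sliding (0 = nominal zero-force)
    bias_V: float   # tip-sample bias voltage (0 = no electrical action)

# Nine 500 x 500 nm^2 areas: mechanical (A), electrical (B), combined (C).
tracks = (
    [Track(f"A{i}", load, 0) for i, load in enumerate((40, 80, 120), 1)]
    + [Track(f"B{i}", 0, bias) for i, bias in enumerate((1, 2, 3), 1)]
    + [Track(f"C{i}", load, bias)
       for i, (load, bias) in enumerate(zip((40, 80, 120), (1, 2, 3)), 1)]
)

for t in tracks:
    mode = ("mechanical" if t.bias_V == 0
            else "electrical" if t.load_nN == 0
            else "electromechanical")
    print(f"{t.name}: {t.load_nN:5.0f} nN, {t.bias_V:.0f} V  ({mode})")
```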
CAFM enables topography and current distribution measurements in the contact mode [41]. Furthermore, I-V curves of processed and unprocessed regions were obtained under an applied bias voltage, because a comparison between the current image and the topography revealed no clear relationship between local conductivity and morphology. The probe consisted of a B-doped, chemical-vapor-deposited diamond-coated silicon tip with a radius of approximately 45 nm and an average electrical resistivity of approximately 0.0031 Ωcm at 22°C. Under ambient conditions (room temperature of approximately 22°C), the specimens were covered with an approximately 2 nm thick natural oxide layer. The cantilever (231 × 36 μm²) had a spring constant of approximately 48 N/m and a resonance frequency of 185 kHz [41].

As shown in Figure 19, three areas (indicated by line A-A1) were mechanically processed by tip sliding at loads of 40, 80, and 120 nN. As indicated by line B-B1, three areas were electrically processed by tip sliding at a load of 2 nN using bias voltages of 1, 2, and 3 V. During electrical processing, the applied load is taken to be nominally 0 nN, corresponding to a nominal zero-force scan, because a true zero force is difficult to achieve. As shown by line C-C1, the mechanical and electrical processes were combined by tip sliding at nonzero loads and bias voltages: three areas were processed through this complex electromechanical approach at loads of 40, 80, and 120 nN and voltages of 1, 2, and 3 V, respectively. The sliding tracks appear as dashed lines in Figure 19.

Figure 19 Scheme showing silicon square areas processed through (A-A1) mechanical, (B-B1) electrical, and (C-C1) electromechanical methods at different loads and voltages.
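Because an AFM sets the load through cantilever deflection, the stated spring constant of approximately 48 N/m implies only sub-nanometer to few-nanometer deflection setpoints for these loads. The snippet below is a simple Hooke's-law arithmetic check of ours, not a procedure from the paper.

```python
# Deflection setpoints implied by Hooke's law (F = k * d) for the stated
# cantilever spring constant of ~48 N/m. Illustrative arithmetic only.
K_NEWTON_PER_M = 48.0

for load_nN in (1, 2, 40, 80, 120):
    deflection_nm = (load_nN * 1e-9) / K_NEWTON_PER_M * 1e9
    print(f"{load_nN:>3} nN -> deflection ~ {deflection_nm:.3f} nm")
# Even the highest 120 nN load corresponds to only ~2.5 nm of deflection,
# which is why a true zero-force scan is hard to realize exactly.
```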
### 3.2. Surface Properties of Electrical, Mechanical, and Electromechanical Processes

Figure 20 shows topographic images of mechanically, electrically, and electromechanically processed samples (Table 1). Along a-a1, samples were mechanically processed by tip sliding at loads of 40, 80, and 120 nN, producing protuberances with heights of approximately 0.40, 0.26, and 0.20 nm, respectively. These protuberances were generated by a tribochemical reaction [35, 36, 40]. In this process, protuberance height depends on the local oxidation and plastic deformation of the specimen caused by tip sliding. Protuberance heights increased because of the tribochemical reaction, but compression from the loading force limits protuberance growth, explaining the decrease in height with increasing load. Along b-b1, samples were electrically processed by tip sliding at bias voltages of 1, 2, and 3 V. The applied voltage triggered an electrochemical reaction that locally oxidized the silicon. Protuberance heights of approximately 0.44, 0.50, and 0.71 nm were obtained for voltages of 1, 2, and 3 V, respectively; heights thus increased with increasing voltage. The net force is assumed to depend strongly on the voltage [45, 46], so the effective processing force at a nominal load of 0 nN is substantially greater at 3 V than at 1 V. Moreover, electrical processing produced taller protuberances than mechanical processing. Therefore, local oxidation may be accelerated by the voltage-induced electrochemical reaction, increasing the volume of silicon oxide formed [41].

Figure 20 AFM profiles and cross-sectional images of surfaces obtained by (a) mechanical, (b) electrical, and (c) electromechanical processing.

Along c-c1, samples were electromechanically processed by tip sliding at applied loads and voltages of 40 nN, 1 V; 80 nN, 2 V; and 120 nN, 3 V. The respective heights of the three protuberances were 0.44, 0.56, and 0.64 nm. The protuberance formed electromechanically at a 120 nN load and 3 V was lower than the protuberance formed electrically at 3 V, indicating that compressive stress resulting from the high load reduced the protuberance height. This result is consistent with the height suppression observed with increasing load in purely mechanical processing. These results also suggest that combining mechanical and electrical processing can effectively control the formation and growth of nanostructures through mechanochemical and electrochemical reactions. Sectional profiles showed that the electrically processed protuberances were thicker than the mechanically and electromechanically processed ones. Electrical action is therefore believed to be useful in conjunction with mechanical action for protuberance fabrication. The effect of combined electrical and mechanical actions on processing is discussed in the next section.
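The Figure 20 height data can be condensed into a compact summary. The dictionary below simply restates the values quoted above; the data structure and trend labels are our own annotation, not output from the study.

```python
# Protuberance heights (nm) reported for Figure 20, keyed by processing mode.
# Mechanical heights fall with load; electrical heights rise with voltage;
# the combined process sits in between at the highest setting.
heights_nm = {
    "mechanical (40/80/120 nN)":  [0.40, 0.26, 0.20],
    "electrical (1/2/3 V)":       [0.44, 0.50, 0.71],
    "electromechanical (paired)": [0.44, 0.56, 0.64],
}

for mode, h in heights_nm.items():
    trend = "decreasing" if h[0] > h[-1] else "increasing"
    print(f"{mode}: {h} -> {trend} with processing intensity")
```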
### 3.3. Electrical Properties of Electrical, Mechanical, and Electromechanical Processes

Figure 21 shows the current distribution images of the areas obtained by mechanical, electrical, and electromechanical processing (Figure 20) [41]. Differences in measured current between unprocessed and processed areas were determined to compare the effects of the mechanical and/or electrical techniques on fabrication. In general, processed areas displayed lower currents than unprocessed areas, which stems from the formation of oxide layers during nanoprocessing. These layers are produced by reactions between silicon and water, comprising a mechanochemical reaction caused by tip sliding and an electrochemically induced anodic oxidation. A current of 0.191 nA was measured in the unprocessed area at a bias voltage of 2 V, indicating that this area was covered with an ultrathin natural oxide layer formed under ambient conditions.

Figure 21 AFM current profiles and cross-sectional images of surfaces obtained by (a) mechanical, (b) electrical, and (c) electromechanical processing.

Mechanical processing resulted in current differences of 0.079, 0.077, and 0.076 nA for loads of 40, 80, and 120 nN, respectively, indicating that processed areas carried lower currents than unprocessed areas (a-a1, Figure 21(a)). These current differences decreased with increasing load, meaning that the current through the processed areas increased with increasing load. This phenomenon may result from partial removal of the oxide layer at high load, which reduces its thickness.

Current differences between electrically processed and unprocessed areas amounted to 0.120, 0.122, and 0.122 nA for voltages of 1, 2, and 3 V, respectively (b-b1, Figure 21(b)). These differences were greater than those for the mechanically processed samples, suggesting the formation of a thicker oxide layer during electrical processing than during mechanical processing. This thicker oxide layer impeded conduction in the electrically processed areas.

Current differences between electromechanically processed and unprocessed areas equaled 0.136 nA for a 40 nN load and a voltage of 1 V, 0.137 nA for an 80 nN load and a voltage of 2 V, and 0.138 nA for a 120 nN load and a voltage of 3 V (c-c1, Figure 21(c)), showing that currents were lower in electromechanically processed areas than in unprocessed areas. In addition, these current differences increased with increasing load and voltage, concomitant with the enhanced local oxidation and plastic deformation induced by the mechanochemical and electrochemical reactions in the processed areas. Moreover, mechanically processed areas exhibited higher average currents than electrically and electromechanically processed areas, indicating that electrical and electromechanical methods may enhance local oxidation.
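These current differences are easiest to compare side by side. The sketch below tabulates the ΔI values quoted above from Figure 21 and reports per-mode averages; the data structure and averaging are our own illustration, not an analysis from the study.

```python
# Current differences dI = I(unprocessed) - I(processed) in nA (Figure 21).
# A larger dI suggests a thicker or denser local oxide; values as quoted.
delta_I_nA = {
    "mechanical":        {40: 0.079, 80: 0.077, 120: 0.076},     # key: load (nN)
    "electrical":        {1: 0.120, 2: 0.122, 3: 0.122},         # key: bias (V)
    "electromechanical": {(40, 1): 0.136, (80, 2): 0.137, (120, 3): 0.138},
}

for mode, series in delta_I_nA.items():
    vals = list(series.values())
    print(f"{mode}: mean dI = {sum(vals) / len(vals):.3f} nA "
          f"(range {min(vals):.3f}-{max(vals):.3f})")
# Ordering of mean dI: electromechanical > electrical > mechanical,
# consistent with the strongest local oxidation for the combined process.
```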
Schottky I-V characteristics were measured to study the electrical conduction properties of unprocessed and processed areas (Figures 22–24). Detection of a breakdown voltage is expected to confirm the existence of an oxide layer on the processed area. This layer, which hinders current flow between the conductive tip and the specimen, originates from the mechanically, electrically, or electromechanically induced local oxidation. Breakdown voltages obtained under the different processing conditions are listed in Table 2. As shown in Figure 22, the breakdown voltages amounted to 1.1 V for mechanical processing at a 40 nN load (track A1), 1.1 V for electrical processing at a voltage of 1 V (track B1), and 1.08 V for electromechanical processing at a 40 nN load and a voltage of 1 V (track C1).

Table 2. Breakdown voltages and processing conditions.

| Track | Load (nN) | Voltage (V) | Breakdown voltage (V) |
| --- | --- | --- | --- |
| A1 | 40 | ⋯ | 1.1 |
| B1 | ⋯ | 1 | 1.1 |
| C1 | 40 | 1 | 1.08 |
| A2 | 80 | ⋯ | 1.2 |
| B2 | ⋯ | 2 | 3 or over |
| C2 | 80 | 2 | 3 or over |
| A3 | 120 | ⋯ | 3 or over |
| B3 | ⋯ | 3 | 3 or over |
| C3 | 120 | 3 | 3 or over |

Figure 22 Surface morphology and I-V characteristics of areas processed at 40 nN, 0 V (closed circle); 0 nN, 1 V (closed triangle); and 40 nN, 1 V (closed square).

Figure 23 Surface morphology and I-V characteristics of areas processed at 80 nN, 0 V (closed circle); 0 nN, 2 V (closed triangle); and 80 nN, 2 V (closed square).

Figure 24 Surface morphology and I-V characteristics of areas processed at 120 nN, 0 V (closed circle); 0 nN, 3 V (closed triangle); and 120 nN, 3 V (closed square).

Figure 23 shows that the breakdown voltage increased beyond 3 V for electrical processing at a voltage of 2 V (track B2) and for electromechanical processing at an 80 nN load and a voltage of 2 V (track C2). However, mechanical processing at an 80 nN load (track A2) displayed a breakdown voltage of approximately 1.2 V, in agreement with the earlier results suggesting partial removal of the oxide layer at high loads. For electromechanical processing, the breakdown voltage increased beyond 3 V (track C2) because oxidation was enhanced by the simultaneous mechanochemical and electrochemical reactions. Figure 24 shows that the breakdown voltage increased beyond 3 V for electromechanical processing at a 120 nN load and a voltage of 3 V (track C3), owing to the production of a regular, dense oxide layer at high load and voltage. The elevated breakdown voltage of the mechanically processed area (track A3) indicates that the mechanochemical reaction enhanced oxidation at high load, despite the reduction in protuberance height and the partial removal of the oxide layer by tip sliding. Electrical processing also promoted the oxidation reaction with increasing bias voltage (track B3), enhancing the resistance to conduction; consequently, the breakdown voltage was higher for the electrical process than for the mechanical process. Electromechanically processed areas exhibited the highest breakdown voltages, indicating that this complex approach can generate nanostructures with a high oxidation density. Furthermore, electrical and electromechanical processing may improve nanofabrication rates because the mechanochemical and electrochemical reactions enhanced the oxidation rates.
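As a rough physical cross-check, dividing a breakdown voltage by an assumed dielectric breakdown field yields a lower bound on the local oxide thickness. The sketch below uses ~1 V/nm (10 MV/cm), a common textbook value for thermal SiO2; tribochemically or anodically grown oxides may break down at lower fields, so these are order-of-magnitude estimates of ours, not results from the study.

```python
# Lower-bound oxide thickness t >= V_bd / E_bd, with an assumed breakdown
# field of ~1 V/nm (10 MV/cm, typical of thermal SiO2; tribochemical and
# anodic oxides may differ). Breakdown voltages from Table 2; "3 or over"
# entries are themselves lower bounds.
E_BD_V_PER_NM = 1.0

breakdown_V = {"A1": 1.1, "B1": 1.1, "C1": 1.08, "A2": 1.2,
               "B2": 3.0, "C2": 3.0, "A3": 3.0, "B3": 3.0, "C3": 3.0}

for track, v_bd in breakdown_V.items():
    bound = ">=" if v_bd >= 3.0 else "~"
    thickness_nm = v_bd / E_BD_V_PER_NM
    print(f"{track}: V_bd {bound} {v_bd:.2f} V -> oxide {bound} {thickness_nm:.1f} nm")
# The >=3 nm bounds for the high-voltage tracks are compatible with the
# ~2 nm natural oxide being locally thickened by processing.
```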
## 4. Conclusions

This review evaluated silicon processing by diamond tip sliding under ambient conditions using AFM. First, silicon nanofabrication through AFM-based mechanical and mechanochemical processes, followed by additional KOH solution etching, was examined. Mechanochemically reacted surfaces and natural oxide layers acted as etching masks against the etching solution. Protuberances and grooves were processed on silicon surfaces in air using diamond tips of different radii, and changing the scanning density during tip sliding modulated the etching rates. This approach was used to manufacture three-dimensional nanoprofiles such as three squares, evenly interspaced lines, and a two-step table. Second, a CAFM-based nanofabrication technique was applied to naturally oxidized silicon wafers. Topographic and current images, as well as in situ I-V characteristics, of mechanically, electrically, and electromechanically processed surfaces revealed the effects of the mechanochemical and electrochemical reactions on nanofabrication. Nanoscale local oxidation based on these mechanical, mechanochemical, and electrochemical reactions may be applied in future nanodevice processes and nanolithography technology.

---
*Source: 102404-2014-05-11.xml*
--- ## Abstract This paper reviews silicon nanofabrication processes using atomic force microscopy (AFM). In particular, it summarizes recent results obtained in our research group regarding AFM-based silicon nanofabrication through mechanochemical local oxidation by diamond tip sliding, as well as mechanical, electrical, and electromechanical processing using an electrically conductive diamond tip. Microscopic three-dimensional manufacturing mainly relies on etching, deposition, and lithography. Therefore, a special emphasis was placed on nanomechanical processes, mechanochemical reaction by potassium hydroxide solution etching, and mechanical and electrical approaches. Several important surface characterization techniques consisting of scanning tunneling microscopy and related techniques, such as scanning probe microscopy and AFM, were also discussed. --- ## Body ## 1. Introduction Nanoelectronic devices and nanomachines could soon be manufactured by manipulating atoms and molecules [1]. Microfabrication is essential for the development of these nanotechnologies but remains challenging. Conventional microfabrication relies on lithography, the most common semiconductor manufacturing technology, which is usually combined with deposition and dry and wet etching processes. More recently, new nanoprocessing methods, such as nanoprinting and molecular self-organization, have emerged with the development of nanotechnology, but these methods are limited by the shape and dimensions of workpiece materials.Mechanical processing methods that transcribe a tool locus can produce three-dimensional microshapes with high precision using the tribological properties of tool geometry and a workpiece [2]. For example, a 1 μm wide knife edge can be formed by fine abrasive grinding [3]. Using microtribological action and degrees of freedom of materials, the shape and size of processed objects can increase if shape processing is achievable at the micron and nanometer scales [4, 5]. Therefore, microfabrication applications can be significantly extended through such novel processes [2–10].Meanwhile, processing methods combining chemical and mechanical actions have been widely used to machine high-quality surfaces with high precision [3]. Mechanochemical processing (MCP) is a machining method that utilizes mechanical energy to activate chemical reactions and structural changes [11], providing highly flat surfaces with few defects. Recently, chemical-mechanical polishing (CMP) has been applied in the fine processing of semiconductor devices. Furthermore, a complex chemical grinding approach that combines chemical potassium hydroxide (KOH) solution etching and mechanical action has been studied, thereby improving the processing properties of CMP [3].In recent years, maskless and friction-induced processing techniques have been utilized for nanofabrication [12–21]. The elevated internal stress and plastic deformation of masks formed at high load resulted in highly damaged mask surfaces and oxidation layers [17–21]. Mechanically processed mask patterns obtained from plastically deformed damaged layers withstood the selective wet etching processes conducted for pattern transfer called maskless patterning or friction-induced fabrication [17–23]. The mechanistic evaluation of these processes showed that plastic deformation and oxidation layers may block the reactive KOH solution layer. Therefore, the damage remained superficial [18–21]. 
In contrast, the direct patterning of etching mask was proposed to reduce damage [8–11].Few studies have used MCP at the atomic scale. For example, marking has been assessed as a physical-thermal excitation machining approach to produce storage devices, requiring complex mechanochemical processes at the nanometer scale [9]. However, severe adhesion during silicon machining made high precision difficult to achieve. Therefore, the potential application of this approach in a liquid chemical environment was investigated [3, 24]. Furthermore, a rigid silicon cantilever bearing diamond particles was produced experimentally, making the processing characteristics of silicon clearer [25].New processing methods can achieve high-precision and high-quality machining through combined mechanical-chemical nanofabrication approaches. During mechanical processing, tool shape machining is mostly applied to material removal. Nanosized convex shapes are very difficult to obtain with high precision on surfaces. A mechanochemical approach can circumvent this issue by mechanically promoting the reaction between atmospheric gas and surface adsorption layer.Micromachining mainly utilizes three-dimensional microscopic manufacturing processes comprising etching, deposition, and lithography. Scanning probe microscopy (SPM) techniques, which include the most common atomic force microscopy (AFM) and scanning tunneling microscopy (STM) [26, 27], are useful for surface property evaluation. These techniques involve scanning surfaces with a tip that includes a piezoelectric element. SPM is a promising tool in the nanofabrication of functional nanometer-size materials and devices [28] because of its ability to produce such nanostructures at the atomic scale. Several researchers have also attempted to use SPM techniques for local surface deposition and modification [5, 28–30]. In particular, the so-called local oxidation has been proven highly promising for the fabrication of electronic devices at the nanometer scale [31–33]. In this method, under room temperature conditions, oxidizing agents contained in the surface adsorbed water drifted across the silicon oxide layer under the influence of a high electric field, which was produced by voltage applied to the SPM probe [2, 33]. This SPM-generated oxide can function as a mask for the etching step and insulating barrier.Mechanical friction has also been exploited using AFM in contact mode in air to fabricate silicon nanostructures on a H-passivated Si(100) substrate [34]. An AFM diamond tip sliding on a silicon surface formed protuberances on the substrate under ambient conditions [35–37]. A proper mechanical action of the sliding tip on the silicon surface resulted in local mechanochemical oxidation [35, 36]. Conversely, CMP has achieved the damage-free and high-accuracy processing of silicon wafers. If diamond tip sliding results in suitable mechanochemical action on the silicon surface, local oxidation can be performed without damaging the oxide mask during the selective wet etching process used for pattern transfer [35–38].A similar AFM nanoprocessing method has previously been developed and evaluated in our research group on a Si(100) surface without bias voltage [35–40]. The mechanochemical reaction used a conductive diamond tip with superior wear resistance and produced nanoprotuberances and grooves under ambient conditions. A KOH solution selectively etched the unprocessed silicon area, leaving processed surfaces mostly intact [35–38]. 
An AFM investigation was conducted to evaluate the KOH etching of silicon specimens that were initially processed by diamond tip sliding at low and high scanning densities at different rates, confirming this observation [38]. These results suggested that an approach combining mechanical and electrical processes, such as an AFM technique that simultaneously used mechanical load and bias voltage, could be developed. Previous electrical nanoprocessing systems [41] indicated that this complex approach should use a conductive diamond tip. The mechanism of this complex approach was compared to those of the mechanical and electrical processes [41]. Investigations of fundamental nanostructure characteristics are expected to provide a better understanding of the morphological and electric properties of processed areas, which is crucial to process improvement. Conductive atomic force microscopy (CAFM) has been shown to directly determine the local conductivity distribution, which is independent of topography [38, 42]. However, the local structure and electric properties of these nanostructures remain poorly understood because of their complexity.This paper reviews the most recent developments in (1) silicon nanoprocessing through mechanochemical reaction by diamond tip sliding, followed by (2) KOH solution etching using the processed area as a mask, and (3) nanofabrication through mechanical and electrical processes using a conductive diamond tip. ## 2. Silicon Nanoprocessing through Mechanochemical Reaction and Etching Several mechanochemical techniques have been developed for silicon nanoprocessing using natural pyramidal AFM diamond tips of various radii [40, 43]. The diamond tips were fabricated by polishing, and their radii were estimated by tracing on a standard sample. MCP by diamond tip sliding and the resulting silicon protuberances and grooves were evaluated by AFMin situunder atmospheric conditions. Sharp tips produced grooves, whereas wide tips produced protuberances on the silicon surfaces. KOH etching of the mechanochemically processed surfaces revealed that protuberance-covered areas were barely etched. In other words, processed areas that displayed a plastic deformation also acted as etching masks for the KOH solution. Three-dimensional nanoprofiles, which consisted of squares (1000 × 1000 nm2), lines interspaced by 200 nm intervals, and a two-step table, were fabricated on silicon using similarly oxidized areas as etching masks. Using thick masks that were mechanochemically oxidized at high load and mechanical removal of the natural oxide layer at low load, three-dimensional nanometer-sized profiles were processed by additional KOH solution etching.To clarify the mechanism of this approach, the contact stress was analyzed using the boundary element method [40]. ### 2.1. Processing Methods The effects of load and diamond tip radius on the height and depth of features obtained through MCP by diamond tip sliding were studied [36]. N-type Si(100) samples were not cleaned prior to AFM processing so that their surface was covered by a natural oxide layer, which was less than 2 nm thick. First, the surfaces were mechanochemically processed under atmospheric conditions at room temperature. During this process, the specimens were driven using a piezoelectric element. Changes in processed area profiles were examined by applying a load of 10–40 μN and expanding scanning areas. The scanning procedure (256 scan lines) is shown in Figure 1 [40]. 
Processed areas were expected to act as etching masks; therefore, the etching properties of KOH were evaluated after AFM processing. The processed wafers were etched in a 10% KOH solution at room temperature, and differences in etching rates between processed and unprocessed areas were determined by AFM measurements at low load using the processing tip.

### 2.2. Dependence of Silicon Nanoprocessing on Tip Radius and Load

The dependence of silicon nanoprocessing on tip radius and load was evaluated to determine the protuberance processing properties and the deformation characteristics of processed areas. The contact pressure corresponds to a static stress; additional stress is generated when the tip moves and slides during processing. Therefore, changes in processed protuberance heights and the plastic deformation characteristics of processed areas were investigated by evaluating the principal contact and shear stresses. The principal contact and shear stresses of 50 and 200 nm radius diamond tips were analyzed through the boundary element method, and the load effects on the maximum principal and shear stresses were also studied.

Figure 2(a) shows a square silicon area (1 μm²) processed using a tip with a 50 nm radius under an applied load of 10–40 μN, along with its section profiles (Figure 2(a) (I–III)) [40]. The AFM profile suggests that plastic deformation or cutting debris accumulated at the periphery of the processed square. The 50 nm tip produced grooves whose depth reached 20 nm and increased with increasing load. AFM imaging showed that small-radius tips could remove silicon from the surface at loads as low as 10 μN.

Figure 2 AFM profiles of silicon square areas processed by diamond tip sliding for different tip radii: (a) removal groove processed with a 50 nm radius tip, (b) protuberance profile processed with a 200 nm radius tip, and (c) removal and protuberance profile of silicon processed with a 100 nm radius tip.

In contrast, a 200 nm radius diamond tip produced 0–2 nm high silicon protuberances, as shown by the profiles of the processed silicon square area (1 μm²) (Figure 2(b)). The section profiles (Figure 2(b) (I–III)) also suggested that protuberance height increased with increasing applied load (50–80 μN); for example, an 80 μN load formed 1.5 nm high protuberances. Diamond tips with radii of 100 nm simultaneously produced protuberances and grooves, as shown in Figure 2(c). Section profiles (Figure 2(c) (I–III)) revealed that protuberances appeared at loads as low as 50 μN, whereas grooves formed at loads as high as 70–80 μN.

Figure 3 shows the effects of tip radius and load on protuberance height and groove depth for radii of 50, 100, and 200 nm. The maximum protuberance height was obtained at a load of 40 μN using the 100 nm radius tip. In contrast, the groove depth increased significantly at a load of 40 μN and reached a maximum of 7 nm at a load of 80 μN with the 50 nm radius tip.

Figure 3 Tip radius and load effects on protuberance height and groove depth.
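Why the machining regime tracks tip radius can be rationalized with a simple contact pressure estimate. The sketch below is a closed-form Hertz calculation, not the boundary element analysis of [40], and the elastic constants for silicon and diamond are textbook-typical assumed values; it is included only to show why sharp tips reach silicon's strength at loads where blunt tips do not.

```python
# Closed-form Hertz estimate of peak contact pressure for a diamond sphere on
# silicon. This is a sketch, not the boundary element analysis of [40]; the
# elastic constants are textbook-typical assumed values.
import math

E_SI, NU_SI = 130e9, 0.28     # Young's modulus / Poisson ratio of Si (assumed)
E_DIA, NU_DIA = 1100e9, 0.07  # same for diamond (assumed)
E_STAR = 1.0 / ((1 - NU_SI**2) / E_SI + (1 - NU_DIA**2) / E_DIA)

def hertz_peak_pressure(load_n: float, radius_m: float) -> float:
    """Peak Hertzian pressure p0 = 3F / (2*pi*a^2), with a^3 = 3FR / (4E*)."""
    a = (3.0 * load_n * radius_m / (4.0 * E_STAR)) ** (1.0 / 3.0)
    return 3.0 * load_n / (2.0 * math.pi * a**2)

for r_nm, f_un in ((50, 40), (100, 50), (200, 80)):
    p0 = hertz_peak_pressure(f_un * 1e-6, r_nm * 1e-9)
    print(f"R = {r_nm:3d} nm, F = {f_un} uN -> p0 ~ {p0 / 1e9:4.0f} GPa")
# The purely elastic values overshoot (real contacts yield), but the trend is
# the point: the sharp 50 nm tip concentrates roughly twice the pressure of
# the 200 nm tip, consistent with grooving versus protuberance formation.
```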
These AFM measurements suggest that protuberances may originate from the mechanochemical action of tip sliding on the sample. Protuberances have not been observed under vacuum or dry nitrogen conditions [44]. The mechanochemical reaction of silicon with carbon appeared to be enhanced with rising temperature under vacuum, but protuberances still did not form. Protuberances have also been produced using other tip materials, such as silicon nitride or silicon, suggesting that any direct mechanochemical reaction between silicon and diamond was negligible. Protuberances were difficult to detect when the diamond tip was used under dry conditions (less than 40% humidity), suggesting that water present on the silicon surface contributes to the mechanochemical reaction.

Mechanistic models of protuberance and groove formation by diamond tip sliding were proposed on the basis of these processing characteristics. Models for the height-increasing and height-decreasing processes [37] are shown in Figures 4(a) and 4(b), respectively. The protuberance processing phenomenon may stem from the local destruction of atomic bonds under concentrated stress: by breaking silicon–silicon bonds, sliding enhances the reaction of the damaged silicon with the oxygen and water present on the surface. This reaction forms silicon oxide (SiOx) or silicon hydroxide (Si(OH)x), increasing the height of the processed parts. The mechanochemically processed layer is thicker than the protuberance height.

Figure 4 Mechanistic models for (a) protuberance and (b) groove processing of silicon by diamond tip sliding.

For tip radii of 50 or 100 nm at higher applied loads, the maximum surface shearing stress exceeded the strength of silicon, leading to plastic deformation followed by sliding-induced silicon removal from the sides and front of the contact. However, during plastic deformation, oxygen and water also reacted mechanochemically with silicon, through a reaction similar to that occurring during the height-increasing process (Figure 2(b)). The nanoindentation hardness of processed protuberances and grooves was approximately 10% higher than that of unprocessed parts. The reaction of silicon with surface water and oxygen produces Si(OH)x or SiOx; SiOx exhibits roughly the same hardness as silicon and is harder than Si(OH)x. Therefore, instead of soft silicon hydroxide, a silicon oxide layer is speculated to form on processed protuberance and groove surfaces.

To clarify the mechanism of mechanochemical protuberance and groove processing, the surface contact stress was evaluated using the boundary element method [40]. The contact model and material parameters are presented in Figure 5. Figures 6 and 7 show the contact principal stress and shearing stress, respectively, of 50 and 200 nm radius diamond tips analyzed by the boundary element method [40], and the dependences of the maximum principal and shearing stresses on load are shown in Figure 8. Hard and brittle materials such as silicon fracture easily under tensile stress. In maximum tensile stress areas, silicon bonds appear to break under the tensile stress caused by the friction of the diamond tip. Therefore, the reaction of silicon may occur at the rear edge of the sliding contact area, where the elongation stress is highest. Protuberance processing is thus accompanied by a mechanochemical reaction promoted by increasing tensile stress, and its mechanism may involve adsorbates, such as surface water and oxygen, that react mechanochemically with silicon [40].

Figure 5 Contact stress analysis using the boundary element method.
Figure 6 Principal stress of the silicon-diamond contact area at a 50 μN load for tip radii of (a) 50 nm and (b) 200 nm.

Figure 7 Shear stress of the silicon-diamond contact area at a 50 μN load for tip radii of (a) 50 nm and (b) 200 nm.

Figure 8 Load effects on the maximum principal and shear stresses.

The shearing stress was evaluated to estimate the plastic deformation of silicon. The effect of the evaluated contact stress on protuberance heights and groove depths is shown in Figure 9 [40]. Protuberance heights increased until the tensile stress reached 4.5 GPa and then decreased; at this peak height, the maximum shear stress exceeded 8 GPa. This suggests that MCP using a 100 nm radius diamond tip is load dependent: once the shear stress exceeds the strength of silicon, plastic deformation of several nanometers occurs. At lower loads, the 100 nm radius tip produced protuberances through silicon oxidation. However, the maximum shear stress rose beyond the yield criterion with increasing load, resulting in silicon removal from the side and front surfaces and a subsequent decrease in protuberance height. Conversely, the tensile stress at the rear edge increased with increasing load during this removal process, indicating that water and oxygen reacted with silicon in a mechanochemical reaction similar to that of the protuberance height-increasing process. A silicon oxide layer is therefore also assumed to form on processed groove surfaces, as shown in Figure 4(b).

Figure 9 Contact stress effects on protuberance heights and groove depths: (a) tensile stress effect and (b) maximum shear stress effect.
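These observations suggest a simple regime map. The sketch below merely encodes the stress thresholds reported above (protuberance growth up to roughly 4.5 GPa tensile stress, material removal once the maximum shear stress exceeds roughly 8 GPa) as a lookup function; the thresholds are the quoted results from Figure 9, not an independent model.

```python
# Regime map that simply encodes the stress thresholds reported above
# (Figure 9); the numbers are quoted results, not an independent model.
TENSILE_PEAK_GPA = 4.5  # protuberance height peaks near this tensile stress
SHEAR_LIMIT_GPA = 8.0   # silicon removal once max shear stress exceeds this

def machining_regime(tensile_gpa: float, max_shear_gpa: float) -> str:
    """Classify the expected outcome of tip sliding from the two stresses."""
    if max_shear_gpa > SHEAR_LIMIT_GPA:
        return "plastic removal (groove) with mechanochemical oxide on walls"
    if tensile_gpa <= TENSILE_PEAK_GPA:
        return "mechanochemical oxidation (protuberance grows with load)"
    return "past peak height (removal begins to compete with oxidation)"

print(machining_regime(3.0, 5.0))  # low-load oxidation regime
print(machining_regime(5.0, 9.0))  # high-load removal regime
```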
### 2.3. Additional KOH Solution Etching of Processed Areas

#### 2.3.1. Load Dependence of KOH Solution Etching on Mechanochemically Processed Areas

The KOH solution etching of a mechanochemically processed area is shown in Figure 10 [40]. Thick oxidized layers formed mechanochemically on the processed areas, and these locally oxidized parts were expected to work as etching masks against the KOH solution [37, 40, 43]. For short KOH etching times, both the area processed at high scanning density and the unprocessed area covered with a natural oxide layer acted as masks and were not etched, enabling various types of nanofabrication processes via mechanochemical action. Nanoscale local oxidation based on mechanochemical action may find application in future nanodevice processes and nanolithography technology.

Figure 10 Profile etching model of silicon using a mechanochemically oxidized mask.

The effects of KOH etching on protuberance and groove processing on silicon were investigated at room temperature. Silicon protuberance and groove processing was conducted in air using a 140 nm radius diamond tip on a 1 μm² square area at loads ranging from 10 to 90 μN [43]. As shown in Figure 11(a), protuberance heights remained below 1 nm for 10–30 μN loads but increased to 1–2 nm for loads of 40, 50, and 60 μN, indicating that protuberance height increased with increasing applied load. However, loads of 70, 80, and 90 μN generated grooves on the processed areas as well as plastic deformation, with a maximum groove depth of 4.5 nm. These results lie between those obtained using the 100 and 200 nm radius tips.

Figure 11 Load effects on protuberance and groove processing using a 140 nm radius tip (a) before and (b) after KOH solution etching.

Silicon samples processed at loads ranging from 0 to 90 μN were etched using KOH solution for 10 min. Figure 11(b) shows that unprocessed areas were etched, whereas all processed areas remained almost intact. Processed features protruded above the surrounding surface after KOH etching, reaching a maximum height of 8 nm; areas processed by tip sliding thus clearly resisted erosion by the KOH solution. For high loads of 70, 80, and 90 μN, the shape of the plastically deformed areas did not change after etching in KOH solution. Under normal processing conditions, etching is accelerated in areas containing defects [37, 38]; here, however, the processed regions resisted etching even where plastic deformation had occurred. This may result from the formation of a dense oxide layer by tip sliding.

Figure 12(a) shows plastically deformed protuberances produced using a 100 nm radius diamond tip. The KOH solution selectively etched the unprocessed Si area, leaving the protuberances and grooves processed by tip sliding unchanged. The depth of grooves processed at 80 μN was approximately 5 nm, and protuberances processed at 40 μN reached heights of nearly 1 nm. Figure 12(b) shows the surface profile after KOH solution etching for 30 min, revealing that the processed protuberance and groove areas were barely etched. Height differences between processed and unprocessed areas amounted to approximately 50 nm after KOH solution etching. These results suggest that protuberances and grooves formed by diamond tip sliding consist of silicon oxide (Figure 4), which is chemically resistant to KOH. The protuberances exhibited similar upper surface profiles before and after KOH etching. Plastically deformed areas at the start and end of the tip sliding process also remained intact after KOH solution etching, suggesting the formation of silicon oxide on these areas.

Figure 12 AFM profiles of groove and protuberance processing areas (a) before and (b) after KOH solution etching.

Figure 13 shows the profiles of three square areas (a) processed by AFM and (b) etched using KOH [40]. Mean protuberance heights of 2.3, 2.1, and 1.2 nm were obtained by diamond tip sliding at loads of 50, 40, and 30 μN, respectively. At higher loads, plastically deformed debris accumulated at the end of the sliding path. Three squares with mean heights of 82, 80, and 78 nm were clearly observed after etching. The lower profile of the KOH-etched surface (Figure 13(b)) matched the upper profile of the AFM tip-processed surface (Figure 13(a)), indicating that the three processed squares were barely etched.

Figure 13 AFM profiles of three square areas processed at different loads (a) before and (b) after KOH solution etching.

Figure 14 shows the profiles of lines and spaces produced by tip sliding at 50 μN before and after KOH solution etching [40]. The 2 μm long lines were interspaced by intervals of (a) 500, (b) 400, (c) 300, and (d) 200 nm. The sliding scar of the processed lines was barely visible by AFM. After KOH solution etching, four lines were clearly observed, displaying rounded ends because of side etching. Line heights ranged from 40 to 42 nm, while the line spacing narrowed to 200 nm. Even when it does not visibly change the surface profile, tip sliding forms a silicon oxide layer that protects the processed surface from KOH etching.

Figure 14 AFM profiles of evenly interspaced lines mechanochemically processed by tip sliding on silicon before and after etching with 10% KOH solution: line pitch (a) 500, (b) 400, (c) 300, and (d) 200 nm.
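The reported step heights allow a rough estimate of the etch selectivity. The sketch below converts the figures quoted above (about 8 nm after 10 min and about 50 nm after 30 min in 10% KOH at room temperature) into effective etch rates of the unprocessed silicon; it assumes the masked areas are essentially unetched, as the profiles suggest.

```python
# Effective KOH etch rate of unprocessed Si(100) from the step heights quoted
# above; assumes the masked (processed) areas are essentially unetched.
observations = [
    ("140 nm tip experiment", 8.0, 10.0),   # max step height (nm), time (min)
    ("100 nm tip experiment", 50.0, 30.0),
]
for label, step_nm, t_min in observations:
    print(f"{label}: ~{step_nm / t_min:.1f} nm/min")
# Roughly 0.8-1.7 nm/min. Some spread is expected: dilute room-temperature KOH
# etches silicon slowly, and the rate varies with solution and surface state.
```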
#### 2.3.2. Load and Scanning Density Effects on the KOH Solution Etching Rates of Mechanochemically Processed Areas

Figure 15 shows the process used to control etching rates by changing the tip loads [37, 38, 40]. A uniform natural oxide layer covered the silicon surface before processing (Figure 15(a)). The mechanochemically processed area exhibited a thick oxidized layer after high-load, high-density scanning (Figure 15(b)), whereas low-load, low-density scanning mechanically removed the natural oxide layer (Figure 15(c)).

Figure 15 Model of a nanofabrication process using mechanochemically and naturally oxidized areas as etching masks: (a) before processing, (b) during the first processing step at higher load, and (c) during the second processing step at lower load.

Figure 16 shows a two-step table obtained from mechanochemically processed areas etched using KOH solution at different rates [40]. To change the degree of oxidation of the processed area, the contact force between the Si(100) surface and the tip was varied. First, a 0.6 nm high square protuberance (2 × 2 μm²) was processed at 25 μN (Figure 16(a)). Next, a second area (5 × 5 μm²) was processed at low load and low scanning density. Profiles obtained after additional KOH solution etching revealed that the low-load, low-density scanning produced a thin 5 × 5 μm² oxidized area, while the high-load, high-density scanning produced a dense 2 × 2 μm² oxidized area on the same surface (10 × 10 μm², Figure 16(b)). Load and scanning density changes during MCP can thus influence reaction rates. The height difference between the processed areas amounted to approximately 18 nm, and the total height was 33 nm, demonstrating the potential fabrication of a nanometer-size two-step table.

Figure 16 AFM profiles of a two-step table fabricated using differences in etching rates between processed areas (a) before and (b) after KOH solution etching.

The profile of a processed sample was obtained as shown in Figure 17. First, a 1 × 1 μm² area was mechanochemically oxidized by diamond tip sliding at 40 μN and was assumed to undergo partial plastic deformation. Another 1 × 1 μm² area processed at 10 μN displayed a 1.5 nm high protuberance. A 6 × 6 μm² area was then processed by tip sliding at 1.5 μN to remove the natural oxide layer, and the sample was etched using KOH for 25 min. Figure 17 shows the resulting three-dimensional profile of the processed areas. The profile suggests that tip sliding at 1.5 μN over a wide area removed the natural oxide layer, promoting deep etching of 119 nm. In contrast, the surfaces of the areas processed at 10 and 40 μN exhibited little difference from the unprocessed basal plane covered with a natural oxide layer. The natural oxide layer worked as an etching mask for the unprocessed surface throughout the 25 min etch, whereas the low-load, wide-range processing at 1.5 μN had already removed it, increasing the etching rate in that area.

Figure 17 AFM profile obtained using mechanochemically and naturally oxidized layers as etching masks for the KOH solution.
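A minimal way to capture the behavior in Figure 17 is a breakthrough model: each surface etches at the bulk rate only after its masking oxide is consumed. The sketch below implements that idea with illustrative parameters chosen to echo the reported 119 nm step and the roughly 25 min lifetime of the natural oxide mask; none of these constants come from the cited measurements.

```python
# Mask-breakthrough model of the experiment in Figure 17 (illustrative, not
# fitted): a region etches at the bulk rate only after its oxide is consumed.
SI_RATE_NM_MIN = 4.8  # assumed bulk rate reproducing ~119 nm in 25 min

def etched_depth(etch_min: float, mask_life_min: float) -> float:
    """Silicon depth removed after the masking layer has been consumed."""
    return SI_RATE_NM_MIN * max(0.0, etch_min - mask_life_min)

regions = {
    "oxide stripped at 1.5 uN (etches at once)": 0.0,
    "natural oxide (assumed ~25 min lifetime)": 25.0,
    "thick mechanochemical oxide (outlasts etch)": float("inf"),
}
for label, life in regions.items():
    print(f"{label}: {etched_depth(25.0, life):6.1f} nm after 25 min")
```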
To demonstrate this masking mechanism, nine squares (1 × 1 μm²) were mechanochemically oxidized by tip sliding at 15 μN, and the processed area was then expanded to a 10 × 10 μm² square at 1 μN to remove the natural oxide layer. Subsequent KOH solution etching was performed for 30 min. As shown in Figure 18, the nine square areas and the unprocessed natural oxide layer were negligibly etched, indicating that both acted as etching masks.

Figure 18 AFM profiles obtained using mechanochemically and naturally oxidized layers as etching masks for the KOH solution at 15 μN.

## 3. Nanofabrication through Mechanical and Electrical Processes Using an Electrically Conductive Diamond Tip
Several mechanical and electrical methods, including complex combinations of both approaches, have been developed for silicon nanoprocessing using an electrically conductive diamond tip [41]. In these approaches, nanostructures were formed on naturally oxidized silicon wafers through mechanochemical and/or electrochemical reactions, conducted by applying a load and/or a voltage through an electrically conductive diamond tip with a radius of approximately 45 nm. The surface morphology and electrical properties of the processed areas were examined by AFM, which provides topographic and current images as well as current-voltage (I-V) curves [41]. These investigations showed that nanostructure height and morphology depended on the applied load and voltage. The electrical properties of the processed regions were identified from Schottky I-V curves, which showed a dependence on the extent of the local mechanochemical and electrochemical reactions generated by tip sliding during processing. In particular, complex approaches that combined mechanical and electrical processes enhanced local oxidation at high load and voltage.

### 3.1. Experimental Method

Experiments were conducted using an AFM apparatus with a cantilever equipped with a conductive diamond tip. Process evaluation consisted of two steps. First, sample surfaces were scanned using the tip at a relatively high contact load over a 500 × 500 nm² area under the conditions shown in Table 1. Second, the surface topography and electrical current were examined using the same tip at a low contact load of 1 nN over an enlarged scanning area (5 × 5 μm²). Current images and I-V curves were generated during scanning by applying a bias voltage (0.5 V) between the sample and the conductive tip and measuring the electrical current flowing through the tip. The reaction occurred over the ultrasmall contact area, and the processing time was the same for the mechanical and electrical processes; the oxidation reaction was assumed to occur during processing and to last as long as the processing time. The tests were repeated three more times; the measured values differed but exhibited the same trend.

Table 1 Processing conditions.

| Process | Load (nN) | Voltage (V) |
| --- | --- | --- |
| Mechanical processing | 40, 80, 120 | ⋯ |
| Electrical processing | ⋯ | 1, 2, 3 |
| Mechanical + electrical processing | 40 | 1 |
| | 80 | 2 |
| | 120 | 3 |

CAFM enables topography and current distribution measurements in contact mode [41]. Furthermore, I-V curves of processed and unprocessed regions were obtained using an applied bias voltage, because a comparison between the current image and the topography revealed no clear relationship between local conductivity and morphology. The probe consisted of a B-doped chemical vapor deposition diamond-coated silicon tip with a radius of approximately 45 nm and an average electrical resistivity of approximately 0.0031 Ω·cm at 22°C. Under ambient conditions (room temperature of approximately 22°C), the specimens were covered with an approximately 2 nm thick natural oxide layer. The cantilever (231 × 36 μm²) displayed a spring constant of approximately 48 N/m and a resonance frequency of 185 kHz [41].
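With the quoted spring constant, the nanonewton loads used here correspond to sub-nanometer cantilever deflections, which is worth making explicit. The sketch below applies Hooke's law with the 48 N/m value given above; treating the setpoint deflection as the only contribution to the tip-sample force is a simplification.

```python
# Deflection setpoints implied by Hooke's law (F = k * z) with the 48 N/m
# spring constant quoted above; adhesion and electrostatic contributions to
# the tip-sample force are ignored, which is a simplification.
K_N_PER_M = 48.0

for f_nn in (40, 80, 120):
    z_nm = f_nn * 1e-9 / K_N_PER_M * 1e9
    print(f"{f_nn:3d} nN -> setpoint deflection ~ {z_nm:.2f} nm")
# 0.83, 1.67, and 2.50 nm: comparable to the protuberance heights being formed,
# which illustrates how demanding force control is at this scale.
```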
As shown in Figure 19, three areas (indicated by line A-A1) were mechanically processed by tip sliding at loads of 40, 80, and 120 nN. As indicated by line B-B1, three areas were electrically processed by tip sliding at a minimal load of 2 nN using bias voltages of 1, 2, and 3 V; this applied load is treated as nominally 0 nN, since a true zero-force scan is difficult to achieve in practice. As shown by line C-C1, the mechanical and electrical processes were combined by tip sliding at nonzero loads and bias voltages: three areas were processed through this complex electromechanical approach at loads of 40, 80, and 120 nN with voltages of 1, 2, and 3 V, respectively. Sliding tracks appear as dashed lines in Figure 19.

Figure 19 Scheme showing silicon square areas processed through (A-A1) mechanical, (B-B1) electrical, and (C-C1) electromechanical methods at different loads and voltages.

### 3.2. Surface Properties of Electrical, Mechanical, and Electromechanical Processes

Figure 20 shows the topographic images of mechanically, electrically, and electromechanically processed samples (Table 1). Along a-a1, samples were mechanically processed by tip sliding at loads of 40, 80, and 120 nN, producing protuberances with heights of approximately 0.40, 0.26, and 0.20 nm, respectively. These protuberances were generated by a tribochemical reaction [35, 36, 40], so protuberance height depends on the local oxidation and plastic deformation caused by tip sliding. The tribochemical reaction increases protuberance height, but compression from the loading force limits protuberance growth, explaining the decrease in height with increasing load. Along b-b1, samples were electrically processed by tip sliding using bias voltages of 1, 2, and 3 V. The applied voltage triggered an electrochemical reaction that resulted in the local oxidation of silicon. Protuberance heights of approximately 0.44, 0.50, and 0.71 nm were obtained for voltages of 1, 2, and 3 V, respectively; protuberance height increased with increasing voltage. The net force is assumed to depend strongly on the voltage [45, 46], so the effective processing force at a nominal load of 0 nN is substantially greater at 3 V than at 1 V. Notably, electrical processing produced taller protuberances than mechanical processing. Local oxidation may therefore be accelerated by the voltage-induced electrochemical reaction, increasing the volume of silicon oxide formed [41].

Figure 20 AFM profiles and cross-sectional images of surfaces obtained by (a) mechanical, (b) electrical, and (c) electromechanical processing.

Along c-c1, samples were electromechanically processed by tip sliding using applied loads and voltages of 40 nN, 1 V; 80 nN, 2 V; and 120 nN, 3 V. The respective heights of the three protuberances were 0.44, 0.56, and 0.64 nm. The protuberance formed electromechanically at a 120 nN load and 3 V was lower than the protuberance formed electrically at 3 V, indicating that the compressive stress resulting from the high load reduced the protuberance height. This result is consistent with the suppression of protuberance height with increasing load observed in purely mechanical processing. These results also suggest that combined mechanical and electrical processing can effectively control the formation and growth of nanostructures through mechanochemical and electrochemical reactions. Sectional profiles showed that the electrically processed protuberances were thicker than the mechanically and electromechanically processed ones. Electrical action is thus believed to be useful in conjunction with mechanical action in protuberance fabrication; the effect of combining electrical and mechanical actions is discussed in the next section.
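The reported heights make the trends easy to see side by side. The sketch below simply tabulates the values quoted above for the three processing modes; the numbers are the measured heights from Figure 20, reproduced here for comparison only.

```python
# Protuberance heights quoted above (Figure 20), tabulated for comparison only.
heights_nm = {
    "mechanical (40/80/120 nN)":        [0.40, 0.26, 0.20],
    "electrical (1/2/3 V)":             [0.44, 0.50, 0.71],
    "electromechanical (paired sweep)": [0.44, 0.56, 0.64],
}
for mode, h in heights_nm.items():
    trend = "rises" if h[-1] > h[0] else "falls"
    print(f"{mode}: {h} nm (height {trend} across the sweep)")
# The combined process follows the electrical trend, but compression at 120 nN
# caps its tallest protuberance (0.64 nm) below the 3 V electrical value (0.71 nm).
```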
### 3.3. Electrical Properties of Electrical, Mechanical, and Electromechanical Processes

Figure 21 shows the current distribution images of the areas obtained by mechanical, electrical, and electromechanical processing (Figure 20) [41]. Differences in measured current between unprocessed and processed areas were determined to compare the effects of the mechanical and/or electrical techniques on fabrication. In general, processed areas displayed lower current than unprocessed areas, which stems from the formation of oxidation layers during nanoprocessing. These layers are produced by reactions between silicon and water, comprising a mechanochemical reaction caused by tip sliding and an electrochemically induced anodic oxidation. A current of 0.191 nA was measured in the unprocessed area at a bias voltage of 2 V, indicating that this area was covered only with the ultrathin natural oxidation layer formed under ambient conditions.

Figure 21 AFM current profiles and cross-sectional images of surfaces obtained by (a) mechanical, (b) electrical, and (c) electromechanical processing.

Mechanical processing resulted in current differences of 0.079, 0.077, and 0.076 nA for loads of 40, 80, and 120 nN, respectively, indicating that processed areas carried lower currents than unprocessed areas (a-a1, Figure 21(a)). These current differences decreased with increasing load, meaning that the current through the processed areas increased with increasing load. This phenomenon may result from the partial removal of the oxidation layer at high load, which reduces its thickness.

Current differences between electrically processed and unprocessed areas amounted to 0.120, 0.122, and 0.122 nA for voltages of 1, 2, and 3 V, respectively (b-b1, Figure 21(b)). These differences were greater than those for the mechanically processed samples, suggesting the formation of a thicker oxidation layer during electrical processing; this thicker layer impeded conductivity in the electrically processed areas.

Current differences between electromechanically processed and unprocessed areas equaled 0.136 nA for a 40 nN load and a voltage of 1 V, 0.137 nA for an 80 nN load and a voltage of 2 V, and 0.138 nA for a 120 nN load and a voltage of 3 V (c-c1, Figure 21(c)), showing that currents were lower in electromechanically processed areas than in unprocessed areas. In addition, the current differences increased with increasing load and voltage, concomitant with the enhanced local oxidation and plastic deformation induced by the mechanochemical and electrochemical reactions in the processed areas. Moreover, mechanically processed areas exhibited higher average currents than electrically and electromechanically processed areas, indicating that electrical and electromechanical methods may enhance local oxidation.
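Relative to the 0.191 nA measured on the unprocessed surface, the current differences quoted above translate into substantial conductivity drops. The sketch below computes those percentages from the reported values; it assumes the 0.191 nA baseline applies to all nine tracks, which the text implies but does not state track by track.

```python
# Relative conductivity drop in processed areas, computed from the current
# values quoted above; applying the 0.191 nA unprocessed-area baseline to all
# nine tracks is an assumption the text implies but does not state per track.
BASELINE_NA = 0.191

current_diffs_na = {
    "mechanical (40/80/120 nN)":  [0.079, 0.077, 0.076],
    "electrical (1/2/3 V)":       [0.120, 0.122, 0.122],
    "electromechanical (paired)": [0.136, 0.137, 0.138],
}
for mode, diffs in current_diffs_na.items():
    drops = ", ".join(f"{100 * d / BASELINE_NA:.0f}%" for d in diffs)
    print(f"{mode}: current reduced by {drops}")
# Mechanical processing blocks roughly 40% of the current, the combined
# process over 70%, consistent with a thicker and denser oxide layer.
```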
Schottky I-V characteristics were measured to study the electrical conductivity in unprocessed and processed areas (Figures 22–24). The detection of a breakdown voltage is expected to confirm the existence of an oxidation layer on the processed area; this layer, which hinders current flow between the conductive tip and the specimen, originates from the mechanically, electrically, or electromechanically induced local oxidation. The breakdown voltages obtained under the different processing conditions are listed in Table 2. As shown in Figure 22, breakdown voltages amounted to 1.1 V for mechanical processing at a 40 nN load (track A1), 1.1 V for electrical processing at a voltage of 1 V (track B1), and 1.08 V for electromechanical processing at a 40 nN load and a voltage of 1 V (track C1).

Table 2 Breakdown voltages and processing conditions.

| Track | A1 | B1 | C1 | A2 | B2 | C2 | A3 | B3 | C3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Load (nN) | 40 | ⋯ | 40 | 80 | ⋯ | 80 | 120 | ⋯ | 120 |
| Voltage (V) | ⋯ | 1 | 1 | ⋯ | 2 | 2 | ⋯ | 3 | 3 |
| Breakdown voltage (V) | 1.1 | 1.1 | 1.08 | 1.2 | 3 or over | 3 or over | 3 or over | 3 or over | 3 or over |

Figure 22 Surface morphology and I-V characteristics of areas processed at (closed circle) 40 nN, 0 V; (closed triangle) 0 nN, 1 V; and (closed square) 40 nN, 1 V.

Figure 23 Surface morphology and I-V characteristics of areas processed at (closed circle) 80 nN, 0 V; (closed triangle) 0 nN, 2 V; and (closed square) 80 nN, 2 V.

Figure 24 Surface morphology and I-V characteristics of areas processed at (closed circle) 120 nN, 0 V; (closed triangle) 0 nN, 3 V; and (closed square) 120 nN, 3 V.

Figure 23 shows that the breakdown voltage increased beyond 3 V for electrical processing at a voltage of 2 V (track B2) and for electromechanical processing at an 80 nN load and a voltage of 2 V (track C2). However, mechanical processing at an 80 nN load (track A2) displayed a breakdown voltage of approximately 1.2 V, in agreement with the earlier results suggesting partial removal of the oxidation layer at high loads. For electromechanical processing, the breakdown voltage increased beyond 3 V (track C2) because the oxidation was enhanced by simultaneous mechanochemical and electrochemical reactions. Figure 24 shows that the breakdown voltage also increased beyond 3 V for electromechanical processing at a 120 nN load and a voltage of 3 V (track C3), owing to the production of a regular and dense oxidation layer at high load and voltage. The elevated breakdown voltage of the mechanically processed area (track A3) indicates that the mechanochemical reaction enhanced oxidation at high load despite the reduction in protuberance height and the partial removal of the oxidation layer by tip sliding. Electrical processing likewise promoted the oxidation reaction with increasing bias voltage (track B3), enhancing the resistance to conduction; consequently, the breakdown voltage was higher for the electrical process than for the mechanical process. Electromechanically processed areas exhibited the highest breakdown voltages, indicating that this complex approach can generate nanostructures with high oxidation density. Furthermore, electrical and electromechanical processing may improve nanofabrication rates because the mechanochemical and electrochemical reactions enhanced oxidation rates.
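As a sanity check on these values, a breakdown near 1.1 V across a roughly 2 nm oxide corresponds to a field of a few MV/cm, below the roughly 10 MV/cm bulk breakdown strength of thermal SiO2; thin, tip-grown oxides are expected to break down earlier. The sketch below does this arithmetic; the 2 nm thickness is the natural-oxide value quoted in Section 3.1, and applying it to all tracks is an assumption.

```python
# Breakdown field implied by the measured breakdown voltages (Table 2),
# assuming the ~2 nm natural-oxide thickness from Section 3.1 applies; thicker
# tip-grown oxides would lower the implied field for the same voltage.
OXIDE_THICKNESS_CM = 2.0e-7  # 2 nm expressed in cm

def breakdown_field_mv_per_cm(v_breakdown: float) -> float:
    """Electric field at breakdown in MV/cm."""
    return v_breakdown / OXIDE_THICKNESS_CM / 1e6

for track, v_bd in (("A1", 1.1), ("B1", 1.1), ("C1", 1.08), ("A2", 1.2)):
    print(f"track {track}: ~{breakdown_field_mv_per_cm(v_bd):.1f} MV/cm")
# About 5-6 MV/cm, below the ~10 MV/cm bulk strength of thermal SiO2, which is
# plausible for ultrathin oxides. Tracks withstanding over 3 V imply an oxide
# substantially thicker than the 2 nm assumed here.
```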
Current images and I-V curves were generated while scanning by applying a bias voltage (0.5 V) between the sample and the conductive tip and measuring the electrical current flowing through the tip. The reaction occurred over the ultrasmall contact area, and the processing time was the same for the mechanical and electrical processes. The oxidation reaction was assumed to occur during processing and to last as long as the processing time. Each test was repeated three additional times; the measured values differed but exhibited the same trend.

Table 1 Processing conditions.

| Process | Load (nN) | Voltage (V) |
|---|---|---|
| Mechanical processing | 40, 80, 120 | ⋯ |
| Electrical processing | ⋯ | 1, 2, 3 |
| Mechanical + electrical processing | 40 | 1 |
| | 80 | 2 |
| | 120 | 3 |

CAFM enables topography and current distribution measurements in the contact mode [41]. Furthermore, I-V curves of processed and unprocessed regions were obtained using an applied bias voltage, because a comparison between the current image and the topography revealed no clear relationship between local conductivity and morphology. The probe used here consisted of a B-doped, chemical-vapor-deposited diamond-coated silicon tip with a radius of approximately 45 nm and an average electrical resistivity of approximately 0.0031 Ω·cm at 22°C. Under ambient conditions (room temperature of approximately 22°C), the specimens were coated with an approximately 2 nm thick natural oxide layer. The cantilever (231 × 36 μm²) displayed a spring constant of approximately 48 N/m and a resonance frequency of 185 kHz [41].

As shown in Figure 19, three areas (indicated by line A-A1) were mechanically processed by tip sliding at loads of 40, 80, and 120 nN. As indicated by line B-B1, three areas were electrically processed by tip sliding at a load of 2 nN using bias voltages of 1, 2, and 3 V. During electrical processing, the applied load is treated as nominally 0 nN, corresponding to a nominal zero-force scan, because a true zero force is difficult to achieve in practice. As shown by line C-C1, the mechanical and electrical processes were combined by tip sliding at nonzero loads and bias voltages. Three areas were processed through this combined electromechanical approach at loads of 40, 80, and 120 nN and voltages of 1, 2, and 3 V, respectively. Sliding tracks appear as dashed lines in Figure 19.

Figure 19 Scheme showing silicon square areas processed through (A-A1) mechanical, (B-B1) electrical, and (C-C1) electromechanical methods at different loads and voltages.
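For reference, the nine processed tracks defined by Table 1 and Figure 19 can be restated compactly as structured data. The sketch below is only an illustrative restatement of the published conditions; the A1-C3 track labels follow the naming used with Table 2 in Section 3.3.

```python
# The nine processing tracks of Figure 19 / Table 1 as structured data.
# None marks a parameter that is not applied in that process.
TRACKS = {
    # mechanical only (line A-A1)
    "A1": {"load_nN": 40,  "voltage_V": None},
    "A2": {"load_nN": 80,  "voltage_V": None},
    "A3": {"load_nN": 120, "voltage_V": None},
    # electrical only (line B-B1, nominal zero-force scan)
    "B1": {"load_nN": 0, "voltage_V": 1},
    "B2": {"load_nN": 0, "voltage_V": 2},
    "B3": {"load_nN": 0, "voltage_V": 3},
    # combined electromechanical processing (line C-C1)
    "C1": {"load_nN": 40,  "voltage_V": 1},
    "C2": {"load_nN": 80,  "voltage_V": 2},
    "C3": {"load_nN": 120, "voltage_V": 3},
}
```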
## 3.2. Surface Properties of Electrical, Mechanical, and Electromechanical Processes

Figure 20 shows the topographic images of mechanically, electrically, and electromechanically processed samples (Table 1). Along a-a1, samples were mechanically processed by tip sliding at loads of 40, 80, and 120 nN, producing protuberances with heights of approximately 0.40, 0.26, and 0.20 nm, respectively. These protuberances were generated by a tribochemical reaction [35, 36, 40]. In this process, protuberance heights depend on the local oxidation and plastic deformation of the specimen caused by tip sliding. Protuberance heights increased because of the tribochemical reaction; however, compression from the loading force limits protuberance growth, explaining the decrease in height with increasing load. Along b-b1, samples were electrically processed by tip sliding using bias voltages of 1, 2, and 3 V. The applied voltage triggered an electrochemical reaction that resulted in the local oxidation of silicon. Protuberance heights of approximately 0.44, 0.50, and 0.71 nm were obtained for voltages of 1, 2, and 3 V, respectively; protuberance heights thus increased with increasing voltage. The net force is assumed to depend strongly on the voltage [45, 46], so that the processing force at a load of 0 nN is substantially greater at 3 V than at 1 V. In contrast, electrical processing produced taller protuberances than mechanical processing. Therefore, local oxidation may be accelerated by the voltage-induced electrochemical reaction, increasing the volume of silicon oxide formed [41].

Figure 20 AFM profiles and cross-sectional images of surfaces obtained by (a) mechanical, (b) electrical, and (c) electromechanical processing.

Along c-c1, samples were electromechanically processed by tip sliding using applied loads and voltages of 40 nN, 1 V; 80 nN, 2 V; and 120 nN, 3 V. The respective heights of the three protuberances were 0.44, 0.56, and 0.64 nm. The protuberance formed electromechanically at a 120 nN load and 3 V was lower than the protuberance formed electrically at 3 V, indicating that the compressive stress resulting from the high load reduced the protuberance height. This result is consistent with the suppression of protuberance growth with increasing load observed in mechanical processing. These results also suggest that mechanical and electrical processing can effectively control the formation and growth of nanostructures through mechanochemical and electrochemical reactions. Sectional profiles showed that the electrically processed protuberances were thicker than the mechanically and electromechanically processed ones. Electrical action is therefore believed to be useful in conjunction with mechanical action in protuberance fabrication; the effect of combined electrical and mechanical actions on processing is discussed in the next section.

## 3.3. Electrical Properties of Electrical, Mechanical, and Electromechanical Processes

Figure 21 shows the current distribution images of the areas obtained by mechanical, electrical, and electromechanical processing (Figure 20) [41]. Differences in measured current between unprocessed and processed areas were determined to compare the effects of the mechanical and/or electrical techniques on fabrication. In general, processed areas displayed lower currents than unprocessed areas, which stems from the formation of oxidation layers during nanoprocessing. These layers are produced by reactions between silicon and water, including a mechanochemical reaction caused by tip sliding and electrochemically induced anodic oxidation. A current of 0.191 nA was measured in the unprocessed area for a bias voltage of 2 V, indicating that this area was covered with an ultrathin natural oxidation layer formed under ambient conditions.

Figure 21 AFM current profiles and cross-sectional images of surfaces obtained by (a) mechanical, (b) electrical, and (c) electromechanical processing.

Mechanical processing resulted in current differences of 0.079, 0.077, and 0.076 nA for loads of 40, 80, and 120 nN, respectively, indicating that the processed areas presented lower currents than the unprocessed areas (a-a1, Figure 21(a)). These current differences decreased with increasing load; that is, the currents in the processed areas increased with increasing load. This phenomenon may result from the partial removal of the oxidation layer at high load, causing its thickness to decrease.

Current differences between electrically processed and unprocessed areas amounted to 0.120, 0.122, and 0.122 nA for voltages of 1, 2, and 3 V, respectively (b-b1, Figure 21(b)). These differences were greater than those for the mechanically processed samples, suggesting the formation of a thicker oxidation layer during electrical processing than during mechanical processing. This thicker oxidation layer impeded conductivity in the electrically processed areas.

Current differences between electromechanically processed and unprocessed areas equaled 0.136 nA for a 40 nN load and a voltage of 1 V, 0.137 nA for an 80 nN load and a voltage of 2 V, and 0.138 nA for a 120 nN load and a voltage of 3 V (c-c1, Figure 21(c)), showing that currents were lower in electromechanically processed areas than in unprocessed areas. In addition, the current differences increased with increasing load and voltage, concomitant with the enhanced local oxidation and plastic deformation induced by the mechanical and electrochemical reactions in the processed areas. Moreover, mechanically processed areas exhibited higher average currents than electrically and electromechanically processed areas, indicating that the electrical and electromechanical methods may enhance local oxidation.
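As a quick consistency check on the numbers above, the current in each processed area can be recovered by subtracting the reported difference from the unprocessed-area reference. The sketch below assumes the single 0.191 nA reference applies to every current map, as the text implies.

```python
# Worked check of the reported current differences (all values in nA).
UNPROCESSED_NA = 0.191  # unprocessed-area reference at a 2 V bias

current_differences = {
    "mechanical (40/80/120 nN)":            [0.079, 0.077, 0.076],
    "electrical (1/2/3 V)":                 [0.120, 0.122, 0.122],
    "electromechanical (40-120 nN, 1-3 V)": [0.136, 0.137, 0.138],
}

for process, deltas in current_differences.items():
    processed = [round(UNPROCESSED_NA - d, 3) for d in deltas]
    print(f"{process}: processed-area currents = {processed} nA")

# mechanical:        [0.112, 0.114, 0.115] -> current rises with load
# electrical:        [0.071, 0.069, 0.069] -> thicker oxide, lower current
# electromechanical: [0.055, 0.054, 0.053] -> lowest currents of all three
```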
Schottky I-V characteristics were measured to study the electrical conductivity of the unprocessed and processed areas (Figures 22–24). The detection of a breakdown voltage is expected to confirm the existence of an oxidation layer on the processed area. This layer, which hinders current flow between the conductive tip and the specimen, originates from the mechanically, electrically, or electromechanically induced local oxidation. Breakdown voltages obtained at the different processing conditions are listed in Table 2. As shown in Figure 22, breakdown voltages amounted to 1.1 V for mechanical processing at a 40 nN load (track A1), 1.1 V for electrical processing at a voltage of 1 V (track B1), and 1.08 V for electromechanical processing at a 40 nN load and a voltage of 1 V (track C1).

Table 2 Breakdown voltages and processing conditions.

| Processing track | A1 | B1 | C1 | A2 | B2 | C2 | A3 | B3 | C3 |
|---|---|---|---|---|---|---|---|---|---|
| Load (nN) | 40 | ⋯ | 40 | 80 | ⋯ | 80 | 120 | ⋯ | 120 |
| Voltage (V) | ⋯ | 1 | 1 | ⋯ | 2 | 2 | ⋯ | 3 | 3 |
| Breakdown voltage (V) | 1.1 | 1.1 | 1.08 | 1.2 | 3 or over | 3 or over | 3 or over | 3 or over | 3 or over |

Figure 22 Surface morphology and I-V characteristics of areas processed at (closed circle) 40 nN, 0 V; (closed triangle) 0 nN, 1 V; and (closed square) 40 nN, 1 V.

Figure 23 Surface morphology and I-V characteristics of areas processed at (closed circle) 80 nN, 0 V; (closed triangle) 0 nN, 2 V; and (closed square) 80 nN, 2 V.

Figure 24 Surface morphology and I-V characteristics of areas processed at (closed circle) 120 nN, 0 V; (closed triangle) 0 nN, 3 V; and (closed square) 120 nN, 3 V.
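The breakdown voltages in Table 2 are read off the I-V sweeps as the bias at which current begins to flow despite the oxide. Below is a minimal sketch of one plausible extraction procedure; the current threshold is an illustrative assumption, not a value given in the text.

```python
import numpy as np

def estimate_breakdown_voltage(voltage_v, current_a, threshold_a=1e-10):
    """Return the first bias voltage at which |current| exceeds
    threshold_a, i.e. where the oxide layer stops blocking conduction.
    The threshold is illustrative; a real analysis would calibrate it
    against the noise floor of the CAFM current preamplifier."""
    voltage_v = np.asarray(voltage_v)
    current_a = np.asarray(current_a)
    above = np.flatnonzero(np.abs(current_a) > threshold_a)
    if above.size == 0:
        return None  # no breakdown within the sweep ("3 or over" in Table 2)
    return float(voltage_v[above[0]])

# Example with synthetic data shaped like track A1 (breakdown near 1.1 V):
v = np.linspace(0.0, 3.0, 301)
i = np.where(v < 1.1, 1e-12, 5e-10 + 1e-9 * (v - 1.1))
print(estimate_breakdown_voltage(v, i))  # -> 1.1
```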
Figure 23 shows that the breakdown voltage increased beyond 3 V for electrical processing at a voltage of 2 V (track B2) and for electromechanical processing with an 80 nN load and a voltage of 2 V (track C2). However, mechanical processing at an 80 nN load (track A2) displayed a breakdown voltage of approximately 1.2 V, in agreement with the previous results suggesting partial removal of the oxidation layer at high loads. In the case of electromechanical processing, the breakdown voltage increased beyond 3 V (track C2) because the oxidation was enhanced by simultaneous mechanochemical and electrochemical reactions. Figure 24 shows that the breakdown voltage increased beyond 3 V for electromechanical processing with a 120 nN load and a voltage of 3 V (track C3) owing to the production of a regular, dense oxidation layer at high load and voltage. The elevated breakdown voltage of the mechanically processed area (track A3) indicates that the mechanochemical reaction enhanced oxidation at high load, despite the reduction in protuberance height and the partial removal of the oxidation layer by tip sliding. Electrical processing also promoted the oxidation reaction with increasing bias voltage (track B3), enhancing the conductive resistance. Consequently, the breakdown voltage was higher in the electrical process than in the mechanical process. Electromechanically processed areas exhibited higher breakdown voltages than the other processed areas, indicating that this combined approach can generate nanostructures with high oxidation density. Furthermore, electrical and electromechanical processing may improve nanofabrication rates because the mechanochemical and electrochemical reactions enhanced the oxidation rates.

## 4. Conclusions

This review evaluated silicon processing by diamond tip sliding under ambient conditions using AFM. First, silicon nanofabrication through AFM-based mechanical and mechanochemical processes, followed by additional etching in KOH solution, was examined. Mechanochemically reacted surfaces and natural oxide layers acted as etching masks against the etching solution. Protuberances and grooves were processed on silicon surfaces in air using diamond tips with different radii, and changing the scanning density during tip sliding modulated the etching rates. Specifically, this approach was used to manufacture three-dimensional nanoprofiles, such as three squares, evenly interspaced lines, and a two-step table. Second, a CAFM-based nanofabrication technique was applied to oxidized silicon wafers. Topographic and current images, as well as in situ I-V characteristics of mechanically, electrically, and electromechanically processed surfaces, revealed the effects of mechanochemical and electromechanical reactions on nanofabrication. Nanoscale local oxidation based on these mechanical, mechanochemical, and electrochemical reactions may be applied to future nanodevice processes and nanolithography technology.

---
*Source: 102404-2014-05-11.xml*
# A Case of Infective Endocarditis and Pulmonary Septic Emboli Caused by Lactococcus lactis

**Authors:** Bshara Mansour; Adib Habib; Nazih Asli; Yuval Geffen; Dan Miron; Nael Elias
**Journal:** Case Reports in Pediatrics (2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1024054

---

## Abstract

Infective endocarditis is a rare condition in children with normal hearts. We present here the case of a previously healthy eleven-year-old girl with infective endocarditis and pulmonary septic emboli caused by a very rare bacterial etiology (Lactococcus lactis). Identification of this pathogen was made only by polymerase chain reaction.

---

## Body

## 1. Introduction

Infective endocarditis (IE) is a relatively rare condition in children, but it causes significant morbidity and mortality. Repaired and unrepaired congenital heart diseases are associated with a high lifetime risk of infective endocarditis; patients with ventricular septal defect have the highest risk [1]. Acute tricuspid valve endocarditis is rare; it is usually associated with habitual intravenous self-administration of drugs and, more often, with central line infections [2]. Septic pulmonary embolism is an uncommon condition in children. Numerous pulmonary infarcts resulting from small emboli may be associated with right-sided bacterial endocarditis, septic thrombophlebitis, and osteomyelitis [3]. Moreover, the coexistence of infective endocarditis and septic emboli is very rare. We present here a child with both IE and septic emboli due to a very rare etiology.

## 2. Case Report

A previously healthy 11-year-old girl presented to the pediatric emergency department with a one-week history of fever, headache, left flank pain, chills, and central cyanosis. On physical examination she was well-looking and afebrile; heart rate was 130 beats/min, blood pressure was 112/70 mmHg, room air saturation was 95–97%, and body weight was 41 kg (50th percentile). The remainder of her physical examination was normal. Laboratory analysis showed microcytic anemia with hemoglobin 9.4 g/dL, a white cell count of 10.1 × 10³/μL, and platelets of 211 × 10³/μL. C-reactive protein was 15 mg/dL (reference 0–0.5); erythrocyte sedimentation rate was 60 mm/hr. Liver and kidney function tests were normal; creatine phosphokinase was 66 U/L. Urine analysis revealed slight leukocyturia of 25 cells/μL; a single blood culture was negative. Chest X-ray showed an infiltrate in the left lower lobe; X-ray of the sinuses was normal. Oral cefuroxime was prescribed for suspected urinary tract infection and suspected left-sided pneumonia, and she was discharged home. The urine culture was sterile.

Two weeks later she presented again to the pediatric emergency department because of one day of weakness, dyspnea, and pallor without fever. Her physical examination revealed decreased air entry to both lungs and a new 2/6 systolic murmur. C-reactive protein was elevated (20.5 mg/dL) and erythrocyte sedimentation rate was 115 mm/hr. Hemoglobin was 9.4 g/dL, white cell count was 12.9 × 10³/μL with 85% neutrophils, and lactic dehydrogenase was elevated at 1815 IU/L. Chest X-ray showed enlarged perihilar lymph nodes and bilateral lower lobe consolidation, which was interpreted as bilateral pneumonia with a mild bilateral pleural effusion (Figure 1).
Urine and blood cultures were taken, and she was admitted and treated with intravenous cefuroxime.

Figure 1 Enlarged perihilar lymph nodes and bilateral lower lobe consolidation.

Two days after her admission, transthoracic echocardiography (TTE) was performed and revealed two vegetations (2.3 × 1 cm and 0.8 × 0.5 cm) attached to the tricuspid valve with moderate tricuspid regurgitation and no other valvular abnormalities (Figures 2, 3, and 4). No ventricular septal defect or patent foramen ovale was demonstrated, and intravenous ceftriaxone, gentamicin, and vancomycin were started. Ophthalmologic examination was normal, and no other immunologic signs of infective endocarditis were noticed. Other investigations included a peripheral blood smear; PCR for respiratory viruses from nasopharyngeal secretions; PCR for Q fever; serology and blood culture for Brucella; C3 and C4; ANA; and urine Legionella antigen. All these tests were negative, and five blood cultures were sterile. Because the TTE findings were considered sufficiently diagnostic and clear, transesophageal echocardiography (TEE) was not performed.

Figure 2 Tricuspid septal leaflet vegetation.
Figure 3 Tricuspid valve vegetation on septal tricuspid valve leaflet.
Figure 4 Floating tricuspid valve leaflet and damaged (ruptured) tricuspid valve chordae.

On the fifth day of hospitalization, antibiotic treatment was switched to ceftriaxone and daptomycin because of renal function impairment attributed to the vancomycin treatment (at that point the patient was thought to have developed interstitial nephritis due to vancomycin). She was discharged on the 22nd day of hospitalization with significant clinical, laboratory, and radiological improvement. Echocardiography performed prior to discharge revealed a decrease in the size of the tricuspid vegetation; however, a small muscular ventricular septal defect became more evident at this time. She ultimately completed a four-week course of intravenous antibiotics.

Twenty-five days later she was readmitted because of a two-day history of cough, runny nose, effort-induced dyspnea, and left-sided pleuritic chest pain without fever. Vital signs were normal. Physical examination revealed a holosystolic 3/6 murmur. Laboratory analysis revealed a CRP of 16 mg/dL and an ESR of 67 mm/hr. Blood cultures were repeated, and she was treated with intravenous cefazolin and ceftriaxone. Chest X-ray revealed middle and left lobe consolidation. Echocardiography revealed a further decrease in vegetation size. Lung CT revealed bilateral consolidation that was eventually diagnosed as septic emboli originating from the tricuspid vegetations, as clearly demonstrated by CT angiography (Figure 5). Echo Doppler of the iliac veins, inferior vena cava, and lower extremity veins was normal.

Figure 5 Septic emboli in the bilateral lower pulmonary segments.

Three blood cultures were again negative (blood cultures at our institute are routinely incubated for 7 days). In spite of that, a broad-range panbacterial PCR test of the 16S rDNA gene performed on the blood culture sample yielded a positive signal [4]. The PCR product was separated by electrophoresis, sequenced, and analyzed using the Basic Local Alignment Search Tool (BLAST). The amplicon sequence gave 100% identity to Lactococcus lactis.
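The identification step described above, sequencing the 16S rDNA amplicon and querying BLAST, can be scripted. A minimal sketch using Biopython follows; the file name `amplicon.fasta` is hypothetical, and the authors do not state their exact tooling, so this is an illustration rather than their pipeline.

```python
# Requires Biopython and NCBI web access.
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

# Load the Sanger-sequenced 16S rDNA amplicon (hypothetical file name).
amplicon = SeqIO.read("amplicon.fasta", "fasta")

# Submit a nucleotide BLAST search against the NCBI 'nt' database.
result_handle = NCBIWWW.qblast("blastn", "nt", str(amplicon.seq))
blast_record = NCBIXML.read(result_handle)

# Report percent identity for the top hits; a 100% identity hit to
# Lactococcus lactis would reproduce the result reported above.
for alignment in blast_record.alignments[:5]:
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{identity:5.1f}%  {alignment.title[:70]}")
```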
To rule out contamination during the PCR analysis, we used several controls: a positive control of known bacterial DNA, a negative control for the DNA extraction process, and a negative (no-template) control for the PCR reaction. All controls gave the expected results. Since we had only a positive PCR and no positive culture, an antibiogram could not be performed.

She was discharged on day 15 with the recommendation to complete a two-week course of intravenous antibiotic treatment with cefazolin and ceftriaxone. An additional blood culture sample taken four weeks later tested negative by the same PCR method. She later underwent tricuspid valve repair without complications, was followed up by a pediatric cardiologist and a cardiac surgery specialist, and now enjoys good health.

## 3. Discussion

Herein, we report the case of a young girl, initially considered healthy with no congenital heart disease, who developed tricuspid infective endocarditis with septic emboli due to a rare etiology, L. lactis.

Our patient was diagnosed with infective endocarditis based on the modified Duke criteria [5]. She had one major criterion, evidence of endocarditis on echocardiography (tricuspid vegetation and tricuspid regurgitation), along with three minor criteria: (1) a predisposing condition (ventricular septal defect), (2) fever, and (3) pulmonary emboli.
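For readers unfamiliar with the modified Duke criteria, the classification reduces to counting major and minor criteria. A minimal sketch of the clinical-criteria arithmetic is given below; pathological criteria are omitted, and the function is illustrative rather than a clinical tool.

```python
def duke_classification(n_major: int, n_minor: int) -> str:
    """Modified Duke criteria for infective endocarditis, clinical
    criteria only. Definite IE: 2 major, or 1 major + 3 minor, or
    5 minor. Possible IE: 1 major + 1 minor, or 3 minor."""
    if n_major >= 2 or (n_major >= 1 and n_minor >= 3) or n_minor >= 5:
        return "definite"
    if (n_major >= 1 and n_minor >= 1) or n_minor >= 3:
        return "possible"
    return "rejected"

# Our patient: 1 major (echocardiographic evidence of endocarditis)
# + 3 minor (predisposing VSD, fever, pulmonary emboli)
print(duke_classification(1, 3))  # -> definite
```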
Infective endocarditis in children with congenital heart disease can potentially lead to major complications in and outside the heart. Congestive heart failure occurs in up to 40% of cases and is the leading cause of hemodynamic compromise; it can be due to many factors, including destruction of the valves, myocarditis, or arrhythmias [6]. Extracardiac complications are also frequent, occurring in up to 43% of cases, and are caused by either embolic events or immune phenomena [7]. Vegetation on the tricuspid valve carries a high risk of septic pulmonary emboli, causing various pulmonary complications such as pneumonia and pulmonary abscess [8]. Our patient developed septic pulmonary emboli originating from the tricuspid vegetations, most probably related to the ventricular septal defect (VSD) with left-to-right blood flow.

Right-sided infective endocarditis is associated with congenital heart disease, intravenous drug use, or central line infections. No history of dental treatment, invasive procedures, intravenous drug use, or central lines was reported in our patient. Initially, our patient was considered a healthy girl with no previous cardiac malformations; however, as the vegetation became smaller, a hidden small muscular ventricular septal defect (VSD) became evident on echocardiography. The VSD itself was not involved by vegetations. By our estimation, the septic embolism developed during her first admission, as the tricuspid vegetation became smaller on echocardiographic follow-up.

Rheumatic fever was, of course, considered in our differential diagnosis, especially since our patient met Jones criteria: one major criterion (carditis) and two minor criteria (fever and elevated ESR and CRP). On the other hand, the right-sided cardiac involvement was atypical for this entity.

Repeated blood cultures taken at presentation and during the course of hospitalization did not grow any pathogen. According to the literature, the rate of culture-negative endocarditis varies between studies, ranging from 2.5% to 31% [5]. In our patient we were able to identify L. lactis in the blood culture using molecular methods. L. lactis did not grow in blood culture despite a sufficient amount of blood, which could be attributed to the previous antibiotic treatment. There was no history of exposure to unpasteurized milk products and no history of previous gastrointestinal symptoms. Four weeks after treatment, the PCR test was negative for bacteria.

L. lactis is a mesophilic and microaerophilic fermenting microorganism widely used in the production of fermented food products. It is also occasionally isolated from the oropharynx, intestines, or vagina and may even be part of the normal flora. For a long time it was considered nonvirulent, with low pathogenicity in humans. L. lactis endocarditis was recently described in a 75-year-old man who had previously undergone mitral valve repair for severe mitral valve prolapse; a literature search shows other isolated cases of L. lactis-related endocarditis [9]. In another case report, intravascular catheter-related bacteremia caused by L. lactis was described in an infant and treated with cefotaxime and vancomycin for 14 days [10]. A recent case report described the sudden death of a four-month-old male infant without congenital heart disease; postmortem examination revealed that he had suffered severe IE, which led to his death. Microbiological genetic analysis of histological sections identified the pathogen causing the inflammation as a Lactococcus lactis subspecies [11].

## 4. Conclusions

Congenital heart disease with a left-to-right shunt (such as a VSD) should be taken into consideration in cases of right-sided native-valve endocarditis in nondrug-user patients. Blood PCR analysis should be performed in cases of partially treated or culture-negative endocarditis, and a CT scan should also be performed to exclude pulmonary emboli, especially in the absence of clinical improvement or with unexplained dyspnea despite appropriate treatment.

---
*Source: 1024054-2016-09-27.xml*
# Immunohistochemical Localization of the Bradykinin B1 and B2 Receptors in Human Nasal Mucosa

**Authors:** Hideaki Shirasaki; Etsuko Kanaizumi; Tetsuo Himi
**Journal:** Mediators of Inflammation (2009)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2009/102406

---

## Abstract

Bradykinin (BK) has been thought to be a potent mediator involved in allergic rhinitis, because BK was recovered from the nasal lavage fluid of allergic rhinitis patients after allergen provocation and BK receptor antagonists relieve nasal allergic symptoms. Two mammalian BK receptor subtypes, B1 and B2, have been defined based on their pharmacological properties. We investigated the localization of these receptors by immunohistochemistry. Human turbinates were obtained after turbinectomy from 12 patients with nasal obstruction refractory to medical therapy. The immunohistochemical study revealed that epithelial cells, submucosal glands, fibroblasts, vascular smooth muscle, vascular endothelial cells, and macrophages showed immunoreactivity for both B1 and B2 receptors. B2 receptor expression was found in peripheral nerve fibers, whereas B1 expression was not observed in nerves. These results may have important clinical implications for understanding the differential roles of the BK receptor subtypes in upper airway diseases such as allergic rhinitis and nonallergic rhinitis.

---

## Body

## 1. Introduction

The allergic response is a complex process involving the interaction of many mediators. Bradykinin (BK) is a potent inflammatory mediator, and its actions are mediated via specific cell surface receptors coupled to G-proteins. Two mammalian BK receptor subtypes, B1 and B2, have been reported, and the amino acid sequence of the B1 receptor is 36% identical to that of the B2 receptor [1]. Administration of exogenous BK into the human nasal airway causes nasal obstruction, rhinorrhea, and nasal pain [2, 3]. These effects appear to be mediated by the bradykinin B2 receptor, because bradykinin B2 receptor antagonists abolish bradykinin-induced nasal obstruction and plasma extravasation, whereas agonists of the bradykinin B1 receptor do not cause any symptoms [3]. Icatibant, a bradykinin B2 receptor antagonist, inhibits the immediate inflammatory response to antigen in subjects with perennial allergic rhinitis [4, 5]. These reports suggest that BK may play an important role in the pathogenesis of allergic rhinitis. A previous autoradiographic study using ¹²⁵I-BK demonstrated that specific ¹²⁵I-BK binding sites exist mainly on the small muscular arteries, venous sinusoids, and submucosal fibers of human nasal mucosa [6]. However, there has been no other report on BK receptor expression in the upper airway.

In the present study, immunohistochemistry for the bradykinin B1 and B2 receptors was performed to confirm the expression and distribution of these receptors in human nasal mucosa.

## 2. Materials and Methods

### 2.1. Tissue Preparation

Human inferior turbinates were obtained after turbinectomy from 12 patients with nasal obstruction refractory to medical therapy. Informed consent was obtained from all patients, and this study was approved by the ethics committee of Sapporo Medical University. All were nonsmokers, and 6 patients had perennial allergy against mites as defined by questionnaire and CAP test (Pharmacia, Uppsala, Sweden). All medications, including antibiotics, were prohibited for at least 3 weeks prior to the study.
Demographic and clinical characteristics of the patients are summarized in Table 1. The nasal mucosal specimens were immediately fixed in 10% formalin for immunohistochemistry.

Table 1 Demographic characteristics of allergic and nonallergic patients.

| | Allergic rhinitis (N = 6) | Nonallergic rhinitis (N = 6) |
|---|---|---|
| Sex (male/female) | 2/4 | 3/3 |
| Age | 31 (19–58) | 39 (28–55) |
| Specific IgE to house dust mite (d1) (kU/L) | 2.7 (1.0–13) | <0.35 |
| Total IgE (kU/L) | 210 (10–387) | 110 (10–185) |
| Blood eosinophils (cells/μL) | 370 (70–690) | 135 (55–240) |
| Current nasal symptoms (number of patients): | | |
| Nasal obstruction | 6 (all patients) | 4 |
| Sneezing | 4 | 0 |
| Rhinorrhea | 3 | 2 |

Data are expressed as median values and ranges (in brackets).

### 2.2. Immunohistochemistry

#### 2.2.1. Antibodies

For immunohistochemistry of the B1 receptor, a rabbit antihuman B1 receptor polyclonal antibody raised against a peptide corresponding to the C-terminal domain of the human B1 receptor (catalog # LS-A3580, Lifespan Biosciences, Mich, USA) was used at a 1:20 dilution. Similarly, for immunohistochemistry of the B2 receptor, a rabbit antihuman B2 receptor polyclonal antibody raised against a peptide corresponding to the C-terminal domain of the human B2 receptor (catalog # LS-A797, Lifespan Biosciences, Mich, USA) was used at a 1:100 dilution. To identify the subsets of cells expressing each bradykinin receptor, the following monoclonal antibodies were used: anti-CD68 (KP-1 clone, Dako Corporation, Carpinteria, Calif, USA) for macrophages, anti-CD31 (JC70A clone, Dako) for vascular endothelial cells, antihuman fibroblast (5B5 clone, Dako) for fibroblasts, anticytokeratin (AE1/AE3 clone, Dako) for epithelial cells, and antineurofilament protein (2F11 clone, Dako) for peripheral nerves.

#### 2.2.2. Immunohistochemistry

Deparaffinized sections were initially incubated with 3% H₂O₂ in methanol for 10 minutes to quench endogenous peroxidase activity. After microwave treatment (10 minutes at 500 W in citrate buffer), the sections were incubated in blocking solution (10% normal goat serum in PBS) for 30 minutes before incubation with the primary antibody. The sections were then incubated with the anti-bradykinin B1 or B2 polyclonal antibody overnight at 4°C, washed, and incubated for 30 minutes with EnVision+, Peroxidase (Dako). A further wash in PBS was followed by development with DAB (Dako) as the chromogen for signal visualization. The slides were counterstained with Mayer's haematoxylin and coverslipped using mounting medium.

To identify the subsets of cells expressing each bradykinin receptor, some sections were stained by an immunofluorescence technique. For double staining, deparaffinized sections were incubated overnight at 4°C with a combination of the rabbit polyclonal antihuman bradykinin B1 or B2 antibody and one of the mouse monoclonal antihuman phenotypic marker antibodies. Sections were washed in PBS and incubated for 30 minutes with Alexa Fluor 594-labelled goat antimouse IgG (diluted 1:50; Molecular Probes, Ore, USA) and Alexa Fluor 488-labelled goat antirabbit IgG (diluted 1:50; Molecular Probes). Sections were mounted with SlowFade antifade kits (Molecular Probes) and examined under an Olympus BX51 microscope with a DP70 CCD camera (Olympus Optical Co., Tokyo, Japan). All images were processed with DP Controller and DP Manager software (Olympus Optical Co.) for image analysis. With this method, bradykinin B1 or B2 receptor-expressing cells appear green, cellular phenotypic markers appear red, and the combined signal is visualized as yellow. Negative controls were obtained by replacing the primary antibodies with mouse IgG1 and rabbit immunoglobulin fraction (Dako).
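The double-staining readout (receptor in green, phenotype marker in red, overlap in yellow) lends itself to simple pixel-level quantification. Below is a minimal sketch, assuming two 8-bit single-channel images and illustrative intensity thresholds; the study itself reports qualitative overlays rather than a computed metric of this kind.

```python
import numpy as np

def colocalization_mask(green, red, g_thresh=50, r_thresh=50):
    """Boolean mask of pixels positive in both channels (the 'yellow'
    overlap). green and red are 2-D uint8 arrays from the Alexa Fluor
    488 and 594 channels; the thresholds are illustrative and would
    need calibration against the negative-control slides."""
    return (np.asarray(green) >= g_thresh) & (np.asarray(red) >= r_thresh)

# Fraction of marker-positive pixels that also show receptor signal,
# e.g. CD68-positive macrophages expressing the B1 receptor:
# overlap = colocalization_mask(b1_channel, cd68_channel)
# marker_pos = cd68_channel >= 50
# fraction = overlap.sum() / max(marker_pos.sum(), 1)
```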
## 3. Results
As shown in Figure 1, immunoreactivity for the B1 receptor was clearly detected in submucosal glands, epithelial cells (Figures 1(a), 1(b)), and fibroblasts (Figures 1(c), 1(d)). Immunoreactivity for the B2 receptor was clearly detected in submucosal glands, epithelial cells (Figures 2(a), 2(b)), vascular smooth muscle (Figures 2(c), 2(d)), nerve bundles, and fibroblasts (Figures 2(c), 2(d)). Specificity of the staining was also confirmed by the absence of labeling with normal rabbit immunoglobulin (Figures 1(e) and 1(f)).

Figure 1 Immunohistochemical staining for the bradykinin B1 receptor in human allergic ((a), (c), (e)) and nonallergic ((b), (d), (f)) nasal mucosa. Inferior turbinates were stained with antihuman B1 receptor antibody ((a)–(d)) or normal rabbit immunoglobulin ((e), (f)). Immunoreactivity for the B1 receptor was clearly detected in submucosal glands ((a), (b)), epithelial cells ((a), (b)), and fibroblasts ((c), (d)). ep: epithelial cells; v: blood vessels; g: submucosal glands; n: nerves. Scale bar = 100 μm.

Figure 2 Immunohistochemical staining for the bradykinin B2 receptor in human allergic ((a), (c), (e)) and nonallergic ((b), (d), (f)) nasal mucosa. Inferior turbinates were stained with antihuman B2 receptor antibody ((a)–(d)) or normal rabbit immunoglobulin ((e), (f)). Immunoreactivity for the B2 receptor was clearly detected in submucosal glands, epithelial cells (c), vascular smooth muscle ((c), (d)), nerve bundles, and fibroblasts (d). ep: epithelial cells; v: blood vessels; g: submucosal glands; n: nerves. Scale bar = 100 μm.

In order to clarify which cell types express the bradykinin B1 and B2 receptors, we performed double immunofluorescence staining. As shown in Figures 3 and 4, epithelial cells, submucosal glands (Figures 3(b) and 4(b)), fibroblasts (Figures 3(d) and 4(d)), vascular smooth muscle, and vascular endothelial cells (Figures 3(f) and 4(f)) express both B1 and B2 receptors. The B2 receptor was found in nerve fibers (Figure 4(h)), whereas B1 expression was not observed in nerves (Figure 3(h)). As shown in Figure 5, the majority of CD68-positive macrophages showed immunoreactivity for both B1 and B2 receptors.

Figure 3 Identification of subsets of cells expressing the B1 receptor in human allergic nasal mucosa. Single-staining immunofluorescence for each cell type (panels (a), (c), (e), and (g)) and dual staining for the cell marker and B1 receptor (panels (b), (d), (f), and (h)). The B1 receptor protein (green) shows colocalization with the antiphenotypic-marker antibody (red), and the combined signal is visualized as yellow. Identification markers: cytokeratin (epithelial cells) (a), (b); fibroblast (c), (d); CD31 (vascular endothelial cells) (e), (f); neurofilament protein (peripheral nerves) (g), (h). Scale bar = 100 μm.

Figure 4 Identification of subsets of cells expressing the B2 receptor in human allergic nasal mucosa. Single-staining immunofluorescence for each cell type (panels (a), (c), (e), and (g)) and dual staining for the cell marker and B2 receptor (panels (b), (d), (f), and (h)). The B2 receptor protein (green) shows colocalization with the antiphenotypic-marker antibody (red), and the combined signal is visualized as yellow. Identification markers: cytokeratin (epithelial cells) (a), (b); fibroblast (c), (d); CD31 (vascular endothelial cells) (e), (f); neurofilament protein (peripheral nerves) (g), (h). Scale bar = 100 μm.
Figure 5 Expression of bradykinin B1 ((a), (b)) and B2 ((c), (d)) receptors on macrophages in human allergic nasal mucosa. (a) Macrophages (CD68-positive cells, red). (b) Overlay image of bradykinin B1 receptor protein (green) and macrophages (CD68-positive, red); the combined signal is visualized as yellow. (c) Macrophages (CD68-positive, red). (d) Overlay image of bradykinin B2 receptor protein (green) and macrophages (CD68-positive cells, red). Scale bar = 50 μm.

The expression levels of both B1 and B2 receptors on epithelial cells and fibroblasts were higher in allergic nasal mucosa (B1 receptor: Figures 1(a) and 1(c); B2 receptor: Figures 2(a) and 2(c)) than in nonallergic nasal mucosa (B1 receptor: Figures 1(b) and 1(d); B2 receptor: Figures 2(b) and 2(d)). The patterns of the other immunohistochemical findings in all 12 cases were remarkably similar, and we could not find any other differences in B1 and B2 receptor immunoreactivity between allergic and nonallergic nasal mucosae. The results are summarized in Table 2.

Table 2 Distribution pattern of B1 and B2 receptors in normal and allergic nasal mucosae.

| | Normal nasal mucosa: B1 | Normal nasal mucosa: B2 | Allergic nasal mucosa: B1 | Allergic nasal mucosa: B2 |
|---|---|---|---|---|
| Epithelium | + | + | ++ | ++ |
| Submucosal gland | + | + | + | + |
| Nerve | − | ++ | − | ++ |
| Fibroblast | + | + | ++ | ++ |
| Vascular endothelial cell | + | + | + | + |
| Vascular smooth muscle | + | ++ | + | ++ |
| Inflammatory cell (macrophage) | + | ++ | + | ++ |

## 4. Discussion

It is well known that the responses to vasoactive kinin peptides are mediated through the activation of two receptors, termed B1 and B2, which have been defined on the basis of the structure-function relationships of their agonists and antagonists [7]. The natural agonists of the B2 receptor are the nonapeptide bradykinin (BK) and the decapeptide Lys-BK (kallidin), which are generated by the proteolytic action of the serine protease kallikrein on the protein precursor kininogen [8]. BK and Lys-BK are weak B1 receptor agonists; however, cleavage of these two B2 agonists by arginine carboxypeptidases produces the high-affinity B1 receptor agonists [des-Arg9]-BK and [des-Arg10]-kallidin (DLBK), respectively [7]. The B1 receptor is not expressed at significant levels in normal tissues, but its synthesis can be induced after tissue injury and by inflammatory factors such as lipopolysaccharide and IL-1 beta [9]. On the other hand, the B2 receptor is constitutively expressed in many cell types, including smooth muscle cells, certain neurons, fibroblasts, and epithelial cells of the lung.

In the present study, we confirmed the expression of both bradykinin B1 and B2 receptors in human nasal epithelial cells. BK induces a rise in [Ca²⁺] in primary cultured human nasal epithelial cells, suggesting the existence of the B2 receptor on these cells [10]. The B1 receptor ligand Lys-des-Arg-BK activated extracellular signal-regulated kinase (ERK) and the transcription factor AP-1 in the human airway epithelial cell lines A549 and BEAS-2B [11]. Taken together, these previous observations and our present findings suggest the presence of functional B1 and B2 receptors in human nasal epithelial cells.

The present study also indicated significant expression of the B1 and B2 receptors on nasal fibroblasts. It has been reported that BK stimulates IL-1 [12], IL-6 [13], IL-8 [13, 14], and eotaxin [15] production in cultured human fibroblasts by increasing their gene expression. TNF-alpha and IL-1beta both induced an increase in B1 and B2 receptor expression in human lung fibroblasts [16].
The observation that local production of IL-1β during inflammation is accompanied by B1-receptor upregulation in several tissues has led to the hypothesis that this cytokine is directly involved in B1-receptor upregulation [9]. In the present study, expression of bradykinin B1 and B2 receptors was found not only on epithelial cells and fibroblasts but also on submucosal glands. It has been reported that bradykinin receptors were detected over submucosal glands in human and guinea pig airways by in vitro autoradiography [17]. With respect to the effect of BK on airway submucosal glands, it has been reported that BK directly stimulates isolated airway submucosal gland cells and induces mucus glycoprotein and Cl− secretion through activation of the B2 receptor [18]. BK also induces an increase in short-circuit current across a cultured gland cell layer from human airways, together with a rise in [Ca2+], indicating direct stimulation of ion transport in airway gland cells by BK [19]. On the other hand, some investigators have reported that BK had no significant effect on mucin release from human, feline, or ferret airway explants [6, 20]. In contrast to the clear expression of both B1 and B2 receptors on epithelial cells, fibroblasts, and submucosal glands, only B2 receptor expression, not B1, could be detected on peripheral nerves by immunohistochemistry. The B1 receptor is thought to be generally absent from healthy tissues [21, 22]. In contrast, B2 receptors are constitutively expressed in a range of cell types, including sensory neurons, and their activation results in excitation and sensitization of sensory neurons [23, 24]. As a B2 receptor agonist, BK can stimulate sensory nerve endings, causing the release of substance P and other neuropeptides [25]. It has been shown that nasal stimulation with BK causes nasal pain [2, 3], suggesting the existence of functional B2 receptors on sensory nerves. Using the double immunofluorescence technique, we confirmed the expression of both B1 and B2 receptors on macrophages. The potency of kinins to stimulate leukocytes is thought to depend on the differentiation and, especially, the activation stage of these cells. The differentiation of monocytes into macrophages is associated with functional and phenotypic changes. It has been shown that human peripheral monocytes express a low number of kinin B2 binding sites [26]. However, immature, unstimulated human monocyte-derived dendritic cells constitutively express both B1 and B2 receptors, whereas monocytes do not express B1 or B2 receptor protein [27], suggesting upregulation of B1 and B2 receptors during the differentiation of these cells. With respect to the effect of BK on monocytes, BK, acting via the B2 receptor, increased intracellular Ca2+ and stimulated the migration of immature human monocyte-derived dendritic cells [27]. Thus, local macrophages might be activated by locally released BK during the nasal allergic response. ## 5. Conclusions Using immunohistochemical techniques, we have demonstrated the distribution of bradykinin B1 and B2 receptors in human nasal mucosa. Although kinins do not appear to play a major role in allergic rhinitis, our findings should be of considerable interest for understanding the role of kinins in upper airway diseases such as allergic rhinitis and nonallergic rhinitis. --- *Source: 102406-2009-04-23.xml*
# Screening Biomarker as an Alternative to Endoscopy for the Detection of Early Gastric Cancer: The Combination of Serum Trefoil Factor Family 3 and Pepsinogen

**Authors:** Hyun Seok Lee; Seong Woo Jeon; Sachiyo Nomura; Yasuyuki Seto; Yong Hwan Kwon; Su Youn Nam; Yuko Ishibashi; Hiroshi Ohtsu; Yasukazu Ohmoto; Hae Min Yang
**Journal:** Gastroenterology Research and Practice (2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1024074

---

## Abstract

Objective. The serum pepsinogen test has limitations in its predictive power as a noninvasive biomarker for gastric cancer screening. We aimed to investigate whether the combination of TFF3 and pepsinogen could be an effective biomarker for the detection of gastric cancer, even in the early stages. Methods. In total, 281 patients with early gastric cancer (EGC) who underwent endoscopic submucosal dissection in Korea and 708 healthy individuals from Japan were enrolled in the derivation cohort. The validation cohort included 30 Korean patients with EGC and 30 Korean healthy control blood donors. Serum TFF3 levels were examined using enzyme-linked immunosorbent assay. Results. Using a cutoff of 6.73 ng/mL in the derivation cohort, the sensitivity of the combination of tests for EGC detection was superior (87.5%) to that of the TFF3 (80.4%) or pepsinogen test alone (39.5%). Similarly, in the validation cohort, the sensitivity of TFF3 plus pepsinogen was higher (90.0%) than that of the TFF3 (80.0%) or pepsinogen test alone (33.3%). Conclusion. The combination of serum TFF3 and pepsinogen is a more effective noninvasive biomarker for gastric cancer detection than pepsinogen or TFF3 alone, even in EGC. This trial is registered with NCT03046745.

---

## Body

## 1. Introduction

Gastric cancer is the third leading cause of cancer death in the world [1–3]. Approximately half of gastric cancer cases are diagnosed at advanced stages. One of the reasons for this is the invasiveness of esophagogastroduodenoscopy (EGD) screening examinations, which leads patients to avoid necessary tests [4]. The limitation of the pepsinogen test as a noninvasive serologic biomarker screening method is that the optimal cutoff value can be affected by several factors, such as H. pylori infection, age, gender, and the test method itself [5].

The trefoil factor family (TFF) of peptides comprises small (12–22 kDa) molecules that are secreted by the mammalian gastrointestinal tract. They are extremely stable in acidic conditions and resistant to heat degradation and proteolytic digestion. TFFs constitute a family of three peptides (TFF1, TFF2, and TFF3) that are widely expressed in a tissue-specific manner in the gastrointestinal tract. TFF3 is expressed in the goblet cells of the small and large intestines as well as in intestinal metaplasia in the stomach [6–10]. Serum TFF3 was shown to be a better potential screening tool for gastric cancer than pepsinogen in Japan [11].

These characteristics of TFF3 prompted us to analyze whether serum TFF3 can be a biomarker of early gastric cancer (EGC) in Koreans, as well as in the Japanese population. There is no previous study of serum TFF3 as a biomarker in an EGC population without advanced gastric cancer (AGC). Because the detection of early-stage cancer is associated with improved survival, we hypothesized that the combination of TFF3 with pepsinogen could enhance the sensitivity of EGC detection.
Thus, we investigated whether the combination of serum TFF3 and pepsinogen tests could be a more effective noninvasive tool for the detection of EGC.

## 2. Materials and Methods

### 2.1. Subjects

#### 2.1.1. Study Population of the Derivation Cohort

The patient group consisted of 281 EGC patients who underwent endoscopic submucosal dissection at the Kyungpook National University Medical Center in Korea from January 2011 to May 2013. We obtained blood samples from all the patients before their endoscopic treatment. The control group consisted of 708 healthy male and female blood donors who had received a health check at the Yamanaka Clinic in Japan from October 2011 to December 2012. The biopsy specimens for this study were provided by the National Biobank of Korea, Kyungpook National University Hospital (KNUH), which is supported by the Ministry of Health, Welfare and Family Affairs. All materials derived from the National Biobank of Korea, KNUH, were obtained under institutional review board-approved protocols.

#### 2.1.2. TFF3 Value in the Validation Cohort

In the derivation cohort of the current study, the control group consisted of Japanese individuals, and serum pepsinogen results were not obtained for this control group. To test our results in a validation cohort with both patients and controls from the same country, a Korean validation cohort was needed. The validation cohort was an independent cohort containing 30 Korean EGC patients enrolled from August 2016 to December 2016 and 30 Korean healthy control blood donors who received a health check, including EGD, from August 2016 to December 2016. Their data were prospectively collected and analyzed to validate the TFF3 value. The study protocols used for subjects in the validation cohort were identical to those used for subjects in the derivation cohort. The validity of the combination of TFF3 and pepsinogen for the detection of EGC in Korean individuals was assessed by receiver operating characteristic (ROC) analysis.

### 2.2. Methods

#### 2.2.1. Construction of Human TFF3 Expression Plasmids

Human TFF3 complementary deoxyribonucleic acid (cDNA) was cloned from Human Small Intestine Marathon-Ready cDNA (Clontech, Mountain View, CA, USA) by polymerase chain reaction. For His-tagged Escherichia coli expression, the human TFF3 cDNA fragments were inserted into the pET-21a(+) (Novagen) vector to create pET-hTFF3-His [11].

#### 2.2.2. Expression and Purification of Recombinant Human TFF3

BL21-CodonPlus (DE3)-RIL bacteria (Stratagene, Santa Clara, CA, USA) were transformed with the pET-hTFF3-His plasmid and then cultured in lysogeny broth medium at 37°C. Recombinant protein expression was induced by incubating cells with 1 mmol/L isopropyl β-D-1-thiogalactopyranoside for 5 hours. Bacterial pellets were harvested, the soluble protein fractions were extracted by sonication in 0.2% Triton X-100 and 50 mmol/L Tris-HCl (pH 8.0), and recombinant human TFFs were purified by Ni-resin chromatography (Invitrogen, Tokyo, Japan). Recombinant human TFFs were eluted from the Ni-resin column with 0.5 mol/L imidazole, 50 mmol/L Tris-HCl (pH 8.0), and 0.5 mol/L NaCl. Each elution fraction was analyzed by sodium dodecyl sulfate polyacrylamide gel electrophoresis and Western blot analysis. Concentrations of the purified recombinant human TFFs were measured using a protein assay (Bio-Rad Laboratories Inc., Tokyo, Japan) [11].
#### 2.2.3. Immunoassays for TFF3, Pepsinogen I, Pepsinogen II, and Helicobacter pylori Infection Status

Serum TFF3 levels were measured by enzyme-linked immunosorbent assay (ELISA). Antisera were prepared from rabbits immunized with human TFFs. The sensitivity of the TFF3 assay was 30 pg/mL. Serum pepsinogen I and pepsinogen II levels were measured using a latex-enhanced turbidimetric immunoassay (Hitachi Ltd, Tokyo, Japan), and the pepsinogen I/II ratio was calculated. A positive H. pylori infection status required at least one of the following tests to show evidence of infection: histology, rapid urease test, or [13C]-urea breath test.

### 2.3. Statistical Analysis

All statistical analyses were performed using JMP7 software (SAS Institute Inc., Cary, NC, USA) or SPSS version 14.0 (SPSS Inc., Chicago, IL, USA). The means of variables were compared between two groups using a t-test. The ROC curve for each evaluation was used to extract the corresponding cutoff point, which can be used to discriminate different gastric statuses. For that purpose, the area under each ROC curve was used to measure the discriminatory ability of the model. The resulting cutoff point for each evaluation was applied to the determination of sensitivity, specificity, and odds ratio, and the corresponding 95% confidence intervals were calculated. A two-sided P value of less than 0.05 was considered statistically significant.
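Section 2.3 describes deriving a cutoff from an ROC curve and reporting sensitivity, specificity, odds ratio, and AUC. The sketch below reproduces that pipeline under stated assumptions: it is not the authors' JMP/SPSS code, the TFF3 values are simulated (normal distributions loosely matching the means and SDs later reported in Table 1), and the Youden index is assumed as the cutoff criterion, which the paper does not name.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(42)
# Placeholder serum TFF3 values (ng/mL); normality is an assumption.
controls = rng.normal(7.05, 3.28, 708)   # healthy controls
patients = rng.normal(9.37, 4.67, 281)   # early gastric cancer

y_true = np.r_[np.zeros(708), np.ones(281)]  # 1 = early gastric cancer
y_score = np.r_[controls, patients]

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC:", roc_auc_score(y_true, y_score))

# Youden index: cutoff maximizing sensitivity + specificity - 1.
best = int(np.argmax(tpr - fpr))
cutoff, sens, spec = thresholds[best], tpr[best], 1 - fpr[best]

# Diagnostic odds ratio computed from sensitivity and specificity.
dor = (sens / (1 - sens)) / ((1 - spec) / spec)
print(f"cutoff={cutoff:.2f} ng/mL, sens={sens:.3f}, spec={spec:.3f}, OR={dor:.2f}")
```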
## 3. Results

### 3.1. Characteristics of the Derivation and Validation Cohorts

In the derivation cohort, 213 (75.8%) of the EGC patients were male, compared with 272 (38.4%) of the controls. The mean age of patients in the cancer group was 63.4 ± 9.3 years, and that of the controls was 67.4 ± 11.9 years. The rate of positive H. pylori infection in the cancer group was 48.4%. Of the 281 studied tumors, 256 (91.1%) were histologically classified as differentiated type and 25 (8.9%) as undifferentiated type (Table 1).
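The between-group P values for categorical variables in Table 1 can be reproduced with a standard chi-square test; the paper names only a t-test and the JMP/SPSS packages, so the exact test is an inference from the reported values. A minimal check for the validation cohort's sex distribution (counts taken from Table 1):

```python
from scipy.stats import chi2_contingency

# Validation cohort: 21/30 males among EGC patients vs 15/30 among controls.
table = [[21, 9],    # patients: male, female
         [15, 15]]   # controls: male, female
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(p, 3))   # ~0.114, matching the P value reported in Table 1
```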
The mean serum TFF3 level in the patients with gastric cancer was 9.37 ± 4.67 ng/mL, significantly higher than that in the control group (7.05 ± 3.28 ng/mL; P<0.001; Table 1, Figure 1).

Table 1 Baseline characteristics of patients with early gastric cancer and control groups in the derivation and validation cohorts.

| Characteristics | Patients | Controls | P value |
|---|---|---|---|
| **Derivation cohort** | | | |
| n | 281 | 708 | |
| Sex, male, n (%) | 213 (75.8) | 272 (38.4) | <0.001 |
| Age (years) | 63.4 ± 9.3 | 67.4 ± 11.9 | <0.001 |
| TFF3 value (ng/mL) | 9.37 ± 4.67 | 7.05 ± 3.28 | <0.001 |
| Male | 9.21 ± 3.42 | 7.19 ± 3.86 | <0.001 |
| Female | 9.87 ± 7.34 | 6.96 ± 2.86 | 0.002 |
| Helicobacter pylori positivity, n (%) | 136 (48.4) | NA | |
| Mean tumor size (mm) | 22.0 ± 13.5 | NA | |
| Submucosal invasion, n (%) | 35 (12.5) | NA | |
| Lymphovascular invasion, n (%) | 5 (1.8) | NA | |
| Histologic type: differentiated (WD, MD), n (%) | 256 (91.1) | NA | |
| Histologic type: undifferentiated (PD, SRC), n (%) | 25 (8.9) | NA | |
| Lauren classification: intestinal, n (%) | 261 (92.9) | NA | |
| Lauren classification: diffuse, n (%) | 20 (7.1) | NA | |
| **Validation cohort** | | | |
| n | 30 | 30 | |
| Sex, male, n (%) | 21 (70.0) | 15 (50.0) | 0.114 |
| Age (years) | 59.5 ± 10.7 | 66.6 ± 12.0 | 0.002 |
| TFF3 value (ng/mL) | 9.01 ± 4.21 | 6.92 ± 2.76 | <0.001 |
| Intestinal metaplasia, n (%) | 13 (43.3) | 3 (10.0) | 0.004 |

Data are presented as the mean ± SD. TFF3: trefoil factor family 3; WD: well-differentiated adenocarcinoma; MD: moderately differentiated adenocarcinoma; PD: poorly differentiated adenocarcinoma; SRC: signet ring cell carcinoma; NA: not applicable.

Figure 1 Serum trefoil factor family 3 (TFF3) levels in patients with gastric cancer compared with healthy control individuals in the derivation cohort. The TFF3 level was significantly higher in patients with gastric cancer (P<0.001).

For the validation cohort, 30 Korean EGC patients and 30 Korean healthy control subjects were enrolled (Table 1). There were 21 (70.0%) male patients in the EGC group and 15 (50.0%) in the control group. The mean age of EGC patients was 59.5 ± 10.7 years, and that of the controls was 66.6 ± 12.0 years. The mean serum TFF3 level in patients with gastric cancer was 9.01 ± 4.21 ng/mL, significantly higher than that in the control group (6.92 ± 2.76 ng/mL; P<0.001).

### 3.2. Effect of H. pylori Infection on Serum TFF3 Levels in the Derivation Cohort

To test the diagnostic accuracy of serum TFF3 for identifying H. pylori infection among patients with cancer, ROC analysis was performed (data not shown). The area under the ROC curve of TFF3 was 0.445. To test the diagnostic accuracy of serum TFF3 for identifying EGC, ROC analysis was performed. For all patients (both H. pylori-positive and H. pylori-negative), the sensitivity, specificity, odds ratio, area under the curve, and cutoff value for TFF3 were 0.804, 0.576, 5.60, 0.729, and 6.73, respectively. The positive and negative predictive values for TFF3 were 0.430 and 0.881, respectively (Figure 2(a)). To further evaluate TFF3, patients were divided according to H. pylori infection status, and ROC analysis was performed. The area under the curve was 0.716 for H. pylori-positive patients (Figure 2(b)) and 0.740 for H. pylori-negative patients (Figure 2(c)).

Figure 2 Receiver operating characteristic (ROC) curves of trefoil factor family 3 (TFF3) for predicting the presence of early gastric cancer in the derivation cohort. (a) ROC curve of serum TFF3 for all (both Helicobacter pylori-positive and H. pylori-negative) patients. The sensitivity, specificity, odds ratio, area under the curve, and cutoff value of TFF3 were 0.804, 0.576, 5.60, 0.729, and 6.73, respectively.
The positive and negative predictive values of TFF3 were 0.430 and 0.881, respectively. (b) For H. pylori-positive patients, the sensitivity, specificity, odds ratio, and area under the curve were 0.772, 0.576, 4.61, and 0.716, respectively. (c) For H. pylori-negative patients, the sensitivity, specificity, odds ratio, and area under the curve were 0.835, 0.576, 6.86, and 0.740, respectively.

### 3.3. Histologic Types and Serum TFF3 Levels in the Derivation Cohort

To test the influence of EGC histology on serum TFF3 levels, the TFF3 level in each patient's serum was compared across EGC histologic types. Differentiated gastric cancer included cases with well-differentiated or moderately differentiated adenocarcinomas. Gastric cancer with undifferentiated-type histology included cases with poorly differentiated adenocarcinoma or signet ring cell carcinoma. Serum TFF3 levels of patients with the differentiated type and of those with the undifferentiated type of EGC did not differ significantly (9.53 ± 4.83 ng/mL versus 7.66 ± 1.82 ng/mL, respectively; P=0.056). On the other hand, the serum TFF3 level in patients with intestinal-type EGC was significantly higher than in patients with the diffuse type (9.54 ± 4.78 ng/mL versus 7.16 ± 1.89 ng/mL, respectively; P=0.028; Figure 3). For other pathologic features of EGC, such as submucosal invasion or lymphovascular invasion, there were no significant differences in serum TFF3 levels (data not shown).

Figure 3 Distribution of serum trefoil factor family 3 (TFF3) in differentiated- or undifferentiated-type and intestinal- or diffuse-type early gastric cancer (EGC) in the derivation cohort. (a) The serum TFF3 levels of patients with differentiated-type histology and of those with undifferentiated-type EGC did not differ significantly (P=0.056). (b) Serum TFF3 levels in patients with intestinal-type EGC were significantly higher than those in patients with diffuse-type cancer (P=0.028).

### 3.4. Combination of the Serum TFF3 and Pepsinogen Tests in the Derivation Cohort

We analyzed the usefulness of determining the TFF3 level together with pepsinogen testing. The numbers of patients with gastric cancer and positive or negative results for both tests are shown in Table 2. The cutoff values defining a positive pepsinogen test were a serum pepsinogen I level of <70 ng/mL and a serum pepsinogen I/II ratio of <3. Under these cutoff values, 170 of the 281 patients with EGC would have been missed by pepsinogen screening alone. However, when serum TFF3 testing was added to the gastric cancer screening, 135 of these 170 EGC patients could be identified by the TFF3 examination. Conversely, 20 of the 281 patients were not detected by TFF3 testing but were detected by pepsinogen testing.

Table 2 Evaluation of patients with early gastric cancer using pepsinogen and TFF3 levels.

| | TFF3 (−) | TFF3 (+) | Total |
|---|---|---|---|
| **Derivation cohort** | | | |
| Pepsinogen test (−) | 35 (20.6%) | 135 (79.4%) | 170 |
| Pepsinogen test (+) | 20 (18.0%) | 91 (82.0%) | 111 |
| Total | 55 | 226 | 281 |
| **Validation cohort** | | | |
| Pepsinogen test (−) | 3 (15.0%) | 17 (85.0%) | 20 |
| Pepsinogen test (+) | 3 (30.0%) | 7 (70.0%) | 10 |
| Total | 6 | 24 | 30 |

TFF3: trefoil factor family 3. Serum pepsinogen test (+): pepsinogen I < 70 ng/mL and pepsinogen I/II ratio < 3.

The sensitivity of the individual pepsinogen and TFF3 tests was 39.5% and 80.4%, respectively. With combination testing, the sensitivity for gastric cancer presence was 87.5%, higher than that of TFF3 testing alone.
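Because Table 2 gives the full 2×2 breakdown, the headline sensitivities and the predictive values quoted in Section 3.2 can be checked with a few lines of arithmetic. In the check below, the control-group true/false negative counts are reconstructed from the reported specificity of 0.576, so those two figures are approximations.

```python
# Derivation cohort counts from Table 2 (281 EGC patients).
tff3_pos   = 226   # TFF3-positive at the 6.73 ng/mL cutoff
pg_pos     = 111   # pepsinogen-positive (PG I < 70 ng/mL and PG I/II < 3)
pg_only    = 20    # pepsinogen-positive but TFF3-negative
double_neg = 35    # negative on both tests

print(tff3_pos / 281)              # 0.804 -> TFF3 sensitivity
print(pg_pos / 281)                # 0.395 -> pepsinogen sensitivity
print((tff3_pos + pg_only) / 281)  # 0.875 -> combined (either test positive)

# Predictive values of TFF3 alone, reconstructing the 708 controls from the
# reported specificity of 0.576 (counts are therefore approximate).
tp, fn = 226, 55
tn = 0.576 * 708                   # ~408 true negatives
fp = 708 - tn                      # ~300 false positives
print(tp / (tp + fp))              # ~0.430 -> PPV, as reported
print(tn / (tn + fn))              # ~0.881 -> NPV, as reported
```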
### 3.5. Combination of the Serum TFF3 and Pepsinogen Tests in the Korean Validation Cohort

To test the diagnostic performance of the pepsinogen test and serum TFF3 for identifying EGC in the Korean validation cohort, ROC analysis was performed (Figure 4). The sensitivity, specificity, odds ratio, and area under the curve of the pepsinogen test for the detection of EGC, according to the definition of a positive pepsinogen test, were 0.333, 0.933, 7.00, and 0.633, respectively. Using the cutoff value of 6.73 ng/mL for TFF3, the corresponding values for TFF3 were 0.800, 0.433, 3.06, and 0.651, and those for the combination of TFF3 and the pepsinogen I/II ratio were 0.900, 0.367, 5.21, and 0.756.

Figure 4 Receiver operating characteristic curves of the pepsinogen I/II ratio, serum trefoil factor family 3 (TFF3), and TFF3 plus pepsinogen I/II ratio in the validation cohort. The sensitivity, specificity, odds ratio, and area under the curve of the pepsinogen test for the detection of EGC, according to the definition of a positive pepsinogen test, were 0.333, 0.933, 7.00, and 0.633, respectively. Using the cutoff value of 6.73 ng/mL for TFF3, the corresponding values for TFF3 were 0.800, 0.433, 3.06, and 0.651, and those for the combination of TFF3 and the pepsinogen I/II ratio were 0.900, 0.367, 5.21, and 0.756. AUC: area under the curve.

The positive and negative predictive values of the pepsinogen I/II ratio were 0.833 and 0.583, respectively, and those of TFF3 were 0.585 and 0.684, respectively. Those of the combination of TFF3 and the pepsinogen I/II ratio were 0.587 and 0.786, respectively.
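Operationally, the combined screen is an OR rule: a subject is flagged if either serologic criterion is met. A minimal sketch of that rule follows, using the cutoffs stated in Sections 3.2 and 3.4; the function names are mine, and whether a value exactly at the TFF3 cutoff counts as positive is not stated in the paper, so `>=` is an assumption. Applied to the validation counts in Table 2, the rule reproduces the 90.0% combined sensitivity.

```python
def tff3_positive(tff3_ng_ml: float, cutoff: float = 6.73) -> bool:
    # Serum TFF3 at or above the ROC-derived cutoff from the derivation cohort.
    return tff3_ng_ml >= cutoff

def pepsinogen_positive(pg1: float, pg2: float) -> bool:
    # Positive pepsinogen test: PG I < 70 ng/mL and PG I/II ratio < 3.
    return pg1 < 70 and (pg1 / pg2) < 3

def combined_screen(tff3: float, pg1: float, pg2: float) -> bool:
    # Flag for endoscopy if either serologic criterion is met (OR rule).
    return tff3_positive(tff3) or pepsinogen_positive(pg1, pg2)

# Validation cohort, Table 2: of 30 EGC patients, 24 were TFF3-positive and
# 3 more were pepsinogen-positive only; 3 were negative on both tests.
print((24 + 3) / 30)  # combined sensitivity = 0.900
```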
Using the cutoff value of 6.73 ng/mL for TFF3, those for TFF3 were 0.800, 0.433, 3.06, and 0.651, respectively. Those for the combination of TFF3 and pepsinogen l/ll ratio were 0.900, 0.367, 5.21, and 0.756, respectively. AUC: area under the curve.The positive and negative predictive values of pepsinogen l/ll ratio were 0.833 and 0.583, respectively, and those of TFF3 were 0.585 and 0.684, respectively. Those of the combination of TFF3 and pepsinogen l/ll ratio were 0.587 and 0.786, respectively. ## 4. Discussion The pepsinogen test is used for gastric cancer screening in Japan [4], where test sensitivity in population-based studies ranges from 71% to 84%, and specificity ranges from 57% to 78% [12]. In the present study, we compared serum TFF3 with serum pepsinogen test as serologic screening tools for detection of EGC in Korean patients. Sensitivity of the pepsinogen test was 39.5% in the derivation cohort, showing a lower sensitivity than those of the Japanese studies. It seems that the pepsinogen test for gastric cancer is easily influenced by various factors, including H. pylori status and the test method itself, and therefore does not meet the ideal screening criteria [5, 13, 14]. On the other hand, the serum TFF3 test showed higher sensitivity (80.4%) than pepsinogen test for detecting EGC in our study. Moreover, results of the serum TFF3 test were not influenced by H. pylori status. Similarly, recent Japanese study on 1260 healthy individuals showed that serum TFF3 values were not considerably affected by H. pylori status and eradication [15]. The authors suggested that serum TFF3 could be a stable biomarker of gastric cancer even after H. pylori eradication in contrast with the pepsinogen test [15]. Because TFF3 is not expressed in epithelial cells of the stomach and is only expressed in the intestinal goblet cells of the metaplasia, serum TFF3 levels are less influenced by H. pylori infection [15–17].The TFFs that consist of TFF1, TFF2, and TFF3 are highly expressed in tissues containing mucus-producing cells. They play a key role in the maintenance of mucosal integrity and oncogenic transformation, growth, and metastatic extension of solid tumors [18–20]. TFF3 is expressed in goblet cells of the small and large intestines as well as in the intestinal metaplasia in the stomach [6–8].Recent data indicate that serum TFFs, especially TFF3, could be potential biomarkers for the detection of gastric cancer. In a Japanese study conducted on 183 patients with gastric cancer and 280 healthy individuals, using a cutoff of 3.6 ng/mL for TFF3, the odds ratio for gastric cancer significantly increased (odds ratio 18.1; 95% confidence interval 11.2–29.2) and the sensitivity and specificity for predicting gastric cancer were 80.9 and 81.0%, respectively [11]. When comparing ROC curves of the pepsinogen I/II ratio, TFF3, and TFF3 plus pepsinogen I/II ratio, the TFF3 plus pepsinogen was found to have better results for gastric-screening marker than pepsinogen or TFF3 test only [11]. In another study conducted on 192 patients with gastric cancer and 1254 controls, the sensitivity and specificity of pepsinogen test for predicting gastric cancer were 67% and 82%, respectively, while a combination of serum TFF3 and pepsinogen test showed a sensitivity of 80 and specificity of 80% in detecting gastric cancer [21]. These previous results are consistent with the results from our study, on patients with EGC. We also compared the combination of serum TFF3 and pepsinogen with TFF3 or pepsinogen test only. 
The ROC curve of TFF3 for predicting EGC presence showed that the sensitivity, specificity, and area under the curve were 80.4%, 57.6%, and 0.729, respectively, using a cutoff of 6.73 ng/mL in the derivation cohort. The sensitivity of the combination of tests (87.5%) for EGC detection was superior to that of TFF3 (80.4%) or pepsinogen test alone (39.5%). Similarly, in the validation cohort, the ROC curve of TFF3 showed that the sensitivity, specificity, and area under the curve were 80.0%, 43.3%, and 0.651, respectively, using a cutoff of 6.73 ng/mL. The area under the curve for TFF3 plus pepsinogen I/II ratio (0.756) was higher than that for TFF3 alone (0.651) or pepsinogen I/II ratio alone (0.633). Additionally, the sensitivity of TFF3 plus pepsinogen (90.0%) was higher than that of TFF3 (80.0%) or pepsinogen test only (33.3%). TFF3 is a more useful marker than pepsinogen test for detection of EGC, and the combination of serum TFF3 plus pepsinogen is more effective than TFF3 or pepsinogen only.We also evaluated the relationship between TFF3 and EGC histologic types according to differentiation and Lauren classification, respectively. We found that serum TFF3 levels in patients with differentiated-type gastric cancer were higher than in patients with undifferentiated-type histology, although these differences did not show statistical significance (P=0.056). Serum TFF3 levels in patients with intestinal-type gastric cancer were significantly higher than in those with diffuse-type cancer (P=0.028). Huang et al. [13] reported lower serum TFF3 concentrations in Chinese patients with differentiated-type and intestinal-type gastric cancers. Thus, the results of our study are not consistent with their report. In contrast, our study is highly consistent with the report of Kaise et al. [21] in Japan, which found that sensitivities of the TFF3 test alone and the combination of TFF3 and pepsinogen tests in diffuse-type adenocarcinoma were lower than those in intestinal-type cancer. Because TFF3 is strongly expressed by goblet cells in the epithelium of intestinal metaplasia of the stomach (according to the histopathogenesis of gastric cancer), a high TFF3 serum level would be expected in intestinal-type and differentiated-type gastric cancers. Further large studies are needed to explain these controversial results and discrepancies among previous studies.EGD is an invasive examination used for early detection of gastric cancer, particularly in many asymptomatic subjects. Positive results of the combination of serum TFF3 and pepsinogen for gastric cancer could be helpful to encourage patients to undergo the EGD.There were several limitations in this study. One was the relatively small sample size. However, our study showed similar results through two independent cohorts and this is the first study on the diagnostic usefulness of TFF3, which included only patients with EGC, and not AGC. Second, the proportion of diffuse-type EGC was small. However, the previous Korean study showed similar results and reported that diagnostic value of serum TFF3 for the diffuse-type cancer was somewhat decreased compared to that of intestinal-type gastric cancer although the proportion of EGC was 49.4% [17]. Third, control subjects in the derivation cohort were healthy Japanese and not Korean individuals. To overcome this and validate the present study, we analyzed a second independent Korean control cohort and results from both cohorts were similar. 
Fourth, our study did not show the detectability of precancerous lesions including atrophic gastritis by TFF3.In summary, this study has shown that the serum TFF3 can be a more effective biomarker of EGC in Koreans than the pepsinogen test. Moreover, the combination of TFF3 and pepsinogen test had an increased diagnostic power as a screening modality. Additionally, results indicated the possibility of serum TFF3 level being associated with the histologic type and differentiation type in EGC. Further large studies are required to confirm the strong predictive power of serum TFF3 and the combination tests with TFF3 and pepsinogen in patients with AGC or EGC, as well as to clarify the role of serum TFF3 as a nonendoscopic biomarker in population-based screening for gastric cancer. --- *Source: 1024074-2018-05-09.xml*
# Screening Biomarker as an Alternative to Endoscopy for the Detection of Early Gastric Cancer: The Combination of Serum Trefoil Factor Family 3 and Pepsinogen

**Authors:** Hyun Seok Lee; Seong Woo Jeon; Sachiyo Nomura; Yasuyuki Seto; Yong Hwan Kwon; Su Youn Nam; Yuko Ishibashi; Hiroshi Ohtsu; Yasukazu Ohmoto; Hae Min Yang

**Journal:** Gastroenterology Research and Practice (2018)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2018/1024074
---

## Abstract

Objective. The serum pepsinogen test has limited predictive power as a noninvasive biomarker for gastric cancer screening. We aimed to investigate whether the combination of TFF3 and pepsinogen could be an effective biomarker for the detection of gastric cancer even in the early stages. Methods. In total, 281 patients with early gastric cancer (EGC), who underwent endoscopic submucosal dissection in Korea, and 708 healthy individuals from Japan were enrolled in the derivation cohort. The validation cohort included 30 Korean patients with EGC and 30 Korean healthy control blood donors. Serum TFF3 levels were examined using an enzyme-linked immunosorbent assay. Results. Using a cutoff of 6.73 ng/mL in the derivation cohort, the sensitivity of the combination of tests for EGC detection (87.5%) was superior to that of TFF3 (80.4%) or the pepsinogen test alone (39.5%). Similarly, in the validation cohort, the sensitivity of TFF3 plus pepsinogen (90.0%) was higher than that of TFF3 (80.0%) or the pepsinogen test alone (33.3%). Conclusion. The combination of serum TFF3 and pepsinogen is a more effective noninvasive biomarker for gastric cancer detection than pepsinogen or TFF3 alone, even in EGC. This trial is registered with NCT03046745.

---

## Body

## 1. Introduction

Gastric cancer is the third leading cause of cancer death in the world [1–3]. Approximately half of gastric cancer cases are diagnosed at advanced stages. One reason for this is the invasiveness of esophagogastroduodenoscopy (EGD) screening examinations, which leads patients to avoid necessary tests [4]. The limitation of the pepsinogen test as a noninvasive serologic screening biomarker is that its optimal cutoff value can be affected by several factors, such as H. pylori infection, age, gender, and the test method itself [5].

The trefoil factor family (TFF) of peptides comprises small (12–22 kDa) molecules that are secreted by the mammalian gastrointestinal tract. They are extremely stable in acidic conditions and resistant to heat degradation and proteolytic digestion. TFFs constitute a family of three peptides (TFF1, TFF2, and TFF3) that are widely expressed in a tissue-specific manner in the gastrointestinal tract. TFF3 is expressed in the goblet cells of the small and large intestines as well as in intestinal metaplasia in the stomach [6–10]. Serum TFF3 was shown to be a better potential screening tool for gastric cancer than pepsinogen in Japan [11].

These characteristics of TFF3 prompted us to analyze whether serum TFF3 can be a biomarker of early gastric cancer (EGC) in Koreans, as well as in the Japanese population. There is no previous study on serum TFF3 as a biomarker in an EGC population without advanced gastric cancer (AGC). Because the detection of early-stage cancer is associated with improved survival, our hypothesis was that the combination of TFF3 with pepsinogen could enhance the sensitivity of EGC detection. Thus, we investigated whether the combination of the serum TFF3 and pepsinogen tests could be a more effective noninvasive tool for the detection of EGC.

## 2. Materials and Methods

### 2.1. Subjects

#### 2.1.1. Study Population of the Derivation Cohort

The patient group consisted of 281 EGC patients who underwent endoscopic submucosal dissection at the Kyungpook National University Medical Center in Korea from January 2011 to May 2013. We obtained blood samples from all the patients before their endoscopic treatment.
The control group consisted of 708 healthy male and female blood donors who had received a health check at the Yamanaka Clinic in Japan from October 2011 to December 2012. The biopsy specimens for this study were provided by the National Biobank of Korea, Kyungpook National University Hospital (KNUH), which is supported by the Ministry of Health, Welfare and Affairs. All materials derived from the National Biobank of Korea, KNUH, were obtained under institutional review board-approved protocols.

#### 2.1.2. TFF3 Value in the Validation Cohort

In the derivation cohort of the current study, the control group consisted of Japanese individuals, and we had not obtained serum pepsinogen results for that control group. To test our results in a validation cohort with both patients and controls from the same country, a Korean validation cohort was needed. The validation cohort was an independent cohort containing 30 Korean EGC patients enrolled from August 2016 to December 2016 and 30 Korean healthy control blood donors who received a health check, including EGD, over the same period. Their data were prospectively collected and analyzed to validate the TFF3 cutoff value. The study protocols used for subjects in the validation cohort were identical to those used for subjects in the derivation cohort. The validity of the combination of TFF3 and pepsinogen for the detection of EGC in Korean individuals was assessed by receiver operating characteristic (ROC) analysis.

### 2.2. Methods

#### 2.2.1. Construction of Human TFF3 Expression Plasmids

Human TFF3 complementary deoxyribonucleic acid (cDNA) was cloned from Human Small Intestine Marathon-Ready cDNA (Clontech, Mountain View, CA, USA) by polymerase chain reaction. For His-tagged Escherichia coli expression, the human TFF3 cDNA fragments were inserted into the pET-21a(+) (Novagen) vector to create pET-hTFF3-His [11].

#### 2.2.2. Expression and Purification of Recombinant Human TFF3

BL21-CodonPlus (DE3)-RIL bacteria (Stratagene, Santa Clara, CA, USA) were transformed with the pET-hTFF3-His plasmid and then cultured in lysogeny broth medium at 37°C. Recombinant protein expression was induced by incubating cells with 1 mmol/L isopropyl β-D-1-thiogalactopyranoside for 5 hours. Bacterial pellets were harvested, the soluble protein fractions were extracted by sonication in 0.2% Triton X-100 and 50 mmol/L Tris-HCl (pH 8.0), and recombinant human TFFs were purified by Ni-Resin chromatography (Invitrogen, Tokyo, Japan). Recombinant human TFFs were eluted from the Ni-Resin column with 0.5 mol/L imidazole, 50 mmol/L Tris-HCl (pH 8.0), and 0.5 mol/L NaCl. Each elution fraction was analyzed by sodium dodecyl sulfate polyacrylamide gel electrophoresis and Western blot analysis. Concentrations of the purified recombinant human TFFs were measured using a protein assay (Bio-Rad Laboratories Inc., Tokyo, Japan) [11].

#### 2.2.3. Immunoassays for TFF3, Pepsinogen I, Pepsinogen II, and Helicobacter pylori Infection Status

Serum TFF3 levels were measured by enzyme-linked immunosorbent assay (ELISA). Antisera were prepared from rabbits immunized with human TFFs. The sensitivity (detection limit) of the TFF3 assay was 30 pg/mL. Serum pepsinogen I and pepsinogen II levels were measured using a latex-enhanced turbidimetric immunoassay (Hitachi Ltd, Tokyo, Japan), and the pepsinogen I/pepsinogen II ratio was calculated.
A positive H. pylori infection status required at least one of the following tests to show evidence of infection: histology, rapid urease test, or [13C]-urea breath test.

### 2.3. Statistical Analysis

All statistical analyses were performed using JMP7 software (SAS Institute Inc., Cary, NC, USA) or SPSS version 14.0 (SPSS Inc., Chicago, IL, USA). The means of variables were compared between two groups using a t-test. The ROC curve for each evaluation was used to extract the corresponding cutoff point, which can be used to discriminate different gastric statuses. For that purpose, the area under each ROC curve was used to measure the discriminatory ability of the model. The resulting cutoff point for each evaluation was then applied to determine the sensitivity, specificity, and odds ratio, and 95% confidence intervals were calculated. A 2-sided P value of less than 0.05 was considered statistically significant.
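To make the cutoff-extraction step concrete, the following sketch (our illustration, not the authors' code; the scikit-learn dependency and the toy data are assumptions) computes an ROC curve for a serum marker and selects the threshold that maximizes Youden's index, sensitivity + specificity − 1, which is one common way to derive a cutoff such as the 6.73 ng/mL used for TFF3 below:

```python
# Illustrative sketch (not the authors' code): derive an ROC-based cutoff
# for a serum marker. Assumes scikit-learn and NumPy are available.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def youden_cutoff(y_true, marker):
    """Return (cutoff, sensitivity, specificity, auc) at maximal Youden's J."""
    fpr, tpr, thresholds = roc_curve(y_true, marker)
    j = tpr - fpr                      # Youden's J = sensitivity + specificity - 1
    best = int(np.argmax(j))
    return thresholds[best], tpr[best], 1.0 - fpr[best], roc_auc_score(y_true, marker)

# Hypothetical toy data: 1 = EGC patient, 0 = control; marker values in ng/mL.
y = np.array([1, 1, 1, 0, 0, 0, 0])
tff3 = np.array([9.4, 8.1, 6.9, 7.2, 5.8, 6.1, 4.9])
cutoff, sens, spec, auc = youden_cutoff(y, tff3)
print(f"cutoff = {cutoff:.2f} ng/mL, sens = {sens:.2f}, spec = {spec:.2f}, AUC = {auc:.2f}")
```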
## 3. Results

### 3.1. Characteristics of the Derivation and Validation Cohorts

In the derivation cohort, there were 213 (75.8%) male patients in the EGC group, compared with 272 (38.4%) male subjects in the control group. The mean age of patients in the cancer group was 63.4 ± 9.3 years, and that of the controls was 67.4 ± 11.9 years. The rate of positive H. pylori infection in the cancer group was 48.4%. Of the 281 studied tumors, 256 (91.1%) were histologically classified as differentiated type and 25 (8.9%) as undifferentiated type (Table 1). The mean serum TFF3 level in the patients with gastric cancer was 9.37 ± 4.67 ng/mL, significantly higher than that in the control group (7.05 ± 3.28 ng/mL; P < 0.001; Table 1, Figure 1).

Table 1. Baseline characteristics of patients with early gastric cancer and control groups in the derivation and validation cohorts.
| Characteristics | Patients | Controls | P value |
|---|---|---|---|
| **Derivation cohort** | | | |
| n | 281 | 708 | |
| Sex, male, n (%) | 213 (75.8) | 272 (38.4) | <0.001 |
| Age (years) | 63.4 ± 9.3 | 67.4 ± 11.9 | <0.001 |
| TFF3 value (ng/mL) | 9.37 ± 4.67 | 7.05 ± 3.28 | <0.001 |
| Male | 9.21 ± 3.42 | 7.19 ± 3.86 | <0.001 |
| Female | 9.87 ± 7.34 | 6.96 ± 2.86 | 0.002 |
| Helicobacter pylori positivity, n (%) | 136 (48.4) | NA | |
| Mean tumor size (mm) | 22.0 ± 13.5 | NA | |
| Submucosal invasion, n (%) | 35 (12.5) | NA | |
| Lymphovascular invasion, n (%) | 5 (1.8) | NA | |
| Histologic type, n (%) | | NA | |
| Differentiated (WD, MD) | 256 (91.1) | | |
| Undifferentiated (PD, SRC) | 25 (8.9) | | |
| Lauren classification, n (%) | | NA | |
| Intestinal | 261 (92.9) | | |
| Diffuse | 20 (7.1) | | |
| **Validation cohort** | | | |
| n | 30 | 30 | |
| Sex, male, n (%) | 21 (70.0) | 15 (50.0) | 0.114 |
| Age (years) | 59.5 ± 10.7 | 66.6 ± 12.0 | 0.002 |
| TFF3 value (ng/mL) | 9.01 ± 4.21 | 6.92 ± 2.76 | <0.001 |
| Intestinal metaplasia, n (%) | 13 (43.3) | 3 (10.0) | 0.004 |

Data are presented as the mean ± SD. TFF3: trefoil factor family 3; WD: well-differentiated adenocarcinoma; MD: moderately differentiated adenocarcinoma; PD: poorly differentiated adenocarcinoma; SRC: signet ring cell carcinoma; NA: not applicable.

Figure 1. Serum trefoil factor family 3 (TFF3) levels in patients with gastric cancer compared with healthy control individuals in the derivation cohort. The TFF3 level was significantly higher in patients with gastric cancer (P < 0.001).

For the validation cohort, 30 Korean EGC patients and 30 Korean healthy control subjects were enrolled (Table 1). There were 21 (70.0%) male patients in the EGC group and 15 (50.0%) in the control group. The mean age of EGC patients was 59.5 ± 10.7 years, and that of the controls was 66.6 ± 12.0 years. The mean serum TFF3 level in patients with gastric cancer was 9.01 ± 4.21 ng/mL, which was significantly higher than that in the control group (6.92 ± 2.76 ng/mL; P < 0.001).

### 3.2. Effect of H. pylori Infection on Serum TFF3 Levels in the Derivation Cohort

To test the diagnostic accuracy of serum TFF3 for identifying H. pylori infection among patients with cancer, ROC analysis was performed (data not shown). The area under the ROC curve of TFF3 was 0.445.

To test the diagnostic accuracy of serum TFF3 for identifying EGC, ROC analysis was performed. For both H. pylori-positive and H. pylori-negative patients combined, the sensitivity, specificity, odds ratio, area under the curve, and cutoff value for TFF3 were 0.804, 0.576, 5.60, 0.729, and 6.73 ng/mL, respectively. The positive and negative predictive values for TFF3 were 0.430 and 0.881, respectively (Figure 2(a)). To further evaluate TFF3, patients were divided according to H. pylori infection status and ROC analysis was repeated. The area under the curve was 0.716 for H. pylori-positive patients (Figure 2(b)) and 0.740 for H. pylori-negative patients (Figure 2(c)).

Figure 2. Receiver operating characteristic (ROC) curves of trefoil factor family 3 (TFF3) for predicting the presence of early gastric cancer in the derivation cohort. (a) ROC curve of serum TFF3 for all (both Helicobacter pylori-positive and H. pylori-negative) patients. The sensitivity, specificity, odds ratio, area under the curve, and cutoff value of TFF3 were 0.804, 0.576, 5.60, 0.729, and 6.73 ng/mL, respectively. The positive and negative predictive values of TFF3 were 0.430 and 0.881, respectively. (b) For H. pylori-positive patients, the sensitivity, specificity, odds ratio, and area under the curve were 0.772, 0.576, 4.61, and 0.716, respectively.
(c) For H. pylori-negative patients, the sensitivity, specificity, odds ratio, and area under the curve were 0.835, 0.576, 6.86, and 0.740, respectively.

### 3.3. Histologic Types and Serum TFF3 Levels in the Derivation Cohort

To test the influence of EGC histology on serum TFF3 levels, the TFF3 level in each patient's serum was compared across EGC histologic types. Differentiated gastric cancer included cases with well-differentiated or moderately differentiated adenocarcinomas. Gastric cancer with undifferentiated-type histology included cases with poorly differentiated adenocarcinoma or signet ring cell carcinoma. Serum TFF3 levels of patients with the differentiated type and of those with the undifferentiated type of EGC did not differ significantly (9.53 ± 4.83 ng/mL versus 7.66 ± 1.82 ng/mL, respectively; P = 0.056). On the other hand, the serum TFF3 level in patients with intestinal-type EGC was significantly higher than in patients with the diffuse type (9.54 ± 4.78 ng/mL versus 7.16 ± 1.89 ng/mL, respectively; P = 0.028; Figure 3). For other pathologic features of EGC, such as submucosal invasion or lymphovascular invasion, there was no significant difference in serum TFF3 levels (data not shown).

Figure 3. Distribution of serum trefoil factor family 3 (TFF3) in differentiated- or undifferentiated-type and intestinal- or diffuse-type early gastric cancer (EGC) in the derivation cohort. (a) The serum TFF3 levels of patients with differentiated-type histology and of those with undifferentiated-type EGC did not differ significantly (P = 0.056). (b) Serum TFF3 levels in patients with intestinal-type EGC were significantly higher than in those with diffuse-type cancer (P = 0.028).

### 3.4. Combination of the Serum TFF3 and Pepsinogen Tests in the Derivation Cohort

We analyzed the usefulness of determining the TFF3 level together with pepsinogen testing. The numbers of patients with gastric cancer with positive or negative results for each test are shown in Table 2. The cutoff values defining a positive pepsinogen test were a serum pepsinogen I level of <70 ng/mL and a serum pepsinogen I/II ratio of <3. Under these cutoff values, 170 of the 281 patients with EGC would have been classified as cancer-free by pepsinogen screening alone. However, when serum TFF3 testing was added to the gastric cancer screening, 135 of the 170 EGC patients who were not identified by pepsinogen testing could be identified by the TFF3 examination. Conversely, 20 of the 281 patients were not detected by TFF3 testing but were detected by pepsinogen testing.

Table 2. Evaluation of patients with early gastric cancer using pepsinogen and TFF3 levels.

| | TFF3 (−) | TFF3 (+) | Total |
|---|---|---|---|
| **Derivation cohort** | | | |
| Pepsinogen test (−) | 35 (20.6%) | 135 (79.4%) | 170 |
| Pepsinogen test (+) | 20 (18.0%) | 91 (82.0%) | 111 |
| Total | 55 | 226 | 281 |
| **Validation cohort** | | | |
| Pepsinogen test (−) | 3 (15.0%) | 17 (85.0%) | 20 |
| Pepsinogen test (+) | 3 (30.0%) | 7 (70.0%) | 10 |
| Total | 6 | 24 | 30 |

TFF3: trefoil factor family 3. Serum pepsinogen test (+): pepsinogen I < 70 ng/mL and pepsinogen I/II ratio < 3.

The sensitivity of the individual pepsinogen and TFF3 tests was 39.5% and 80.4%, respectively. With combination testing (positive on either test), the sensitivity for gastric cancer presence was 87.5%, which was higher than that of TFF3 testing alone, as illustrated in the sketch below.
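The reported sensitivities follow directly from the Table 2 counts when the combined test is scored positive if either component test is positive. The following sketch is our illustration, not the authors' code; the positivity rules are taken from the text, while the function names are ours:

```python
# Illustration (our own, not the authors' code) of how the Table 2 counts
# give the reported sensitivities. Positivity rules follow the text:
# pepsinogen test (+): pepsinogen I < 70 ng/mL AND pepsinogen I/II ratio < 3;
# TFF3 (+): serum TFF3 >= 6.73 ng/mL (derivation-cohort cutoff).
def pepsinogen_positive(pg1_ng_ml: float, pg2_ng_ml: float) -> bool:
    return pg1_ng_ml < 70 and (pg1_ng_ml / pg2_ng_ml) < 3

def tff3_positive(tff3_ng_ml: float, cutoff: float = 6.73) -> bool:
    return tff3_ng_ml >= cutoff

# Derivation-cohort EGC counts from Table 2 (pepsinogen status x TFF3 status):
pg_neg_tff3_neg, pg_neg_tff3_pos = 35, 135
pg_pos_tff3_neg, pg_pos_tff3_pos = 20, 91
total = pg_neg_tff3_neg + pg_neg_tff3_pos + pg_pos_tff3_neg + pg_pos_tff3_pos  # 281

sens_pg = (pg_pos_tff3_neg + pg_pos_tff3_pos) / total    # 111/281 = 0.395
sens_tff3 = (pg_neg_tff3_pos + pg_pos_tff3_pos) / total  # 226/281 = 0.804
sens_combined = (total - pg_neg_tff3_neg) / total        # 246/281 = 0.875
print(f"{sens_pg:.3f} {sens_tff3:.3f} {sens_combined:.3f}")
```

Only the 35 EGC patients negative on both tests are missed by the combination, which is why the combined sensitivity (87.5%) exceeds either test alone.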
### 3.5. Combination of the Serum TFF3 and Pepsinogen Tests in the Korean Validation Cohort

To test the diagnostic performance of the pepsinogen test and serum TFF3 for identifying EGC in the Korean validation cohort, ROC analysis was performed (Figure 4). The sensitivity, specificity, odds ratio, and area under the curve of the pepsinogen test for the detection of EGC, according to the definition of a positive pepsinogen test, were 0.333, 0.933, 7.00, and 0.633, respectively. Using the cutoff value of 6.73 ng/mL for TFF3, those for TFF3 were 0.800, 0.433, 3.06, and 0.651, respectively. Those for the combination of TFF3 and the pepsinogen I/II ratio were 0.900, 0.367, 5.21, and 0.756, respectively.

Figure 4. Receiver operating characteristic curves of the pepsinogen I/II ratio, serum trefoil factor family 3 (TFF3), and TFF3 plus pepsinogen I/II ratio in the validation cohort. The sensitivity, specificity, odds ratio, and area under the curve (AUC) of the pepsinogen test for detection of EGC, according to the definition of a positive pepsinogen test, were 0.333, 0.933, 7.00, and 0.633, respectively. Using the cutoff value of 6.73 ng/mL for TFF3, those for TFF3 were 0.800, 0.433, 3.06, and 0.651, respectively. Those for the combination of TFF3 and pepsinogen I/II ratio were 0.900, 0.367, 5.21, and 0.756, respectively.

The positive and negative predictive values of the pepsinogen I/II ratio were 0.833 and 0.583, respectively, and those of TFF3 were 0.585 and 0.684, respectively. Those of the combination of TFF3 and the pepsinogen I/II ratio were 0.587 and 0.786, respectively.
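As a consistency check (our annotation, not part of the original text), the quoted odds ratios agree with the diagnostic odds ratio implied by each sensitivity/specificity pair:

```latex
\mathrm{DOR}=\frac{\mathrm{sens}\cdot\mathrm{spec}}{(1-\mathrm{sens})(1-\mathrm{spec})},\qquad
\frac{0.333\times0.933}{0.667\times0.067}\approx 7.0,\qquad
\frac{0.800\times0.433}{0.200\times0.567}\approx 3.06,\qquad
\frac{0.900\times0.367}{0.100\times0.633}\approx 5.21,
```

matching the reported values of 7.00 (pepsinogen), 3.06 (TFF3), and 5.21 (combination).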
## 4. Discussion

The pepsinogen test is used for gastric cancer screening in Japan [4], where its sensitivity in population-based studies ranges from 71% to 84% and its specificity from 57% to 78% [12]. In the present study, we compared serum TFF3 with the serum pepsinogen test as serologic screening tools for the detection of EGC in Korean patients. The sensitivity of the pepsinogen test was 39.5% in the derivation cohort, lower than in the Japanese studies. The pepsinogen test for gastric cancer appears to be easily influenced by various factors, including H. pylori status and the test method itself, and therefore does not meet the criteria for an ideal screening test [5, 13, 14]. On the other hand, the serum TFF3 test showed higher sensitivity (80.4%) than the pepsinogen test for detecting EGC in our study. Moreover, the results of the serum TFF3 test were not influenced by H. pylori status. Similarly, a recent Japanese study of 1260 healthy individuals showed that serum TFF3 values were not considerably affected by H. pylori status or eradication [15]. The authors suggested that serum TFF3 could be a stable biomarker of gastric cancer even after H. pylori eradication, in contrast with the pepsinogen test [15]. Because TFF3 is not expressed in the normal gastric epithelium and is expressed only in the goblet cells of intestinal metaplasia, serum TFF3 levels are less influenced by H. pylori infection [15–17].

The TFFs, which consist of TFF1, TFF2, and TFF3, are highly expressed in tissues containing mucus-producing cells. They play a key role in the maintenance of mucosal integrity and in the oncogenic transformation, growth, and metastatic extension of solid tumors [18–20]. TFF3 is expressed in goblet cells of the small and large intestines as well as in intestinal metaplasia in the stomach [6–8].

Recent data indicate that serum TFFs, especially TFF3, could be potential biomarkers for the detection of gastric cancer. In a Japanese study of 183 patients with gastric cancer and 280 healthy individuals, using a cutoff of 3.6 ng/mL for TFF3, the odds ratio for gastric cancer was significantly increased (odds ratio 18.1; 95% confidence interval 11.2–29.2), and the sensitivity and specificity for predicting gastric cancer were 80.9% and 81.0%, respectively [11]. When ROC curves of the pepsinogen I/II ratio, TFF3, and TFF3 plus pepsinogen I/II ratio were compared, TFF3 plus pepsinogen performed better as a gastric cancer screening marker than the pepsinogen or TFF3 test alone [11]. In another study of 192 patients with gastric cancer and 1254 controls, the sensitivity and specificity of the pepsinogen test for predicting gastric cancer were 67% and 82%, respectively, while the combination of serum TFF3 and pepsinogen showed a sensitivity of 80% and a specificity of 80% [21]. These previous results are consistent with the results of our study in patients with EGC. We also compared the combination of serum TFF3 and pepsinogen with the TFF3 or pepsinogen test alone. The ROC curve of TFF3 for predicting the presence of EGC showed that the sensitivity, specificity, and area under the curve were 80.4%, 57.6%, and 0.729, respectively, using a cutoff of 6.73 ng/mL in the derivation cohort.
The sensitivity of the combination of tests (87.5%) for EGC detection was superior to that of TFF3 (80.4%) or the pepsinogen test alone (39.5%). Similarly, in the validation cohort, the ROC curve of TFF3 showed that the sensitivity, specificity, and area under the curve were 80.0%, 43.3%, and 0.651, respectively, using a cutoff of 6.73 ng/mL. The area under the curve for TFF3 plus pepsinogen I/II ratio (0.756) was higher than that for TFF3 alone (0.651) or the pepsinogen I/II ratio alone (0.633). Additionally, the sensitivity of TFF3 plus pepsinogen (90.0%) was higher than that of TFF3 (80.0%) or the pepsinogen test alone (33.3%). TFF3 is thus a more useful marker than the pepsinogen test for the detection of EGC, and the combination of serum TFF3 plus pepsinogen is more effective than either test alone.

We also evaluated the relationship between TFF3 and EGC histologic type according to differentiation and the Lauren classification, respectively. We found that serum TFF3 levels in patients with differentiated-type gastric cancer were higher than in patients with undifferentiated-type histology, although the difference did not reach statistical significance (P = 0.056). Serum TFF3 levels in patients with intestinal-type gastric cancer were significantly higher than in those with diffuse-type cancer (P = 0.028). Huang et al. [13] reported lower serum TFF3 concentrations in Chinese patients with differentiated-type and intestinal-type gastric cancers; the results of our study are therefore not consistent with their report. In contrast, our study is highly consistent with the report of Kaise et al. [21] in Japan, which found that the sensitivities of the TFF3 test alone and of the combination of TFF3 and pepsinogen tests were lower in diffuse-type adenocarcinoma than in intestinal-type cancer. Because TFF3 is strongly expressed by goblet cells in the epithelium of intestinal metaplasia of the stomach (according to the histopathogenesis of gastric cancer), a high serum TFF3 level would be expected in intestinal-type and differentiated-type gastric cancers. Further large studies are needed to explain these discrepancies among previous studies.

EGD is an invasive examination used for the early detection of gastric cancer, often in asymptomatic subjects. A positive result on the combined serum TFF3 and pepsinogen tests could help encourage such patients to undergo EGD.

There were several limitations in this study. First, the sample size was relatively small. However, our study showed similar results in two independent cohorts, and this is the first study of the diagnostic usefulness of TFF3 that included only patients with EGC and not AGC. Second, the proportion of diffuse-type EGC was small. However, a previous Korean study showed similar results, reporting that the diagnostic value of serum TFF3 for diffuse-type cancer was somewhat lower than that for intestinal-type gastric cancer, although the proportion of EGC in that study was 49.4% [17]. Third, the control subjects in the derivation cohort were healthy Japanese rather than Korean individuals. To overcome this and validate the present study, we analyzed a second, independent Korean control cohort, and the results from both cohorts were similar. Fourth, our study did not examine the detectability of precancerous lesions, including atrophic gastritis, by TFF3.

In summary, this study has shown that serum TFF3 can be a more effective biomarker of EGC in Koreans than the pepsinogen test.
Moreover, the combination of the TFF3 and pepsinogen tests had increased diagnostic power as a screening modality. Additionally, the results indicate that the serum TFF3 level may be associated with histologic type and degree of differentiation in EGC. Further large studies are required to confirm the predictive power of serum TFF3 and of combination testing with TFF3 and pepsinogen in patients with AGC or EGC, as well as to clarify the role of serum TFF3 as a nonendoscopic biomarker in population-based screening for gastric cancer.

---

*Source: 1024074-2018-05-09.xml*
# Clay Minerals Change the Toxic Effect of Cadmium on the Activities of Leucine Aminopeptidase

**Authors:** Shunyu Huang; Jingji Li; Jipeng Wang

**Journal:** Adsorption Science & Technology (2021)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2021/1024085

---

## Abstract

Soil leucine aminopeptidase (LAP) is a hydrolytic enzyme involved in the acquisition of nitrogen by microorganisms. In contaminated soils, LAP activity is affected not only by the type and concentration of heavy metals but also by the form of the enzyme. Here, we investigated the degree and mechanism of cadmium (Cd) inhibition of soil LAP and purified LAP. We also examined the effect of montmorillonite and kaolinite on LAP and on LAP contaminated with Cd. The results showed that Cd inhibition of LAP activity increased with increasing Cd concentration and that Cd exerted noncompetitive inhibition of LAP. The addition of clay minerals decreased LAP activity and the maximum reaction rate (Vmax), regardless of the presence of Cd. Montmorillonite decreased the affinity of LAP for the substrate (i.e., increased Km), while kaolinite increased the affinity. The clay mineral-immobilized LAP showed greater resistance to Cd contamination than the free LAP. The results obtained in this study may aid in understanding the toxic effects of heavy metals on soil enzymes.

---

## Body

## 1. Introduction

Cadmium (Cd) has become one of the most hazardous heavy metals in soil as a result of its potential, persistent, and irreversible toxicity [1–3]. When Cd enters soil, it affects the environmental health of the soil and the stability of the ecosystem, particularly soil microorganisms [4, 5]. Soil microorganisms are the most active and sensitive component of the soil ecosystem and secrete most of the soil extracellular enzymes associated with the soil nutrient cycle [6, 7]. Since soil enzyme activity is closely related to soil microorganisms and is sensitive to changes in environmental conditions, it is often used as an indicator to evaluate the extent to which heavy metals influence soil microbial function and soil ecosystem health [8–12].

According to previous studies, soil enzyme activity decreases exponentially with increasing heavy metal concentrations, because heavy metals displace enzyme conformation-related metals and occupy the active center of the enzyme, or bind to sulfhydryl, amino, and carboxyl groups in the enzyme structure and thereby reduce the number of available active sites [13–16]. However, the response of soil enzyme activity to heavy metals is related not only to the type and concentration of the heavy metals but also to the type of soil enzyme and to soil properties, such as clay mineral content [17, 18]. Clay minerals form enzyme complexes whose molecular structure and catalytic properties differ from those of free enzymes, thus affecting the contact of enzymes with heavy metals and substrates [19–21]. At the same time, heavy metals can interact with clay mineral surfaces and compete with enzymes to form heavy metal-enzyme-clay mineral complexes [22, 23]. Therefore, free enzymes, soil enzymes, and enzymes immobilized by clay minerals have different responses to heavy metal pollution.

Leucine aminopeptidase (LAP, Enzyme Commission number: 3.4.11.1) catalyzes the hydrolysis of leucine and other hydrophobic amino acids at the N-terminus of polypeptides. It is one of the enzymes involved in the microbial acquisition of N in soil [6].
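Because the abstract reports that Cd exerted noncompetitive inhibition of LAP, the textbook Michaelis–Menten rate law for a noncompetitive inhibitor may be a useful reference point (a standard relation, not an equation taken from this paper):

```latex
v=\frac{V_{\max}\,[S]}{\left(K_m+[S]\right)\left(1+[I]/K_i\right)}
```

Here [S] is the substrate concentration, [I] the inhibitor (Cd) concentration, and K_i the inhibition constant; the apparent Vmax is reduced by the factor 1 + [I]/K_i while Km is unchanged, which is the kinetic signature referred to in the abstract.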
Cd acts as an enzyme inhibitor by replacing the metal associated with the active center of phosphatase [24, 25]. Additionally, soil properties (e.g., organic C, total N, pH, soil particle size, and clay content) can influence the toxicity of heavy metals in soil to soil enzymes [26]. On this basis, we anticipated that the inhibition of LAP by Cd may be due to the displacement of enzyme conformation-related metals and occupation of the active center of the enzyme; thus, the interaction of clay minerals with LAP might lead to different Cd toxicity toward free LAP, soil LAP, and immobilized LAP. It has been demonstrated that Cd shows exponential or logarithmic inhibition of other hydrolytic enzymes, such as phosphatase and β-glucosidase [27, 28]. However, the toxicity of Cd to LAP has been poorly studied, and there is barely any information available on how the interaction between clay minerals and LAP affects Cd toxicity.

Consequently, we investigated Cd toxicity toward LAP in different states of exposure (free LAP, LAP in soil, and LAP immobilized by clay minerals). A preliminary attempt was also made to investigate the reasons for cadmium toxicity to LAP (using the toxicity of other heavy metals to LAP as a reference and performing experiments on the recovery of Cd-inhibited LAP activity after Mg and Mn addition), with two aims: (1) to characterize the different responses of free LAP, soil LAP, and clay mineral-immobilized LAP to Cd toxicity and (2) to clarify the mechanism of LAP inhibition by Cd.

Our work is a preliminary investigation into the mechanisms by which Cd contamination affects LAP (soil LAP and purified LAP) and the role that clay minerals play in this process. It provides a theoretical basis for restoring the activity and function of LAP in Cd-contaminated soils, as well as some assistance in restoring nitrogen use efficiency and accelerating nitrogen cycling in contaminated soils.

## 2. Material and Methods

### 2.1. Purified Enzyme and Soils

The purified enzyme was purchased from Sigma-Aldrich (product number: L5006, Type IV-S, stored as an ammonium sulfate suspension at 4°C) and brought to a volume of 5 mL with 2.5 mol L-1 (NH4)2SO4 to make an enzyme stock solution. The enzyme stock solution was stored at 4°C. To prepare the activated enzyme, the stock solution was diluted with 50 mmol L-1 pH 8 THAM and activated at 37°C for 2 hours before use.

The soils used in this study were red soil and purple soil, collected from the Ecological Experimental Station of Red Soil in Jiangxi, China (28°15′N, 116°55′E) and the Yanting Station of the Chinese Ecosystem Research Network (CERN) in Sichuan, China (31°16′N, 105°27′E), respectively. Soil samples were sieved (<2 mm) and stored at 4°C until the enzyme assay.

Soil pH, particle size, organic C, and total N were determined using air-dried soils. The pH was determined using a 1 : 2.5 ratio between the mass of the soil sample and the volume of deionized water. Soil samples were pretreated with 30% H2O2 and 10% HCl to remove organic matter and carbonates, and 0.05 mol L-1 sodium hexametaphosphate was added to disperse the soil aggregates for particle size distribution analysis on a Malvern MS 2000 (Malvern Instruments, Malvern, England). The organic C and total N of the soil were determined by using 0.5 mol L-1 HCl to remove carbonates and then analyzed by an element analyzer (Elementar vario MACRO cube, Germany) (Table S1).
### 2.2. Clay Mineral and Immobilized LAP

Montmorillonite ((Al,Mg)2[Si4O10](OH)2·nH2O) and kaolinite (Al4[Si4O10](OH)8) were purchased from Aladdin (product numbers M141491 and K100134, respectively) (Table S2). The surface morphology of the clay minerals was analyzed by scanning electron microscopy (SEM) (Thermo Prisma E, Finland) at 20 kV and a magnification of 3500x (Figure S1). The particle size distribution was determined on a Malvern ZEN 3600 (Malvern Instruments, Malvern, England). The specific surface area (SSA) of the clay minerals was analyzed by the N2 adsorption method [29, 30].

Clay mineral colloid solution (0.5 mg mL-1) was prepared by adding 25 mg of montmorillonite or kaolinite to 50 mL of 50 mmol L-1 pH 8 THAM in a 150 mL beaker and sonicating for 5 minutes. The activated enzyme was diluted 2500-fold with THAM to prepare the LAP working solution. To prepare clay mineral-immobilized LAP, 20 mL of montmorillonite or kaolinite colloid was added to 20 mL of the LAP working solution. The suspension was stirred for 30 minutes at 250 r min-1 at 25°C before centrifugation at 8000 r min-1 for 5 minutes. The residue was washed with 40 mL of deionized (DI) water and centrifuged twice to remove unadsorbed LAP. Finally, the residue was resuspended in 40 mL of 50 mmol L-1 pH 8 THAM buffer. The clay mineral-immobilized LAP was mixed thoroughly before being added to a 96-well plate.

### 2.3. Experimental Design

#### 2.3.1. Enzyme Assay

L-leucine-7-amido-4-methylcoumarin (L-leucine-AMC) and 7-amino-4-methylcoumarin (AMC) were purchased from Aladdin and used as the substrate and the standard for LAP in microplate fluorimetric assays, respectively.

Soil LAP activity was determined as previously described [31, 32]. Briefly, a soil homogenate was prepared by stirring 0.5 g of fresh soil in 120 mL of DI water at 600 rpm for 30 minutes. The homogenate was then placed into the assay wells (50 μL THAM buffer, 100 μL homogenate, and 50 μL L-leucine-AMC) and quench wells (50 μL THAM buffer, 100 μL homogenate, and 50 μL AMC). DI water was used in place of homogenate in the standard wells (50 μL THAM buffer, 100 μL DI water, and 50 μL AMC) and substrate control wells (50 μL THAM buffer, 100 μL DI water, and 50 μL L-leucine-AMC). THAM, AMC, and L-leucine-AMC were added sequentially according to the type of well. The final THAM concentration in all wells was 20 mmol L-1, and the final L-leucine-AMC concentration was 200 μmol L-1. The AMC stock concentration was 10 μmol L-1 before addition. The 96-well plates were incubated for 1 h at 37°C and then immediately read on a fluorometer (Thermo Varioskan™ LUX, Finland) with 365 nm excitation and a 450 nm emission filter.

For the purified enzyme and the immobilized enzyme, the soil homogenate in the above method was replaced with the LAP working solution and the clay mineral-immobilized LAP, respectively. The purified enzyme was incubated and measured under the same conditions as the soil.

#### 2.3.2. Effect of Clay Minerals on LAP

For soil LAP, the effect of clay mineral addition on Cd toxicity was investigated in the natural state (no clay mineral added), at low concentration (50 mg of montmorillonite or kaolinite per gram of fresh soil), and at high concentration (100 mg of montmorillonite or kaolinite per gram of fresh soil).
Clay minerals were added by mixing the corresponding mass and type of clay mineral with the red or purple soil before preparing the homogenate.

For the purified enzyme experiment, the following levels of binding of clay minerals to the enzyme were set to investigate the effect of clay minerals on Cd toxicity: (1) free enzyme: no clay minerals added; (2) adsorbed enzyme: clay minerals added during incubation so that the LAP working solution contained final clay mineral concentrations of 0.5 mg L-1 and 1 mg L-1, respectively; and (3) immobilized enzyme: clay mineral-immobilized LAP prepared in advance.

#### 2.3.3. Cd Toxicity on LAP

To determine the Cd toxicity to LAP, after adding homogenate or LAP working solution to 96-well plates, 50 μL of Cd solution (CdCl2) at various concentrations was added, and the LAP was exposed for 30 minutes at 25°C.

Inhibition of LAP by Cd was calculated from soil LAP and purified LAP activities at final Cd concentrations of 0, 10, 20, 50, 100, 200, 500, and 1000 μmol L-1 to quantify the toxicity of Cd to LAP.

Changes in the kinetic constants (Vmax and Km) of soil LAP and purified LAP were compared at final Cd concentrations of 0, 4, and 10 μmol L-1 to infer the type of inhibition of LAP by Cd. The enzyme activities of soil LAP and purified LAP were determined at different substrate concentrations (final concentrations of 0, 10, 20, 40, 100, 200, 300, and 400 μmol L-1), and Vmax and Km were calculated using the Michaelis-Menten equation.

To gain a more insightful understanding of the inhibitory effect of Cd on LAP, we chose silver (Ag) and mercury (Hg), which, like Cd, are metals that inhibit hydrolases, as references to observe the inhibitory effect and the type of LAP inhibition by different inhibitory metals. We also chose cobalt (Co) and boron (B), which are accelerators of hydrolases, and magnesium (Mg) and manganese (Mn), metals that can occupy or substitute for the metal ions at the two exchangeable sites in LAP and thereby affect Km, to observe the effect of different metals on LAP activity under the same conditions [33–38].

Additionally, the recovery of Cd-contaminated LAP activity upon the addition of Mg and Mn was used to infer the mechanism of Cd inhibition of LAP [37]. In the recovery experiment, after measuring the LAP activity inhibited by Cd, Mg and Mn solutions at a final concentration of 4 μmol L-1 were added to observe the change in LAP activity.

### 2.4. Data Analysis

#### 2.4.1. Enzyme Activity

Soil LAP activity was expressed in units of nmol AMC·g-1·h-1 and calculated by the following equations, modified from DeForest [31] and Wang et al. [32]:

$$\text{Activity (nmol AMC g}^{-1}\,\text{h}^{-1}) = \frac{\text{Net fluorescence} \times V}{\text{Emission coefficient} \times v \times T \times DM} \tag{1}$$

where Net fluorescence is the actual fluorescence of AMC produced by the enzymatic reaction of LAP in soil, calculated by equation (2); V is the total volume of homogenate (120 mL in this experiment); the Emission coefficient is the fluorescence value per unit AMC (nmol), calculated by equation (4); v is the volume of homogenate in a single incubation well (0.1 mL in this experiment); T is the incubation time (1 h for purple soil and purified LAP, 3 h for red soil); and DM is the dry soil mass (g) corresponding to 0.5 g of fresh soil.
$$\text{Net fluorescence} = \frac{f_{\text{sample assay}}}{\text{Quench coefficient}} - f_{\text{substrate control}} \tag{2}$$

where f is the fluorescence value measured in the indicated well. The Quench coefficient accounts for the effect of soil on the fluorescence of AMC and is calculated by

$$\text{Quench coefficient} = \frac{f_{\text{quench}}}{f_{\text{standard}}} \tag{3}$$

$$\text{Emission coefficient (fluorescence nmol}^{-1}\text{)} = \frac{f_{\text{standard}}}{0.5\,\text{nmol}} \tag{4}$$

Purified LAP activity was expressed in units of nmol AMC·μg-1·h-1:

$$\text{Activity (nmol AMC μg}^{-1}\,\text{h}^{-1}) = \frac{\text{Net fluorescence}}{\text{Emission coefficient} \times T \times \text{enzyme (μg)}}, \qquad \text{Net fluorescence} = f_{\text{sample assay}} - f_{\text{substrate control}} \tag{5}$$

#### 2.4.2. Inhibition Ratio and Type of Inhibition

Variation of the maximum reaction rate (Vmax) and the Michaelis constant (Km) can be used to deduce the type of inhibition exerted on an enzymatic reaction by an inhibitor [35].

To better describe and compare the inhibition of LAP by the different inhibitors (Cd, clay minerals, and other ions), the inhibition ratio was expressed by the following equation from Acosta-Martínez [39]:

$$\text{Inhibition ratio (\%)} = \frac{\text{Activity}_{\text{control}} - \text{Activity}_{\text{treatment}}}{\text{Activity}_{\text{control}}} \times 100\% \tag{6}$$

The type of inhibition of LAP by an inhibitor can be determined from the changes in the kinetic constants Km and Vmax, which were calculated according to the Michaelis-Menten equation (7) and the Lineweaver-Burk double-reciprocal equation (8) [40, 41]:

$$v = \frac{V_{\max}[S]}{K_m + [S]} \tag{7}$$

where v is the initial reaction rate at substrate concentration [S], Vmax is the maximum reaction rate, and Km is the Michaelis constant.

$$\frac{1}{v} = \frac{K_m}{V_{\max}} \cdot \frac{1}{[S]} + \frac{1}{V_{\max}} \tag{8}$$

#### 2.4.3. Statistical Analysis

Differences between treatments were analyzed by one-way ANOVA followed by the Bonferroni post hoc test in SPSS Statistics 24 (IBM SPSS, Somers, NY, USA). Values of P < 0.05 were considered significant. Data are expressed as mean ± standard error (number of replicates n ≥ 3), with different letters indicating significant differences. The dependence of enzyme activity on substrate concentration was represented with the Michaelis-Menten equation. The fitting and calculation of kinetic constants were performed using a nonlinear fitting equation (Growth/Sigmoidal Hill) in OriginPro 9.1 (OriginLab Corp., Northampton, MA, USA).
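As a concrete illustration of equations (1)–(4) and (6), the following minimal Python sketch computes soil LAP activity and the inhibition ratio from raw plate readings. The function names, the default dry mass, and the fluorescence values are illustrative assumptions, not data from the study.

```python
def quench_coefficient(f_quench, f_standard):
    # Eq. (3): fraction of AMC fluorescence that survives soil quenching
    return f_quench / f_standard

def emission_coefficient(f_standard, amc_nmol=0.5):
    # Eq. (4): fluorescence units per nmol AMC; each standard well holds
    # 0.5 nmol AMC (50 uL of a 10 umol/L solution)
    return f_standard / amc_nmol

def soil_lap_activity(f_assay, f_substrate_ctrl, f_quench, f_standard,
                      V_mL=120.0, v_mL=0.1, T_h=1.0, dm_g=0.45):
    """Soil LAP activity in nmol AMC g^-1 h^-1, following Eqs. (1)-(4).

    dm_g is the dry mass of the 0.5 g fresh-soil aliquot; 0.45 g is a
    placeholder, since the real value is soil-specific.
    """
    # Eq. (2): quench-corrected net fluorescence
    net = f_assay / quench_coefficient(f_quench, f_standard) - f_substrate_ctrl
    # Eq. (1): scale by homogenate volume, well volume, time, and dry mass
    return net * V_mL / (emission_coefficient(f_standard) * v_mL * T_h * dm_g)

def inhibition_ratio(activity_control, activity_treatment):
    # Eq. (6): percent activity lost relative to the uncontaminated control
    return (activity_control - activity_treatment) / activity_control * 100.0

# Illustrative fluorescence readings (arbitrary units, not measured data)
a_ctrl = soil_lap_activity(5200.0, 300.0, 4100.0, 4800.0)
a_cd = soil_lap_activity(3100.0, 300.0, 4100.0, 4800.0)
print(f"{a_ctrl:.0f} vs {a_cd:.0f} nmol AMC/g/h; "
      f"inhibition = {inhibition_ratio(a_ctrl, a_cd):.1f}%")
```

Equation (5) for the purified enzyme follows the same pattern, with the dry soil mass replaced by the enzyme mass (μg) in the denominator and no quench correction.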
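The kinetic constants were fitted in OriginPro in the study; an equivalent fit of equation (7) can be sketched in Python with scipy.optimize.curve_fit. The rate data below are hypothetical, chosen only to mimic the purified-enzyme control and 10 μmol L-1 Cd series of Table 1.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    # Eq. (7): initial rate as a function of substrate concentration
    return Vmax * S / (Km + S)

# Substrate series used in the study (final concentrations, umol L-1)
S = np.array([10, 20, 40, 100, 200, 300, 400], dtype=float)

# Hypothetical rates resembling the purified-enzyme control (Vmax ~ 65,
# Km ~ 30) and the 10 umol L-1 Cd treatment (Vmax ~ 41, Km ~ 30)
v_ctrl = np.array([16.2, 26.0, 37.1, 50.0, 56.5, 59.1, 60.5])
v_cd10 = np.array([10.2, 16.4, 23.4, 31.5, 35.7, 37.3, 38.1])

for label, v in [("control", v_ctrl), ("10 uM Cd", v_cd10)]:
    (Vmax, Km), _ = curve_fit(michaelis_menten, S, v, p0=[60.0, 30.0])
    print(f"{label}: Vmax = {Vmax:.1f}, Km = {Km:.1f}")

# A lower Vmax with an essentially unchanged Km across treatments is the
# kinetic signature of noncompetitive inhibition (compare Table 1).
```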
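The group comparisons were run in SPSS (one-way ANOVA with Bonferroni post hoc test); a minimal SciPy sketch of the same logic, on hypothetical replicate activities, might look as follows.

```python
from itertools import combinations
from scipy import stats

def anova_bonferroni(groups, alpha=0.05):
    """One-way ANOVA followed by Bonferroni-corrected pairwise t-tests."""
    f_stat, p = stats.f_oneway(*groups.values())
    print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4g}")
    pairs = list(combinations(groups, 2))
    for a, b in pairs:
        _, p_raw = stats.ttest_ind(groups[a], groups[b])
        p_adj = min(p_raw * len(pairs), 1.0)  # Bonferroni adjustment
        mark = "significant" if p_adj < alpha else "n.s."
        print(f"  {a} vs {b}: adjusted p = {p_adj:.4g} ({mark})")

# Hypothetical triplicate activities (nmol AMC g-1 h-1) at three Cd levels
anova_bonferroni({
    "0 uM Cd":  [310.2, 325.7, 331.1],
    "4 uM Cd":  [198.4, 190.1, 201.3],
    "10 uM Cd": [160.8, 155.2, 162.4],
})
```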
## 3. Results and Discussion

### 3.1. Effect of Cd and Other Ions on LAP

Cadmium inhibited both soil LAP and purified LAP. The inhibition ratio of Cd on LAP increased sharply with increasing Cd concentration in the range of 0–200 μmol L-1, following a logarithmic increase (Figure 1(a)). This is consistent with findings that Cd is a strong inhibitor of LAP in both biochemical and soil studies [27, 41, 42]. At low Cd concentrations (less than 10 μmol L-1), LAP in red soil was more sensitive to Cd than that in purple soil, whereas LAP in both soils responded equally to Cd contamination when the Cd concentration exceeded 10 μmol L-1 (Figures 1(a), 1(c), and 1(d)).
This indicates that soil pH may influence Cd toxicity when the Cd concentration is below 10 μmol L-1. At low Cd concentrations, the effective Cd concentration in the soil also depends on how much Cd is adsorbed. When soil pH is low, soil components such as clay minerals and organic matter are less able to sorb Cd and Cd(OH)2 precipitation is less likely, so more Cd2+ remains free in the soil solution; Cd toxicity is therefore likely to be greater [43, 44]. In contrast, at higher Cd concentrations, soil properties, including pH (Table S1), did not affect Cd toxicity to soil LAP. The weaker inhibition of soil LAP than of purified LAP by Cd may be due to the adsorption of Cd by sorbent substances in the soil homogenate, resulting in a lower effective Cd concentration [45, 46]. Another explanation may be that certain soil components protect LAP, reducing the toxicity of Cd to LAP [47].

Figure 1: Effect of Cd on the inhibition rate and kinetic constants of LAP: (a) inhibition rate of soil LAP and purified LAP at different concentrations of Cd; (b) kinetic curve of the purified enzyme affected by Cd; (c) kinetic curve of LAP in red soil affected by Cd; (d) kinetic curve of LAP in purple soil affected by Cd.

Enzyme kinetic analysis indicated that Cd may be a noncompetitive inhibitor of both purified LAP and soil LAP (Figures 1(b)–1(d)). After the addition of 4 and 10 μmol L-1 Cd, Vmax of the enzymatic reaction of soil LAP and purified LAP decreased, while no significant difference was observed in Km (Table 1). This indicates that the addition of Cd only reduced the effective amount of enzyme without reducing the affinity between the enzyme and the substrate. It can be inferred that Cd disrupts the conformation of LAP and thus renders some LAP inactive [26, 27].

Table 1: Changes in the kinetic constants (Vmax and Km) of LAP at different concentrations of Cd.

| Enzyme | Cd (μmol L⁻¹) | Vmax | Km | R² |
| --- | --- | --- | --- | --- |
| Purified enzyme | 0 | 64.88 A | 30.43 a | 0.96 |
| Purified enzyme | 4 | 50.58 B | 34.65 a | 0.97 |
| Purified enzyme | 10 | 41.06 C | 49.05 a | 0.97 |
| Red soil | 0 | 327.56 A′ | 55.73 a′ | 0.97 |
| Red soil | 4 | 194.93 B′ | 36.22 a′ | 0.99 |
| Red soil | 10 | 159.41 C′ | 38.72 a′ | 0.97 |
| Purple soil | 0 | 2167.45 A″ | 48.81 a″ | 0.97 |
| Purple soil | 4 | 2054.84 A″ | 43.27 a″ | 0.99 |
| Purple soil | 10 | 1558.41 B″ | 30.69 a″ | 0.97 |

Different capital letters indicate significant differences in Vmax (P < 0.05), and different lowercase letters indicate significant differences in Km (P < 0.05). Vmax is in nmol μg⁻¹ h⁻¹ for the purified enzyme and nmol g⁻¹ h⁻¹ for the soil enzymes; Km is in μmol L⁻¹. The same conventions apply to Tables 2–4.

The inhibitory effect of Cd on LAP activity was comparable to that of Ag and Hg but stronger than that of Co, B, Mg, and Mn (Figures S2–S4). Generally, the effect of Ag on soil LAP and purified LAP was similar to that of Cd: soil LAP and purified LAP activities were strongly inhibited by Ag. The inhibitory effect of Hg on LAP at the same concentration was slightly greater than that of Cd. The accelerator metal Co promoted soil LAP and purified LAP activities at low concentrations and inhibited them at high concentrations; the inflection points for purified LAP and LAP in red soil were 250 and 200 μmol L-1, respectively. B, Mg, and Mn had little effect on LAP activities. Relative to the two soil LAPs, the purified enzyme responded most strongly to inhibition by these metals; Ag and Hg exerted noncompetitive inhibition on both LAP in purple soil and purified LAP (Figure 2, Tables S3–S5).
Nevertheless, we also observed competitive inhibition of LAP by Hg in the red soil, indicating that isozymes not present in the purified LAP or the purple soil LAP may exist in the red soil.

Figure 2: Kinetic profiles of the purified enzyme, red soil LAP, and purple soil LAP affected by 0, 4, and 10 μmol L-1 Cd, Ag, and Hg, respectively.

The addition of Mg and Mn, the metals in the active site of LAP [5, 6], could not restore the activity of purified LAP contaminated by Cd, and their addition only slightly changed the activity of LAP in red soil and purple soil (Figure 3). This suggests that Mg and Mn cannot effectively restore LAP activity by competing with Cd for the divalent metal sites on LAP.

Figure 3: Proportion of change in the activity of Cd-contaminated LAP after the addition of Mg and Mn as restorative activators of LAP.

### 3.2. Effect of Clay Minerals on LAP

The two clay minerals had different effects on LAP activity (Figure 4). The addition of clay minerals did not significantly affect the activity of purified LAP (Figure 4(a)). In the soil system (Figures 4(b) and 4(c)), the addition of montmorillonite significantly reduced LAP activity in red soil and purple soil (P < 0.05), and the greater the amount of montmorillonite added, the lower the enzyme activity. At montmorillonite additions of 50 and 100 mg g-1 soil, the inhibition rate of LAP was 30.96% and 36.42% in red soil and 10.77% and 22.78% in purple soil, respectively. The addition of kaolinite had no significant effect on LAP activity. The different effects of the two clay minerals may be attributed to the fact that montmorillonite, with its 2:1 layer structure, has a higher specific surface area and adsorption capacity than kaolinite, with its 1:1 layer structure (Figure S1 and Table S2). Montmorillonite can thus mask some of the active sites of LAP by adsorption, resulting in a decrease in LAP activity [22].

Figure 4: Effect of adding different amounts of montmorillonite and kaolinite on the activity of purified LAP, LAP in red soil, and LAP in purple soil: (a) purified enzyme; (b) red soil; (c) purple soil. Different lowercase letters a and b indicate significant differences in LAP activities caused by montmorillonite, and a′ and b′ indicate those caused by kaolinite.

Figure 5 shows the effect of clay mineral addition on the kinetics of soil LAP and purified LAP. The addition of low concentrations of montmorillonite and kaolinite (0.5 mg L-1 for purified LAP and 50 mg g-1 soil for soil LAP) resulted in a decreasing trend in Vmax for both soil LAP and purified LAP, with only the addition of montmorillonite to the purple soil producing a significant decrease in Vmax (Table 2). When the substrate concentration was relatively low (less than 200 μmol L-1), the enzyme activity decreased significantly after clay mineral addition (Figure 4), and the addition of both montmorillonite and kaolinite changed the Km value of the purified enzyme (Table 2), indicating that the affinity between LAP and the substrate decreased in the presence of clay minerals. Clay minerals reduce enzyme activity because adsorption of the enzyme onto the mineral changes the conformational structure of the protein, ultimately altering its catalytic properties and reducing its activity and Vmax [48].

Figure 5: Effect of adding different amounts of montmorillonite and kaolinite on the kinetic curves of purified LAP, LAP in red soil, and LAP in purple soil: (a) purified enzyme; (b) red soil; (c) purple soil.
Table 2: Changes in the kinetic constants (Vmax and Km) of LAP after addition of montmorillonite and kaolinite.

| Sample | Vmax | Km | R² |
| --- | --- | --- | --- |
| PE | 104.05 A | 26.42 a | 0.99 |
| PE+M | 97.73 A | 34.34 b | 0.99 |
| PE+K | 119.1 A | 31.74 ab | 0.97 |
| RS | 257.18 A′ | 36.92 a′ | 0.99 |
| RS+M | 253.65 A′ | 72.02 b′ | 0.96 |
| RS+K | 235.26 A′ | 35.09 a′ | 0.99 |
| PS | 3312.61 B″ | 38.13 a″ | 0.99 |
| PS+M | 2898.44 A″ | 40.44 a″ | 0.98 |
| PS+K | 3226.65 B″ | 36.79 a″ | 0.99 |

PE denotes the purified enzyme (LAP), RS red soil, and PS purple soil; +M or +K denotes added montmorillonite or kaolinite.

Clay minerals still affected LAP activity under Cd contamination (Figure 6). Adding montmorillonite or kaolinite decreased LAP activity, although the decreasing trend was not statistically significant; in general, the greater the amount of clay mineral added, the greater the reduction in enzyme activity [49, 50]. This suggests that the presence or absence of Cd had no effect on the ability of the clay minerals to reduce enzyme activity (Figures 4 and 6). According to Figures 6(a) and 6(d), the addition of clay minerals had almost no effect on the activity of purified LAP in the absence of Cd (0 μmol L-1); under Cd contamination at 4 and 10 μmol L-1, however, the inhibition rate of Cd on LAP increased significantly with the addition of clay minerals.

Figure 6: Effect of montmorillonite and kaolinite on the activity of purified LAP, LAP in red soil, and LAP in purple soil in the presence of 0, 4, and 10 μmol L-1 Cd: (a) montmorillonite-purified enzyme; (b) montmorillonite-red soil; (c) montmorillonite-purple soil; (d) kaolinite-purified enzyme; (e) kaolinite-red soil; (f) kaolinite-purple soil. Lowercase letters a and b indicate significant differences in LAP activities at 0 μmol L-1 Cd, and a′ and b′ or a″ and b″ indicate significant differences in LAP activities at 4 and 10 μmol L-1 Cd, respectively.

In contrast, the inhibitory effect of Cd on LAP in red and purple soil did not increase significantly with the addition of clay minerals. The reason might be that soil enzymes are in a different environment than the purified enzyme. The purified enzyme incubation system contains only buffer, substrate, metal solution, and enzyme, and can therefore be regarded as a homogeneous liquid in which the metal ions and the enzyme diffuse easily. Clay minerals may adsorb and concentrate the enzyme and Cd ions from this dispersion system on their surfaces, which increases the chance of Cd-enzyme interaction and hence the inhibitory effect of Cd on LAP.

According to Tables 1 and 2, the addition of clay minerals decreased the enzyme activity and Vmax of LAP, and the kinetic constants of LAP were affected by both clay minerals and Cd. The decrease in Vmax of the purified enzyme was mainly caused by Cd, but when Cd and clay minerals were present together, the clay minerals exacerbated the inhibitory effect of Cd, and the effect of kaolinite was stronger than that of montmorillonite (Figure 7 and Table 3). Km in purple soil was not affected by the addition of Cd or clay minerals, but in both the purified enzyme and the red soil, montmorillonite decreased the affinity of LAP for the substrate while kaolinite increased it. It can be concluded that the presence of both Cd and clay minerals in the purified enzyme system amplifies the inhibitory effect of Cd on LAP, possibly owing to the ability of clay minerals to adsorb both LAP and Cd [51].
The different results observed in the two soils arise because the type and content of clay minerals differ between the red and purple soils, and these differences alter the effect of Cd contamination on LAP [47].

Figure 7: Effect of montmorillonite and kaolinite on the kinetic curves of purified LAP, LAP in red soil, and LAP in purple soil in the presence of 10 μmol L-1 Cd: (a) purified enzyme; (b) red soil; (c) purple soil.

Table 3: Changes in the kinetic constants (Vmax and Km) of LAP after addition of montmorillonite and kaolinite under 10 μmol L-1 Cd contamination.

| Sample | Vmax | Km | R² |
| --- | --- | --- | --- |
| PE | 69.11 C | 93.22 b | 0.99 |
| PE+M | 48.59 B | 152.1 b | 0.99 |
| PE+K | 27.63 A | 63.5 a | 0.98 |
| RS | 179.00 B′ | 46.18 b′ | 0.99 |
| RS+M | 124.42 A′ | 55.91 c′ | 0.99 |
| RS+K | 127.37 A′ | 31.23 a′ | 0.99 |
| PS | 2273.60 B″ | 36.74 a″ | 0.99 |
| PS+M | 1964.32 A″ | 34.13 a″ | 0.99 |
| PS+K | 1988.37 B″ | 33.68 a″ | 0.99 |

PE denotes the purified enzyme (LAP), RS red soil, and PS purple soil; +M or +K denotes added montmorillonite or kaolinite.

### 3.3. Effect of Cd on Immobilized LAP

As shown in Figure 8 and Table 4, the montmorillonite-immobilized enzyme showed a nonsignificant decreasing trend in Vmax with increasing Cd concentration (0, 4, and 10 μmol L-1 Cd), while the kaolinite-immobilized enzyme did not differ significantly among the three Cd levels. No significant change in Km was observed for the enzymes immobilized by either clay mineral. Comparison with the kinetic constants of the purified (free) LAP in Figure 1(b) and Table 1 shows a significant increase in Km for the immobilized enzyme, suggesting that the presence of clay minerals reduces the affinity between the enzyme and the substrate. When free LAP was contaminated with 10 μmol L-1 Cd, its Vmax decreased to 63.29% and its Km rose to 161.19% of the uncontaminated values. In contrast, Vmax of the montmorillonite-immobilized and kaolinite-immobilized enzymes changed to 70.89% and 119.35% of the uncontaminated values, respectively, and Km changed to 119.12% and 109.34%, respectively. This indicates that the immobilized enzymes showed smaller changes in Vmax and Km under Cd contamination; the clay mineral-immobilized LAPs were therefore more resistant to Cd contamination [49, 51–53].

Figure 8: Effect of Cd on the kinetic curves of montmorillonite-immobilized LAP and kaolinite-immobilized LAP.

Table 4: Changes in the kinetic constants (Vmax and Km) of montmorillonite-immobilized LAP and kaolinite-immobilized LAP under 0, 4, and 10 μmol L-1 Cd contamination.

| Enzyme | Cd (μmol L⁻¹) | Vmax | Km | R² |
| --- | --- | --- | --- | --- |
| Montmorillonite-immobilized | 0 | 22.19 A | 74.42 a | 0.98 |
| Montmorillonite-immobilized | 4 | 18.93 A | 72.44 a | 0.98 |
| Montmorillonite-immobilized | 10 | 15.73 A | 88.65 a | 0.99 |
| Kaolinite-immobilized | 0 | 20.52 A′ | 76.53 a′ | 0.99 |
| Kaolinite-immobilized | 4 | 21.60 A′ | 93.19 a′ | 0.99 |
| Kaolinite-immobilized | 10 | 24.49 A′ | 83.68 a′ | 0.99 |

The difference between the immobilized enzyme and the free enzyme can also explain the different responses of soil LAP and purified LAP to Cd contamination. Clay mineral-immobilized enzymes are more resistant to Cd toxicity than free enzymes, so the same concentration of Cd inhibits the clay mineral-immobilized enzyme less than the free enzyme, which is why the inhibition ratio of soil enzymes at a given Cd concentration is lower than that of the purified enzyme.
Since soil enzymes are a mixture of free and immobilized enzymes, the sensitivity of soil enzymes to Cd contamination should lie somewhere between that of free and immobilized enzymes; accordingly, the inhibition rate of soil enzymes is usually lower than that of purified (free) enzymes at the same concentration of Cd contamination.
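The percentage comparisons above follow directly from the kinetic constants in Tables 1 and 4. As a quick check, the short Python snippet below recomputes them from the tabulated values (the helper function name is ours, not from the study):

```python
def percent_of_control(k_control, k_treated):
    # Kinetic constant after Cd exposure as a percentage of the 0-Cd value
    return 100.0 * k_treated / k_control

# Values copied from Tables 1 and 4 (10 umol L-1 Cd vs. 0 umol L-1 Cd)
print(percent_of_control(64.88, 41.06))  # free LAP, Vmax          -> 63.3 %
print(percent_of_control(30.43, 49.05))  # free LAP, Km            -> 161.2 %
print(percent_of_control(22.19, 15.73))  # montmorillonite, Vmax   -> 70.9 %
print(percent_of_control(74.42, 88.65))  # montmorillonite, Km     -> 119.1 %
print(percent_of_control(20.52, 24.49))  # kaolinite, Vmax         -> 119.3 %
print(percent_of_control(76.53, 83.68))  # kaolinite, Km           -> 109.3 %
```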
This indicates that the immobilized enzymes showed less change in Vmax and Km when contaminated with Cd; therefore, the clay mineral-immobilized LAPs were more resistant to Cd contamination [49, 51–53].

Figure 8 Effect of Cd on the kinetic curves of montmorillonite-immobilized LAP and kaolinite-immobilized LAP.

Table 4 Changes in the kinetic constants (Vmax and Km) of montmorillonite-immobilized LAP and kaolinite-immobilized LAP with 0, 4, and 10 μmol L-1 Cd contamination.

| | M, 0 | M, 4 | M, 10 | K, 0 | K, 4 | K, 10 |
|---|---|---|---|---|---|---|
| Vmax | 22.19 A | 18.93 A | 15.73 A | 20.52 A′ | 21.60 A′ | 24.49 A′ |
| Km | 74.42 a | 72.44 a | 88.65 a | 76.53 a′ | 93.19 a′ | 83.68 a′ |
| R² | 0.98 | 0.98 | 0.99 | 0.99 | 0.99 | 0.99 |

M = montmorillonite-immobilized enzyme, K = kaolinite-immobilized enzyme; column numbers are final Cd concentrations in μmol L-1.

The difference between the immobilized and free enzyme can also explain the different responses of soil LAP and purified LAP to Cd contamination. Clay mineral-immobilized enzymes are more resistant to Cd toxicity than free enzymes, so the same concentration of Cd inhibits the clay mineral-immobilized enzyme less than the free enzyme. Since soil enzymes are a mixture of free and immobilized enzymes, the sensitivity of soil enzymes to Cd contamination should lie between those of the free and immobilized enzymes; therefore, the inhibition rate of soil enzymes is usually lower than that of purified enzymes (free enzymes) at the same concentration of Cd contamination.

## 4. Conclusion

The inhibitory effect of Cd on LAP increased logarithmically with increasing Cd concentration, and Cd produced noncompetitive inhibition of both soil LAP and purified LAP. Cd inhibited the enzymatic reaction by disrupting the conformation of the enzyme protein, and this inhibition could not be reversed by adding the metals associated with the LAP active site. Regardless of the presence of Cd, the addition of clay minerals generally reduces the activity and maximum reaction rate (Vmax) of LAP, and the effect of montmorillonite is stronger than that of kaolinite. Montmorillonite decreases the affinity between LAP and the substrate (increasing Km), while kaolinite increases it (decreasing Km). Notably, clay minerals can increase the inhibition ratio of Cd on purified LAP. The interaction among leucine aminopeptidase, clay minerals, and cadmium contamination in soil is a complicated process related not only to the concentrations of the three but also to the soil environment. Therefore, the process and mechanism by which clay minerals affect the toxicity of Cd to LAP remain unclear, and it is important to continue studying the interactions among clay minerals, Cd, and LAP, as well as the mechanism by which Cd inhibits clay mineral-immobilized LAP. Such work can provide scientific evidence for restoring the activity and function of LAP in Cd-contaminated soils and can help restore nutrient use efficiency and accelerate nutrient cycling in contaminated soils.

---

*Source: 1024085-2021-10-22.xml*
# Clay Minerals Change the Toxic Effect of Cadmium on the Activities of Leucine Aminopeptidase

**Authors:** Shunyu Huang; Jingji Li; Jipeng Wang

**Journal:** Adsorption Science & Technology (2021)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2021/1024085
---

## Abstract

Soil leucine aminopeptidase (LAP) is a hydrolytic enzyme involved in the acquisition of nitrogen by microorganisms. In contaminated soils, LAP activity is affected not only by the type and concentration of heavy metals but also by the form of the enzyme. Here, we investigated the degree and mechanism of cadmium (Cd) inhibition of soil LAP and purified LAP. We also examined the effect of montmorillonite and kaolinite on LAP and on LAP contaminated with Cd. The results showed that Cd inhibition of LAP activity increased with increasing Cd concentration and that Cd exerted noncompetitive inhibition of LAP. The addition of clay minerals decreases LAP activity and the maximum reaction rate (Vmax), regardless of the presence of Cd. Montmorillonite decreases the affinity of LAP for the substrate (increasing Km), while kaolinite increases it. The clay mineral-immobilized LAP showed greater resistance to Cd contamination than the free LAP. The results obtained in this study may aid in understanding the toxic effects of heavy metals on soil enzymes.

---

## Body

## 1. Introduction

Cadmium (Cd) has become one of the most hazardous heavy metals in soil as a result of its potential, persistent, and irreversible toxicity [1–3]. When Cd enters soil, it affects the environmental health of the soil and the stability of the ecosystem, particularly soil microorganisms [4, 5]. Soil microorganisms are the most active and sensitive component of the soil ecosystem and secrete most of the soil extracellular enzymes associated with the soil nutrient cycle [6, 7]. Since soil enzyme activity is closely related to soil microorganisms and is sensitive to changes in environmental conditions, it is often used as an indicator of the extent to which heavy metals affect soil microbial function and soil ecosystem health [8–12].

According to previous studies, soil enzyme activity decreases exponentially with increasing heavy metal concentration because heavy metals displace the conformation-related metals of the enzyme and occupy its active center, or bind to sulfhydryl, amino, and carboxyl groups in the enzyme structure and thereby reduce its active sites [13–16]. However, the response of soil enzyme activity to heavy metals is related not only to the type and concentration of the heavy metals but also to the type of soil enzyme and to soil properties such as clay mineral content [17, 18]. Clay minerals form enzyme complexes whose molecular structure and catalytic properties differ from those of free enzymes, thus affecting the contact of enzymes with heavy metals and substrates [19–21]. At the same time, heavy metals can interact with clay mineral surfaces and compete with enzymes to form heavy metal-enzyme-clay mineral complexes [22, 23]. Therefore, free enzymes, soil enzymes, and enzymes immobilized by clay minerals respond differently to heavy metal pollution.

Leucine aminopeptidase (LAP, Enzyme Commission number: 3.4.11.1) catalyzes the hydrolysis of leucine and other hydrophobic amino acids at the N-terminus of polypeptides. It is one of the enzymes involved in the microbial acquisition of N in soil [6]. Cd acts as an enzyme inhibitor by replacing the metal associated with the active center of phosphatase [24, 25]. Additionally, soil properties (e.g., organic C, total N, pH, soil particle size, and clay content) can influence the toxicity of heavy metals to soil enzymes [26].
On this basis, we anticipated that the inhibition of LAP by Cd may be due to the displacement of the enzyme's conformation-related metals and occupation of its active center; thus, the interaction of clay minerals with LAP might lead to different Cd toxicity toward free LAP, soil LAP, and immobilized LAP. It has been demonstrated that Cd shows exponential or logarithmic inhibition of other hydrolytic enzymes, such as phosphatase and β-glucosidase [27, 28]. However, the toxicity of Cd to LAP is poorly researched, and there is barely any information available on how the interaction between clay minerals and LAP affects Cd toxicity.

Consequently, we investigated the effect of Cd toxicity on LAP in different forms of exposure (free LAP, LAP in soil, and LAP immobilized by clay minerals). A preliminary attempt was also made to investigate the reasons for cadmium toxicity to LAP (with reference to the toxicity of other heavy metals to LAP and experiments on the recovery of the activity of Cd-contaminated LAP upon Mg and Mn addition) in order to establish (1) the differing responses of free LAP, soil LAP, and clay mineral-immobilized LAP to Cd toxicity and (2) the mechanism of LAP inhibition by Cd.

Our work is a preliminary investigation into the mechanisms of Cd contamination of LAP (soil LAP and purified LAP) and the role that clay minerals play in this process. It provides a theoretical basis for restoring the activity and function of LAP in Cd-contaminated soils, as well as some assistance in restoring nitrogen use efficiency and accelerating nitrogen cycling in contaminated soils.

## 2. Material and Methods

### 2.1. Purified Enzyme and Soils

The purified enzyme was purchased from Sigma-Aldrich (product number: L5006, Type IV-S, stored as an ammonium sulfate suspension at 4°C) and brought to a volume of 5 mL with 2.5 mol L-1 (NH4)2SO4 to make an enzyme stock solution. The enzyme stock solution was stored at 4°C. To prepare activated enzyme, the stock solution was diluted with 50 mmol L-1 pH 8 THAM and activated at 37°C for 2 hours before use.

The soils used in this study are red soil and purple soil, collected from the Ecological Experimental Station of Red Soil in Jiangxi, China (28°15′N, 116°55′E) and the Yanting Station of the Chinese Ecosystem Research Network (CERN) in Sichuan, China (31°16′N, 105°27′E), respectively. Soil samples were sieved (<2 mm) and stored at 4°C until the enzyme assay.

Soil pH, particle size, organic C, and total N were determined using air-dried soils. The pH was determined using a 1:2.5 ratio between the mass of the soil sample and the volume of deionized water. Soil samples were pretreated with 30% H2O2 and 10% HCl to remove organic matter and carbonates, and 0.05 mol L-1 sodium hexametaphosphate was added to disperse the soil aggregates for particle size distribution analysis on a Malvern MS 2000 (Malvern Instruments, Malvern, England). The organic C and total N of the soil were determined by using 0.5 mol L-1 HCl to remove carbonates, followed by analysis with an element analyzer (Elementar vario MACRO cube, Germany) (Table S1).

### 2.2. Clay Mineral and Immobilized LAP

Montmorillonite ((Al,Mg)2[Si4O10](OH)2·nH2O) and kaolinite (Al4[Si4O10](OH)8) were purchased from Aladdin (product numbers M141491 and K100134, respectively) (Table S2). The surface morphology of the clay minerals was analyzed by scanning electron microscopy (SEM) (Thermo Prisma E, Finland) at 20 kV and a magnification of 3500x (Figure S1).
The particle size distribution was determined with a Malvern ZEN 3600 (Malvern Instruments, Malvern, England). The specific surface area (SSA) of the clay minerals was analyzed by the N2 adsorption method [29, 30].

Clay mineral colloid solution (0.5 mg mL-1) was prepared by adding 25 mg montmorillonite or kaolinite to 50 mL of 50 mmol L-1 pH 8 THAM in a 150 mL beaker and sonicating for 5 minutes. The activated enzyme was diluted 2500-fold with THAM to prepare the LAP working solution. To prepare clay mineral-immobilized LAP, 20 mL of montmorillonite or kaolinite colloid was added to 20 mL of the LAP working solution. The suspension was stirred for 30 minutes at 250 r min-1 at 25°C before centrifugation at 8000 r min-1 for 5 minutes. The residue was washed with 40 mL deionized (DI) water and centrifuged twice to remove unadsorbed LAP. Finally, the residue was resuspended in 40 mL of 50 mmol L-1 pH 8 THAM buffer. The clay mineral-immobilized LAP was mixed thoroughly before being added to a 96-well plate.

### 2.3. Experimental Design

#### 2.3.1. Enzyme Assay

L-leucine-7-amido-4-methylcoumarin (L-leucine-AMC) and 7-amino-4-methylcoumarin (AMC) were purchased from Aladdin and used as the substrate and the standard, respectively, for LAP in microplate fluorimetric assays.

The soil LAP activity was determined as previously described [31, 32]. Briefly, a soil homogenate was prepared by stirring 0.5 g fresh soil in 120 mL DI water at 600 rpm for 30 minutes. The homogenate was then placed into the assay well (50 μL THAM buffer, 100 μL homogenate, and 50 μL L-leucine-AMC) and the quench well (50 μL THAM buffer, 100 μL homogenate, and 50 μL AMC). DI water was used in place of homogenate in the standard well (50 μL THAM buffer, 100 μL DI water, and 50 μL AMC) and the substrate control well (50 μL THAM buffer, 100 μL DI water, and 50 μL L-leucine-AMC). THAM, AMC, and L-leucine-AMC were added sequentially according to the type of well. The THAM concentration in all wells was 20 mmol L-1, and the L-leucine-AMC concentration was 200 μmol L-1. The concentration of the AMC solution was 10 μmol L-1 before addition. The mixtures in the 96-well plates were incubated for 1 h at 37°C and then read immediately on a fluorometer (Thermo Varioskan™ LUX, Finland) with 365 nm excitation and a 450 nm emission filter.

For the purified enzyme and immobilized enzyme, the soil homogenate in the above method was replaced with the LAP working solution or the clay mineral-immobilized LAP, respectively. The purified enzyme was incubated and measured under the same conditions as the soil.

#### 2.3.2. Effect of Clay Minerals on LAP

For soil LAP, the effect of clay mineral addition on Cd toxicity was investigated in the natural state (no clay mineral addition), at low concentration (50 mg montmorillonite or kaolinite per gram of fresh soil), and at high concentration (100 mg montmorillonite or kaolinite per gram of fresh soil). Clay minerals were added by mixing the corresponding quantity and type of clay mineral with the red or purple soil to produce the homogenate.

For the purified enzyme experiment, the following levels of binding of clay minerals to the enzyme were set up to investigate the effect of clay minerals on Cd toxicity: (1) free enzyme: no clay minerals added; (2) adsorbed enzyme: clay minerals added during incubation so that the LAP working solution contained final concentrations of 0.5 mg L-1 or 1 mg L-1 clay minerals; and (3) immobilized enzyme: clay mineral-immobilized LAP prepared in advance.
#### 2.3.3. Cd Toxicity on LAP

To determine Cd toxicity to LAP, after the homogenate or LAP working solution was added to the 96-well plates, 50 μL of Cd solution (CdCl2) at various concentrations was added, and the LAP was exposed for 30 minutes at 25°C.

Inhibition of LAP by Cd was calculated from soil LAP and purified LAP activities at final Cd concentrations of 0, 10, 20, 50, 100, 200, 500, and 1000 μmol L-1 to quantify the toxicity of Cd to LAP.

Changes in the kinetic constants (Vmax and Km) of soil LAP and purified LAP were compared at final Cd concentrations of 0, 4, and 10 μmol L-1 to infer the type of inhibition of LAP by Cd. The enzyme activities of soil LAP and purified LAP were determined at different substrate concentrations (final concentrations of 0, 10, 20, 40, 100, 200, 300, and 400 μmol L-1), and Vmax and Km were calculated using the Michaelis-Menten equation.

To gain a more insightful understanding of the inhibitory effect of Cd on LAP, we chose silver (Ag) and mercury (Hg), which, like Cd, are metals inhibitory to hydrolases, as references for observing the inhibitory effect and the type of LAP inhibition by different metals. We also chose cobalt (Co) and boron (B), which are accelerators of hydrolases, and magnesium (Mg) and manganese (Mn), metals that can occupy and substitute the metal ions at the two exchangeable sites in LAP and thereby affect Km, to observe the effect of different metals on LAP activity under the same conditions [33–38].

Additionally, the recovery of Cd-contaminated LAP activity upon the addition of Mg and Mn was used to infer the mechanism of Cd inhibition of LAP [37]. In the recovery experiment, after the LAP activity inhibited by Cd was measured, Mg and Mn solutions at a final concentration of 4 μmol L-1 were added and the change in LAP activity was observed.

### 2.4. Data Analysis

#### 2.4.1. Enzyme Activity

Soil LAP activity was expressed in units of nmol AMC g-1 h-1 and calculated by the following equations, modified from DeForest [31] and Wang et al. [32]:

$$\text{Activity}\ (\text{nmol AMC g}^{-1}\,\text{h}^{-1}) = \frac{\text{Net fluorescence} \times V}{\text{Emission coefficient} \times v \times T \times DM} \tag{1}$$

where Net fluorescence is the fluorescence attributable to the AMC produced by the enzymatic reaction of LAP in soil, calculated by equation (2); V is the total volume of homogenate (120 mL in this experiment); the Emission coefficient is the fluorescence per unit of AMC (nmol), calculated by equation (4); v is the volume of homogenate in a single incubation well (0.1 mL in this experiment); T is the incubation time (1 h for purple soil and purified LAP, 3 h for red soil); and DM is the dry soil mass corresponding to 0.5 g of fresh soil.

$$\text{Net fluorescence} = \frac{f_{\text{sample assay}}}{\text{Quench coefficient}} - f_{\text{substrate control}} \tag{2}$$

where f is the fluorescence value measured in the indicated well, and the Quench coefficient accounts for the effect of soil on the fluorescence of AMC:

$$\text{Quench coefficient} = \frac{f_{\text{quench}}}{f_{\text{standard}}} \tag{3}$$

$$\text{Emission coefficient}\ (\text{fluorescence nmol}^{-1}) = \frac{f_{\text{standard}}}{0.5\ \text{nmol}} \tag{4}$$

Purified LAP activity was expressed in units of nmol AMC μg-1 h-1:

$$\text{Activity}\ (\text{nmol AMC μg}^{-1}\,\text{h}^{-1}) = \frac{\text{Net fluorescence}}{\text{Emission coefficient} \times T \times \text{enzyme}\ (\text{μg})}, \qquad \text{Net fluorescence} = f_{\text{sample assay}} - f_{\text{substrate control}} \tag{5}$$
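For readers who prefer code to formulas, the following Python sketch implements equations (1)–(5). The fluorescence readings and the default dry mass (0.45 g) are invented placeholders rather than measurements from this study; the variable names mirror the symbols defined above.

```python
# Minimal sketch of the LAP activity calculation in equations (1)-(5).
# All fluorescence readings and the dry-mass default are placeholders.

def quench_coefficient(f_quench: float, f_standard: float) -> float:
    """Eq. (3): correction for the effect of soil on AMC fluorescence."""
    return f_quench / f_standard

def emission_coefficient(f_standard: float) -> float:
    """Eq. (4): fluorescence per nmol AMC.

    The 0.5 nmol standard corresponds to the 50 uL of 10 umol/L AMC
    solution added per well (Section 2.3.1).
    """
    return f_standard / 0.5

def soil_lap_activity(f_assay, f_quench, f_standard, f_substrate_control,
                      V_ml=120.0, v_ml=0.1, T_h=1.0, dry_mass_g=0.45):
    """Eqs. (1) and (2): soil LAP activity in nmol AMC g^-1 h^-1."""
    q = quench_coefficient(f_quench, f_standard)
    net = f_assay / q - f_substrate_control  # eq. (2)
    return net * V_ml / (emission_coefficient(f_standard) * v_ml * T_h * dry_mass_g)

def purified_lap_activity(f_assay, f_substrate_control, f_standard,
                          T_h=1.0, enzyme_ug=1.0):
    """Eq. (5): purified LAP activity in nmol AMC ug^-1 h^-1 (no quench term)."""
    net = f_assay - f_substrate_control
    return net / (emission_coefficient(f_standard) * T_h * enzyme_ug)

# Example call with invented plate readings:
print(soil_lap_activity(f_assay=850.0, f_quench=1900.0,
                        f_standard=2000.0, f_substrate_control=40.0))
```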
#### 2.4.2. Inhibition Ratio and Type of Inhibition

Variation in the maximum reaction rate (Vmax) and the Michaelis constant (Km) can be used to deduce the type of inhibition exerted on an enzymatic reaction by an inhibitor [35].

To describe and compare the inhibition of LAP by different inhibitors (Cd, clay minerals, and other ions), the inhibition ratio was calculated following Acosta-Martínez [39]:

$$\text{Inhibition ratio}\ (\%) = \frac{\text{Activity}_{\text{control}} - \text{Activity}_{\text{treatment}}}{\text{Activity}_{\text{control}}} \times 100\% \tag{6}$$

The type of inhibition of LAP by an inhibitor can be identified from the changes in the kinetic constants Km and Vmax, which can be calculated according to the Michaelis-Menten equation (7) and the Lineweaver-Burk double-reciprocal equation (8) [40, 41]:

$$v = \frac{V_{\max} S}{K_m + S} \tag{7}$$

where v is the initial reaction rate at substrate concentration S, Vmax is the maximum reaction rate, and Km is the Michaelis constant.

$$\frac{1}{v} = \frac{K_m}{V_{\max}} \cdot \frac{1}{S} + \frac{1}{V_{\max}} \tag{8}$$

#### 2.4.3. Statistical Analysis

Differences between treatments were analyzed by one-way ANOVA followed by the Bonferroni post hoc test in SPSS Statistics 24 (IBM SPSS, Somers, NY, USA). Values of P < 0.05 were considered significant. Data are expressed as mean ± standard error (number of replicates n ≥ 3), with different letters indicating significant differences. The dependence of enzyme activity on substrate concentration was represented with the Michaelis-Menten equation. The fitting and calculation of kinetic constants were performed using a nonlinear fitting equation (Growth/Sigmoidal Hill) in OriginPro 9.1 (OriginLab Corp., Northampton, MA, USA).
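Equations (6) and (7) translate directly into a few lines of Python. The sketch below fits Vmax and Km by nonlinear least squares with SciPy's curve_fit; note that the authors used a Growth/Sigmoidal Hill fit in OriginPro 9.1, so this is an illustrative alternative rather than their exact workflow, and the substrate-rate data are invented for demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

def inhibition_ratio(activity_control: float, activity_treatment: float) -> float:
    """Eq. (6): percent inhibition relative to the uncontaminated control."""
    return (activity_control - activity_treatment) / activity_control * 100.0

def michaelis_menten(S, Vmax, Km):
    """Eq. (7): initial rate v at substrate concentration S."""
    return Vmax * S / (Km + S)

# Invented substrate concentrations (umol L^-1) and initial rates:
S = np.array([10, 20, 40, 100, 200, 300, 400], dtype=float)
v = np.array([15.2, 25.9, 38.4, 50.1, 57.8, 60.3, 61.9])

(Vmax_hat, Km_hat), _ = curve_fit(michaelis_menten, S, v, p0=[60.0, 30.0])
print(f"Vmax = {Vmax_hat:.1f}, Km = {Km_hat:.1f} umol L^-1")

# Eq. (6) applied, for illustration, to the purified-enzyme Vmax values
# from Table 1 (0 vs 10 umol/L Cd):
print(f"inhibition = {inhibition_ratio(64.88, 41.06):.1f} %")  # ~36.7 %
```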
## 3. Results and Discussion

### 3.1. Effect of Cd and Other Ions on LAP

Cadmium inhibited both soil LAP and purified LAP. The inhibition ratio of Cd on LAP rose sharply with increasing Cd concentration in the range of 0–200 μmol L-1, following a logarithmic increase (Figure 1(a)). This is consistent with findings that Cd is a strong inhibitor of LAP in both biochemical and soil studies [27, 41, 42]. At low Cd concentrations (less than 10 μmol L-1), LAP in red soil was more sensitive to Cd than LAP in purple soil, while LAP in both soils responded equally to Cd contamination above 10 μmol L-1 (Figures 1(a), 1(c), and 1(d)).
This indicates that soil pH may influence Cd toxicity when the Cd concentration is below 10 μmol L-1. At low Cd concentrations, the effective Cd concentration in the soil also depends on how much Cd is adsorbed. When soil pH is low, soil components such as clay minerals and organic matter are less able to sorb Cd and Cd(OH)2 precipitation is less likely, so more Cd2+ remains free in the soil solution; therefore, Cd toxicity is likely to be greater [43, 44]. In contrast, at higher Cd concentrations, soil properties, including pH (Table S1), did not affect Cd toxicity to soil LAP. The weaker inhibition of Cd on soil LAP than on purified LAP may be due to the adsorption of Cd by sorbent substances in the soil homogenate, resulting in a lower effective concentration of Cd [45, 46]. Another explanation may be that certain components of the soil protect LAP, reducing the toxicity of Cd to LAP [47].

Figure 1 Effect of Cd on the inhibition rate and kinetic constants of LAP: (a) inhibition rate of soil LAP and purified LAP at different concentrations of Cd; (b) kinetic curve of the purified enzyme affected by Cd; (c) kinetic curve of LAP in red soil affected by Cd; (d) kinetic curve of LAP in purple soil affected by Cd.

Enzyme kinetic analysis indicated that Cd may be a noncompetitive inhibitor of both purified LAP and soil LAP (Figures 1(b)–1(d)). After the addition of 4 and 10 μmol L-1 Cd, Vmax for the enzymatic reaction of soil LAP and purified LAP decreased, while no significant difference was observed in Km (Table 1). This indicates that the addition of Cd only reduced the effective amount of the enzyme without reducing the affinity between the enzyme and the substrate. It can be inferred that Cd disrupts the conformation of LAP and thus renders some of the LAP inactive [26, 27].

Table 1 Changes in the kinetic constants (Vmax and Km) of LAP at different concentrations of Cd.

| | PE, 0 | PE, 4 | PE, 10 | RS, 0 | RS, 4 | RS, 10 | PS, 0 | PS, 4 | PS, 10 |
|---|---|---|---|---|---|---|---|---|---|
| Vmax | 64.88 A | 50.58 B | 41.06 C | 327.56 A′ | 194.93 B′ | 159.41 C′ | 2167.45 A″ | 2054.84 A″ | 1558.41 B″ |
| Km | 30.43 a | 34.65 a | 49.05 a | 55.73 a′ | 36.22 a′ | 38.72 a′ | 48.81 a″ | 43.27 a″ | 30.69 a″ |
| R² | 0.96 | 0.97 | 0.97 | 0.97 | 0.99 | 0.97 | 0.97 | 0.99 | 0.97 |

PE = purified enzyme, RS = red soil, PS = purple soil; column numbers are final Cd concentrations in μmol L-1. Different capital letters indicate significant differences in Vmax (P < 0.05), and different lowercase letters indicate significant differences in Km (P < 0.05), with Vmax in nmol μg-1 h-1 for the purified enzyme and nmol g-1 h-1 for the soil enzymes, and Km in μmol L-1. The same applies to Tables 2–4.

The inhibitory effect of Cd on LAP activity was comparable to that of Ag and Hg but stronger than that of Co, B, Mg, and Mn (Figures S2–S4). Generally, the effect of Ag on soil LAP and purified LAP was similar to that of Cd: soil LAP and purified LAP activities were strongly inhibited by Ag. The inhibitory effect of Hg on LAP at the same concentration was slightly greater than that of Cd. The accelerator metal Co promoted soil LAP and purified LAP activities at low concentrations and inhibited them at high concentrations; the inflection points for purified LAP and LAP in red soil were 250 and 200 μmol L-1, respectively. B, Mg, and Mn had little effect on LAP activities. Relative to the two soil LAPs, the purified enzyme responded most strongly to inhibition by Cd, Ag, and Hg, and Ag and Hg, like Cd, exerted noncompetitive inhibition on both LAP in purple soil and purified LAP (Figure 2, Tables S3–S5). Nevertheless, we also observed competitive inhibition of Hg on LAP in the red soil, indicating that isozymes other than those represented by the purified LAP and the purple soil LAP may exist in the red soil.
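The diagnostic pattern just described (Vmax falls while Km is unchanged) can be reproduced with a small simulation of classical noncompetitive inhibition. The inhibition constant Ki and the noise level below are hypothetical values chosen for illustration, not quantities estimated in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def mm(S, Vmax, Km):
    """Michaelis-Menten rate law, eq. (7)."""
    return Vmax * S / (Km + S)

def noncompetitive(S, Vmax, Km, I, Ki):
    """Classical noncompetitive inhibition: the apparent Vmax is scaled
    down by (1 + I/Ki) while Km is left unchanged."""
    return (Vmax / (1.0 + I / Ki)) * S / (Km + S)

S = np.linspace(5.0, 400.0, 40)
Vmax_true, Km_true, Ki = 65.0, 30.0, 12.0   # Ki is hypothetical
rng = np.random.default_rng(0)

for I in (0.0, 4.0, 10.0):                  # inhibitor levels, umol L^-1
    v = noncompetitive(S, Vmax_true, Km_true, I, Ki) + rng.normal(0.0, 0.5, S.size)
    (Vm, Km_fit), _ = curve_fit(mm, S, v, p0=[60.0, 30.0])
    print(f"I = {I:4.1f}: fitted Vmax = {Vm:5.1f}, Km = {Km_fit:5.1f}")
# The fitted Vmax decreases with I while Km stays near 30,
# mirroring the pattern in Table 1.
```

Competitive inhibition would instead leave Vmax unchanged and raise Km, which is how the competitive behavior of Hg on red-soil LAP noted above is distinguished.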
Figure 2 Kinetic profiles of the purified enzyme, red soil LAP, and purple soil LAP affected by 0, 4, and 10 μmol L-1 Cd, Ag, and Hg, respectively.

The addition of Mg and Mn, the metals in the active site of LAP [5, 6], could not restore the activity of purified LAP contaminated by Cd, and their addition only slightly changed the activity of LAP in red soil and purple soil (Figure 3). This suggests that Mg and Mn cannot effectively restore the activity of LAP by competing with Cd for the divalent metal sites on LAP.

Figure 3 Proportional change in the activity of Cd-contaminated LAP after the addition of Mg and Mn as restorative activators of LAP.

### 3.2. Effect of Clay Minerals on LAP

The two clay minerals had different effects on LAP activity (Figure 4). The addition of clay minerals did not significantly affect the activity of purified LAP (Figure 4(a)). In the soil systems (Figures 4(b) and 4(c)), the addition of montmorillonite significantly reduced the LAP activity in red soil and purple soil (P < 0.05): the greater the amount of montmorillonite added, the lower the enzyme activity. When the amount of montmorillonite was 50 and 100 mg g-1 soil, the inhibition rate of LAP was 30.96% and 36.42% in red soil and 10.77% and 22.78% in purple soil, respectively. The addition of kaolinite had no significant effect on LAP activity. The different effects of the two clay minerals may be attributed to the fact that montmorillonite, with its 2:1 layer structure, has a higher specific surface area and adsorption capacity than kaolinite, with its 1:1 layer structure (Figure S1 and Table S2). Thus, montmorillonite can mask some of the active sites by adsorbing onto LAP, resulting in a decrease in LAP activity [22].

Figure 4 Effect of adding different amounts of montmorillonite and kaolinite on LAP activity in purified LAP, LAP in red soil, and LAP in purple soil.
(a) Purified enzyme; (b) red soil; (c) purple soil. Different lowercase letters a and b indicate significant differences in LAP activities caused by montmorillonite; a′ and b′ indicate differences caused by kaolinite.

Figure 5 shows the effect of adding clay minerals on the kinetics of soil LAP and purified LAP. The addition of low concentrations of montmorillonite and kaolinite (0.5 mg L-1 for purified LAP and 50 mg g-1 soil for soil LAP) produced a decreasing trend in Vmax for both soil LAP and purified LAP, although only the addition of montmorillonite to the purple soil decreased Vmax significantly (Table 2). When the substrate concentration was relatively low (less than 200 μmol L-1), enzyme activity decreased significantly after clay mineral addition (Figure 4), and the addition of both montmorillonite and kaolinite changed the Km value of the purified enzyme (Table 2), indicating that the affinity between LAP and its substrate decreased in the presence of clay minerals. Clay minerals reduce enzyme activity because adsorption of the mineral onto the enzyme changes the conformational structure of the protein, ultimately altering its catalytic properties and reducing its activity and Vmax [48].

Figure 5 Effect of adding different amounts of montmorillonite and kaolinite on the kinetic curves of purified LAP, LAP in red soil, and LAP in purple soil. (a) Purified enzyme; (b) red soil; (c) purple soil.

Table 2 Changes in the kinetic constants (Vmax and Km) of LAP by montmorillonite and kaolinite.

| | PE | PE+M | PE+K | RS | RS+M | RS+K | PS | PS+M | PS+K |
|---|---|---|---|---|---|---|---|---|---|
| Vmax | 104.05 A | 97.73 A | 119.1 A | 257.18 A′ | 253.65 A′ | 235.26 A′ | 3312.61 B″ | 2898.44 A″ | 3226.65 B″ |
| Km | 26.42 a | 34.34 b | 31.74 ab | 36.92 a′ | 72.02 b′ | 35.09 a′ | 38.13 a″ | 40.44 a″ | 36.79 a″ |
| R² | 0.99 | 0.99 | 0.97 | 0.99 | 0.96 | 0.99 | 0.99 | 0.98 | 0.99 |

PE means purified enzyme (LAP), RS means red soil, PS means purple soil, and +M or +K means adding montmorillonite or kaolinite.

Clay minerals still affected LAP activity under Cd contamination (Figure 6). Adding montmorillonite or kaolinite decreased LAP activity, although the decreasing trend was not statistically significant, and in general, the greater the amount of clay mineral added, the greater the reduction in enzyme activity [49, 50]. This suggests that the presence or absence of Cd had no effect on the ability of the clay minerals to reduce enzyme activity (Figures 4 and 6). According to Figures 6(a) and 6(d), the addition of clay minerals had almost no effect on the enzymatic activity of purified LAP at 0 μmol L-1 Cd; under Cd contamination at 4 μmol L-1 and 10 μmol L-1, however, the inhibition rate of Cd on LAP increased significantly with the addition of clay minerals.

Figure 6 Effect of montmorillonite and kaolinite on the activity of purified LAP, LAP in red soil, and LAP in purple soil in the presence of 0, 4, and 10 μmol L-1 Cd. (a) Montmorillonite-purified enzyme; (b) montmorillonite-red soil; (c) montmorillonite-purple soil; (d) kaolinite-purified enzyme; (e) kaolinite-red soil; (f) kaolinite-purple soil.

In contrast, the inhibitory effect of Cd on LAP in red and purple soil did not increase significantly with the addition of clay minerals. The reason might be that soil enzymes are in a different environment than the purified enzyme. The purified enzyme incubation system contains only buffer, substrate, metal solution, and enzyme, so it can be regarded as a homogeneous liquid in which the metal ions and enzyme diffuse easily. Clay minerals may adsorb and concentrate the enzyme and Cd ions from the dispersion system on their surface, which increases the chance of Cd-enzyme interaction and the inhibitory effect of Cd on LAP.

In Figure 6, lowercase letters a and b indicate significant differences in LAP activities at 0 μmol L-1 Cd, and a′ and b′ or a″ and b″ indicate significant differences in LAP activities at 4 and 10 μmol L-1 Cd, respectively.

According to Tables 1 and 2, the addition of clay minerals decreased the enzyme activity and Vmax of LAP, and the kinetic constants of LAP were affected by both clay minerals and Cd. The decrease in Vmax of the purified enzyme was mainly caused by Cd, but when Cd and clay minerals were present together, the clay minerals exacerbated the inhibitory effect of Cd, and the effect of kaolinite was stronger than that of montmorillonite (Figure 7 and Table 3). Km in purple soil was not affected by the addition of Cd or clay minerals, whereas in both the purified enzyme system and the red soil, montmorillonite decreased the affinity of LAP for the substrate and kaolinite increased it. It can be concluded that the presence of both Cd and clay minerals in the purified enzyme system amplifies the inhibitory effect of Cd on LAP, possibly owing to the ability of clay minerals to adsorb both LAP and Cd [51].
The different results observed in the two soils arise because the type and content of clay minerals differ between the red and purple soils, and these differences alter the effect of Cd on LAP [47].

Figure 7 Effect of montmorillonite and kaolinite on the kinetic curves of purified LAP, LAP in red soil, and LAP in purple soil in the presence of 10 μmol L-1 Cd. (a) Purified enzyme; (b) red soil; (c) purple soil.

Table 3 Changes in the kinetic constants (Vmax and Km) of LAP by montmorillonite and kaolinite with 10 μmol L-1 Cd contamination.

| | PE | PE+M | PE+K | RS | RS+M | RS+K | PS | PS+M | PS+K |
|---|---|---|---|---|---|---|---|---|---|
| Vmax | 69.11 C | 48.59 B | 27.63 A | 179.00 B′ | 124.42 A′ | 127.37 A′ | 2273.60 B″ | 1964.32 A″ | 1988.37 B″ |
| Km | 93.22 b | 152.1 b | 63.5 a | 46.18 b′ | 55.91 c′ | 31.23 a′ | 36.74 a″ | 34.13 a″ | 33.68 a″ |
| R² | 0.99 | 0.99 | 0.98 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 |

PE means purified enzyme (LAP), RS means red soil, PS means purple soil, and +M or +K means adding montmorillonite or kaolinite.

### 3.3. Effect of Cd on Immobilized LAP

As shown in Figure 8 and Table 4, the montmorillonite-immobilized enzyme showed a nonsignificant decreasing trend in Vmax with increasing Cd concentration at 0, 4, and 10 μmol L-1 Cd, while the kaolinite-immobilized enzyme did not differ significantly among the three Cd concentrations. No significant change in Km was observed for the enzymes immobilized by either clay mineral. Comparison with the kinetic constants of the purified (free) LAP in Figure 1(b) and Table 1 shows a significant increase in Km for the immobilized enzyme, suggesting that the presence of clay minerals reduces the affinity between the enzyme and the substrate. When the free LAP was contaminated with 10 μmol L-1 Cd, its Vmax decreased to 63.29% and its Km rose to 161.19% of the uncontaminated values. In contrast, Vmax of the montmorillonite-immobilized and kaolinite-immobilized enzymes changed to 70.89% and 119.35% of their uncontaminated values, respectively, and Km changed to 119.12% and 109.34%, respectively. This indicates that the immobilized enzymes showed less change in Vmax and Km when contaminated with Cd; therefore, the clay mineral-immobilized LAPs were more resistant to Cd contamination [49, 51–53].

Figure 8 Effect of Cd on the kinetic curves of montmorillonite-immobilized LAP and kaolinite-immobilized LAP.

Table 4 Changes in the kinetic constants (Vmax and Km) of montmorillonite-immobilized LAP and kaolinite-immobilized LAP with 0, 4, and 10 μmol L-1 Cd contamination.

| | M, 0 | M, 4 | M, 10 | K, 0 | K, 4 | K, 10 |
|---|---|---|---|---|---|---|
| Vmax | 22.19 A | 18.93 A | 15.73 A | 20.52 A′ | 21.60 A′ | 24.49 A′ |
| Km | 74.42 a | 72.44 a | 88.65 a | 76.53 a′ | 93.19 a′ | 83.68 a′ |
| R² | 0.98 | 0.98 | 0.99 | 0.99 | 0.99 | 0.99 |

M = montmorillonite-immobilized enzyme, K = kaolinite-immobilized enzyme; column numbers are final Cd concentrations in μmol L-1.

The difference between the immobilized and free enzyme can also explain the different responses of soil LAP and purified LAP to Cd contamination. Clay mineral-immobilized enzymes are more resistant to Cd toxicity than free enzymes, so the same concentration of Cd inhibits the clay mineral-immobilized enzyme less than the free enzyme, which is why the inhibition ratio of soil enzymes at the same Cd concentration is lower than that of the purified enzyme.
Since soil enzymes are a mixture of free and immobilized enzymes, the sensitivity of soil enzymes to Cd contamination should lie between those of the free and immobilized enzymes; therefore, the inhibition rate of soil enzymes is usually lower than that of purified enzymes (free enzymes) at the same concentration of Cd contamination.
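As a quick arithmetic check, the percentage changes quoted in Section 3.3 follow directly from the fitted constants in Tables 1 and 4:

```python
# Recompute the Section 3.3 ratios from the constants in Tables 1 and 4.
pairs = {
    "free Vmax": (41.06, 64.88),   # Table 1, purified enzyme, 10 vs 0 umol/L Cd
    "free Km":   (49.05, 30.43),
    "mont Vmax": (15.73, 22.19),   # Table 4, montmorillonite-immobilized
    "mont Km":   (88.65, 74.42),
    "kaol Vmax": (24.49, 20.52),   # Table 4, kaolinite-immobilized
    "kaol Km":   (83.68, 76.53),
}
for label, (contaminated, control) in pairs.items():
    print(f"{label}: {contaminated / control:.2%}")
# Output: 63.29%, 161.19%, 70.89%, 119.12%, 119.35%, 109.34%
```

The immobilized enzymes' ratios stay much closer to 100%, which is the quantitative sense in which they are "more resistant" to Cd contamination than the free enzyme.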
Purified enzymeRed soilPurple soil0μmol L-14μmol L-110μmol L-10μmol L-14μmol L-110μmol L-10μmol L-14μmol L-110μmol L-1Vmax64.88A50.58B41.06C327.56A′194.93B′159.41C′2167.45A″2054.84A″1558.41B″Km30.43a34.65a49.05a55.73a′36.22a′38.72a′48.81a″43.27a″30.69a″R20.960.970.970.970.990.970.970.990.97Different capital letters indicate significant differences inVmax (P<0.05), and different lowercase letters indicate significant differences in Km (P<0.05), with Vmax in nmol μg-1 h-1 in purified enzyme and nmol g-1 h-1 in soil enzyme and Km in μmol L-1. Tables 2–4 are the same.The inhibitory effect of Cd on LAP activity was comparable Ag and Hg but stronger than that of Co, B, Mg, and Mn (FiguresS2–S4). Generally, the effect of Ag on soil LAP and purified LAP was similar to that of Cd. Soil LAP and purified LAP activities were strongly inhibited by Ag. The inhibitory effect of Hg on LAP at the same concentration was slightly greater than that of Cd. Accelerator metal Co promoted soil LAP and purified LAP activities at low concentrations and inhibited them at high concentrations. The inflection points for purified LAP and LAP in red soil were 250 and 200 μmol L-1, respectively. B, Mg, and Mn had little effect on LAP activities. The purified enzyme responded most strongly to their inhibition, relative to the two soil LAP contaminated by Cd; Ag and Hg had a noncompetitive inhibition on both LAP in purple soil and purified LAP (Figure 2, Tables S3–S5). Nevertheless, we also observed competitive inhibition of Hg on LAP in the red soil, indicating that other isozyme may exist in the red soil compared with the purified LAP and LAP in the purple soil.Figure 2 Kinetic profiles of purified enzyme, red soil LAP, and purple soil LAP affected by 0, 4, and 10μmol Cd, Ag, and Hg, respectively.The addition of Mg and Mn, which are metals in the active site of LAP [5, 6], cannot restore the activity of purified LAP contaminated by Cd, and the addition of Mg and Mn only slightly changed the activity of LAP in red soil and purple soil (Figure 3). This suggests that Mg and Mn cannot effectively restore the activity of LAP by competing with Cd for the divalent metal sites on LAP.Figure 3 Proportion of change in the activity of Cd-contaminated LAP after the addition of Mg and Mn as restorative activators of LAP. ## 3.2. Effect of Clay Minerals on LAP Clay minerals showed different effects on LAP activity (Figure4). The addition of clay minerals does not significantly affect the activity of purified LAP (Figure 4(a)). In the soil system (Figures 4(b) and 4(c)), the addition of montmorillonite significantly reduced the LAP activity in red soil and purple soil (P<0.05). The greater the amount of montmorillonite added, the lower the enzyme activity. When the amount of montmorillonite was 50 and 100 mg g-1 soil, the inhibition rate of LAP in red soil was 30.96% and 36.42% and was 10.77% and 22.78% in purple soil. The addition of kaolinite had no significant effect on LAP activity. The different effects of the two clay minerals may be attributed to the fact that montmorillonite in the 2 : 1 layer has a higher specific surface area and adsorption capacity than kaolinite in the 1 : 1 layer-type structure (Figure S1 and Table S2). Thus, montmorillonite can mask some of the active sites by adsorption on LAP, resulting in a decrease in LAP activity [22].Figure 4 Effect of adding different amounts of montmorillonite and kaolinite on LAP activity in purified LAP, LAP in red soil, and LAP in purple soil. 
Figure 5 shows the effect of adding clay minerals on the kinetics of soil LAP and purified LAP. The addition of low concentrations of montmorillonite and kaolinite (0.5 mg L⁻¹ for purified LAP and 50 mg g⁻¹ soil for soil LAP) resulted in a decreasing trend in the Vmax values of both soil LAP and purified LAP, with only the addition of montmorillonite to the purple soil producing a significant decrease in Vmax (Table 2). When the substrate concentration was relatively low (less than 200 μmol L⁻¹), the enzyme activity decreased significantly after clay mineral addition (Figure 4), and the addition of both montmorillonite and kaolinite changed the Km value of the purified enzyme (Table 2), indicating that the affinity between LAP and its substrate decreased in the presence of clay minerals. Clay minerals reduce enzyme activity because the adsorption of the mineral onto the enzyme changes the conformational structure of the protein, ultimately altering its catalytic properties and reducing its activity and Vmax [48].

Figure 5 Effect of adding different amounts of montmorillonite and kaolinite on the kinetic curve of LAP: (a) purified enzyme; (b) red soil; (c) purple soil.

Table 2 Changes in the kinetic constants (Vmax and Km) of LAP caused by montmorillonite and kaolinite.

| Treatment | Vmax | Km | R² |
|---|---|---|---|
| PE | 104.05A | 26.42a | 0.99 |
| PE+M | 97.73A | 34.34b | 0.99 |
| PE+K | 119.1A | 31.74ab | 0.97 |
| RS | 257.18A′ | 36.92a′ | 0.99 |
| RS+M | 253.65A′ | 72.02b′ | 0.96 |
| RS+K | 235.26A′ | 35.09a′ | 0.99 |
| PS | 3312.61B″ | 38.13a″ | 0.99 |
| PS+M | 2898.44A″ | 40.44a″ | 0.98 |
| PS+K | 3226.65B″ | 36.79a″ | 0.99 |

PE means purified enzyme (LAP), RS means red soil, PS means purple soil, and +M or +K means adding montmorillonite or kaolinite.

Clay minerals still affected LAP activity under Cd contamination (Figure 6). Adding montmorillonite or kaolinite decreased LAP activity, and although this decreasing trend was not statistically significant, in general, the greater the amount of clay mineral added, the greater the reduction in enzyme activity [49, 50]. This suggests that the presence or absence of Cd had no effect on the ability of the clay minerals to reduce enzyme activity (Figures 4 and 6). According to Figures 6(a) and 6(d), the addition of clay minerals had almost no effect on the enzymatic activity of purified LAP without Cd (0 μmol L⁻¹); in the presence of Cd contamination at concentrations of 4 μmol L⁻¹ and 10 μmol L⁻¹, the inhibition rate of Cd on LAP increased significantly with the addition of clay minerals.

Figure 6 Effect of montmorillonite and kaolinite on the activity of purified LAP, LAP in red soil, and LAP in purple soil in the presence of 0, 4, and 10 μmol L⁻¹ Cd: (a) montmorillonite–purified enzyme; (b) montmorillonite–red soil; (c) montmorillonite–purple soil; (d) kaolinite–purified enzyme; (e) kaolinite–red soil; (f) kaolinite–purple soil. Lowercase letters a and b indicate significant differences in LAP activities at 0 μmol L⁻¹ Cd, and a′ and b′ or a″ and b″ indicate significant differences in LAP activities at 4 and 10 μmol L⁻¹ Cd, respectively.

In contrast, the inhibitory effect of Cd on LAP in red and purple soil did not increase significantly with the addition of clay minerals. The reason might be that soil enzymes are in a different environment from the purified enzyme. The purified enzyme incubation system contains only buffer, substrate, metal solution, and enzyme; it can thus be regarded as a homogeneous liquid in which the metal ions and the enzyme diffuse easily.
Clay minerals may adsorb and concentrate the enzyme and Cd ions from this dispersed system onto their surfaces, which increases the chance of Cd–enzyme interaction and thus the inhibitory effect of Cd on LAP.

According to Tables 1 and 2, the addition of clay minerals decreased the enzyme activity and Vmax of LAP, and the kinetic constants of LAP were affected by both clay minerals and Cd. The decrease in the Vmax of the purified enzyme was mainly caused by Cd, but when Cd and clay minerals were present together, the clay minerals exacerbated the inhibitory effect of Cd, and the effect of kaolinite was stronger than that of montmorillonite (Figure 7 and Table 3). Km in purple soil was not affected by the addition of Cd or clay minerals, but in both the purified enzyme and the red soil, montmorillonite decreased the affinity of LAP for the substrate while kaolinite increased it. It can be concluded that the presence of both Cd and clay minerals in the purified enzyme system amplifies the inhibitory effect of Cd on LAP, possibly owing to the ability of clay minerals to adsorb LAP and Cd [51]. The different results observed in the two soils arise because the type and content of clay minerals differ between the red and purple soils, and these differences alter the effect of Cd contamination on LAP [47].

Figure 7 Effect of montmorillonite and kaolinite on the kinetic curves of purified LAP, LAP in red soil, and LAP in purple soil in the presence of 10 μmol L⁻¹ Cd: (a) purified enzyme; (b) red soil; (c) purple soil.

Table 3 Changes in the kinetic constants (Vmax and Km) of LAP caused by montmorillonite and kaolinite under 10 μmol L⁻¹ Cd contamination.

| Treatment | Vmax | Km | R² |
|---|---|---|---|
| PE | 69.11C | 93.22b | 0.99 |
| PE+M | 48.59B | 152.1b | 0.99 |
| PE+K | 27.63A | 63.5a | 0.98 |
| RS | 179.00B′ | 46.18b′ | 0.99 |
| RS+M | 124.42A′ | 55.91c′ | 0.99 |
| RS+K | 127.37A′ | 31.23a′ | 0.99 |
| PS | 2273.60B″ | 36.74a″ | 0.99 |
| PS+M | 1964.32A″ | 34.13a″ | 0.99 |
| PS+K | 1988.37B″ | 33.68a″ | 0.99 |

PE means purified enzyme (LAP), RS means red soil, PS means purple soil, and +M or +K means adding montmorillonite or kaolinite.

## 3.3. Effect of Cd on Immobilized LAP

As shown in Figure 8 and Table 4, the montmorillonite-immobilized enzyme showed a nonsignificant decreasing trend in Vmax with increasing Cd concentration (0, 4, and 10 μmol L⁻¹ Cd), while the kaolinite-immobilized enzyme did not differ significantly among the three Cd concentrations. No significant change in Km was observed for the enzymes immobilized by the two clay minerals. Comparison with the kinetic constants of the purified LAP (free enzyme) in Figure 1(b) and Table 1 shows a marked increase in the Km of the immobilized enzymes, suggesting that the presence of clay minerals reduces the affinity between the enzyme and the substrate. When the free LAP was contaminated with 10 μmol L⁻¹ of Cd, its Vmax decreased to 63.29% and its Km rose to 161.19% of the uncontaminated values. In contrast, the Vmax of the montmorillonite-immobilized and kaolinite-immobilized enzymes changed to 70.89% and 119.35% of the uncontaminated values, respectively, and their Km changed to 119.12% and 109.34%, respectively. This indicates that the immobilized enzymes showed less change in Vmax and Km when contaminated with Cd, and therefore, the clay mineral-immobilized LAPs were more resistant to Cd contamination [49, 51–53].
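The percent-of-control figures above follow directly from the kinetic constants in Tables 1 and 4; a quick check with those published values:

```python
# Quick check of the percent-of-control values quoted in the text, using the
# Vmax and Km entries from Tables 1 and 4 (0 versus 10 umol/L Cd).
def percent_of_control(treated: float, control: float) -> float:
    return treated / control * 100.0

print(f"free enzyme:  Vmax {percent_of_control(41.06, 64.88):.2f}%,"
      f" Km {percent_of_control(49.05, 30.43):.2f}%")      # 63.29%, 161.19%
print(f"mont.-immob.: Vmax {percent_of_control(15.73, 22.19):.2f}%,"
      f" Km {percent_of_control(88.65, 74.42):.2f}%")      # 70.89%, 119.12%
print(f"kaol.-immob.: Vmax {percent_of_control(24.49, 20.52):.2f}%,"
      f" Km {percent_of_control(83.68, 76.53):.2f}%")      # 119.35%, 109.34%
# The smaller relative changes for the immobilized enzymes reflect their
# greater resistance to Cd contamination.
```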
Figure 8 Effect of Cd on the kinetic curves of montmorillonite-immobilized LAP and kaolinite-immobilized LAP.

Table 4 Changes in the kinetic constants (Vmax and Km) of montmorillonite-immobilized LAP and kaolinite-immobilized LAP with 0, 4, and 10 μmol L⁻¹ Cd contamination.

| Enzyme | Cd (μmol L⁻¹) | Vmax | Km | R² |
|---|---|---|---|---|
| Montmorillonite-immobilized | 0 | 22.19A | 74.42a | 0.98 |
| Montmorillonite-immobilized | 4 | 18.93A | 72.44a | 0.98 |
| Montmorillonite-immobilized | 10 | 15.73A | 88.65a | 0.99 |
| Kaolinite-immobilized | 0 | 20.52A′ | 76.53a′ | 0.99 |
| Kaolinite-immobilized | 4 | 21.60A′ | 93.19a′ | 0.99 |
| Kaolinite-immobilized | 10 | 24.49A′ | 83.68a′ | 0.99 |

The difference between the immobilized and free enzyme can also explain the difference in the response of soil LAP and purified LAP to Cd contamination. Clay mineral-immobilized enzymes are more resistant to Cd toxicity than free enzymes, so the same concentration of Cd inhibits a clay mineral-immobilized enzyme less than a free enzyme. Since soil enzymes are a mixture of free and immobilized enzymes, their sensitivity to Cd contamination should lie somewhere between those of the free and immobilized enzymes, which is why the inhibition rate of soil enzymes is usually lower than that of the purified (free) enzyme at the same concentration of Cd contamination.

## 4. Conclusion

The inhibitory effect of Cd on LAP increased logarithmically with increasing Cd concentration, and Cd produced noncompetitive inhibition of both soil LAP and purified LAP. Cd inhibited the enzymatic reaction by disrupting the conformation of the enzyme protein, and this inhibition could not be reversed by adding the metals associated with the LAP active site. Regardless of the presence of Cd, the addition of clay minerals generally reduced the activity and the maximum reaction rate (Vmax) of LAP, and the effect of montmorillonite was stronger than that of kaolinite. Montmorillonite decreases the affinity between LAP and the substrate (increasing Km), while kaolinite increases this affinity (decreasing Km). It should be noted that clay minerals can increase the inhibition ratio of Cd on purified LAP. The interaction between leucine aminopeptidase, clay minerals, and cadmium contamination in the soil is a complicated process related not only to the concentrations of the three but also to the soil environment. Therefore, the process and mechanism by which clay minerals affect the toxicity of Cd to LAP remain unclear. Consequently, it is important to continue studying the interactions among clay minerals, Cd, and LAP, and the mechanism of Cd inhibition of clay mineral-immobilized LAP. This can provide scientific evidence for restoring the activity and function of LAP in Cd-contaminated soils and help restore nutrient use efficiency and accelerate nutrient cycling in contaminated soils.

---

*Source: 1024085-2021-10-22.xml*
# Hydrodynamic Noise of Pulsating Jets through Bileaflet Mechanical Mitral Valve

**Authors:** Vladimir Voskoboinick; Oleksandr Voskoboinyk; Oleg Chertov; Andrey Voskoboinick; Lidiia Tereshchenko
**Journal:** BioMed Research International (2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1024096

---

## Abstract

Experimental results on the hydrodynamic noise of pulsating flow through a bileaflet mechanical mitral valve are presented. The pulsating flow of pure water corresponds to the diastolic phase of the cardiac rhythm. The valve was located between a model of the left atrium and a model of the left ventricle of the heart. A coordinate device, on which a block of miniature absolute pressure and pressure fluctuation sensors was installed, was located inside the left ventricular model. It is found that the hydrodynamic noise of the pulsating side jet of the semiclosed valve is higher than that of the open valve. The pressure fluctuation levels gradually decrease with increasing distance from the mitral valve. It is established that, at the second harmonic of the pulsating flow frequency, the spectral levels of the hydrodynamic noise of the semiclosed bileaflet mechanical mitral valve are almost 5 times higher than those of the open valve. With increasing distance from the mitral valve, the spectral levels of hydrodynamic noise decrease, especially strongly at the frequency of the pulsating water flow and its higher harmonics.

---

## Body

## 1. Introduction

The heart is a vital hollow muscular-fibrous organ located in the thorax that provides blood flow through the vessels. It is a kind of muscle pump that works on the principle of sucking in and pushing out blood. The human heart is divided by septa into four separate chambers: two atria (left and right) and two ventricles (also left and right). The functions of each differ. Inside each atrium, blood entering the heart accumulates and, having reached a certain volume, is pushed into the corresponding ventricle. The ventricles drive blood into the arteries, through which it moves throughout the body. The unidirectional movement of blood is ensured by the well-coordinated work of the valvular apparatus of the heart, consisting of the mitral, tricuspid, aortic, and pulmonary valves, which open and close at the right moment, preventing blood regurgitation. The first and third of these valves are located on the left side of the heart, and the second and fourth on the right side.

The heart pumps about five to six liters of blood per minute. This volume decreases somewhat at rest and, conversely, increases during physical exercise. Together with the blood vessels, the heart forms the cardiovascular system, which has two circles of circulation: large (systemic) and small (pulmonary). Blood first enters the aorta from the left ventricle of the heart and then moves through arteries of large and small diameter. The blood then moves through the arterioles to the capillaries, where it gives up oxygen and a number of other nutrients necessary for the body and takes up carbon dioxide and waste metabolic products. Hence, arterial (oxygenated) blood becomes venous blood and flows back to the heart. The venous blood flows first through the venules, then through the small veins and large venous trunks.
The venous blood enters the right atrium through the inferior and superior venae cavae, closing the systemic (large) circulation. The blood is enriched with oxygen again in the lungs: from the right heart it flows through the pulmonary valve into the pulmonary arteries, which form the pulmonary circulation. Oxygenated blood fills the left atrium and enters the left ventricle of the heart through the mitral valve. The mitral (bicuspid or bileaflet) valve is located between the left atrium and the ventricle and consists of two leaflets. When it is open, blood flows through the atrioventricular orifice from the left atrium into the left ventricle. During systole (i.e., contraction) of the left ventricle, the valve is closed so that blood does not flow back into the atrium but is pushed through the aortic valve into the aorta and the vessels of the systemic circulation. Heart valves consist of thin, flexible leaflets that open and close in response to changes in blood pressure between the respective atria and ventricles. As a result of various diseases and pathologies, the leaflets become damaged and interfere with the normal functioning of the heart and the entire cardiovascular system. It then becomes necessary to apply therapeutic and surgical measures to eliminate such injuries, up to and including the replacement of natural heart valves with prostheses. About 300,000 prosthetic heart valves are implanted worldwide each year, and it is estimated [1] that this number may triple by 2050.

In recent years, a large number of heart valve prostheses of various designs and operating principles have been developed and put into practice; they fall into three groups: bioprostheses, mechanical valves, and transcatheter valves [2–5]. Most valves are manufactured by world-renowned firms such as Edwards Lifesciences, Medtronic, St. Jude Medical, Sorin Group, and Boston Scientific. Bileaflet mechanical heart valves are the most common (more than 50% of prosthetic valves). These valves consist of two semilunar leaflets (Figure 1(a)) attached to a rigid sewing ring by means of small hinges. The opening angle of the leaflets relative to the plane of the ring ranges from 75 to 90°. An open valve has three openings: a small slit-like central opening between the two open leaflets and two larger semicircular openings on the sides of the valve. The valves are mainly made of pyrolytic carbon, because it has a sufficiently high thromboresistance. Despite their wide clinical application, the functioning of these valves is far from perfect. The main disadvantages that distinguish them from an ideal mechanical heart valve are hemolysis, that is, the destruction of erythrocytes (red blood cells), and thromboembolism resulting from the formation of thrombi on the streamlined surfaces of mechanical valves (Figure 1(b)). Moreover, patients with mechanical heart valves must take anticoagulants throughout their lives to counteract thromboembolic complications.

Figure 1 Bileaflet mechanical heart valve: open (a) and semiclosed by thrombi (b).

## 2. Materials and Methods

Since mechanical valves have nonphysiological geometry, the flow of blood through them differs significantly from natural conditions. Indeed, the hemodynamics of bileaflet mechanical heart prostheses differs significantly from that of natural valves, since the prostheses have three orifices of different sizes. This creates a localized region of high velocity gradients in the flow through the smaller central opening.
The leaflets of the valve are a barrier to the blood flow through the valve, which, together with the high velocity of the jets between the leaflets, causes increased shear stresses that lead to the destruction of erythrocytes and other blood corpuscles [6–8]. In addition, there is a small technological gap between the leaflets, which are fixed in the annular valve body by means of hinges, and the ring itself. Reverse blood flow rushes through this gap at a sufficiently high velocity, generating large shear stresses that increase the risk of damage to the blood and the formation of thrombi.

Optimization of the design of mechanical valves is achieved using computational and experimental methods [9–11] to improve the velocity profiles and minimize the complications caused by prosthetics. Modern numerical simulation methods such as DNS, URANS, LES, and DES are used for the calculations [12–14]. Hemodynamic and hydrodynamic tests of heart valves are an important step in the preclinical research of a new device. The main indicators of the effectiveness of such devices are the velocity profiles, velocity and pressure gradients, shear and Reynolds stresses, convective velocities, and directions of motion of the vortex structures formed by a heart valve [15–17]. Studies of the blood flow characteristics inside the heart and of the operating conditions of mechanical valves are performed both by noninvasive measurement methods (in vivo) and with the help of ventricular and atrial models in laboratory conditions (in vitro). Research by noninvasive methods is carried out mainly with the rather complex equipment of Doppler echocardiography and magnetic resonance imaging.

However, this modern equipment has a serious disadvantage: insufficient temporal and spatial resolution. Therefore, the fine structure of blood flow through mechanical heart valves is investigated in the laboratory with the help of miniature sensors and systems for tracking labelled particles with increased spatial and temporal resolution [18, 19]. Several complex experimental setups used for hemodynamic and hydrodynamic studies have been described in the literature [20–22].

Diagnostics of the operation of mechanical valves in vivo is carried out using diagnostic complexes for Doppler echocardiography, magnetic resonance imaging, electrocardiography, ultrasound tomographic velocity measurement, phonocardiography, seismocardiography, and a number of other techniques and devices [23–26]. This equipment uses various principles and mechanisms for recording hemodynamics inside the heart and throughout the entire cardiovascular system. Each type of diagnosis has its advantages as well as disadvantages, which are quite well covered in the scientific literature [27–29]. However, there remains a need for effective, inexpensive, and miniature diagnostic equipment for detecting thrombus formation on the leaflets of mechanical heart valves, since thrombi prevent the valve from opening. If one of the leaflets is closed, as shown in Figure 1(b), it is urgently necessary to take appropriate measures to replace the valve or eliminate the thrombi.
It is desirable to create a device that patients could use at home without special medical training.

The goal of our research is to develop principles and methods for the vibro-hydroacoustic diagnostics of the operation of a bileaflet mechanical heart valve, as well as to study the features of the transformation of the hydrodynamic noise and vibrations that are generated inside the left ventricle and atrium models and transmitted to the surface of the laboratory bench.

### 2.1. Experimental Setup

Experimental research on the flow of pure water, whose density is close to that of blood but whose kinematic viscosity coefficient is 4–5 times lower, in the diastole regime was carried out in the microlaboratory of Politecnico di Milano (Italy). The bileaflet mechanical heart valve with a diameter of d = 25 mm from Sorin biomedica (Italy), shown in Figure 1(a), was used in this research. The valve was located between the model of the left atrium and the model of the left ventricle of the heart and worked as a mitral valve. The models of the atrium and ventricle were made of organic glass and are shown in Figure 2(a).

Figure 2 Experimental bench (a); piezoresistive and piezoceramic sensors (b).

Water entered the atrium model (2) and the left ventricular model (3) through the inlet fitting (1). A mounting device (5), in which the bileaflet mechanical heart valve was installed, was placed between the atrium and the ventricle. Water flowed out of the experimental bench through the outlet fitting (4). Inside the model of the left ventricle there was a coordinate device (6), on which a block of miniature absolute pressure sensors and pressure fluctuation sensors (7) was installed. The coordinate device allowed the sensors to be moved downstream from the bileaflet valve along the direction of the jets that flowed out of the open or semiclosed valve. Vibrations on the surface of the experimental bench, as well as the absolute pressure and pressure fluctuations inside it, were recorded using single-component piezoceramic accelerometers and miniature piezoresistive and piezoceramic absolute pressure and pressure fluctuation sensors (Figure 2(b)). These sensors were developed and manufactured at the Institute of Hydromechanics of the National Academy of Sciences of Ukraine [30, 31].

To conduct hydroacoustic diagnostics of the operation of the bileaflet mechanical mitral valve, a special experimental stand was created [32, 33]. The scheme and a photograph of the stand are shown in Figure 3. The impulse pump pumped water through the mitral valve from the reservoir (R) to the impedance tank (I). The impulse pump was controlled by a computer using a specially developed program, which made it possible to create a pulsating water supply of a given amplitude, period, and form. In our studies, the pulse form of the water movement through the mitral valve corresponded to the diastolic phase of the cardiac cycle.

Figure 3 Scheme and photograph of the experimental stand.

The impedance tank provided the lower threshold of the arterial pressure of the cardiac cycle. An ultrasonic flowmeter (F) was installed at the inlet or outlet of the experimental bench. Thus, the computer-controlled operation of the impulse pump made it possible to accurately and stably control the pulse form of the water supply through the mitral valve (diastole regime), the pulse period (cardiac pulse), and the water flow rate. In the studies, the frequency of the pulsating water supply was 1 Hz (60 beats per minute), and the average flow rate through the open or semiclosed mitral valve varied from 3 l/min to 6 l/min.
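As an illustration of the kind of command waveform such a computer-controlled pump can reproduce, the sketch below synthesizes a two-pulse diastolic flow profile (a larger E wave, a diastasis interval, and a smaller A wave) repeating at 1 Hz. The pulse shapes, timings, and amplitudes are hypothetical, chosen only so that the mean flow falls within the 3–6 l/min range quoted above; this is not the authors' actual pump program.

```python
# Minimal sketch: a synthetic two-pulse diastolic flow waveform (E wave,
# diastasis, A wave) at a 1 Hz cardiac cycle. Amplitudes and timings are
# hypothetical stand-ins for the computer-controlled pump profile.
import numpy as np

def diastolic_flow(t, q_e=20.0, q_a=10.0):
    """Flow rate (l/min) at time t (s) for a 1 Hz cycle."""
    phase = t % 1.0                                        # position in the cycle
    e_wave = q_e * np.exp(-((phase - 0.15) / 0.07) ** 2)   # early filling (E)
    a_wave = q_a * np.exp(-((phase - 0.55) / 0.05) ** 2)   # atrial contraction (A)
    return e_wave + a_wave                                 # gap between = diastasis

t = np.linspace(0.0, 2.0, 2000)                            # two cycles at ~1 kHz
q = diastolic_flow(t)
print(f"mean flow ~ {q.mean():.1f} l/min, peak flow ~ {q.max():.1f} l/min")
```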
Studies of the hydrodynamic noise of the water jets flowing out from the mitral valve into the model of the left ventricular chamber were carried out using a block of pressure fluctuation sensors [34, 35]. The sensors were mounted on the coordinate device inside the experimental bench, downstream of the valve, as shown in Figure 4. Miniature piezoceramic pressure fluctuation sensors (sensitive-surface diameter 1.3 mm) were installed in the well-streamlined sensor block at fixed distances from each other (Figure 4(a)). Holes with a diameter of 0.5 mm were also made in the block, through which the absolute pressure was measured by the piezoresistive pressure sensors. The sensor block was moved by the coordinate device along the studied jets, which flowed out through the open or semiclosed valve. Thus, the sensors recorded the hydroacoustic noise of the near field of the jets (Figure 4(b)). The mounting structure of the mitral valve made it possible to turn the valve around its axis, so the sensors could record the noise of either a side jet or the central jet of the bileaflet valve.

Figure 4 Location of the pressure sensors downstream of the open bileaflet valve.

The electrical signals of the sensors were amplified by low-noise amplifiers and fed to personal computers through multichannel analog-to-digital converters. Processing and analysis of the experimental results were carried out using methods of probability theory and mathematical statistics.

According to the developed program and research methodology, vibroacoustic diagnostics of the experimental bench was conducted first. Sources of extraneous vibrations and noise were identified, and measures were taken to eliminate them or to reduce their levels. The sensors were checked periodically, and the measurement errors of the integral and spectral characteristics of the research parameters were determined. This made it possible to obtain experimental results with acceptable accuracy and good repeatability.

The measurement error of the averaged and integral values of the pressure and vibration fields did not exceed 10% (95% confidence). The measurement error of the flow rate was no more than 3%. The measurement error of the spectral characteristics of the velocity, pressure fluctuation, and acceleration fields was no more than 2 dB in the frequency range from 0.01 Hz to 2 kHz with a confidence probability of 0.95 (2σ).
## 3. Results and Discussion

The impulse pump pumped water through the mitral valve in accordance with the diastolic mode of heart operation. In this mode, two flow pulses pass through the mitral valve. The first impulse is formed by the expansion of the left ventricle of the heart (wave E), and the second by the contraction of the left atrium (wave A). Between these impulses there is a diastasis interval, during which the volume of the ventricle is constant [36, 37]. The shape of the pulsating water supply through the mitral valve, recorded by the ultrasonic flowmeter as a flow rate, is shown in Figure 5(a). Here, the flow rate, measured in l/min, is shown as a function of the impulse time. Curves 1 and 2 simulate the blood flow through the left ventricle of a small person (71% of the pump power), and curves 3–5 simulate the blood flow through the left ventricle of a teenager (50%). Curves 1 and 3 were measured for the supply of clean water under the operating conditions of a semiclosed valve, and curves 2, 4, and 5 for the supply of water through an open mitral valve. The first, higher flow-rate impulse corresponds to wave E, and the second impulse corresponds to wave A of diastole. The pressure changes inside the left ventricular model under the operating conditions of the open and semiclosed mitral valve during diastole are shown in Figure 5(b). Here, the numbers of the curves correspond to those in Figure 5(a).

Figure 5 The pulsating water supply (a) and the pressure changes (b) inside the ventricular model.

The liquid flow through the open mitral valve was divided into three jets: the central jet and two side jets, which are schematically shown in Figure 6. When one of the leaflets of the mechanical bileaflet heart valve is closed, the blood flows only through the open leaflet. In this case, the flow velocity in the side jet of the open valve leaflet, and partially in the central jet, increases.
This situation occurs when one of the valve leaflets is closed by thrombi and only the open leaflet of the mechanical heart valve is working.

Figure 6 Scheme of fluid flow through an open mitral valve.

The pressure fluctuations measured in the near wake of the side jet under the conditions of the open and semiclosed mitral heart valve, simulating the diastole of a teenager and of a small person, are shown in Figure 7. In these figures, curve 1 corresponds to the operating conditions of the open mitral valve, and curve 2 to those of the semiclosed valve. In accordance with these results, for the small person, the intensity of the pulsating pressure fluctuations, or pulsating hydrodynamic noise, of the side jet in the near wake of the valve at a distance d from it is almost 1.5 times higher under the operating conditions of the semiclosed bileaflet mechanical mitral valve than under those of the open heart valve.

Figure 7 Pressure fluctuations in the near wake of the side jet of the open and semiclosed mitral valve for the operating conditions of the pump 50% (a) and 71% (b).

Short-term statistical processing of the measurements of the pulsating water supply through the mitral heart valve allowed us to separate the impulses of wave E and wave A of diastole. Figure 8 shows the averaged impulses of wave E of the side jet of the semiclosed and open valve, measured by two pressure fluctuation sensors spaced 0.4 d apart. Curve 1 was measured by the sensor at a distance of 1.2 d downstream of the mitral valve, and curve 2 by the sensor at a distance of 1.6 d from the valve. The time delay between the recorded impulse maxima was 0.003 s and 0.007 s for the operating conditions of the semiclosed and open valve, respectively, when modelling the heart operation of the small person. The impulse maxima were observed first in the signals registered by the sensor located closer to the mitral valve. This made it possible to determine the maximum transfer velocity [38–40] of wave E of diastole, which was almost 1.4 m/s for the operating conditions of the open mitral valve and 3.3 m/s for the operating conditions of the semiclosed heart valve of the small person.

Figure 8 Pressure fluctuations of the wave E in the near wake of the side jet of the semiclosed (a) and open (b) mitral valve for the operating conditions of the pump 71%.
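The transfer velocity quoted above follows from the sensor spacing and the measured delay: 0.4 d = 10 mm for the 25 mm valve, so 10 mm / 0.003 s ≈ 3.3 m/s and 10 mm / 0.007 s ≈ 1.4 m/s. In practice, such delays are often estimated from the peak of the cross-correlation between the two sensor records; the sketch below demonstrates this with synthetic signals (the exact estimator used by the authors is not specified, so cross-correlation is an assumption):

```python
# Minimal sketch: estimating the transfer (convection) velocity of a pressure
# impulse from the time delay between two sensors a known distance apart.
# The two "sensor" records are synthetic: a Gaussian impulse plus noise, with
# the downstream copy delayed by 3 ms (the semiclosed-valve case).
import numpy as np
from scipy.signal import correlate, correlation_lags

fs = 10_000.0                                   # sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
impulse = np.exp(-((t - 0.30) / 0.01) ** 2)
rng = np.random.default_rng(0)
s_near = impulse + 0.05 * rng.standard_normal(t.size)       # sensor at 1.2 d
delay_true = 0.003                                          # 3 ms
s_far = np.interp(t - delay_true, t, impulse) \
        + 0.05 * rng.standard_normal(t.size)                # sensor at 1.6 d

xc = correlate(s_far, s_near, mode="full")
lags = correlation_lags(s_far.size, s_near.size, mode="full")
delay = lags[np.argmax(xc)] / fs                            # estimated delay, s

spacing = 0.4 * 0.025                           # 0.4 d with d = 25 mm, metres
print(f"delay ~ {delay * 1e3:.1f} ms -> velocity ~ {spacing / delay:.1f} m/s")
```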
Changes of the root-mean-square (RMS) values of the pressure fluctuations along a side jet pulsating with a frequency of 1 Hz are shown in Figure 9(a) for the operating conditions of the mitral heart valves of the teenager and the small person. Here, curves 1 and 2 were measured at 50% of the pump power, and curves 3 and 4 at 71%. Curves 1 and 3 correspond to the open mitral valve and curves 2 and 4 to the semiclosed valve. The integral characteristics of the hydrodynamic noise were measured in the wake of the bileaflet valve as the sensor block was moved along the side jet. The RMS values of the pressure fluctuations in the wake of the side jet of the mitral valve of the small person are more than 1.5 times higher than those of the teenager practically throughout the jet. During the pulsating flow through the semiclosed valve, the hydrodynamic noise of the side jet is higher than when the valve is open. This correlates with studies of a stationary water flow through the mitral valve [33, 35, 37]. The pressure fluctuation levels gradually decrease as the sensors are moved away from the mitral valve.

Figure 9 RMS of the pressure fluctuations (a) and their spectral levels (b).

The spectral levels of the pressure fluctuations along the pulsating side jet flowing out of the semiclosed mitral valve, when simulating the diastole of the teenager's heart, are shown in Figure 9(b). Here, curve 1 was measured at a distance d from the mitral valve, curve 2 at a distance of 1.1 d, curve 3 at 1.2 d, curve 4 at 1.4 d, and curve 5 at 1.8 d; curve 6 is the ambient noise. According to the results, in the frequency range up to 20 Hz the dynamic measurement range exceeds 30 dB. As the distance from the mitral valve increases, the spectral levels of hydrodynamic noise decrease, especially strongly at the frequency of the pulsating water flow and its higher harmonics. Harmonics of higher orders are formed due to the nonlinear interaction of the vortex and jet flows passing through the heart valve with each other and with the components of the valve and heart.

The spectral power densities of the pressure fluctuations measured in the near wake of the mitral valve along the central jet, which pulsates with a frequency of 1 Hz, are shown in Figure 10 for the modelled diastole of the teenager and the small person. Here, curves 1 and 2 were measured by a pressure transducer at a distance d from the mitral valve, and curves 3 and 4 by a transducer at a distance of 1.4 d. Curves 1 and 3 correspond to the operating conditions of the semiclosed valve, and curves 2 and 4 to those of the open valve. In the spectral dependences, discrete peaks were observed at the frequency of the pulsating flow and its higher harmonics. The spectral levels of the pressure fluctuations for the simulated diastole of the small person are higher than those for the simulated diastole of the teenager over the entire frequency range of the research.

Figure 10 The spectral power densities of the pressure fluctuations in the near wake of the central jet of the mitral valve for the operating conditions of the pump 50% (a) and 71% (b).
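Spectral estimates of this kind are computed from the digitized sensor records by averaged-periodogram methods; the sketch below uses Welch's method on a synthetic 1 Hz pulsating signal to show how discrete peaks at the pulsation frequency and its harmonics emerge above the broadband noise floor. The article does not specify the exact estimator, so Welch's method here is an assumption, and the signal is synthetic.

```python
# Minimal sketch: power spectral density of a synthetic pulsating pressure
# signal, showing discrete peaks at the 1 Hz pulsation frequency and its
# harmonics above a broadband noise floor.
import numpy as np
from scipy.signal import welch

fs = 4096.0                                     # sampling rate, Hz
t = np.arange(0.0, 60.0, 1.0 / fs)              # 60 s record
rng = np.random.default_rng(1)
p = sum((0.5 ** k) * np.sin(2 * np.pi * (k + 1) * t) for k in range(4))
p = p + 0.2 * rng.standard_normal(t.size)       # 1-4 Hz harmonics + noise

f, Pxx = welch(p, fs=fs, nperseg=1 << 16)       # 16 s segments, ~0.06 Hz bins
level_db = 10.0 * np.log10(Pxx)
for k in range(1, 5):
    i = np.argmin(np.abs(f - float(k)))
    print(f"{k} Hz: {level_db[i]:.1f} dB (arbitrary reference)")
# The 1 Hz line and its harmonics stand well above the noise floor, mirroring
# the discrete peaks in the measured spectra.
```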
The research results showed that the RMS values of the pressure fluctuations of the hydrodynamic noise of the side and central jets downstream of the mitral heart valve are larger for the small person than for the teenager, as illustrated in Figures 9(a) and 11(a). Figure 11(a) shows the ratio of the RMS values of the pressure fluctuations along the side jet downstream of the semiclosed and open valve.

Figure 11 The ratios of RMS values of the pressure fluctuations (a) and the spectral levels of the hydrodynamic noise (b) in the near wake of the side jet of the mitral heart valve.

Curve 1 was measured for the operating conditions of the mitral heart valve of the teenager, and curve 2 for those of the small person. The greatest difference in the pressure fluctuations was observed in the near wake of the valve; with increasing distance from the valve, the ratio of the RMS pressure fluctuations gradually decreases.

The ratios of the spectral power densities of the pressure fluctuations in the near wake of the side jet of the semiclosed and open valves for the operating conditions of the heart valve of the small person are shown in Figure 11(b). Curve 1 was measured by a sensor at a distance d from the mitral valve, and curve 2 by a sensor at a distance of 1.4 d. The highest ratio of the pressure fluctuation levels (almost 5 times) was observed at the second harmonic of the frequency of the pulsating water flow. At the same time, as the distance from the mitral heart valve increases, the ratios of the spectral components of the hydrodynamic noise of the semiclosed and open valve decrease.

## 4. Conclusions

(1) It was found that the intensity of the hydrodynamic noise of the pulsating side jet in the near wake of the semiclosed bileaflet mechanical mitral heart valve is almost 1.5 times higher than that of the open valve. The levels of the pressure fluctuations of the hydrodynamic noise gradually decrease with increasing distance from the mitral valve.

(2) It was established that the maximum transfer velocity of wave E of diastole inside the left ventricular model is almost 1.4 m/s for the operating conditions of the open mitral valve of the small person and about 3.3 m/s for the operating conditions of the semiclosed heart valve.

(3) It was registered that the spectral levels of the pressure fluctuations for the simulated diastole of the heart valve of the small person are higher than those for the teenager over the entire frequency range of the research (from 0.1 Hz to 1 kHz). In the spectral dependences of the hydrodynamic noise, discrete peaks were observed at the frequency of the pulsating water flow and at its higher harmonics. With increasing distance from the mitral valve, the spectral levels of hydrodynamic noise decreased, especially strongly at the frequency of the pulsating flow and its higher harmonics.

---

*Source: 1024096-2020-05-30.xml*
1024096-2020-05-30_1024096-2020-05-30.md
33,131
Hydrodynamic Noise of Pulsating Jets through Bileaflet Mechanical Mitral Valve
Vladimir Voskoboinick; Oleksandr Voskoboinyk; Oleg Chertov; Andrey Voskoboinick; Lidiia Tereshchenko
BioMed Research International (2020)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2020/1024096
1024096-2020-05-30.xml
--- ## Abstract Experimental research results of hydrodynamic noise of pulsating flow through a bileaflet mechanical mitral valve are presented. The pulsating flow of pure water corresponds to the diastolic mode of the cardiac rhythm heart. The valve was located between the model of the left atrium and the model of the left ventricle of the heart. A coordinate device, on which a block of miniature sensors of absolute pressure and pressure fluctuations was installed, was located inside the model of the left ventricle. It is found that the hydrodynamic noise of the pulsating side jet of the semiclosed valve is higher than for the open valve. The pressure fluctuation levels gradually decrease with the removal from the mitral valve. It is established that at the second harmonic of the pulsating flow frequency, the spectral levels of the hydrodynamic noise of the semiclosed bileaflet mechanical mitral valve are almost 5 times higher than the open valve. With the removal from the mitral valve, spectral levels of hydrodynamic noise are decreased, especially strongly at the frequency of the pulsating water flow and its higher harmonics. --- ## Body ## 1. Introduction The heart is a vital hollow muscular-fibrous organ located in the thorax and providing blood flow through the vessels. This is a kind of muscle pump that works on the principle of suction-pushing blood. The human heart is divided by diaphragms into four separate chambers: two atria (left and right) and two ventricles (also left and right). The functions of each of them are different. Inside each of the atria, blood entering the heart is accumulated and, having reached a certain volume, is pushed into the corresponding ventricles. The ventricles drive blood into the arteries, through which it moves throughout the body. The unidirectional movement of blood is ensured by the well-coordinated work of the valvular apparatus of the heart, consisting of the mitral, tricuspid, aortic, and pulmonary valves, which are opened and closed at the right moment, preventing the blood from being regurgitated. The first and third valves are located in the left ventricle, and the second and fourth valves are located in the right ventricle of the heart.The heart pumps about five to six liters of blood in a minute. This volume is somewhat decreased at rest and when a person performs physical exercise, on the contrary, it is increased. Together with blood vessels, the heart forms the cardiovascular system, which has two circles of circulation: large (systemic) and small (pulmonary). Blood first enters in the aorta from the left ventricle of the heart and then it moves through large and small diameter arteries. Then blood moves through the arterioles to the capillaries, where it donates oxygen and a number of other nutrients necessary for the body and takes carbon dioxide and waste metabolic products. Hence, the blood from the artery (oxygenated blood flows from the heart) becomes venous blood and flows back to the heart. The venous blood flows first through the venules, then through the small veins and large venous trunks. The venous blood enters inside the right atrium along the inferior and superior vena cava, closing the systemic circulation (large circle). The blood is again enriched with oxygen in the lungs, where it flows from the right heart through the pulmonary valve into the pulmonary arteries, which form the pulmonary circulation. Oxygenated blood fills the left atrium and through the mitral valve enters the left ventricle of the heart. 
The mitral (bicuspid or bileaflet) valve is located between the left atrium and the ventricle and consists of two leaflets. When it is opened, blood flows through an atrioventricular orifice into the left ventricle from the left atrium. During systole (i.e., contraction) of the left ventricle, the valve is closed so that blood does not flow back into the atrium, but is pushed through the aortic valve into the aorta and the vessels of the systemic circulation. Heart valves consist of thin, flexible leaflets that are opened and closed in response to changes in blood pressure between the respective atria and ventricles. As a result of various diseases and pathologies, the leaflets are damaged and interfere with the normal functioning of the heart and the entire cardiovascular system. As a result, it is necessary to apply therapeutic and surgical measures to eliminate such injuries until the replacement of natural heart valves with prostheses. About 300 thousand prosthetic heart valves are annually implanted in the world, and it is estimated [1] that the number can triple in 2050.In recent years, a large number of heart valve prostheses of various forms and principles have been developed and put into practice, which are grouped into three groups: bioprostheses, mechanical, and transcatheter valves [2–5]. Most valves are manufactured by world-renowned firms such as Edwards Lifesciences, Medtronic, St. Jude Medical, Sorin Group, and Boston Scientific. The bileaflet mechanical heart valves are most common (more than 50% of prosthetic valves). These valves consist of two semilunar leaflets (Figure 1(a)), which are attached to a rigid stitched ring by means of small hinges. The opening angle of the valves relative to the plane of the ring is ranged from 75 to 90°. An open valve has three openings: a small slit-like central opening between two open leaflets and two larger semicircular openings on the sides of the valve. The valves are mainly made of pyrolytic carbon, because it has a sufficiently high thromboresistance. Despite the wide clinical application, the functions of these valves are far from perfect. The main disadvantages that distinguish them from the ideal mechanical heart valve are the destruction of erythrocytes and the formation of hemolysis, that is, the destruction of red blood cells, as well as thromboembolism resulting from the formation of thrombi on the streamlined surface of mechanical valves (Figure 1(b)). At the same time, patients with mechanical heart valves should use anticoagulants throughout their lives to counteract thromboembolic complications.Figure 1 Bileaflet mechanical heart valve: open (a) and semiclosed by trombi (b). (a)(b) ## 2. Materials and Methods Since mechanical valves have nonphysiological geometry, the flow of blood through them is significantly different from natural conditions. Indeed, the hemodynamics of bileaflet mechanical heart prostheses is significantly different from natural valves, since they have three orifices of different sizes. This forms a localized high-velocity gradient through a smaller central opening. The leaflets of the valve are a barrier to the blood flow through the valve, which, along with the high velocity of the jets between the leaflets, causes increased shear stresses, which lead to the destruction of erythrocytes and other blood corpuscles [6–8]. In addition, there is a small technological gap between the leaflets which are fixed in the ringed valve body by means of hinges, and the ring itself. 
The reverse blood flow is rushed with a sufficiently high velocity in this gap, generating large shear stresses, that increases the risk of damage to the blood and the formation of thrombi.Optimization of the design of mechanical valves is achieved using computational and experimental methods [9–11] to improve the velocity profiles and minimize the complications that are caused by prosthetics. Modern numerical simulation methods such as DNS, URANS, LES, and DES are used for calculations [12–14]. Hemodynamic and hydrodynamic tests of heart valves are an important step for conducting preclinical research of a new device. The main indicators of the effectiveness of such devices are the velocity profiles, velocity and pressure gradients, shear and Reynolds stresses, convective velocities, and directions of motion of vortex structures that are formed by a heart valve [15–17]. Studies of the blood flow characteristics inside the heart and the operation conditions of mechanical valves are performed both by noninvasive measurement methods (in vivo) and with help ventricular and atrial models in laboratory conditions (in vitro). Research by noninvasive methods is carried out mainly on the rather complex equipment of Doppler echocardiography and magnetic resonance imaging.But this modern equipment has a serious disadvantage due to insufficient temporal and spatial resolution. Therefore, the fine structure of blood flow through the mechanical heart valves is investigated in the laboratory with the help of miniature sensors and complexes for detecting the movement of labelled particles with an increased spatial and temporal resolution [18, 19]. Several complex experimental setups that were used for hemodynamic and hydrodynamic studies have been described in the literature [20–22].Diagnostics of the operation of mechanical valves in vivo are carried out using diagnostic complexes of Doppler echocardiography, magnetic resonance imaging, electrocardiography, ultrasound tomographic velocity measurement, phonocardiography, seismocardiography, and a number of other techniques and devices [23–26]. This equipment uses various principles and mechanisms for recording hemodynamics inside the heart and in general throughout the entire cardiovascular system. Each type of diagnosis has its advantages, as well as disadvantages, which are quite well covered in the scientific literature [27–29]. However, there is the problem of creation of the effective, inexpensive, and miniature diagnostic equipment for the registration of the thrombus formation on the leaflets of mechanical heart valves, since thrombi prevent the valve opening. If one of the leaflets is closed, as shown in Figure 1(b), it is urgently necessary to take appropriate measures to replace the valve or eliminate thrombi. It is desirable to create such a device that patients could use this device at home without receiving a special medical education.The goal of our research is to develop principles and methods for vibro-hydroacoustic diagnostics of the operation of a bileaflet mechanical heart valve, as well as to study the features of the transformation of hydrodynamic noise and vibrations that are generated inside the left ventricle and atrium models and transmitted to the surface of the laboratory bench. ### 2.1. 
Experimental Setup Experimental research of the pure water flow, the density of which is close to the density of blood, and the kinematic viscosity coefficient is (4-5) times lower than that of blood, in the diastole, a regime was carried out in the microlaboratory of Politecnico di Milano (Italy). The bileaflet mechanical heart valve with a diameter ofd=25mm from Sorin biomedica (Italy), which is shown in Figure 1(a), was used in this research. The valve was located between the model of the left atrium and the model of the left ventricle of the heart and worked as a mitral valve. The models of an atrium and a ventricle were made of organic glass and are shown in Figure 2(a).Figure 2 Experimental bench (a), piezoresistive and piezoceramic sensors (b). (a)(b)Water was entering inside the atrium model (2) and the left ventricular model (3) through the inlet fitting (1). A device (5) was made between the atrium and the ventricle, where the bileaflet mechanical heart valve was installed. Water from the experimental bench flowed out through the outlet fitting (4). Inside the model of the left ventricle, there was a coordinate device (6) on which a block of miniature absolute pressure sensors and pressure fluctuation sensors (7) was installed. The coordinate device allowed the sensors to be moved downstream from the bileaflet valve along the direction of the jets that flowed out from an open or semiclosed valve. Vibrations on the surface of the experimental bench, absolute pressure, and pressure fluctuations inside it were recorded using single-component piezoceramic accelerometers, as well as miniature piezoresistive and piezoceramic absolute pressure sensors and pressure fluctuation sensors (Figure2(b)). These sensors were developed and manufactured at the Institute of Hydromechanics of the National Academy of Sciences of Ukraine [30, 31].To conduct hydroacoustic diagnostics of the operation of the bileaflet mechanical mitral valve, a special experimental stand was created [32, 33]. The scheme and photograph of the stand are shown in Figure 3. The impulse pump was pumping water through the mitral valve from the reservoir (R) to the impedance tank (I). The impulse pump was controlled by a computer using a specially developed program, which made it possible to create a pulsating water supply of a certain amplitude, period, and form. In our studies, the pulse form of water movement through the mitral valve corresponded to the cardiac cycle of diastole.Figure 3 Scheme and photography of the experimental stand.The impedance tank provided the lower threshold of the arterial pressure of the cardiac cycle. An ultrasonic flowmeter (F) was installed at the inlet or outlet of the experimental bench. Thus, the computer-controlled operation of the impulse pump made it possible to accurately and stably control the pulse form of the water supply through the mitral valve (diastole regime), the pulse period (cardiac pulse), and the water flow rate. In studies, the frequency of the pulsating water supply was 1 Hz or 60 beats per minute, and the average flow rate through the open or semiclosed mitral valve varied from 3 l/min to 6 l/min.Studies of the hydrodynamic noise of water jets, which flowed out from the mitral valve into the model of the left ventricular chamber, were carried out using a block of pressure fluctuation sensors [34, 35]. These sensors were located inside the experimental bench on the coordinate device and were located downstream of the valve, as shown in Figure 4. 
Miniature piezoceramic pressure fluctuation sensors (diameter of the sensitive surface 1.3 mm) were installed in the well-streamlined block of pressure sensors at a fixed distance from each other (Figure 4(a)). Holes with a diameter of 0.5 mm were made there, through which the absolute pressure was measured by piezoresistive pressure sensors. The sensor block was moved by the coordinate device along the studied jets, which flowed out through the open or semiclosed valve. Thus, the sensors recorded the hydroacoustic noise of the near field of the jets (Figure 4(b)). The mounting structure of the mitral valve made it possible to turn the valve around its axis, so the sensors recorded the noise of either a side jet or the central jet of the bileaflet valve.

Figure 4 Location of the pressure sensors downstream of the open bileaflet valve.

The electrical signals of the sensors were amplified by low-noise amplifiers and fed to personal computers through multichannel analog-to-digital converters. Processing and analysis of the experimental results were carried out using probability theory and mathematical statistics. According to the developed program and research methodology, vibroacoustic diagnostics of the experimental bench was conducted first. Sources of extraneous vibrations and noise were identified, and measures were taken to eliminate them or to reduce their levels. The sensors were checked periodically, and the measurement errors of the integral and spectral characteristics of the research parameters were determined. This allowed us to obtain experimental results with acceptable accuracy and good repeatability. The measurement error of the averaged and integral values of the pressure and vibration fields did not exceed 10% (95% reliability). The measurement error of the flow velocity was no more than 3%. The measurement error of the spectral characteristics of the velocity, pressure fluctuation, and acceleration fields was no more than 2 dB in the frequency range from 0.01 Hz to 2 kHz with a confidence probability of 0.95 (2σ).
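As an illustration of this processing chain, the following minimal Python sketch computes the two quantities reported in the next section, the RMS value and the spectral level of a pressure-fluctuation record. The sampling rate, record length, and reference pressure are illustrative assumptions, not parameters of the study.

```python
# Minimal sketch of the statistical processing of a pressure-fluctuation record:
# RMS value and spectral levels in dB. FS and P_REF are assumed values.
import numpy as np
from scipy.signal import welch

FS = 4_000      # sampling rate, Hz (assumed; must cover the 0.01 Hz - 2 kHz band)
P_REF = 1e-6    # reference pressure for dB levels, Pa (assumed convention)

def rms(p):
    """Root-mean-square value of a zero-mean pressure fluctuation record."""
    return np.sqrt(np.mean(np.square(p - np.mean(p))))

def spectral_level_db(p, fs=FS):
    """Welch power spectral density of the record, in dB re P_REF^2/Hz."""
    f, psd = welch(p, fs=fs, nperseg=8192)
    return f, 10.0 * np.log10(psd / P_REF**2)

# Example with a synthetic 1 Hz pulsating signal plus broadband noise:
t = np.arange(0, 60, 1 / FS)
p = 5.0 * np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.random.randn(t.size)
print(f"RMS pressure fluctuation: {rms(p):.3f} Pa")
f, level = spectral_level_db(p)
print(f"Peak spectral level near {f[np.argmax(level)]:.2f} Hz")
```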
## 3. Results and Discussion

The impulse pump pumped water through the mitral valve in accordance with the diastole mode of heart operation. In this mode, two blood supply impulses pass through the mitral valve. The first impulse is formed by the expansion of the left ventricle of the heart (wave E), and the second is formed by the contraction of the left atrium (wave A). Between these impulses there is a diastasis time interval, during which the volume of the ventricle is constant [36, 37]. The shape of the curves of the pulsating water supply by the pump through the mitral valve, recorded as the water flow rate by the ultrasonic flowmeter, is shown in Figure 5(a). Here, the flow rate measured in l/min is shown as a function of the impulse time. Curves 1 and 2 simulate the blood flow through the left ventricle of a small person (71% of the pump power), and curves 3-5 simulate the blood flow through the left ventricle of a teenager (50%). Curves 1 and 3 were measured for the supply of clean water under the operating conditions of a semiclosed valve, and curves 2, 4, and 5 were measured for the water supply through an open mitral valve. The first, higher impulse of the water rate corresponds to wave E, and the second impulse corresponds to wave A of diastole. The pressure changes inside the left ventricular model under the working conditions of the open and semiclosed mitral valve during diastole are shown in Figure 5(b). Here, the numbers of the curves correspond to those shown in Figure 5(a).

Figure 5 The pulsating water supply (a) and the pressure changes (b) inside the ventricular model.

The liquid flow through the open mitral valve was divided into three jets: the central jet and two side jets, which are schematically shown in Figure 6. When one of the leaflets of the mechanical bileaflet heart valve is closed, the blood flows only through the open leaflet. In this case, the flow velocity in the side jet of the open valve leaflet, and partially in the central jet, increases. This situation occurs when one of the valve leaflets is blocked by thrombi and only the open leaflet of the mechanical heart valve is working.

Figure 6 Scheme of fluid flow through an open mitral valve.

The pressure fluctuation dependences measured in the near wake of the side jet, under the conditions of the open and semiclosed mitral heart valve simulating the diastole of a teenager and of a small person, are shown in Figure 7. In these figures, curve 1 corresponds to the operating conditions of the open mitral valve, and curve 2 corresponds to the operating conditions of the semiclosed valve.
In accordance with the above results, the intensity of the pulsating pressure fluctuations, or pulsating hydrodynamic noise, of the side jet in the near wake of the valve at a distance d from it is almost 1.5 times higher for the small person and under the operating conditions of the semiclosed bileaflet mechanical mitral valve than under the conditions of the open heart valve.

Figure 7 Pressure fluctuations in the near wake of the side jet of the open and semiclosed mitral valve for the operating conditions of the pump at 50% (a) and 71% (b).

Short-term statistical processing of the research results for the pulsating water supply through the mitral heart valve allowed us to separate the impulses of wave E and wave A of diastole. Figure 8 shows the averaged impulses of wave E of the side jet of the semiclosed and open valve, measured by two pressure fluctuation sensors spaced 0.4 d from each other. Curve 1 was measured by the sensor at a distance of 1.2 d downstream of the mitral valve, and curve 2 was measured by the sensor at a distance of 1.6 d from the valve. The time delay between the recordings of the impulse maxima was 0.003 s and 0.007 s for the operating conditions of the semiclosed and open valve, respectively, when modelling the heart operation of the small person. At the same time, the maxima of the impulses were observed first in the signals registered by the sensor located closer to the mitral valve. This made it possible to determine the maximum transfer velocity [38–40] of wave E of diastole, which was almost 1.4 m/s for the operating conditions of the open mitral valve and 3.3 m/s for the operating conditions of the semiclosed heart valve of the small person.

Figure 8 Pressure fluctuations of wave E in the near wake of the side jet of the semiclosed (a) and open (b) mitral valve for the operating conditions of the pump at 71%.
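The velocity estimate follows directly from the sensor spacing and the measured delay. Below is a minimal Python sketch of this estimate; the cross-correlation routine is an illustrative assumption, while the sensor spacing (0.4 d, with d = 25 mm) and the delays (0.003 s and 0.007 s) are taken from the text.

```python
# Sketch of the transfer-velocity estimate: the delay between the impulse maxima
# recorded by two sensors spaced 0.4*d apart gives the convective velocity.
import numpy as np

D_VALVE = 0.025              # valve diameter d, m (from the text)
SPACING = 0.4 * D_VALVE      # distance between the two sensors, m (10 mm)

def transfer_velocity(p1, p2, fs):
    """Velocity from the lag that maximizes the cross-correlation of two records.
    p1 is the upstream sensor, p2 the downstream sensor, fs the sampling rate."""
    p1 = p1 - np.mean(p1)
    p2 = p2 - np.mean(p2)
    xcorr = np.correlate(p2, p1, mode="full")
    lag = np.argmax(xcorr) - (len(p1) - 1)   # samples by which p2 trails p1
    return SPACING / (lag / fs)

# The delays reported in the text reproduce the quoted velocities:
for delay, regime in [(0.007, "open valve"), (0.003, "semiclosed valve")]:
    print(f"{regime}: {SPACING / delay:.1f} m/s")
# -> open valve: 1.4 m/s; semiclosed valve: 3.3 m/s
```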
Changes of the root-mean-square (RMS) values of the pressure fluctuations along a side jet pulsing with a frequency of 1 Hz are shown in Figure 9(a) for the operating conditions of the mitral heart valves of the teenager and the small person. Here, curves 1 and 2 were measured at 50% of the pump power, and curves 3 and 4 at 71% of the pump power. Curves 1 and 3 were measured for the open mitral valve, and curves 2 and 4 for the semiclosed valve. The integral characteristics of the hydrodynamic noise were measured in the wake of the bileaflet valve while the sensor block was traversed along the side jet. The RMS values of the pressure fluctuations in the wake of the side jet of the mitral valve of the small person are more than 1.5 times higher than those of the teenager practically throughout the jet. During the pulsating flow through the semiclosed valve, the hydrodynamic noise of the side jet is higher than when the valve is open. This correlates with studies of a stationary water flow through the mitral valve [33, 35, 37]. The pressure fluctuation levels gradually decrease as the sensors are moved away from the mitral valve.

Figure 9 RMS of the pressure fluctuations (a) and their spectral levels (b).

The spectral levels of the pressure fluctuations along a pulsating side jet that flowed out of the semiclosed mitral valve, when simulating the diastole of the teenager's heart, are shown in Figure 9(b). Here, curve 1 was measured at a distance d from the mitral valve, curve 2 at a distance of 1.1 d, curve 3 at a distance of 1.2 d, curve 4 at a distance of 1.4 d, and curve 5 at a distance of 1.8 d; curve 6 is the ambient noise. According to the research results, in the frequency range up to 20 Hz the dynamic measurement range exceeds 30 dB. As the distance from the mitral valve increases, the spectral levels of the hydrodynamic noise decrease, especially strongly at the frequency of the pulsating water flow and its higher harmonics. Higher-order harmonics are formed by the nonlinear interaction between the vortex and jet flows that pass through the heart valve and by their interaction with the components of the valve and heart. The spectral power densities of the pressure fluctuations measured in the near wake of the mitral valve along the central jet pulsing with a frequency of 1 Hz are shown in Figure 10 for the model conditions of the diastole of the teenager and the small person. Here, curves 1 and 2 were measured by a pressure transducer at a distance d from the mitral valve, and curves 3 and 4 by a pressure transducer at a distance of 1.4 d. Curves 1 and 3 were measured for the operating conditions of the semiclosed valve, and curves 2 and 4 for the operating conditions of the open valve. In the spectral dependences, discrete peaks were observed at the frequency of the pulsating flow and its higher harmonics. The spectral levels of the pressure fluctuations when the diastole of the small person was simulated are higher than when the diastole of the teenager was simulated, over the entire frequency range of the research.

Figure 10 The spectral power densities of the pressure fluctuations in the near wake of the central jet of the mitral valve for the operating conditions of the pump at 50% (a) and 71% (b).

The research results showed that the RMS values of the pressure fluctuations of the hydrodynamic noise of the side and central jets moving away from the mitral heart valve are larger for the small person than for the teenager, as illustrated in Figures 9(a) and 11(a). Figure 11(a) shows the ratio of the RMS values of the pressure fluctuations along the side jet downstream of the semiclosed and open valve.

Figure 11 The ratios of the RMS values of the pressure fluctuations (a) and of the spectral levels of the hydrodynamic noise (b) in the near wake of the side jet of the mitral heart valve.

Curve 1 was measured for the operating conditions of the mitral heart valve of the teenager, and curve 2 for the mitral heart valve of the small person. The greatest difference in the pressure fluctuations was observed in the near wake of the valve; with increasing distance from the valve, the ratio of the RMS pressure fluctuations gradually decreases. The ratios of the spectral power densities of the pressure fluctuations in the near wake of the side jet of the semiclosed and open valve for the operating conditions of the heart valve of the small person are shown in Figure 11(b). Curve 1 was measured by a sensor at a distance d from the mitral valve, and curve 2 by a sensor at a distance of 1.4 d from this valve.
The highest ratio of the pressure fluctuation levels (almost 5 times) was observed at the second harmonic of the frequency of the pulsating water flow. Along with this, as the distance from the mitral heart valve increases, the ratios of the spectral components of the hydrodynamic noise of the semiclosed and open valve decrease.

## 4. Conclusions

(1) It was found that the intensity of the hydrodynamic noise of the pulsating side jet in the near wake of the semiclosed bileaflet mechanical mitral heart valve is almost 1.5 times higher than that of the open valve. The levels of the pressure fluctuations of the hydrodynamic noise gradually decrease with distance from the mitral valve.

(2) It was established that the maximum transfer velocity of wave E of diastole inside the left ventricular model is almost 1.4 m/s for the operating conditions of the open mitral valve of the small person and about 3.3 m/s for the operating conditions of the semiclosed heart valve.

(3) It was registered that the spectral levels of the pressure fluctuations when the diastole of the heart valve of the small person is simulated are higher than when the diastole of the heart valve of the teenager is simulated, over the entire frequency range of the research (from 0.1 Hz to 1 kHz). In the spectral dependences of the hydrodynamic noise, discrete peaks were observed at the frequency of the pulsating water flow and at its higher harmonics. With increasing distance from the mitral valve, the spectral levels of the hydrodynamic noise decreased, especially strongly at the frequency of the pulsating flow and its higher harmonics.

---
*Source: 1024096-2020-05-30.xml*
2020
# Recent Progress, Advancements, and Efficiency Improvement Techniques of Natural Plant Pigment-Based Photosensitizers for Dye-Sensitized Solar Cells

**Authors:** Eneyew Tilahun Bekele; Yilkal Dessie Sintayehu
**Journal:** Journal of Nanomaterials (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1024100

---

## Abstract

Production of green energy using environmentally friendly and cost-effective components is attracting the attention of the research world and is found to be a promising approach to replacing nonrenewable energy sources. Among the green energy sources, dye-sensitized solar cells (DSSCs) are found to be the most promising alternative for reducing the energy demand crisis in the current situation. The efficiency of DSSCs depends on numerous factors, such as the solvent used for dye extraction, the anode and cathode electrodes, the thickness of the film, the electrolyte, the dye, and the nature of the FTO/ITO glasses. The efficiency of synthetic dye-based DSSCs is higher than that of their natural counterparts. However, it has been found that many of the synthetic sensitizers used in DSSCs are toxic, and some of them are found to be carcinogenic in nature by forming complexing agents. Instead, using various parts of green plants, such as leaves, roots, stems, peel waste, flowers, various spices, and mixtures of them, would be highly environmentally friendly and reasonably efficient. The present review focuses on and summarizes the efficiency-affecting factors, the various categories of natural sensitizers, and solvent effects. Furthermore, the review assesses the experimentally and computationally obtained values and their progress in development.

---

## Body

## 1. Introduction

Energy is an important and basic pillar of life activities across the globe. Basically, it exists in different forms, from burning wood to obtain fire in prehistoric times to producing electricity in modern society. But it has been found that the original sources of energy that people used to harvest in their day-to-day activities have shown signs of deficiency due to the rapid growth in industrialization, overgrowth in population size, the advancement of infrastructure, and the improvement of humans' basic needs [1]. Rising concerns about the energy crisis, climate change, shortages of fossil fuels, and current environmental issues are motivating researchers to focus on clean, sustainable, and renewable energy resources that will help to secure future sustainability. Previously, it has been reported that energy produced from nonrenewable products contributed almost one-third of global greenhouse gas emissions [2, 3]. Nonrenewable energy products from fossil fuels, petroleum liquids, coal, and natural gas were considered the dominant source of energy production for the world economy in the past century. However, in view of the fossil fuel crisis, the rising per-barrel cost of crude oil, and the rejection of pollution-causing energy sources, sustainable forms of energy are becoming the center of attention worldwide [4–6].

### 1.1. Renewable Sources of Energy

The common conventional sources of energy based on oil, coal, and natural gas have proven to be highly efficient and effective products for economic progress, but they may also damage the natural ecosystem and human health.
The fast depletion of fossil fuels and climate change issues have turned attention to renewable energy sourced from solar, wind, geothermal, hydroelectric, biomass, nuclear power, and tide, which are some of the examples commonly available throughout the world [7]. Renewable energy sources can provide sustainable energy services based on the use of routinely available starting materials and indigenous resources, which are available within the natural environment at minimum cost [8–10]. Solar energy provides a clean, renewable, and cheaper energy source for the human race. These concepts are supported by projections of the world's energy consumption covering the years from 1990 to 2040. From this projection, more than 60% of energy will be dominated by solar energy in the years between 2090 and 2100 [11]. Figure 1 describes some common categories of both renewable and nonrenewable energy sources.

Figure 1 Education chart of (a) renewable and (b) nonrenewable sources of energy (copyright: Vecton, image ID: 98899382, media type: stock photo).

Energy from sunlight is capable of producing heat and light, causing photochemical reactions, and generating electricity. When sunlight strikes the earth's surface, it provides 3.8 million EJ of energy a year (i.e., collecting the total solar energy received in one hour would satisfy the human energy demand for one year) [12]. Light energy conversion applications are divided into three categories: photovoltaics for the direct conversion of sunlight into electricity, as well as concentrating solar power systems and solar thermal collectors, which employ solar thermal energy. All of this energy was modeled in an attempt to produce an efficient power output [11]. Photovoltaic (PV) technology has sparked a lot of interest due to the benefits of lower manufacturing costs and environmental safety. Its panel efficiency strongly depends on the surface temperature of the cell and decreases with increasing temperature [13]. Up to now, commercially available PV technologies have been based on inorganic materials, which require high costs and highly energy-consuming preparation methods. In addition, several of those materials are toxic and have low natural abundance. Organic PV can be used to avoid those problems. However, the efficiencies of organic-based PV cells are still, at the moment, a long way behind those obtained with purely inorganic-based PV technologies. Hence, the benefit and significance of solar energy is that sunlight can be directly harvested into electricity with the use of small PV solar cells [14]. Silicon-based, polymer, quantum dot, dye-sensitized, and perovskite solar cells are among these and attract researchers worldwide [15, 16]. Figure 2 demonstrates the classification of solar cells based on the materials of which they are made.

Figure 2 Categories of solar cells and current trends of development [17].
## 2. Dye-Sensitized Solar Cells (DSSCs)

Before dye-sensitized solar cells (DSSCs), silicon-based solar cells were the most popular and dominant source of photovoltaic energy [18]. These solid-state junction devices dominated the photovoltaic industry. O'Regan and Grätzel [19] developed DSSCs, a new type of third-generation solar cell, in 1991; they are also known as a green alternative energy source owing to their potential applications and cost-effectiveness. Moreover, this class of device involves the use of green alternative solvents during its fabrication, does not pollute the natural environment with emissions such as greenhouse gases, and draws on a source that is uniformly distributed as compared to other forms of energy.
One of the most notable features of this technology generation is its long-term stability and environmentally friendly energy; it belongs to the thin-film solar cell group [19, 20]. DSSCs are thus considered one of the most promising next-generation devices for future energy demand and environmental remediation solutions. Their basic components include a porous semiconductor material loaded with sensitizer on a glass substrate (FTO/ITO), a redox couple electrolyte, and a counterelectrode [21, 22]. One form of modification made by previous researchers in the working electrodes is to add metal and metal oxide materials to semiconductors, with metals such as Cr, Zr, Ni, Fe, Cu, and Ag and oxides such as CuO, ZnO, and TiO2 [23, 24]. In addition, a number of factors limit the performance of DSSCs. As such, the absorption of a large fraction of the incident solar light by the photoactive layer of a dye-sensitized photoanode, the use of wide light absorption bands, and cosensitization of the photoanode are important for achieving a high-performance, efficient device. Ragoussi and Torres also reported on the molecular orbital levels, absorption coefficients, morphology of the layers, and molecular diffusion lengths as the other main factors that affect certified power conversion efficiency [25]. Figures 3(a) and 3(b) illustrate the fundamental operating principle of DSSCs and the resulting charge transfer process between the sensitizer and the photoelectrode system. As described in Figure 3(a), sensitizers may achieve charge injection into the corresponding photoelectrode by both direct and indirect sensitization, as shown in Figure 3(b), thereby rendering the response more panchromatic [26]; this determines the charge collection properties of DSSCs, which in turn alter the photocurrent density, photovoltage, and solar energy conversion efficiency when the cell size is optimized without affecting environmental safety [27]. However, this has not yet come to fruition; a pure direct sensitization protocol should be adopted, since it could increase DSSC efficiency by eliminating the electron injection overpotential, which represents the energy loss due to thermalization from the excited state of the dye (D∗) in the other (indirect) process [28].

Figure 3 Schematic representation of the components and of the basic operating principle of DSSCs [29] (a) and schematic illustration of two similar types of visible light sensitization of TiO2. (i) Dye sensitization (indirect): (1) excitation of the dye by visible light absorption, (2) electron transfer from the excited state of the dye to the TiO2 CB, (3) recombination, (4) electron transfer to the acceptor, and (5) regeneration of the sensitizer by an electron donor. (ii) Ligand-to-metal charge transfer (LMCT sensitization) (direct): (1) visible light-induced LMCT, (2) recombination, (3) electron transfer to the acceptor, and (4) regeneration of adsorbates by an electron donor. S, D, and A represent the sensitizer (or adsorbate), electron donor, and electron acceptor, respectively (S0: ground state; S∗ and S1: excited state of the sensitizer/adsorbate) (b) [28].

Previously, much scientific research work was reported on the fabrication and assembly of DSSCs using their various components, such as the working electrode, counterelectrode, and sensitizers/dyes (natural dyes, metal-free organic dyes, and metal complex/inorganic dyes), and on the corresponding cell performance [30–34].
This is due to their good performance under diffuse solar irradiation, low-cost manufacturing, fabrication simplicity, and environmentally friendly constituent materials. But the low energy conversion and short-term operational stability are still a challenge for commercial use as compared with silicon-based solar cells, which have already been commercialized [35]. However, to the knowledge of the researchers, there is no scientific report that summarizes and shows the effect of the various components of the cell. Moreover, the novelty of this review work lies in showing and focusing on the different types of natural products as green, environmentally friendly, and cost-effective sensitizers for DSSCs. Furthermore, the work intends to summarize the effect of the solvent on the extraction of the dye and the effect and nature of the different parts of natural plants, such as roots, flowers, stems, and leaves, owing to their various bioactive photosensitive molecules [24, 36]. Hence, the aim of this review is to focus on performance-affecting parameters, to show the recently achieved solar cell efficiencies, to propose future scientific directions for the use of DSSCs in homemade applications, and to extend their industrialization, including an assessment of how to optimize the device for commercialization in large-scale production.

## 3. Basic Elements of DSSCs

In the old generation of photoelectrochemical solar cells (PSCs), photoelectrodes were fabricated from bulky semiconductor materials such as Si, GaAs, or CdS. But these kinds of photoelectrodes are highly affected by photocorrosion, which results in poor stability of the photoelectrochemical cell [37]. Instead, sensitized wide-band-gap semiconductors derived from metal oxides such as TiO2, ZnO, and niobium oxide, as well as carbon materials, bilayer assemblies, and their composites, have been used as the major photoelectrodes in DSSCs, as shown in Figure 4 [38, 39]. The next section discusses the basic structure of DSSCs, which is composed of different layers as compared to conventional silicon-based solar cells: the photoanode/photoelectrode/working electrode (WE), the counterelectrode (CE), the electrolyte, and the sensitizer (both synthetic/complex and natural dyes) [35, 38, 40, 41].

Figure 4 Band positions of the most common semiconductors [38].

### 3.1. Photoanode

The WE, or indicator electrode, is the most important component, with the function of absorbing radiation. As a criterion, the electrode consists of a dye-sensitized layer of nanocrystalline semiconductor metal oxide having a wide band gap and being transparent enough to pass light to the sensitizer. Many semiconductor materials, either nanostructured or bulk, such as TiO2, Al@TiO2, TiO2-Fe, ZnO, SiO2, Zn2SnO4, CeO2, WO3, SrTiO3, and Nb2O5, have been used as photoelectrode scaffold materials in DSSCs [26]. However, reports showed that TiO2 and ZnO and their composite/doped photoelectrodes are the most widely used photoanode materials due to their large band gap energy, inexpensiveness, nontoxic nature, accessibility, and photoelectrochemical stability. Their methods of preparation are simple and achievable with environmentally friendly materials in the presence of green solvents, and they show promising high efficiencies compared to their counterparts. Recently, TiO2 has become the most popular metal oxide semiconductor in DSSCs, followed by ZnO and SnO2 [38].
Rajendhiran et al. [42] have reported green-synthesized TiO2 nanoparticles prepared by the sol-gel method using Plectranthus amboinicus leaf extract; the prepared nanoparticles were coated onto an ITO substrate by the doctor blade approach. The assembled DSSCs exhibited a higher solar-to-electrical energy conversion efficiency, reaching 1.3% with the Rose Bengal organic dye sensitizer, owing to the surface modification of the synthesized nanoparticles. In addition, substrates such as fluorine-doped tin oxide (FTO) and indium-doped tin oxide (ITO) support TiO2 in the photoanode [9]; because of the scarcity, rigidity, and brittleness of ITO, low-cost FTO and graphene have been chosen as alternatives owing to their unique structural defects with rough surfaces, which help to solve problems with short circuits and leakage currents [39, 43]. Low and Lai [44] designed an efficient photoanode from reduced graphene oxide- (rGO-) decorated TiO2 materials. The UV-Vis diffuse reflectance spectra show varying absorption with increasing deposition duration of TiO2 on rGO in the ultraviolet (200 to 400 nm) and visible (400 to 700 nm) regions. From the observed spectra, the rGO/TiO2 sample with a spinning duration of 30 seconds exhibited the optimum light-absorbing ability, as shown in Figure 5(a). As shown in Figure 5(b), the electrochemical parameters JSC, VOC, FF, and η depended on the spinning duration. The efficiency (η) of the assembled DSSCs increases from the 10-second to the 30-second spinning duration (4.74% to 9.98%, respectively). This is due to the modification of the surface area and photoelectrochemical stability, which in turn enables the adsorption of more dye molecules, followed by the absorption of more light on the surface of the composite photoanode. Thus, at the optimized spinning duration of 30 seconds, the best device parameters were JSC = 22.01 mA cm-2, VOC = 0.79 V, FF = 57%, and η = 9.98%.

Figure 5 UV-Vis diffuse reflectance spectra (a) and J-V curves of photoanodes based on pure rGO and rGO/TiO2 with spinning durations of 10 s, 20 s, 30 s, 40 s, and 50 s (b) [44].

Furthermore, Gao et al. [45] developed a nitrogen-doped TiO2/graphene nanofiber (G-T-N) as an alternative green photoelectrode and evaluated the different photovoltaic parameters of the device's performance, as shown in Figure 6. While nitrogen doping can prevent the in situ recombination of electron-hole pairs, graphene doping increases the surface area of the TiO2 fibers and the number of dye adsorption active sites, so that more electrons are injected into the semiconductor conduction band from the excited state of the dye, thereby improving the photoelectric conversion efficiency. The open circuit voltage (VOC, V), short circuit current density (JSC, mA cm-2), fill factor (FF), and η (%) values for the TiO2/graphene and N@TiO2/graphene nanofiber photoelectrode-based DSSCs were found to be 0.66, 17.48, 0.35, and 3.97 and 0.71, 15.38, 0.46, and 5.01, respectively [46].

Figure 6 IPCE curves (a) and J-V curves of DSSCs with different photoanodes (b) [45].
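The device parameters quoted throughout this review are linked by the standard photovoltaic relation η = (JSC × VOC × FF)/Pin, with Pin = 100 mW cm-2 under AM 1.5G illumination. The following minimal Python sketch of this relation is included only to cross-check the quoted values; it is not a procedure from the cited works.

```python
# Standard power-conversion-efficiency relation for a solar cell under AM 1.5G.
def efficiency(jsc_ma_cm2, voc_v, ff):
    """Efficiency in %, for AM 1.5G illumination (Pin = 100 mW/cm^2).
    jsc in mA/cm^2, voc in V, ff as a fraction."""
    p_in = 100.0                        # incident power density, mW/cm^2
    p_max = jsc_ma_cm2 * voc_v * ff     # maximum output power density, mW/cm^2
    return 100.0 * p_max / p_in

# Cross-checking the optimized rGO/TiO2 cell quoted above:
print(f"{efficiency(22.01, 0.79, 0.57):.2f} %")   # ~9.9 %, close to the reported 9.98 %
```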
### 3.2. Counterelectrode

The counterelectrode (CE, cathode) is where the reduction of the redox mediator occurs. It collects electrons from the external circuit and injects them into the electrolyte to catalyze the reduction of I3− to I− in the redox couple for dye regeneration [47]. The primary function of the CE in the DSSC system is to act as a catalyst that promotes the completion of the cycle: the oxidized redox couple is reduced by accepting electrons at the surface of the CE, and the oxidized dye is in turn reduced by collecting electrons via the ionic transport materials. The second role of the CE in DSSCs is to act as the positive electrode; it collects electrons from the external circuit and transmits them into the cell. Additionally, the CE can serve as a mirror, reflecting unabsorbed light back into the cell to enhance the utilization of sunlight [40]. The most commonly used CE material is Pt on a conductive ITO or FTO substrate, owing to its excellent electrocatalytic activity for I3− reduction, its high electrical conductivity for efficient electron transport, and its high electrochemical stability in the electrolyte system. Most research work thus uses expensive platinum as the CE, but this limits employability in large-scale production. Therefore, to address these limitations, several materials derived from inorganic compounds, carbonaceous materials, and conductive organic polymers have been investigated as potential alternatives to replace or modify Pt-based cathodes in DSSCs. Along these lines, less expensive copper has been used as the CE in large-scale industrial applications [39]. Huang et al. [48] have worked on biochar from lotus leaf produced by one-step pyrolysis as a flexible CE to replace platinum. In their studies with the same photoanode, a maximum power conversion efficiency (PCE) of 0.15% was produced with the biochar CE in the presence of lotus leaf extract as a photosensitizer, while a PCE of 0.36% was produced. In the same manner, a PCE of 0.13% was produced when graphite was used as the CE with a lotus leaf extract photosensitizer-modified TiO2-FTO photoanode [48]. It can be concluded that graphite presents feasible potential as an alternative to platinum owing to its affordable cost and performance output, achieving a higher efficiency (0.0385%) than FTO glass and platinum with the Strobilanthes cusia photosensitizer [49]. Kumar et al. [50] designed and fabricated a new cost-effective, enhanced-performance CE using a carbon material produced with the organic ligand 2-methyl-8-hydroxyquinolinol (Mq). The carbon-derived Mq CE-based DSSCs show a short circuit current density of 11.00 mA cm-2, a fill factor of 0.51, and an open circuit voltage of 0.75 V, with a conversion efficiency of 4.25%. As a reference, a Pt CE was used and provides a short circuit current density (Jsc) of 12.40 mA cm-2, a fill factor of 0.68, and an open circuit voltage of 0.69 V, with a conversion efficiency of ≈5.86%. As supported by Figure 7(a), the lower cell performance of the carbon-derived Mq CE could be attributed to the strong electrostatic interactions between the carbon atoms and I- or I3-, with a higher concentration of mediator anions in proximity to the carbon surface, which increases the regeneration and recombination rates [50]. Due to their low surface area, low stability, and low catalytic activity, single-material CEs give lower device performance than composite and doped CE-based DSSCs. To improve on this, Younas et al. [51] prepared a high-mesopore carbon-titanium oxide composite CE (HMC-TiO2) for the first time and investigated the various cell photovoltaic parameters. The report shows a Jsc of 16.1 mA cm-2, an FF of 68%, and a VOC of 0.808 V, with a conversion η of ≈8.77%.
As the obtained DSSC parameter values show, the HMC-TiO2 composites display high electrocatalytic activity and could be taken as a promising CE. In addition, Song et al. [52] discuss the role of iron pyrite (FeS2), in the presence and absence of basic NaOH solution, as one of the most promising counterelectrode materials for dye-sensitized solar cells. FeS2 CE-based DSSCs without NaOH addition provide a PCE of 4.76% and a JSC of 10.20 mA cm-2, with a VOC of 0.70 V and an FF of 0.66. In the presence of NaOH, the FeS2 CE-based DSSCs had a JSC of 12.08 mA cm-2, a VOC of 0.74 V, an FF of 0.64, and a PCE of 5.78%. As a control, a Pt CE was also investigated and shows a JSC of 11.58 mA cm-2, a VOC of 0.74 V, an FF of 0.69, and a resulting PCE of 5.93%. The improvement in the photovoltaic parameters of the DSSCs, shown in Figure 7(b), indicates that more electrons are generated in the device due to the presence of NaOH, which is consistent with the JSC [53].

Figure 7 J-V spectra of HMC-TiO2 (a) and FeS2 (A: without NaOH, B: with NaOH, and C: Pt CE-based DSSCs) (b) [50, 52].

### 3.3. Electrolyte

A good electrolyte should have high electrical and ionic conductivity, good interfacial contact with the nanocrystalline semiconductor and counterelectrode, no tendency to degrade the dye molecules, transparency to visible light, a noncorrosive character toward the counterelectrode, high thermal and electrochemical stability, a high diffusion coefficient, low vapor pressure, appropriate viscosity, and ease of sealing, without suppressing charge carrier transport [54, 55]. Liquid electrolytes, solid-state electrolytes, quasisolid electrolytes [30], and water-based electrolytes [56] are the common redox mediators (electrolytes) found in DSSCs. Liquid electrolytes are either organic (a redox couple, a solvent, and additives) or ionic liquid-based. Quasisolid electrolytes are good candidates for DSSCs due to their optimum efficiency and durability, high ionic conductivity, long-term stability, and excellent interfacial contact, similar to liquid electrolytes. The most important components are the redox couples, such as I-/I3-, Br-/Br3-, SCN-/(SCN)2, Fe(CN)6 3-/4-, SeCN-/(SeCN)2, and substituted bipyridyl cobalt (III/II) [57], which are directly linked to the VOC of DSSCs. Due to its better solubility, fast dye regeneration, low light absorption in the visible region, appropriate redox potential, and very slow recombination between the electrons injected into the nanocrystalline semiconductor and I3-, I-/I3- is the most popular redox couple electrolyte [40]. A good solvent is responsible for the diffusion and dissolution of the I-/I3- ions; among these solvents, acetonitrile, ethylene carbonate, propylene carbonate, 3-methoxypropionitrile, and N-methylpyrrolidone are common. A solvent with a high donor number can increase the VOC and decrease the JSC by lowering the concentration of I3-; the lower I3- concentration helps to slow the recombination rate and, as a result, increases the VOC. The second type of liquid electrolyte is the ionic liquid electrolyte, with cations such as pyridinium and imidazolium and anions from the halide or pseudohalide family. These electrolytes show high ionic conductivity, nonvolatility, good chemical and thermal stability at room temperature, and a negligible vapor pressure, which are favorable for efficient DSSCs [30, 58]. However, DSSCs are affected by the evaporation and leakage of liquid electrolytes.
To overcome these drawbacks, novel solid or quasisolid-state electrolytes, such as hole transport materials, p-type semiconductors, and polymer-based gel electrolytes, have been developed as potential alternatives to volatile liquid electrolytes [59]. A quasisolid electrolyte is a means of solving the poor contact between the photoanode and the hole transport material found in solid-state electrolytes. This electrolyte is composed of a composite of a polymer and a liquid electrolyte that can penetrate into the photoanode to make good contact. Interestingly, it has better stability, high electrical conductivity, and, especially, good interfacial contact when compared to the other types of electrolyte. However, the quasisolid electrolyte has one particular disadvantage: it is strongly dependent on the working temperature of the solar cell, where high temperatures cause a phase transformation from the gel state to the solution state. Selvanathan et al. [60] used starch and cellulose derivative polymers as quasisolid electrolytes, contributing to an optimized efficiency of 5.20%. Saaid et al. [61] prepared a quasisolid-state polymer electrolyte by incorporating poly(vinylidene fluoride-co-hexafluoropropylene) (PVdF-HFP) into a propylene carbonate (PC)/1,2-dimethoxyethane (DME)/1-methyl-3-propylimidazolium iodide (MPII) system and examined the dependency of the photovoltaic parameters on the fabricated electrolyte, shown in Figure 8(a). It was observed that the cell photovoltaic parameters depend on the amount of added PVdF-HFP polymer. Before the addition of any PVdF-HFP polymer, the corresponding Jsc, VOC, FF, and η were found to be 11.24 mA cm-2, 619 mV, 70%, and 4.88%, respectively. Following the addition of 0.1, 0.2, 0.3, and 0.4 g of PVdF-HFP polymer, the cell performance was found to be (9.53, 9.53, 7.54, and 6.57) mA cm-2 in Jsc, (638, 641, 679, and 684) mV in VOC, (67, 66, 64, and 61)% in FF, and (4.09, 3.70, 3.27, and 2.73)% in η, respectively. It was demonstrated that as the amount of polymer in the cell increases, the performance of the cell decreases gradually. Moreover, Lim et al. [62] designed new quasisolid-state electrolytes using coal fly ash-derived zeolite-X and -A, as shown in Figure 8(b), achieving a VOC of 0.74 V, a JSC of 13.7 mA/cm2, and an FF of 60% with an η of 6.0%, and 0.73 V, 11.4 mA/cm2, and 60% FF, respectively. However, it was found that zeolite-X&AF quasisolid-state electrolyte-based DSSCs show a VOC of 0.72 V, a JSC of 11.1 mA/cm2, an FF of 61%, and an η of 4.8%. The enhancement of the cell photovoltaic parameters in the case of the zeolite-XF12 quasisolid-state electrolyte could be attributed to its highly crystalline nature, high light harvesting efficiency, the reduction of resistance at the photoanode/electrolyte interface, and the decrease of the charge recombination rate. As a control, a nanogel polymer was used as the electrolyte and yielded 0.66 V, 10.3 mA/cm2, 56% FF, and 3.8% η.

Figure 8 J-V curves of PVdF-HFP/PC/DME/MPII (a) with A: 0.0, B: 0.1, C: 0.2, D: 0.3, and E: 0.4 g of PVdF-HFP, and photocurrent density-photovoltage curves of DSSCs based on nanogel and quasisolid-state electrolytes under 1 sun illumination (AM 1.5G, 100 mW cm-2) (b) [48, 49].
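For reference, the following minimal Python sketch shows how JSC, VOC, FF, and η are read off a measured J-V sweep such as those in Figure 8. The curve used here is synthetic (an assumed single-diode-like shape), so only the extraction logic is meaningful.

```python
# Sketch: extracting Jsc, Voc, FF, and eta from a J-V sweep.
import numpy as np

def jv_parameters(v, j, p_in=100.0):
    """Jsc, Voc, FF, and eta (%) from a J-V sweep.
    v in V (increasing), j in mA/cm^2 (decreasing), p_in in mW/cm^2."""
    jsc = np.interp(0.0, v, j)      # current density at short circuit (v = 0)
    voc = np.interp(0.0, -j, v)     # voltage at open circuit (j = 0); -j increases with v
    p_max = np.max(v * j)           # maximum power point
    ff = p_max / (jsc * voc)
    return jsc, voc, ff, 100.0 * p_max / p_in

# Synthetic diode-like curve with Jsc = 13.7 mA/cm^2 and Voc = 0.74 V (illustrative):
JSC, VOC, VT = 13.7, 0.74, 0.06
v = np.linspace(0.0, VOC, 500)
j = JSC * (1.0 - np.expm1(v / VT) / np.expm1(VOC / VT))
jsc, voc, ff, eta = jv_parameters(v, j)
print(f"Jsc={jsc:.1f} mA/cm2, Voc={voc:.2f} V, FF={ff:.2f}, eta={eta:.1f} %")
```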
### 3.4. Photosensitizer

The other core components of DSSCs are the photosensitizers, which play a great role in the absorption of solar photons. That is, dyes play a prominent role in harvesting (absorbing) the incoming light and injecting the photoexcited electrons into the conduction band of the semiconducting material to convert solar energy into electrical energy (i.e., the dye is responsible for absorbing the incident solar energy and converting it into electrical energy) [63, 64]. This enables the production of renewable power systems, the management of power sustainability, and the achievement of a reliable and stable network output power distribution [58]. The dye is chemically bonded to the porous surface of the semiconductor material and determines the efficiency and general performance of the device [47]. The possibilities of some organic dyes, polymer dyes, and natural dyes have been reported, with great relative cost-effective potential for industrialization [11]. To be effective, a photosensitizer should have a broad and intense absorption spectrum that covers the entire visible region, high adsorption affinity to the surface of the semiconducting layer, excellent stability in its oxidized form, low cost, and low threat to the environment. Furthermore, its LUMO level, i.e., the excited state level, must be higher in energy than the conduction band edge of the semiconductor for efficient electron injection into the conduction band of the semiconductor. Also, its HOMO level, i.e., the oxidized state level, must be lower in energy than the redox potential of the electrolyte to promote dye regeneration.
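These two alignment conditions can be written as simple inequalities on the vacuum energy scale, as in the following Python sketch. The TiO2 conduction band edge (≈ -4.0 eV) and the I-/I3- redox level (≈ -4.8 eV) used here are commonly quoted illustrative values, and the example dye levels are assumptions, not data from the cited works.

```python
# Sketch of the two level-alignment conditions for a viable DSSC dye,
# expressed on the vacuum energy scale (eV). Values below are assumed.
TIO2_CB_EV = -4.0    # conduction band edge of TiO2 (illustrative)
REDOX_EV = -4.8      # I-/I3- redox couple level (illustrative)

def dye_is_viable(homo_ev, lumo_ev, cb_ev=TIO2_CB_EV, redox_ev=REDOX_EV):
    """Electron injection needs the LUMO above the CB edge; dye regeneration
    needs the HOMO below the redox level (more negative = lower energy)."""
    injection_ok = lumo_ev > cb_ev
    regeneration_ok = homo_ev < redox_ev
    return injection_ok and regeneration_ok

# A hypothetical anthocyanin-like dye with HOMO = -5.3 eV and LUMO = -3.1 eV:
print(dye_is_viable(homo_ev=-5.3, lumo_ev=-3.1))   # True
```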
The most commonly used photosensitizers are categorized into three groups: metal complex sensitizers, metal-free organic sensitizers [65], and natural sensitizers. Metal complex sensitizers provide relatively high efficiency and stability owing to having both anchoring and ancillary ligands. Modifications of these two ligands to improve the efficiency of solar cell performance have been reported; these ligands facilitate charge transfer in metal-to-ligand bonds. The most common metal complex-based photosensitizers are ruthenium complexes and their cosensitized configurations [56], owing to their wide absorption range, i.e., from the visible to the near-infrared region, which gives them superior photon harvesting properties. However, these complexes require multistep synthesis reactions (i.e., long synthesis and purification steps), and they contain a heavy metal that is expensive (i.e., requires high production costs), scarce, and toxic. These problems can be overcome by applying metal-free organic dyes in DSSCs instead of metal complex sensitizers. Trihutomo et al. [66] explained that using a natural dye as a photosensitizer has the problem of producing lower efficiency than silicon solar cells due to the barrier to electron transfer in the TiO2 semiconductor layer. A donor-acceptor-substituted conjugated bridge (D-π-A) is used in the design of a metal-free organic sensitizer [30, 47]. The properties of a sensitizer vary with the electron-donating ability of the donor part and the electron-accepting ability of the acceptor part, as well as with the electronic characteristics of the π bridge. At present, most of the π-bridge conjugated parts in organic sensitizers are based on oligoene, coumarin, oligothiophene, fluorene, and phenoxazine. The donor part has been synthesized with a dialkylamine or diphenylamine moiety, while a carboxylic acid, cyanoacrylic acid, or rhodanine-3-acetic acid moiety is used for the acceptor part. As shown in Figure 9 [30], the sensitizer anchors onto the porous network of nanocrystalline TiO2 particles via the acceptor part of the dye molecule. However, metal-free organic sensitizers (organic dyes) have the following disadvantages: strong π-stacked aggregates between D-π-A dye molecules on semiconductor surfaces, which reduce the electron-injection yield from the dyes to the conduction band of the nanocrystalline semiconductor; narrower absorption bands compared to metal-based sensitizers, which reduce the light absorption capability; and low stability, owing to the sensitizer's tendency to decay with time, which limits the anode lifetime [30, 67].

Figure 9 Designed structure of a metal-free organic dye [30].

In brief, the 3.2 eV energy band gap of the TiO2 semiconductor is responsible for absorbing ultraviolet light (i.e., its absorption of visible light is weak). As a result, natural dyes increase the overall sunlight absorption rate of DSSCs [68]. The light absorption efficiency of TiO2 is also enhanced by cosensitization, which enables better light harvesting across the solar spectrum [69]. Ananthakumar et al. [70] have also reviewed the energy transfer process from donor to acceptor through the Förster resonance energy transfer (FRET) process for improved absorption. Cosensitization is effectively achieved through the FRET mechanism, in which the dipole-dipole attraction of two chromophoric components occurs via an electric field. In this process, the absorption of light causes molecular excitation of the donor, and this excitation is transferred to a nearby acceptor molecule having lower excitation energy through the exchange of virtual photons. Here, the donor molecule nonradiatively transfers excitation energy to an acceptor molecule, as shown in Figure 10 [70].

Figure 10 Schematic diagram of the FRET process (a) and its mechanism (b) [70].
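The distance dependence behind this mechanism is captured by the standard Förster relation E = R0^6/(R0^6 + r^6), where R0 is the Förster radius of the donor-acceptor pair. A minimal Python sketch follows; the R0 value is an assumed illustrative number, not one from the cited review.

```python
# Sketch of the Forster relation for FRET transfer efficiency versus
# donor-acceptor distance r. R0 below is an assumed illustrative value.
def fret_efficiency(r_nm, r0_nm=5.0):
    """Fraction of donor excitations transferred to the acceptor at distance r."""
    return r0_nm**6 / (r0_nm**6 + r_nm**6)

for r in (2.5, 5.0, 10.0):   # nm
    print(f"r = {r:4.1f} nm -> E = {fret_efficiency(r):.2f}")
# E falls from ~0.98 at r = R0/2 to 0.50 at r = R0 and ~0.02 at r = 2*R0,
# which is why cosensitizers must sit close together on the TiO2 surface.
```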
To solve the problems found in both metal complex-based and metal-free organic dyes, researchers have focused on natural plant pigment-based photosensitizers [71]. As a result, metal-free dyes such as natural dyes (natural pigments) from different plant sources, such as fruits, roots, flowers, leaves, wood, algae, and bacterial pigments [72, 73], together with their organic derivatives, have attracted considerable research interest owing to their low cost, simple preparation, abundance in nature, nontoxicity, and high molar absorption coefficients [35, 74]. An efficient photosensitizer for DSSCs should possess several essential requirements [64]; requirement (i) is quantified in the sketch at the end of this subsection:

(i) A high molar extinction coefficient and a high panchromatic light absorption ability extending from the visible to the near-infrared

(ii) Functional groups that can anchor to the semiconductor; for example, the anthocyanin pigment from Eleiodoxa conferta and Garcinia atroviridis fruit contains hydroxyl and carboxylic groups in the molecule that can effectively attach to the surface of a TiO2 film [75]

(iii) Good HOMO/LUMO energy alignment with respect to the redox couple and the conduction band level in the semiconductor, which allows efficient charge injection into the semiconductor and, simultaneously, efficient regeneration of the oxidized dye

(iv) An electron transfer rate from the dye sensitizer to the semiconductor that is faster than the decay rate of the photosensitizer

(v) Stability under solar light illumination and continuous light soaking [76–78]

It is important to note that stable natural plant pigments extracted with effective solvents can absorb a broad range of visible light [79, 80], because the two most significant drawbacks of DSSCs are their narrow spectral response and short-term stability. Therefore, in this review work, different natural plant pigments extracted from different plant parts, such as leaves, roots, stems, barks, peel waste, flowers, various spices, and mixtures of them, with various solvents, as well as their stability and various experimental factors, are effectively discussed.

#### 3.4.1. Natural Plant Pigment Photosensitizers in DSSCs

The highest efficiency ever recorded for a DSSC was about 12%, using Ru(II) dyes with optimized material and structural properties. However, this efficiency is low when compared to the efficiencies of the first and second generations of solar cells (first-generation Si-based solar cells and thin-film solar cells), whose efficiencies are about 20-30% [11]. A ruthenium-based dye and platinum are the most common materials used as the photosensitizer and counterelectrode, respectively, in the production of DSSCs, the third generation of photovoltaic technologies. However, their high cost, the complexity and toxicity of ruthenium dyes, and the scarcity of platinum sources preclude their use in DSSCs [49]. Thus, an alternative way to produce cost-effective dyes on a large scale is to extract natural dyes from plant sources. The colors are due to the presence of various pigments that have proven to be efficient photosensitizers. Meanwhile, the colors and their transmittance by themselves can affect the energy generation performance. On this basis, the DSSCs currently being produced have better power generation efficiency as the visible light transmittance decreases, and the power generation efficiency follows the order red > green > blue [81]. It is reported that extracts of plant pigments also act simultaneously as photosensitizers and as reducing agents for nanostructure synthesis, which is useful for photoanode activity in solar devices (e.g., TiO2) [82]. In order to improve the energy conversion efficiency of natural photosensitizers, the blending of different dyes, copigmentation of dyes, acidification of dyes, and other approaches have been pursued by researchers, resulting in appreciable performance [83]. Based on the types of natural molecules found in plant products, such photosensitizers are classified into the carotenoid, betalain, flavonoid, or chlorophyll structural classes [30, 65, 83, 84].
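Requirement (i) can be made quantitative through the Beer-Lambert law, A = εcl, and the light-harvesting efficiency of a sensitized film, LHE = 1 - 10^(-A). The following minimal Python sketch uses assumed illustrative values for ε, the effective dye concentration, and the film thickness; none of these numbers come from the cited works.

```python
# Sketch linking the molar extinction coefficient of a dye to the
# light-harvesting efficiency of a sensitized film. All values assumed.
def absorbance(epsilon_m_cm, conc_mol_l, path_cm):
    """Beer-Lambert absorbance; epsilon in M^-1 cm^-1, c in mol/L, l in cm."""
    return epsilon_m_cm * conc_mol_l * path_cm

def light_harvesting_efficiency(a):
    """Fraction of incident photons absorbed at the given absorbance."""
    return 1.0 - 10.0 ** (-a)

# Hypothetical dye: epsilon = 1e4 M^-1 cm^-1, effective film loading 0.05 M,
# film thickness 10 um (1e-3 cm):
a = absorbance(1e4, 0.05, 1e-3)
print(f"A = {a:.2f}, LHE = {light_harvesting_efficiency(a):.2f}")   # A = 0.50, LHE = 0.68
```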
For stable adsorption onto the semiconductor substrate, sensitizers are typically designed with functional groups such as -COOH, -PO3H2, and -B(OH)2 [20]. These biomolecules have functional groups, such as carboxyl and hydroxyl that can easily react with the surface of nanostructured TiO2 that enables them to absorb sunlight. In particular, hydroxyl or carboxyl functional groups are strongly bound on the surface of TiO2 [85].The enchantment of efficiency from extraction of dyes from fresh purple cabbage (anthocyanin), spinach leaves (chlorophyll), turmeric stem (betaxanthin), and their mixture as a photosensitizer, with nanostructured ZnO-coated FTO substrates as a photoanode-based DSSCs. The photon to electrical power conversion efficiencies of purple cabbage, spinach, turmeric, and their mixed dyes are explored as 0.1015%, 0.1312%, 0.3045%, and 0.602%, respectively, under the same simulated light condition. The mixed dye reveals the stable performance of the cell with the highest conversion efficiency due to the absorption of an extensive range of the solar spectrum and well-suited electrochemical responses due to the fast electron transportation and lower recombination loss with longer electron lifetime found on mixed dyes [86].(1) Flowers. In DSSCs, the red/purple pigment in various leaves and flowers has been used as a sensitizer. Notably, an abundantly available organic dye is easily extracted from flowers and leaves, mainly responsible for light absorption in DSSCs [39]. The natural color pigments originated from organic dyes impart an anthocyanin group present in the different parts (e.g., flowers, leaves) of the plant. Hibiscus rosa-sinensis, a red pigment-containing flower with a higher concentration of anthocyanins, is used as a natural DSSC. In fact, the Malvaviscus penduliflorus flower is closely related to the Hibiscus family. However, potential research based on M. penduliflorus flower extracted dye in DSSCs is still lacking. The broader absorption of Hibiscus-extracted dye within 400-500 nm can further be enhanced using either concentrated dye solution or operating the sensitization process at an elevated temperature [39].Natural dyes from flowers can decrease the charge transfer resistance and are helpful in the better absorbance of light as well as enabling them to show absorption near to the red region. Therefore, efficient DSSCs using natural dyes are less toxic, disposed of easily, cost-effectively, and more environmentally friendly compared to organic dyes. Which is considered beneficial for future biosolar cell technology [87]. The performance of the DSSCs can be compensated by introducing a scattering layer or interfacial modification in the photoanode and concomitantly improving the broader spectrum wavelength range of light absorption to make it suitable for outdoor applications [39]. the presence of a series of conjugated double bonds from flower extracts helped to increase efficiency improvement. Raguram and Rajni [88] demonstrated that a flavanol pigment from the Allamanda blanchetti flower is responsible for red, purple, and blue colors, whereas a carotenoid pigment from the Allamanda cathartica flower is responsible for bright red, orange, and yellow colors and a series of conjugated double bonds [88]. Table 1 summarizes flower-based photosensitizers for efficiency improvement in DSSCs. From this, the performance is highly dependent on the type of the plant flower type.Table 1 Flowers as photosensitizers in DSSCs. 
ImagePlantClass of biomoleculesSolvent for extractionPhotoanodeJsc (mA cm-2)Voc (V)FF (%)η (%)RefSalvia—MethanolTiO2-FTO0.1680.46140.00.152[87]Spathodea—MethanolTiO2-FTO0.2010.52541.20.217[87]Malvaviscus penduliflorus—EthanolTiO2/MnO2-FTO6.020.3840.380.92[39]Allamanda blanchettiFlavonoids (flavanol)EthanolTiO2-FTO4.13660.4702601.16[88]Allamanda catharticaCarotenoids (lutein)EthanolTiO2-FTO2.14060.4896280.30[88]Canna-lily redAnthocyaninsMethanolTiO2-FTO0.440.57450.14[90]Canna-lily yellowAnthocyaninsMethanolTiO2-FTO0.430.56400.12[90]Beta vulgaris L. ssp. f. rubraBeta caroteneHot waterTiO2 surface0.440.55510.41Brassica oleracea L. var. capitata f. rubraAnthocyaninHot waterTiO2 surface1.880.54561.87(2) Leaves. The advantages of mesoporous holes in TiO2 are that they provide the surface of a large hole for the higher adsorption of dye molecules and facilitate the penetration of electrolyte within their pores. Absorbing light in an extended range of wavelengths by innovative natural dyes followed by increasing surface areas of the photoanode with a TiO2 nanostructure-based layer on the glass substrate improves DSSC technology [74]. Khammee et al. [89] have reported a natural pigment photosensitizer extracted from Dimocarpus longan leaves. According to the report, the methanol extract pigment was composed of chlorophyll-a, chlorophyll-b, and carotene components [89]. The functional group found on the leaves of natural plant pigment can bind with TiO2, which is then responsible for absorbing visible light [54]. Chlorophyll, which is found in the leaves of most green plants, absorbs light from red, blue, and violet wavelengths and obtains its color by reflecting green. Chlorophyll exhibits two main absorption peaks in the visible region at wavelengths of 420 and 660 nm [85]. Experimental results show that the absorption peaks of those dyes are mainly distributed in the visible light regions of 400-420 nm and 650-700 nm. So, chlorophyll was selected as the reference dye [94]. Therefore, chlorophyll and other related extract-based photosensitizes are given in both Tables 1 and 2.Table 2 Leaves as photosensitizers in DSSCs. ImagePlantClass of extracted dye pigmentsSolvent for extractionPhotoanodeJsc (mA cm-2)Voc (V)FF (%)η (%)RefLagerstroemia macrocarpa(i) Carotenoids(ii) Chlorophyll-a(iii) Chlorophyll-bMethanolTiO2-FTO0.0920.80753.711.138±0.018[74]Spinach leaves(i) ChlorophyllAcetoneTiO2-FTO0.410.5958.759820.171253[26]Strobilanthes cusia(i) Chlorophyll-a(ii) Chlorophyll-bMethanol, ethanol, acetone, diethyl-ether, dimethyl-sulphoxideTiO2-FTO0.00518330.30646.20.0385[49]Galinsoga parviflora(i) Chlorophyll groupDistilled water and ethanolTiO2-FTO0.4 (mA)0.346.71.65[91]Amaranthus red(i) Chlorophyll(ii) BetalainDistilled water, ethanol, acetoneTiO2-FTO1.00420.354738.640.14[54]Lawsonia inermis(i) Lawsone(ii) ChlorophyllDistilled water, ethanol, acetoneTiO2-FTO0.42360.547838.510.09[54]Cordyline fruticosa(i) ChlorophyllEthanolTiO2 surface1.3 mA0.61660.160.5[85]Euodia meliaefolia (Hance) Benth(i) ChlorophyllEthanolTiO2-FTO2.640.58701.08[92]Matteuccia struthiopteris (L.) Todaro(i) ChlorophyllEthanolTiO2-FTO0.750.60720.32[92]Corylus heterophylla Fisch(i) ChlorophyllEthanolTiO2-FTO0.680.56690.26[92]Filipendula intermedia(i) ChlorophyllEthanolTiO2-FTO0.870.54740.34[92]Pteridium aquilinum var. 
latiusculum(i) ChlorophyllEthanolTiO2-FTO0.740.56730.30[92]Populus L(i) ChlorophyllEthanolTiO2-FTO1.250.57370.27[92]Euphorbia sp.(i) QuercetinHot waterTiO2 surface0.460.40510.30Rubia tinctoria(i)AlizarinHot waterTiO2 surface0.650.48630.65Morus alba(i)CyanineHot waterTiO2 surface0.440.45570.38Reseda lutea(i)LuteolinHot waterTiO2 surface0.500.50620.52Medicago sativa(i)ChlorophyllHot waterTiO2 surface0.330.55560.33Aloe barbadensis miller(i)AnthocyaninsEthanolTiO2-FTO0.1120.67650.40.380[93]Opuntia ficus-indica(i) ChlorophyllEthanolTiO2-FTO0.2410.64248.00.740[93]Cladode and aloe vera(i)Anthocyanins and chlorophyllEthanolTiO2-FTO0.2900.44040.10.500[93]Lotus leaf(i)Alkaloid and flavonoidEthanolTiO2-FTO14.330.44231.42[48]Brassica oleracea var(i)AnthocyaninDistilled water, methanol, and acetic acidTiO2-FTO0.490.43510.054[55]Wrightia tinctoria R.Br. (“Pala indigo” or “dyer’s oleander”)(i)ChlorophyllCold methanolic extractTiO2-FTO0.530.51690.19[94](i)ChlorophyllAcidified cold methanolic extractTiO2-FTO0.210.422660.06[94](i)ChlorophyllSoxhlet extractTiO2-FTO0.490.495690.17[94](i)ChlorophyllAcidified Soxhlet extractTiO2-FTO0.310.419650.08[94](3) Fruits. The plant-extracted natural dyes are observed to be more prospective owing to their abundance and eco-friendly characteristics. They are environmentally and economically superior to ruthenium-based dyes because they are nontoxic and cheap. However, the conversion efficiency of dye-sensitized solar cells based on natural dyes is low [95]. Substations of natural dyes as sensitizers were shown to be not only economically viable and nontoxic but also effective for enhancing efficiency up to 11.9% [96]. Sensitizers for DSSCs need to fulfill important requirements such as absorption in the visible and near-infrared regions of the solar spectrum and strong chelation to the semiconductor oxide surface. Moreover, the LUMO of the dye should lie at a higher energy level than the conduction band of the semiconductor, so that, upon excitation, the dye could introduce electrons into the conduction band of the TiO2 [95]. Considering this, Najm et al. [97] used abundant and cheap Malaysian fruit, betel nut (Areca catechu) as a photosensitizer in DSSCs due to the presence of tannins, polyphenols, gallic acid, catechins, alkaloids, fat, gum, and other minerals. Provided that, gallotannic acid, a stable dye, is the main pigment (yellowish) of A. catechu and is responsible for the effective absorption of visible wavelengths and used in DSSCs [97, 98]. So, fruit extract as a photosensitizer and natural extracts from other sources are summarized in Table 3 and 4, respectively.Table 3 Fruits as photosensitizers in DSSCs. 
## 3.1. Photoanode

The working electrode (WE), also called the indicator electrode or photoanode, is the most important component; its function is to absorb radiation. As a criterion, the electrode consists of a dye-sensitized layer of nanocrystalline semiconductor metal oxide that has a wide band gap and is transparent enough to pass light to the sensitizer. Many semiconductor materials, in either nanostructured or bulk form, such as TiO2, Al@TiO2, TiO2-Fe, ZnO, SiO2, Zn2SnO4, CeO2, WO3, SrTiO3, and Nb2O5, have been used as photoelectrode scaffold materials in DSSCs [26].
However, reports show that TiO2 and ZnO and their composite/doped photoelectrodes are the most widely used photoanode materials owing to their wide band gaps, low cost, nontoxicity, availability, and photoelectrochemical stability. Their preparation methods are simple, can be carried out with environmentally friendly materials in green solvents, and deliver promisingly high efficiency compared with their counterparts. TiO2 has become the most popular metal oxide semiconductor in DSSCs, followed by ZnO and SnO2 [38].

Rajendhiran et al. [42] reported green-synthesized TiO2 nanoparticles prepared by the sol-gel method using Plectranthus amboinicus leaf extract and coated onto an ITO substrate by the doctor-blade approach. The assembled DSSCs exhibited a solar-to-electrical energy conversion efficiency of 1.3% with the Rose Bengal organic dye sensitizer, attributed to the surface modification of the synthesized nanoparticles. In addition, substrates such as fluorine-doped tin oxide (FTO) and indium-doped tin oxide (ITO) are used to support TiO2 in the photoanode [9]. Owing to the scarcity, rigidity, and brittleness of ITO, low-cost FTO and graphene have been chosen as alternatives; their characteristic structural defects and rough surfaces help to suppress short circuits and leakage currents [39, 43].

Low and Lai [44] designed an efficient photoanode from reduced graphene oxide- (rGO-) decorated TiO2 materials. The UV-Vis diffuse reflectance spectra showed that absorption in the ultraviolet (200-400 nm) and visible (400-700 nm) regions varied with the duration of TiO2 deposition on rGO. From the observed spectra, the rGO/TiO2 sample with a spinning duration of 30 seconds exhibited the optimum light-absorbing ability, as shown in Figure 5(a). As shown in Figure 5(b), the electrochemical parameters JSC, VOC, FF, and η depended on the spinning duration. The efficiency (η) of the assembled DSSCs increased as the spinning duration rose from 10 to 30 seconds (from 4.74% to 9.98%). This is due to the modification of the surface area and photoelectrochemical stability, which allows more dye molecules to be adsorbed and hence more light to be absorbed at the surface of the composite photoanode. At the optimized spinning duration of 30 seconds, the device achieved its best parameters: a JSC of 22.01 mA cm-2, a VOC of 0.79 V, an FF of 57%, and an η of 9.98%.

Figure 5 UV-Vis diffuse reflectance spectra (a) and J-V curves (b) of photoanodes based on pure rGO and rGO/TiO2 with spinning durations of 10, 20, 30, 40, and 50 s [44].

Furthermore, Gao et al. [45] developed a nitrogen-doped TiO2/graphene nanofiber (G-T-N) as an alternative green photoelectrode and evaluated the different photovoltaic parameters of the device's performance, as shown in Figure 6. While nitrogen doping can prevent the in situ recombination of electron-hole pairs, graphene incorporation increases the surface area of the TiO2 fibers and the number of dye adsorption sites, so that more electrons are injected into the semiconductor conduction band from the excited state of the dye, thereby improving the photoelectric conversion efficiency.
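The figures of merit quoted throughout this section are tied together by the standard photovoltaic relation η = Jsc × Voc × FF / Pin, where Pin = 100 mW cm-2 under AM 1.5G illumination. As a quick consistency check, the minimal Python sketch below (assuming FF is expressed as a percentage, and that the standard test conditions apply) approximately reproduces the 9.98% efficiency reported for the optimized rGO/TiO2 photoanode:

```python
def efficiency(jsc_ma_cm2: float, voc_v: float, ff_percent: float,
               p_in_mw_cm2: float = 100.0) -> float:
    """Power conversion efficiency (%) from Jsc (mA/cm^2), Voc (V), and FF (%).

    Assumes standard AM 1.5G illumination with P_in = 100 mW/cm^2.
    """
    p_max = jsc_ma_cm2 * voc_v * (ff_percent / 100.0)  # maximum power, mW/cm^2
    return 100.0 * p_max / p_in_mw_cm2

# Optimized rGO/TiO2 photoanode reported by Low and Lai [44]:
print(efficiency(22.01, 0.79, 57))  # ~9.91%, close to the reported 9.98%
```

The small gap between the computed 9.91% and the reported 9.98% is consistent with the fill factor having been rounded to 57% in the text.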
For the TiO2/graphene and N@TiO2/graphene nanofiber photoelectrode-based DSSCs, the open circuit voltage (VOC), short circuit current density (JSC), fill factor (FF), and η were found to be 0.66 V, 17.48 mA cm-2, 0.35, and 3.97% and 0.71 V, 15.38 mA cm-2, 0.46, and 5.01%, respectively [46].

Figure 6 IPCE curves (a) and J-V curves (b) of DSSCs with different photoanodes [45].

## 3.2. Counterelectrode

The counterelectrode (CE, cathode) is where the reduction of the redox mediator occurs. It collects electrons from the external circuit and injects them into the electrolyte to catalyze the reduction of I3− to I− in the redox couple for dye regeneration [47]. The primary function of the CE in the DSSC system is to act as a catalyst that completes the cycle: the oxidized redox couple is reduced by accepting electrons at the CE surface, and the oxidized dye is in turn regenerated by collecting electrons via the ionic transport material. The second role of the CE in DSSCs is to act as the positive electrode, collecting electrons from the external circuit and transmitting them into the cell. Additionally, the CE can serve as a mirror, reflecting unabsorbed light back into the cell to enhance the utilization of sunlight [40].

The most commonly used CE material is Pt on a conductive ITO or FTO substrate, owing to its excellent electrocatalytic activity for I3− reduction, high electrical conductivity for efficient electron transport, and high electrochemical stability in the electrolyte system. Most research therefore relies on expensive platinum as the CE, which limits large-scale production. To address this limitation, several materials derived from inorganic compounds, carbonaceous materials, and conductive organic polymers have been investigated as potential alternatives to replace or modify Pt-based cathodes in DSSCs; less expensive copper, for example, has been used as the CE for large-scale industrial applications [39]. Huang et al. [48] produced biochar from lotus leaf by one-step pyrolysis as a flexible CE to replace platinum. In their study, with the same lotus leaf extract photosensitizer-modified TiO2-FTO photoanode, a maximum power conversion efficiency (PCE) of 0.15% was obtained with the biochar CE, compared with 0.36% for the platinum reference, while a graphite CE gave a PCE of 0.13% [48]. Graphite thus presents a feasible low-cost alternative to platinum in terms of cost and performance output, delivering an efficiency of 0.0385% with the Strobilanthes cusia photosensitizer, exceeding that obtained with bare FTO glass [49].

Kumar et al. [50] designed and fabricated a new cost-effective, high-performance CE using a carbon material produced with the organic ligand 2-methyl-8-hydroxyquinolinol (Mq). The carbon-derived Mq CE-based DSSCs showed a short circuit current density of 11.00 mA cm-2, a fill factor of 0.51, and an open circuit voltage of 0.75 V, with a conversion efficiency of 4.25%. As a reference, a Pt CE provided a short circuit current density (Jsc) of 12.40 mA cm-2, a fill factor of 0.68, and an open circuit voltage of 0.69 V, with a conversion efficiency of ≈5.86%.
As suggested by Figure 7(a), the low cell performance of the carbon-derived Mq CE could be attributed to strong electrostatic interactions between the carbon atoms and I− or I3−, which concentrate mediator anions near the carbon surface and thereby increase the regeneration and recombination rates [50]. Owing to their low surface area, low stability, and low catalytic activity, single-material CEs give lower device performance than composite or doped CEs. To improve on this, Younas et al. [51] prepared a high-mesopore carbon-titanium oxide composite CE (HMC-TiO2) for the first time and investigated the cell photovoltaic parameters, reporting a Jsc of 16.1 mA cm-2, an FF of 68%, and a VOC of 0.808 V, giving a conversion efficiency η of ≈8.77%. These values show that the HMC-TiO2 composite displays high electrocatalytic activity and can be regarded as a promising CE. In addition, Song et al. [52] discussed the role of iron pyrite (FeS2), in the presence and absence of basic NaOH solution, as one of the most promising counterelectrode materials for dye-sensitized solar cells. FeS2 CE-based DSSCs without NaOH addition provided a PCE of 4.76%, with a JSC of 10.20 mA cm-2, a VOC of 0.70 V, and an FF of 0.66. In the presence of NaOH, the FeS2 CE-based DSSCs showed a JSC of 12.08 mA cm-2, a VOC of 0.74 V, an FF of 0.64, and a PCE of 5.78%. As a control, a Pt CE was also investigated and showed a JSC of 11.58 mA cm-2, a VOC of 0.74 V, an FF of 0.69, and a resulting PCE of 5.93%. The improvement in the photovoltaic parameters of the DSSCs, shown in Figure 7(b), indicates that more electrons are generated in the device in the presence of NaOH, consistent with the increase in JSC [53].

Figure 7 J-V curves of HMC-TiO2 CE-based DSSCs (a) and of FeS2 CE-based DSSCs (A: without NaOH, B: with NaOH, C: Pt CE) (b) [50, 52].

## 3.3. Electrolyte

A good electrolyte should have high electrical and ionic conductivity, good interfacial contact with the nanocrystalline semiconductor and the counterelectrode, high thermal and electrochemical stability, a high diffusion coefficient, low vapor pressure, and appropriate viscosity; it should not degrade the dye molecules, should be transparent to visible light and noncorrosive toward the counterelectrode, should be easy to seal, and must not suppress charge carrier transport [54, 55]. Liquid electrolytes, solid-state electrolytes, quasisolid electrolytes [30], and water-based electrolytes [56] are the common redox mediator (electrolyte) systems found in DSSCs. Liquid electrolytes are further subdivided into organic electrolytes (a redox couple, a solvent, and additives) and ionic liquid electrolytes. Quasisolid electrolytes are good candidates for DSSCs owing to their optimum efficiency and durability, high ionic conductivity, long-term stability, and excellent interfacial contact, comparable to liquid electrolytes. The most important components are the redox couples, such as I−/I3−, Br−/Br3−, SCN−/(SCN)2, [Fe(CN)6]3−/4−, SeCN−/(SeCN)2, and substituted bipyridyl cobalt(III/II) [57], which are directly linked to the VOC of DSSCs.

Owing to its good solubility, fast dye regeneration, low light absorption in the visible region, appropriate redox potential, and the very slow recombination between I3− and the electrons injected into the nanocrystalline semiconductor, I−/I3− is the most popular redox couple electrolyte [40].
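Since the redox couple is directly linked to the VOC, a rough upper bound on the photovoltage is the gap between the TiO2 quasi-Fermi level (approximately the conduction band edge) and the mediator's redox potential. The sketch below is illustrative only: the TiO2 conduction band level, the I−/I3− potential, and the cobalt-complex potential are commonly quoted approximate literature values, assumed here rather than taken from this review.

```python
# Illustrative estimate of the maximum photovoltage set by the redox couple.
# Energy-scale conversion: E(vacuum, eV) ≈ -4.44 - E(V vs. NHE).
E_CB_TIO2 = -4.0  # eV vs. vacuum, assumed typical TiO2 conduction band edge

REDOX_NHE = {      # redox potentials in V vs. NHE (assumed literature values)
    "I-/I3-": 0.35,
    "[Co(bpy)3]3+/2+": 0.56,
}

for couple, e_nhe in REDOX_NHE.items():
    e_vac = -4.44 - e_nhe        # redox level on the vacuum scale (eV)
    voc_max = E_CB_TIO2 - e_vac  # level difference ~ Voc upper bound (V)
    print(f"{couple}: redox level {e_vac:.2f} eV, Voc upper bound ~ {voc_max:.2f} V")
```

Under these assumed values the bound is about 0.79 V for I−/I3− and about 1.00 V for the cobalt mediator, which is consistent with the general observation that the choice of redox couple caps the achievable VOC.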
A good solvent is responsible for the diffusion and dissolution of the I−/I3− ions; acetonitrile, ethylene carbonate, propylene carbonate, 3-methoxypropionitrile, and N-methylpyrrolidone are common choices. A solvent with a high donor number can increase the VOC and decrease the JSC by lowering the concentration of I3−; the lower I3− concentration slows the recombination rate and thereby raises the VOC. The second type of liquid electrolyte is the ionic liquid electrolyte, based on cations such as pyridinium and imidazolium combined with anions from the halide or pseudohalide family. These electrolytes show high ionic conductivity, nonvolatility, good chemical and thermal stability at room temperature, and negligible vapor pressure, which are favorable for efficient DSSCs [30, 58]. DSSCs with liquid electrolytes, however, suffer from evaporation and leakage. To overcome these drawbacks, novel solid-state or quasisolid-state electrolytes, such as hole transport materials, p-type semiconductors, and polymer-based gel electrolytes, have been developed as potential alternatives to volatile liquid electrolytes [59].

A quasisolid electrolyte is a means of solving the poor contact between the photoanode and the hole transport material found in solid-state electrolytes. This electrolyte is composed of a composite of a polymer and a liquid electrolyte that can penetrate into the photoanode to make good contact. Interestingly, it offers better stability, high electrical conductivity, and, especially, good interfacial contact compared with the other types of electrolyte. However, the quasisolid electrolyte has one particular disadvantage: its behavior depends strongly on the working temperature of the solar cell, and high temperatures cause a phase transformation from the gel state to the solution state. Selvanathan et al. [60] used starch and cellulose derivative polymers as quasisolid electrolytes, achieving an optimized efficiency of 5.20% [60].

Saaid et al. [61] prepared a quasisolid-state polymer electrolyte by incorporating poly(vinylidene fluoride-co-hexafluoropropylene) (PVdF-HFP) into a propylene carbonate (PC)/1,2-dimethoxyethane (DME)/1-methyl-3-propylimidazolium iodide (MPII) mixture and examined how the photovoltaic parameters depend on the fabricated electrolyte, as shown in Figure 8(a). The cell photovoltaic parameters were found to depend on the amount of added PVdF-HFP polymer. Before any PVdF-HFP was added, the Jsc, VOC, FF, and η were 11.24 mA cm-2, 619 mV, 70%, and 4.88%, respectively. After the addition of 0.1, 0.2, 0.3, and 0.4 g of PVdF-HFP, the cells showed Jsc values of 9.53, 9.53, 7.54, and 6.57 mA cm-2, VOC values of 638, 641, 679, and 684 mV, FF values of 67, 66, 64, and 61%, and η values of 4.09, 3.70, 3.27, and 2.73%, respectively. This demonstrates that as the amount of polymer in the cell increases, the performance of the cell decreases gradually. Moreover, Lim et al. [62] designed new quasisolid-state electrolytes using coal fly ash-derived zeolite-X and zeolite-A, as shown in Figure 8(b), achieving a VOC of 0.74 V, a JSC of 13.7 mA/cm2, and an FF of 60% with an η of 6.0% for the former, and 0.73 V, 11.4 mA/cm2, and 60% FF for the latter. The zeolite-X&AF quasisolid-state electrolyte-based DSSCs, however, showed a VOC of 0.72 V, a JSC of 11.1 mA/cm2, an FF of 61%, and an η of 4.8%.
The enhancement of the cell photovoltaic parameters in the case of the zeolite-XF12 quasisolid-state electrolyte could be attributed to its high crystallinity, high light-harvesting efficiency, reduced resistance at the photoanode/electrolyte interface, and decreased charge recombination rate. As a control, a nanogel polymer electrolyte was used and gave 0.66 V, 10.3 mA/cm2, an FF of 56%, and an η of 3.8%.

Figure 8 J-V curves of the PVdF-HFP/PC/DME/MPII electrolyte with (A) 0.0, (B) 0.1, (C) 0.2, (D) 0.3, and (E) 0.4 g of PVdF-HFP (a) and photocurrent density-photovoltage curves of DSSCs based on nanogel and quasisolid-state electrolytes under 1 sun illumination (AM 1.5G, 100 mW cm-2) (b) [61, 62].

## 3.4. Photosensitizer

Another core component of DSSCs is the photosensitizer, which plays a central role in the absorption of solar photons: the dye harvests the incoming light and injects the photoexcited electrons into the conduction band of the semiconducting material, thereby converting solar energy into electrical energy [63, 64]. This enables the production of renewable power systems, the management of power sustainability, and the achievement of reliable and stable network output power distribution [58]. The sensitizer is chemically bonded to the porous surface of the semiconductor material and determines the efficiency and general performance of the device [47]. Some organic dyes, polymer dyes, and natural dyes have been reported to offer great cost-effective potential for industrialization [11]. To be effective, a photosensitizer should have a broad and intense absorption spectrum that covers the entire visible region, high adsorption affinity to the surface of the semiconducting layer, excellent stability in its oxidized form, low cost, and low threat to the environment. Furthermore, its LUMO level, i.e., the excited-state level, must be higher in energy than the conduction band edge of the semiconductor for efficient electron injection into the conduction band, and its HOMO level, i.e., the oxidized-state level, must be lower in energy than the redox potential of the electrolyte to promote dye regeneration.

The most commonly used photosensitizers are categorized into three groups: metal complex sensitizers, metal-free organic sensitizers [65], and natural sensitizers. Metal complex sensitizers provide relatively high efficiency and stability because they possess both anchoring and ancillary ligands, and modifications of these two ligands to improve solar cell performance have been reported; the ligands facilitate charge transfer in metal-to-ligand bonds. The dominant metal complex photosensitizers are ruthenium complexes and their cosensitized configurations [56], owing to their wide absorption range, from the visible to the near-infrared region, which gives them superior photon-harvesting properties. However, these complexes require multistep synthesis and purification, and they contain a heavy metal that is expensive, scarce, and toxic. These problems can be overcome by applying metal-free organic dyes in DSSCs instead of metal complex sensitizers.
Trihutomo et al. [66] explained that natural dyes used as photosensitizers give lower efficiency than silicon solar cells because of the barrier to electron transfer in the TiO2 semiconductor layer.

A donor-(π-conjugated bridge)-acceptor (D-π-A) architecture is used in the design of metal-free organic sensitizers [30, 47]. The properties of a sensitizer vary with the electron-donating ability of the donor part and the electron-accepting ability of the acceptor part, as well as with the electronic characteristics of the π bridge. At present, most of the π-conjugated bridges in organic sensitizers are based on oligoene, coumarin, oligothiophene, fluorene, and phenoxazine. The donor part is typically synthesized with a dialkylamine or diphenylamine moiety, while a carboxylic acid, cyanoacrylic acid, or rhodanine-3-acetic acid moiety is used for the acceptor part. As shown in Figure 9 [30], the sensitizer anchors onto the porous network of nanocrystalline TiO2 particles via the acceptor part of the dye molecule. However, metal-free organic sensitizers (organic dyes) have the following disadvantages: strong π-stacked aggregation of D-π-A dye molecules on semiconductor surfaces, which reduces the electron-injection yield from the dyes into the conduction band of the nanocrystalline semiconductor; weaker absorption bands than metal-based sensitizers, which reduces light absorption capability; and low stability, since the sensitizer tends to decay with time relative to the long lifetime of the anode [30, 67].

Figure 9 Designed structure of a metal-free organic dye [30].

In brief, the 3.2 eV band gap of the TiO2 semiconductor means that it absorbs ultraviolet light (i.e., its absorption of visible light is weak). Natural dyes therefore increase the overall sunlight absorption of DSSCs [68]. The light absorption efficiency of sensitized TiO2 is also enhanced by cosensitization, which enables better light harvesting across the solar spectrum [69]. Ananthakumar et al. [70] have reviewed the energy transfer process from donor to acceptor through Förster resonance energy transfer (FRET) for improved absorption. Cosensitization is effectively achieved through the FRET mechanism, in which two chromophoric components couple through a dipole-dipole interaction. In this process, absorption of light excites the donor molecule, and the excitation is transferred to a nearby acceptor molecule with lower excitation energy; the donor nonradiatively transfers its excitation energy to the acceptor (a process often described as the exchange of virtual photons), as shown in Figure 10 [70].

Figure 10 Schematic diagram of the FRET process (a) and its mechanism (b) [70].

To solve the problems found in both metal complex and metal-free organic dyes, researchers have focused on natural plant pigment-based photosensitizers [71]. As a result, natural dyes (natural pigments) from different plant sources, such as fruits, roots, flowers, leaves, wood, algae, and bacterial pigments [72, 73], together with their organic derivatives, have attracted considerable research interest owing to their low cost, simple preparation, natural abundance, nontoxicity, and high molar absorption coefficients [35, 74].
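Two of the relations invoked above can be made concrete with a few lines of arithmetic. The sketch below is illustrative only: it converts the 3.2 eV TiO2 band gap into its absorption edge via λ (nm) ≈ 1240/Eg (eV) and evaluates the standard FRET efficiency E = 1/(1 + (r/R0)^6); the Förster radius R0 = 5 nm is an assumed typical value, not a figure from this review.

```python
# Absorption edge of a semiconductor from its band gap: lambda = h*c / Eg.
HC_EV_NM = 1239.84  # h*c expressed in eV·nm

def absorption_edge_nm(band_gap_ev: float) -> float:
    return HC_EV_NM / band_gap_ev

def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    # Standard Förster expression; r0_nm is an assumed typical Förster radius.
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

print(absorption_edge_nm(3.2))  # ~387 nm: bare TiO2 absorbs only in the UV
print(fret_efficiency(2.5))     # ~0.98: efficient transfer well inside R0
print(fret_efficiency(10.0))    # ~0.015: negligible transfer beyond 2*R0
```

The first result makes the text's point directly: an absorption edge near 387 nm leaves essentially all of the visible spectrum to be harvested by the dye, while the steep sixth-power distance dependence explains why FRET-based cosensitization requires closely spaced donor and acceptor chromophores.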
An efficient photosensitizer for DSSCs should satisfy several essential requirements [64]:

(i) a high molar extinction coefficient and panchromatic light absorption extending from the visible to the near-infrared;

(ii) anchoring groups that attach effectively to the semiconductor; the anthocyanin pigment from Eleiodoxa conferta and Garcinia atroviridis fruit, for example, contains hydroxyl and carboxylic groups that can effectively attach to the surface of a TiO2 film [75];

(iii) good HOMO/LUMO energy alignment with respect to the redox couple and the conduction band level of the semiconductor, which allows efficient charge injection into the semiconductor and, simultaneously, efficient regeneration of the oxidized dye (a minimal numeric check of this condition is sketched below);

(iv) an electron transfer rate from the dye sensitizer to the semiconductor that is faster than the decay rate of the photosensitizer;

(v) stability under solar light illumination and continuous light soaking [76–78].

It is important to note that stable natural plant pigments extracted with effective solvents can absorb a broad range of visible light [79, 80], which matters because the two most significant drawbacks of DSSCs are their narrow spectral response and short-term stability. Therefore, in this review, natural plant pigments extracted with various solvents from different plant parts, such as leaves, roots, stems, barks, peel waste, flowers, and various spices, as well as mixtures of them, are discussed together with their stability and the relevant experimental factors.

### 3.4.1. Natural Plant Pigment Photosensitizers in DSSCs

The highest efficiency recorded for a DSSC is about 12%, obtained with Ru(II) dyes whose material and structural properties were optimized. However, this is lower than the efficiencies of first-generation (Si-based) and second-generation (thin-film) solar cells, which reach about 20-30% [11]. A ruthenium-based dye and platinum are the most common materials used as the photosensitizer and counterelectrode, respectively, in the production of DSSCs, the third generation of photovoltaic technologies. However, their high cost, the synthetic complexity and toxicity of ruthenium dyes, and the scarcity of platinum sources limit their use in DSSCs [49]. An alternative way to produce cost-effective dyes on a large scale is therefore to extract natural dyes from plant sources. Their colors are due to various pigments that have proven to be efficient photosensitizers. Meanwhile, the colors and their transmittance can themselves affect energy generation performance: DSSCs generate power more efficiently as the visible light transmittance decreases, with performance following the order red > green > blue [81]. It has also been reported that plant pigment extracts can act simultaneously as photosensitizers and as reducing agents for nanostructure synthesis, which is useful for photoanode materials in solar devices (e.g., TiO2) [82].

To improve the energy conversion efficiency of natural photosensitizers, researchers have pursued the blending of different dyes, copigmentation of dyes, acidification of dyes, and other approaches, with appreciable performance gains [83]. Based on the types of natural molecules found in plant products, such photosensitizers are classified into the carotenoid, betalain, flavonoid, or chlorophyll structural classes [30, 65, 83, 84].
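Requirement (iii) in the list above reduces to a pair of inequalities on the dye's frontier orbital energies. The following minimal sketch is illustrative only: the TiO2 conduction band edge (≈ -4.0 eV vs. vacuum) and the I−/I3− redox level (≈ -4.8 eV vs. vacuum) are commonly quoted approximate values, and the dye levels in the example calls are hypothetical.

```python
# Minimal numeric check of requirement (iii): energy-level alignment.
# All values in eV vs. vacuum; TiO2 and electrolyte levels are assumed
# approximate literature figures, not values from this review.
E_CB_TIO2 = -4.0  # TiO2 conduction band edge (assumed typical value)
E_REDOX = -4.8    # I-/I3- redox level (assumed typical value)

def alignment_ok(lumo_ev: float, homo_ev: float) -> bool:
    injection = lumo_ev > E_CB_TIO2   # LUMO above the CB edge -> electron injection
    regeneration = homo_ev < E_REDOX  # HOMO below the redox level -> dye regeneration
    return injection and regeneration

print(alignment_ok(lumo_ev=-3.5, homo_ev=-5.3))  # True: both conditions satisfied
print(alignment_ok(lumo_ev=-4.2, homo_ev=-5.3))  # False: LUMO below the CB edge
```

In practice the two offsets must also be large enough (typically a few tenths of an eV) to provide a driving force for injection and regeneration, which is why the requirement is stated as an alignment rather than a bare ordering.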
For stable adsorption onto the semiconductor substrate, sensitizers are typically designed with functional groups such as -COOH, -PO3H2, and -B(OH)2 [20]. Natural pigment biomolecules likewise carry functional groups, such as carboxyl and hydroxyl, that readily react with the surface of nanostructured TiO2 and thereby couple the light-absorbing pigment to the electrode; hydroxyl and carboxyl groups in particular bind strongly to the TiO2 surface [85].

The enhancement of efficiency has been investigated for dyes extracted from fresh purple cabbage (anthocyanin), spinach leaves (chlorophyll), and turmeric stem (betaxanthin), and for their mixture, used as photosensitizers with nanostructured ZnO-coated FTO substrates as the photoanode. The photon-to-electrical power conversion efficiencies of the purple cabbage, spinach, turmeric, and mixed dyes were 0.1015%, 0.1312%, 0.3045%, and 0.602%, respectively, under the same simulated light conditions. The mixed dye gave the most stable cell performance and the highest conversion efficiency, owing to absorption over an extensive range of the solar spectrum and well-suited electrochemical responses, namely fast electron transport and lower recombination loss with a longer electron lifetime [86].

(1) Flowers. In DSSCs, the red/purple pigment of various leaves and flowers has been used as a sensitizer. Notably, abundantly available organic dyes are easily extracted from flowers and leaves and are mainly responsible for light absorption in DSSCs [39]. The natural color of these pigments originates from the anthocyanin group present in different parts (e.g., flowers, leaves) of the plant. Hibiscus rosa-sinensis, a red flower with a high concentration of anthocyanins, has been used as a natural sensitizer in DSSCs. The Malvaviscus penduliflorus flower is closely related to the Hibiscus family; however, research on dye extracted from M. penduliflorus flowers for DSSCs is still lacking. The broad absorption of Hibiscus-extracted dye within 400-500 nm can be further enhanced by using a more concentrated dye solution or by operating the sensitization process at an elevated temperature [39].

Natural dyes from flowers can decrease the charge transfer resistance, improve light absorbance, and show absorption near the red region. DSSCs using natural dyes are therefore less toxic, easier to dispose of, more cost-effective, and more environmentally friendly than synthetic organic dyes, which is considered beneficial for future biosolar cell technology [87]. The performance of such DSSCs can be improved by introducing a scattering layer or interfacial modification in the photoanode, concomitantly broadening the wavelength range of light absorption to make the devices suitable for outdoor applications [39]. The presence of a series of conjugated double bonds in flower extracts also helps to increase efficiency. Raguram and Rajni [88] demonstrated that a flavanol pigment from the Allamanda blanchetti flower is responsible for red, purple, and blue colors, whereas a carotenoid pigment from the Allamanda cathartica flower is responsible for bright red, orange, and yellow colors and a series of conjugated double bonds [88]. Table 1 summarizes flower-based photosensitizers for efficiency improvement in DSSCs; as the table shows, performance depends strongly on the flower type.

Table 1 Flowers as photosensitizers in DSSCs.
| Plant | Class of biomolecules | Solvent for extraction | Photoanode | Jsc (mA cm-2) | Voc (V) | FF (%) | η (%) | Ref |
|---|---|---|---|---|---|---|---|---|
| Salvia | — | Methanol | TiO2-FTO | 0.168 | 0.461 | 40.0 | 0.152 | [87] |
| Spathodea | — | Methanol | TiO2-FTO | 0.201 | 0.525 | 41.2 | 0.217 | [87] |
| Malvaviscus penduliflorus | — | Ethanol | TiO2/MnO2-FTO | 6.02 | 0.38 | 40.38 | 0.92 | [39] |
| Allamanda blanchetti | Flavonoids (flavanol) | Ethanol | TiO2-FTO | 4.1366 | 0.4702 | 60 | 1.16 | [88] |
| Allamanda cathartica | Carotenoids (lutein) | Ethanol | TiO2-FTO | 2.1406 | 0.4896 | 28 | 0.30 | [88] |
| Canna-lily red | Anthocyanins | Methanol | TiO2-FTO | 0.44 | 0.57 | 45 | 0.14 | [90] |
| Canna-lily yellow | Anthocyanins | Methanol | TiO2-FTO | 0.43 | 0.56 | 40 | 0.12 | [90] |
| Beta vulgaris L. ssp. f. rubra | Beta carotene | Hot water | TiO2 surface | 0.44 | 0.55 | 51 | 0.41 | — |
| Brassica oleracea L. var. capitata f. rubra | Anthocyanin | Hot water | TiO2 surface | 1.88 | 0.54 | 56 | 1.87 | — |

(2) Leaves. The advantage of the mesoporous structure of TiO2 is that it provides a large surface area for higher adsorption of dye molecules and facilitates electrolyte penetration into the pores. Absorbing light over an extended wavelength range with innovative natural dyes, combined with increasing the photoanode surface area via a nanostructured TiO2 layer on the glass substrate, improves DSSC technology [74]. Khammee et al. [89] reported a natural pigment photosensitizer extracted from Dimocarpus longan leaves; according to the report, the methanol-extracted pigment was composed of chlorophyll-a, chlorophyll-b, and carotene components [89]. The functional groups of leaf-derived natural pigments can bind to TiO2, which then enables the absorption of visible light [54]. Chlorophyll, found in the leaves of most green plants, absorbs light at red, blue, and violet wavelengths and obtains its color by reflecting green; it exhibits two main absorption peaks in the visible region, at wavelengths of 420 and 660 nm [85]. Experimental results show that the absorption peaks of these dyes lie mainly in the visible regions of 400-420 nm and 650-700 nm, so chlorophyll was selected as the reference dye [94]. Chlorophyll- and other leaf extract-based photosensitizers are given in Tables 1 and 2.

Table 2 Leaves as photosensitizers in DSSCs.

| Plant | Class of extracted dye pigments | Solvent for extraction | Photoanode | Jsc (mA cm-2) | Voc (V) | FF (%) | η (%) | Ref |
|---|---|---|---|---|---|---|---|---|
| Lagerstroemia macrocarpa | Carotenoids, chlorophyll-a, chlorophyll-b | Methanol | TiO2-FTO | 0.092 | 0.807 | 53.71 | 1.138±0.018 | [74] |
| Spinach leaves | Chlorophyll | Acetone | TiO2-FTO | 0.41 | 0.59 | 58.75982 | 0.171253 | [26] |
| Strobilanthes cusia | Chlorophyll-a, chlorophyll-b | Methanol, ethanol, acetone, diethyl ether, dimethyl sulphoxide | TiO2-FTO | 0.0051833 | 0.306 | 46.2 | 0.0385 | [49] |
| Galinsoga parviflora | Chlorophyll group | Distilled water and ethanol | TiO2-FTO | 0.4 (mA) | 0.3 | 46.7 | 1.65 | [91] |
| Amaranthus red | Chlorophyll, betalain | Distilled water, ethanol, acetone | TiO2-FTO | 1.0042 | 0.3547 | 38.64 | 0.14 | [54] |
| Lawsonia inermis | Lawsone, chlorophyll | Distilled water, ethanol, acetone | TiO2-FTO | 0.4236 | 0.5478 | 38.51 | 0.09 | [54] |
| Cordyline fruticosa | Chlorophyll | Ethanol | TiO2 surface | 1.3 (mA) | 0.616 | 60.16 | 0.5 | [85] |
| Euodia meliaefolia (Hance) Benth | Chlorophyll | Ethanol | TiO2-FTO | 2.64 | 0.58 | 70 | 1.08 | [92] |
| Matteuccia struthiopteris (L.) Todaro | Chlorophyll | Ethanol | TiO2-FTO | 0.75 | 0.60 | 72 | 0.32 | [92] |
| Corylus heterophylla Fisch | Chlorophyll | Ethanol | TiO2-FTO | 0.68 | 0.56 | 69 | 0.26 | [92] |
| Filipendula intermedia | Chlorophyll | Ethanol | TiO2-FTO | 0.87 | 0.54 | 74 | 0.34 | [92] |
| Pteridium aquilinum var. latiusculum | Chlorophyll | Ethanol | TiO2-FTO | 0.74 | 0.56 | 73 | 0.30 | [92] |
| Populus L. | Chlorophyll | Ethanol | TiO2-FTO | 1.25 | 0.57 | 37 | 0.27 | [92] |
| Euphorbia sp. | Quercetin | Hot water | TiO2 surface | 0.46 | 0.40 | 51 | 0.30 | — |
| Rubia tinctoria | Alizarin | Hot water | TiO2 surface | 0.65 | 0.48 | 63 | 0.65 | — |
| Morus alba | Cyanine | Hot water | TiO2 surface | 0.44 | 0.45 | 57 | 0.38 | — |
| Reseda lutea | Luteolin | Hot water | TiO2 surface | 0.50 | 0.50 | 62 | 0.52 | — |
| Medicago sativa | Chlorophyll | Hot water | TiO2 surface | 0.33 | 0.55 | 56 | 0.33 | — |
| Aloe barbadensis miller | Anthocyanins | Ethanol | TiO2-FTO | 0.112 | 0.676 | 50.4 | 0.380 | [93] |
| Opuntia ficus-indica | Chlorophyll | Ethanol | TiO2-FTO | 0.241 | 0.642 | 48.0 | 0.740 | [93] |
| Cladode and aloe vera | Anthocyanins and chlorophyll | Ethanol | TiO2-FTO | 0.290 | 0.440 | 40.1 | 0.500 | [93] |
| Lotus leaf | Alkaloid and flavonoid | Ethanol | TiO2-FTO | 14.33 | 0.44 | 23 | 1.42 | [48] |
| Brassica oleracea var. | Anthocyanin | Distilled water, methanol, and acetic acid | TiO2-FTO | 0.49 | 0.43 | 51 | 0.054 | [55] |
| Wrightia tinctoria R.Br. ("Pala indigo" or "dyer's oleander") | Chlorophyll | Cold methanolic extract | TiO2-FTO | 0.53 | 0.51 | 69 | 0.19 | [94] |
| | Chlorophyll | Acidified cold methanolic extract | TiO2-FTO | 0.21 | 0.422 | 66 | 0.06 | [94] |
| | Chlorophyll | Soxhlet extract | TiO2-FTO | 0.49 | 0.495 | 69 | 0.17 | [94] |
| | Chlorophyll | Acidified Soxhlet extract | TiO2-FTO | 0.31 | 0.419 | 65 | 0.08 | [94] |

(3) Fruits. Plant-extracted natural dyes are promising owing to their abundance and eco-friendly character, and they are environmentally and economically superior to ruthenium-based dyes because they are nontoxic and cheap; however, the conversion efficiency of dye-sensitized solar cells based on natural dyes is low [95]. The substitution of natural dyes as sensitizers has been shown to be not only economically viable and nontoxic but also effective for enhancing efficiency up to 11.9% [96]. Sensitizers for DSSCs must fulfill important requirements, such as absorption in the visible and near-infrared regions of the solar spectrum and strong chelation to the semiconductor oxide surface. Moreover, the LUMO of the dye should lie at a higher energy level than the conduction band of the semiconductor, so that, upon excitation, the dye can inject electrons into the conduction band of the TiO2 [95]. Considering this, Najm et al. [97] used the abundant and cheap Malaysian fruit betel nut (Areca catechu) as a photosensitizer in DSSCs, owing to its content of tannins, polyphenols, gallic acid, catechins, alkaloids, fat, gum, and other minerals. Gallotannic acid, a stable yellowish dye, is the main pigment of A. catechu; it is responsible for the effective absorption of visible wavelengths and is used in DSSCs [97, 98]. Fruit extracts and natural extracts from other sources used as photosensitizers are summarized in Tables 3 and 4, respectively.

Table 3 Fruits as photosensitizers in DSSCs.
| Plant | Class of extracted dye pigments | Solvent for extraction | Photoanode | Jsc (mA cm-2) | Voc (V) | FF (%) | η (%) | Ref |
|---|---|---|---|---|---|---|---|---|
| Melastoma malabathricum | Anthocyanin | Methanol and trifluoroacetic acid | TiO2 film | 4.49 | 0.42 | 57 | 1.05 | [84] |
| Eleiodoxa conferta | Anthocyanin | Ethanol | TiO2-FTO | 4.63 | 0.37 | 56 | 1.00 | [75] |
| Garcinia atroviridis | Anthocyanin | Ethanol | TiO2-FTO | 2.55 | 0.32 | 63 | 0.51 | [75] |
| Onion peels | Anthocyanin | Distilled water | TiO2-FTO | 0.24 | 0.48 | 46.63 | 0.065 | [26] |
| Red cabbage | Anthocyanin | Distilled water | TiO2-FTO | 0.21 | 0.51 | 46.61 | 0.060 | [26] |
| Areca catechu | Gallotannic acid | Methanol | TiO2 surface | 0.3 | 0.536 | 73.5 | 0.118 | [97] |
| Hylocereus polyrhizus | Anthocyanin | Distilled water, ethanol, and acetic acid | TiO2-FTO | 0.23 (mA) | 0.34 | 63 | 0.024 | [55] |
| Doum palm | Chromophores | Ethanol | TiO2-FTO | 0.005 | 0.37 | 63 | 0.012 | [99] |
| Doum palm | Chromophores | Distilled water | TiO2-FTO | 0.010 | 0.50 | 66 | 0.033 | [99] |
| Linia cauliflora | Anthocyanin | Ethanol | TiO2-ITO | 0.38 | 0.41 | 29 | 0.13 | [100] |
| Phyllanthus reticulatus | Anthocyanin | Methanol | TiO2-FTO | 1.382 | 0.67 | — | 0.69 | [101] |

Table 4 Other natural pigment sources as photosensitizers in DSSCs.

| Source | Class | Solvent for extraction | Photoanode | Jsc (mA cm-2) | Voc (V) | FF (%) | η (%) | Ref |
|---|---|---|---|---|---|---|---|---|
| Juglon regia shell | Juglon | Hot water | TiO2 surface | 0.43 | 0.47 | 56 | 0.38 | — |
| Malabar spinach seeds | — | Distilled water | TiO2-ITO | 510 (μA) | 0.710 | 48.7 | 9.23 | [102] |
| Rhamnus petiolaris seed | Emodin | Hot water | TiO2 surface | 0.20 | 0.50 | 55 | 0.18 | — |
| Iridaea obovata algae | Phycoerythrin | Ethanol | TiO2-FTO | 0.136 | 0.40 | 43 | 0.022 | [103] |
| Delesseria lancifolia algae | Phycoerythrin | Ethanol | TiO2-FTO | 0.243 | 0.40 | 46 | 0.045 | [103] |
| Plocamium hookeri algae | Phycoerythrin | Ethanol | TiO2-FTO | 0.083 | 0.53 | 63 | 0.027 | [103] |
| Mangosteen pericarp (mangosteen peels) | Anthocyanin | Ethanol | TiO2-FTO | 0.38 (mA) | 0.46 | 48 | 0.042 | [55] |
| Ataco vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.06 | 0.48 | 66 | 0.018 | [104] |
| Achiote vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.06 | 0.45 | 50 | 0.013 | [104] |
| Berenjena vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.04 | 0.40 | 56 | 0.008 | [104] |
| Flor de Jamaica vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.382 | 0.478 | 58 | 0.109 | [104] |
| Mora vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.28 | 0.48 | 51 | 0.069 | [104] |
| Mortiño vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.557 | 0.484 | 66.4 | 0.175 | [104] |
| Rabano vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.07 | 0.39 | 55 | 0.015 | [104] |
| Tomate de arbol vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.10 | 0.44 | 52 | 0.023 | [104] |

The narrow spectral response and short-term stability of DSSCs are their two major drawbacks. These limitations can be mitigated by using natural plant pigment dyes as effective sensitizers on the photoanode of the device, since these pigments broaden the spectral absorption response. Along these lines, DeSilva et al. investigated photosensitizers from Mondo-grass berry and blackberry and observed improved device efficiency, with better stability, for the Mondo-grass berry dye compared with blackberry. The reason is that Mondo-grass berry contains a mixture of two or more chemical compounds belonging to both the anthocyanin and carotenoid families, as proved by thin-layer chromatography [105].
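To make cross-comparisons over Tables 1-4 reproducible, the short sketch below sorts a few representative entries by their reported efficiency. It is illustrative only: the rows are a hand-copied subset of the tabulated values, and no new measurements are implied.

```python
# A hand-copied subset of rows from Tables 1-4:
# (source, Jsc in mA/cm^2, Voc in V, FF in %, eta in %).
rows = [
    ("Malvaviscus penduliflorus (flower)", 6.02,   0.38,   40.38, 0.92),
    ("Allamanda blanchetti (flower)",      4.1366, 0.4702, 60.0,  1.16),
    ("Lagerstroemia macrocarpa (leaf)",    0.092,  0.807,  53.71, 1.138),
    ("Melastoma malabathricum (fruit)",    4.49,   0.42,   57.0,  1.05),
    ("Eleiodoxa conferta (fruit)",         4.63,   0.37,   56.0,  1.00),
    ("Mortiño (vegetable)",                0.557,  0.484,  66.4,  0.175),
]

# Rank the sensitizers by reported power conversion efficiency (last field).
for source, jsc, voc, ff, eta in sorted(rows, key=lambda r: r[-1], reverse=True):
    print(f"{source:38s} Jsc={jsc:7.3f}  Voc={voc:6.3f}  FF={ff:5.1f}%  eta={eta:6.3f}%")
```

Even this small subset shows the pattern the review describes: flower and fruit anthocyanins supply the highest currents, while overall efficiency also depends strongly on Voc and fill factor rather than on Jsc alone.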
A ruthenium-based dye and platinum are the most common materials used as photosensitizer and counterelectrode, respectively, in the production of the DSSCs, the third generation of photovoltaic technologies. However, their expensive cost, the complexity and toxicity of ruthenium dye, and the scarcity of platinum sources preclude their use in the DSSCs [49]. Thus, an alternative way to produce cost-effective dyes on a large scale is by extracting natural dyes from plant sources. The colors are due to the presence of various pigments that have been proven to be efficient photosensitizers. Meanwhile, colors and their transmittance by themselves could affect energy generation performance. Based on this, DSSCs currently being produced have better power generation efficiency as the visible light transmittance lowers, and the power generation efficiency is good in the order of red > green > blue [81]. It is reported that extracts of plant pigments also have a simultaneous effect as photosensitizers and reducing agents for nanostructure synthesis, which is useful in photoanode activity in solar devices (e.g., TiO2) [82].In order to improve the energy conversion efficiency of natural photosensitizers, blending of different dyes, copigmentation of dyes, acidifying of dyes, and other approaches have been conducted by researchers, resulting in appreciable performance [83]. Based on the types of natural molecules found in plant products, such photosensitizers are classified as carotenoids, betalains, flavonoids, or chlorophyll structural classes [30, 65, 83, 84]. For stable adsorption onto the semiconductor substrate, sensitizers are typically designed with functional groups such as -COOH, -PO3H2, and -B(OH)2 [20]. These biomolecules have functional groups, such as carboxyl and hydroxyl that can easily react with the surface of nanostructured TiO2 that enables them to absorb sunlight. In particular, hydroxyl or carboxyl functional groups are strongly bound on the surface of TiO2 [85].The enchantment of efficiency from extraction of dyes from fresh purple cabbage (anthocyanin), spinach leaves (chlorophyll), turmeric stem (betaxanthin), and their mixture as a photosensitizer, with nanostructured ZnO-coated FTO substrates as a photoanode-based DSSCs. The photon to electrical power conversion efficiencies of purple cabbage, spinach, turmeric, and their mixed dyes are explored as 0.1015%, 0.1312%, 0.3045%, and 0.602%, respectively, under the same simulated light condition. The mixed dye reveals the stable performance of the cell with the highest conversion efficiency due to the absorption of an extensive range of the solar spectrum and well-suited electrochemical responses due to the fast electron transportation and lower recombination loss with longer electron lifetime found on mixed dyes [86].(1) Flowers. In DSSCs, the red/purple pigment in various leaves and flowers has been used as a sensitizer. Notably, an abundantly available organic dye is easily extracted from flowers and leaves, mainly responsible for light absorption in DSSCs [39]. The natural color pigments originated from organic dyes impart an anthocyanin group present in the different parts (e.g., flowers, leaves) of the plant. Hibiscus rosa-sinensis, a red pigment-containing flower with a higher concentration of anthocyanins, is used as a natural DSSC. In fact, the Malvaviscus penduliflorus flower is closely related to the Hibiscus family. However, potential research based on M. penduliflorus flower extracted dye in DSSCs is still lacking. 
The broader absorption of Hibiscus-extracted dye within 400-500 nm can further be enhanced using either concentrated dye solution or operating the sensitization process at an elevated temperature [39].Natural dyes from flowers can decrease the charge transfer resistance and are helpful in the better absorbance of light as well as enabling them to show absorption near to the red region. Therefore, efficient DSSCs using natural dyes are less toxic, disposed of easily, cost-effectively, and more environmentally friendly compared to organic dyes. Which is considered beneficial for future biosolar cell technology [87]. The performance of the DSSCs can be compensated by introducing a scattering layer or interfacial modification in the photoanode and concomitantly improving the broader spectrum wavelength range of light absorption to make it suitable for outdoor applications [39]. the presence of a series of conjugated double bonds from flower extracts helped to increase efficiency improvement. Raguram and Rajni [88] demonstrated that a flavanol pigment from the Allamanda blanchetti flower is responsible for red, purple, and blue colors, whereas a carotenoid pigment from the Allamanda cathartica flower is responsible for bright red, orange, and yellow colors and a series of conjugated double bonds [88]. Table 1 summarizes flower-based photosensitizers for efficiency improvement in DSSCs. From this, the performance is highly dependent on the type of the plant flower type.Table 1 Flowers as photosensitizers in DSSCs. ImagePlantClass of biomoleculesSolvent for extractionPhotoanodeJsc (mA cm-2)Voc (V)FF (%)η (%)RefSalvia—MethanolTiO2-FTO0.1680.46140.00.152[87]Spathodea—MethanolTiO2-FTO0.2010.52541.20.217[87]Malvaviscus penduliflorus—EthanolTiO2/MnO2-FTO6.020.3840.380.92[39]Allamanda blanchettiFlavonoids (flavanol)EthanolTiO2-FTO4.13660.4702601.16[88]Allamanda catharticaCarotenoids (lutein)EthanolTiO2-FTO2.14060.4896280.30[88]Canna-lily redAnthocyaninsMethanolTiO2-FTO0.440.57450.14[90]Canna-lily yellowAnthocyaninsMethanolTiO2-FTO0.430.56400.12[90]Beta vulgaris L. ssp. f. rubraBeta caroteneHot waterTiO2 surface0.440.55510.41Brassica oleracea L. var. capitata f. rubraAnthocyaninHot waterTiO2 surface1.880.54561.87(2) Leaves. The advantages of mesoporous holes in TiO2 are that they provide the surface of a large hole for the higher adsorption of dye molecules and facilitate the penetration of electrolyte within their pores. Absorbing light in an extended range of wavelengths by innovative natural dyes followed by increasing surface areas of the photoanode with a TiO2 nanostructure-based layer on the glass substrate improves DSSC technology [74]. Khammee et al. [89] have reported a natural pigment photosensitizer extracted from Dimocarpus longan leaves. According to the report, the methanol extract pigment was composed of chlorophyll-a, chlorophyll-b, and carotene components [89]. The functional group found on the leaves of natural plant pigment can bind with TiO2, which is then responsible for absorbing visible light [54]. Chlorophyll, which is found in the leaves of most green plants, absorbs light from red, blue, and violet wavelengths and obtains its color by reflecting green. Chlorophyll exhibits two main absorption peaks in the visible region at wavelengths of 420 and 660 nm [85]. Experimental results show that the absorption peaks of those dyes are mainly distributed in the visible light regions of 400-420 nm and 650-700 nm. So, chlorophyll was selected as the reference dye [94]. 
Therefore, chlorophyll and related extract-based photosensitizers are listed in both Tables 1 and 2.

Table 2: Leaves as photosensitizers in DSSCs.

| Plant | Class of extracted dye pigments | Solvent for extraction | Photoanode | Jsc (mA cm-2) | Voc (V) | FF (%) | η (%) | Ref |
|---|---|---|---|---|---|---|---|---|
| Lagerstroemia macrocarpa | Carotenoids, chlorophyll-a, chlorophyll-b | Methanol | TiO2-FTO | 0.092 | 0.807 | 53.71 | 1.138±0.018 | [74] |
| Spinach leaves | Chlorophyll | Acetone | TiO2-FTO | 0.41 | 0.59 | 58.75982 | 0.171253 | [26] |
| Strobilanthes cusia | Chlorophyll-a, chlorophyll-b | Methanol, ethanol, acetone, diethyl ether, dimethyl sulphoxide | TiO2-FTO | 0.0051833 | 0.306 | 46.2 | 0.0385 | [49] |
| Galinsoga parviflora | Chlorophyll group | Distilled water and ethanol | TiO2-FTO | 0.4 (mA) | 0.3 | 46.7 | 1.65 | [91] |
| Amaranthus red | Chlorophyll, betalain | Distilled water, ethanol, acetone | TiO2-FTO | 1.0042 | 0.3547 | 38.64 | 0.14 | [54] |
| Lawsonia inermis | Lawsone, chlorophyll | Distilled water, ethanol, acetone | TiO2-FTO | 0.4236 | 0.5478 | 38.51 | 0.09 | [54] |
| Cordyline fruticosa | Chlorophyll | Ethanol | TiO2 surface | 1.3 (mA) | 0.616 | 60.16 | 0.5 | [85] |
| Euodia meliaefolia (Hance) Benth | Chlorophyll | Ethanol | TiO2-FTO | 2.64 | 0.58 | 70 | 1.08 | [92] |
| Matteuccia struthiopteris (L.) Todaro | Chlorophyll | Ethanol | TiO2-FTO | 0.75 | 0.60 | 72 | 0.32 | [92] |
| Corylus heterophylla Fisch | Chlorophyll | Ethanol | TiO2-FTO | 0.68 | 0.56 | 69 | 0.26 | [92] |
| Filipendula intermedia | Chlorophyll | Ethanol | TiO2-FTO | 0.87 | 0.54 | 74 | 0.34 | [92] |
| Pteridium aquilinum var. latiusculum | Chlorophyll | Ethanol | TiO2-FTO | 0.74 | 0.56 | 73 | 0.30 | [92] |
| Populus L. | Chlorophyll | Ethanol | TiO2-FTO | 1.25 | 0.57 | 37 | 0.27 | [92] |
| Euphorbia sp. | Quercetin | Hot water | TiO2 surface | 0.46 | 0.40 | 51 | 0.30 | |
| Rubia tinctoria | Alizarin | Hot water | TiO2 surface | 0.65 | 0.48 | 63 | 0.65 | |
| Morus alba | Cyanine | Hot water | TiO2 surface | 0.44 | 0.45 | 57 | 0.38 | |
| Reseda lutea | Luteolin | Hot water | TiO2 surface | 0.50 | 0.50 | 62 | 0.52 | |
| Medicago sativa | Chlorophyll | Hot water | TiO2 surface | 0.33 | 0.55 | 56 | 0.33 | |
| Aloe barbadensis miller | Anthocyanins | Ethanol | TiO2-FTO | 0.112 | 0.676 | 50.4 | 0.380 | [93] |
| Opuntia ficus-indica | Chlorophyll | Ethanol | TiO2-FTO | 0.241 | 0.642 | 48.0 | 0.740 | [93] |
| Cladode and aloe vera | Anthocyanins and chlorophyll | Ethanol | TiO2-FTO | 0.290 | 0.440 | 40.1 | 0.500 | [93] |
| Lotus leaf | Alkaloid and flavonoid | Ethanol | TiO2-FTO | 14.33 | 0.44 | 23 | 1.42 | [48] |
| Brassica oleracea var. | Anthocyanin | Distilled water, methanol, and acetic acid | TiO2-FTO | 0.49 | 0.43 | 51 | 0.054 | [55] |
| Wrightia tinctoria R.Br. ("Pala indigo" or "dyer's oleander") | Chlorophyll | Cold methanolic extract | TiO2-FTO | 0.53 | 0.51 | 69 | 0.19 | [94] |
| Wrightia tinctoria R.Br. | Chlorophyll | Acidified cold methanolic extract | TiO2-FTO | 0.21 | 0.422 | 66 | 0.06 | [94] |
| Wrightia tinctoria R.Br. | Chlorophyll | Soxhlet extract | TiO2-FTO | 0.49 | 0.495 | 69 | 0.17 | [94] |
| Wrightia tinctoria R.Br. | Chlorophyll | Acidified Soxhlet extract | TiO2-FTO | 0.31 | 0.419 | 65 | 0.08 | [94] |

(3) Fruits. Plant-extracted natural dyes are promising owing to their abundance and eco-friendly characteristics. They are environmentally and economically superior to ruthenium-based dyes because they are nontoxic and cheap; however, the conversion efficiency of dye-sensitized solar cells based on natural dyes remains low [95]. Substituting natural dyes as sensitizers has been shown to be not only economically viable and nontoxic but also effective for enhancing efficiency up to 11.9% [96]. Sensitizers for DSSCs need to fulfill important requirements, such as absorption in the visible and near-infrared regions of the solar spectrum and strong chelation to the semiconductor oxide surface. Moreover, the LUMO of the dye should lie at a higher energy level than the conduction band of the semiconductor, so that, upon excitation, the dye can inject electrons into the conduction band of the TiO2 [95]. Considering this, Najm et al.
[97] used the abundant and cheap Malaysian fruit betel nut (Areca catechu) as a photosensitizer in DSSCs, owing to its content of tannins, polyphenols, gallic acid, catechins, alkaloids, fat, gum, and other minerals. Gallotannic acid, a stable dye, is the main (yellowish) pigment of A. catechu; it is responsible for the effective absorption of visible wavelengths and has been used in DSSCs [97, 98]. Fruit extracts as photosensitizers and natural extracts from other sources are summarized in Tables 3 and 4, respectively.

Table 3: Fruits as photosensitizers in DSSCs.

| Plant | Class of extracted dye pigments | Solvent for extraction | Photoanode | Jsc (mA cm-2) | Voc (V) | FF (%) | η (%) | Ref |
|---|---|---|---|---|---|---|---|---|
| Melastoma malabathricum | Anthocyanin | Methanol and trifluoroacetic acid | TiO2 film | 4.49 | 0.42 | 57 | 1.05 | [84] |
| Eleiodoxa conferta | Anthocyanin | Ethanol | TiO2-FTO | 4.63 | 0.37 | 56 | 1.00 | [75] |
| Garcinia atroviridis | Anthocyanin | Ethanol | TiO2-FTO | 2.55 | 0.32 | 63 | 0.51 | [75] |
| Onion peels | Anthocyanin | Distilled water | TiO2-FTO | 0.24 | 0.48 | 46.63 | 0.065 | [26] |
| Red cabbage | Anthocyanin | Distilled water | TiO2-FTO | 0.21 | 0.51 | 46.61 | 0.060 | [26] |
| Areca catechu | Gallotannic acid | Methanol | TiO2 surface | 0.3 | 0.536 | 73.5 | 0.118 | [97] |
| Hylocereus polyrhizus | Anthocyanin | Distilled water, ethanol, and acetic acid | TiO2-FTO | 0.23 (mA) | 0.34 | 63 | 0.024 | [55] |
| Doum palm | Chromophores | Ethanol | TiO2-FTO | 0.005 | 0.37 | 63 | 0.012 | [99] |
| Doum palm | Chromophores | Distilled water | TiO2-FTO | 0.010 | 0.50 | 66 | 0.033 | [99] |
| Plinia cauliflora | Anthocyanin | Ethanol | TiO2-ITO | 0.38 | 0.41 | 29 | 0.13 | [100] |
| Phyllanthus reticulatus | Anthocyanin | Methanol | TiO2-FTO | 1.382 | 0.67 | — | 0.69 | [101] |

Table 4: Other natural pigment sources as photosensitizers in DSSCs.

| Plant | Class | Solvent for extraction | Photoanode | Jsc (mA cm-2) | Voc (V) | FF (%) | η (%) | Ref |
|---|---|---|---|---|---|---|---|---|
| Juglans regia shell | Juglone | Hot water | TiO2 surface | 0.43 | 0.47 | 56 | 0.38 | |
| Malabar spinach seeds | — | Distilled water | TiO2-ITO | 510 (μA) | 0.710 | 48.7 | 9.23 | [102] |
| Rhamnus petiolaris seed | Emodin | Hot water | TiO2 surface | 0.20 | 0.50 | 55 | 0.18 | |
| Iridaea obovata algae | Phycoerythrin | Ethanol | TiO2-FTO | 0.136 | 0.40 | 43 | 0.022 | [103] |
| Delesseria lancifolia algae | Phycoerythrin | Ethanol | TiO2-FTO | 0.243 | 0.40 | 46 | 0.045 | [103] |
| Plocamium hookeri algae | Phycoerythrin | Ethanol | TiO2-FTO | 0.083 | 0.53 | 63 | 0.027 | [103] |
| Mangosteen pericarp (mangosteen peels) | Anthocyanin | Ethanol | TiO2-FTO | 0.38 (mA) | 0.46 | 48 | 0.042 | [55] |
| Ataco vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.06 | 0.48 | 66 | 0.018 | [104] |
| Achiote vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.06 | 0.45 | 50 | 0.013 | [104] |
| Berenjena vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.04 | 0.40 | 56 | 0.008 | [104] |
| Flor de Jamaica vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.382 | 0.478 | 58 | 0.109 | [104] |
| Mora vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.28 | 0.48 | 51 | 0.069 | [104] |
| Mortiño vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.557 | 0.484 | 66.4 | 0.175 | [104] |
| Rabano vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.07 | 0.39 | 55 | 0.015 | [104] |
| Tomate de arbol vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.10 | 0.44 | 52 | 0.023 | [104] |

The narrow spectral response and the short-term stability of DSSCs are their two major drawbacks. These limitations can be mitigated by using natural plant pigment dyes as effective sensitizers on the photoanode of the device; such natural pigments improve the efficiency of DSSCs by providing broad spectral absorption responses. To this end, DeSilva et al. investigated photosensitizers from Mondo-grass berry and blackberry and observed improved device efficiency, with better stability, for the Mondo-grass berry dye compared with blackberry. This is because Mondo-grass berry contains a mixture of two or more chemical compounds belonging to both the anthocyanin and carotenoid families, as proved by thin-layer chromatography [105].
## 4. Photoinduced Electron Transfer Rate Efficiency in Natural Plant Pigment DSSCs

Anthocyanin coupled with TiO2 is cheap, readily available, and innocuous to the environment, with high economic advantages over other types of photovoltaic devices, but it has yet to become a commercially viable product because of its low conversion efficiency and short life span [11]. In addition, the properties of the TiO2 photoelectrode favor natural pigments as sensitizers in DSSCs, because the conduction band of TiO2 aligns well with the excited-state LUMO level of natural pigments (especially anthocyanins) [106]. The interaction between TiO2 and the dye molecule leads to the transfer of excited electrons from the dye molecules to the conduction band of TiO2 [85], as shown in Figure 11.

Figure 11: Configuration of a traditional dye-sensitized solar cell [107].

For good photovoltaic efficiency of a DSSC, an electron from the electronically excited state of the dye must be injected effortlessly into the conduction band of the semiconductor. The electron transfer kinetics of natural dye molecules can be appraised in terms of photoinduced electron transfer (PET) theory, which implies that the logarithm of the electron transfer rate is a quadratic function of the driving force, −ΔG°. The simplified rate constant of electron transfer, kET, is given as

$$k_{ET} = A\,\exp\!\left[-\frac{\left(\Delta G^{\circ} + \lambda\right)^{2}}{4\lambda RT}\right], \tag{1}$$

where ΔG° is the free energy change (−ΔG° is the driving force), λ is the reorganization energy, R is the gas constant, and T is the temperature. In the region where the driving force is smaller than the reorganization energy (the normal region), the electron transfer rate increases as the driving force increases. The rate attains its maximum value at −ΔG° = λ. When the driving force exceeds λ, inverted-region kinetics are observed, and the electron transfer rate decreases as the driving force increases.

The driving force for electron transfer between a photosensitizer and semiconductor nanoparticles is dictated by the energy difference between the oxidation potential of the photosensitizer and the reduction potential of the semiconductor nanoparticles. The Rehm-Weller equation can be used to determine the driving force for the PET process between a donor (D) and an acceptor (A) [108]:

$$\Delta G^{\circ} = e\left[E_{Oxi}(D) - E_{Red}(A)\right] - \Delta E^{*}, \tag{2}$$

where e is the unit electrical charge, E_Oxi(D) and E_Red(A) are the oxidation and reduction potentials of the electron donor and acceptor, respectively, and ΔE* is the electronic excitation energy, corresponding to the energy difference between the ground and first excited states of the donor species.

Electron injection is followed by regeneration of the dye by the redox mediator, transport of electrons in the mesoporous TiO2 and of redox mediators in the electrolyte, and finally reduction of the oxidized redox mediator at the counterelectrode, since the dyes in DSSCs are adsorbed as a monolayer onto the mesoporous TiO2 electrode [107]. Historically, ruthenium- and volatile organic-based molecular sensitizers were used, but because plant extracts offer various pigments and environmental friendliness, avoiding expensive rare metals and toxic volatile organics, much research work is now devoted to natural plant-based photosensitizers.
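To make equations (1) and (2) concrete, the short Python sketch below evaluates the Rehm-Weller driving force and the Marcus-type rate constant across the normal and inverted regions. All numerical inputs (the prefactor A, reorganization energy λ, redox potentials, and excitation energy) are illustrative assumptions, not values taken from the cited studies.

```python
import numpy as np

R = 8.314e-3  # gas constant, kJ mol^-1 K^-1
T = 298.0     # temperature, K

def k_et(delta_g, lam, A=1.0e13):
    """Eq. (1): k_ET = A * exp(-(dG + lam)^2 / (4 * lam * R * T))."""
    return A * np.exp(-((delta_g + lam) ** 2) / (4.0 * lam * R * T))

def rehm_weller(e_ox_donor, e_red_acceptor, delta_e_star):
    """Eq. (2), in eV (unit charge e = 1): dG = [E_Oxi(D) - E_Red(A)] - dE*."""
    return (e_ox_donor - e_red_acceptor) - delta_e_star

# Assumed potentials (V) and excitation energy (eV) for a generic dye/acceptor pair.
dg_ev = rehm_weller(e_ox_donor=1.0, e_red_acceptor=-0.5, delta_e_star=2.2)
print(f"Rehm-Weller driving force: dG = {dg_ev:.2f} eV")

# Sweeping dG (kJ/mol) at an assumed lambda shows the rate peaking at -dG = lambda:
# the normal region lies below it and the inverted region beyond it.
lam = 50.0
for dg in (-10.0, -30.0, -50.0, -70.0, -90.0):
    print(f"dG = {dg:6.1f} kJ/mol -> k_ET = {k_et(dg, lam):.3e} s^-1")
```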
Akin et al. [108] designed and tested DSSCs sensitized by natural dyes containing several pigments with various anchoring groups, such as carbonyl, hydroxyl, and alkyl chains, to clearly understand the photoinduced electron injection kinetics of these natural DSSCs. As a summary, the photoinduced electron transfer mechanism from a plant extract photosensitizer is shown in Figure 12.

Figure 12: Photoinduced electron transfer mechanism in natural dye extracted from plants [108].

## 5. Efficiency Optimization of Natural Plant Pigment-Based DSSCs

Ananthakumar et al. [70] showed that functionalizing a photoanode with organic dyes helps it absorb more incident light. To increase the efficiency of solar cell devices further, the photoanode has been improved with low-cost transition metal oxide nanomaterials containing quantum dots. In addition, the cosensitization process also enhances the light harvesting efficiency; according to that review, an efficiency of 14% has been reported. Hence, a sensitizer supported by a suitable photoanode can successfully absorb visible light and enhance the light harvesting capability [109].

Al-alwani et al. [85] optimized three process parameters for chlorophyll extraction from Cordyline fruticosa leaves using response surface methodology: the organic solvent, chosen by boiling point (ethanol, methanol, and acetonitrile), the pH (4-8), and the extraction temperature (50-90°C). The optimal extraction conditions were a pH of 7.99, an extraction temperature of 78.33°C, and a solvent boiling point of 78°C. Under these optimal conditions, the extracted pigment was used as a photosensitizer, and a maximum solar conversion efficiency η of 0.5% was achieved [85]. Chien and Hsu [110] reported an optimized anthocyanin photosensitizer extracted from red cabbage (Brassica oleracea var. capitata f. rubra); the best light-to-electricity conversion η was obtained when the pH and the concentration of the anthocyanin extract were 8.0 and 3 mM, respectively, and when the immersion time for fabricating the sensitized TiO2 film was 15 min [111].

IK and Uthman [99] reported the effect of ethanol- and distilled water-based extraction on the doum palm fruit photosensitizer. According to their report, the absorption transitions between the dye ground and excited states, and hence the range of solar energy absorbed by the dye, differ between the two extracts. This difference is due to the chromophores, the chemical groups responsible for the color of the molecule, that is, for its ability to absorb photons. In detail, the doum water extract has two absorption peaks, at 350 nm and 400 nm, while the doum ethanol extract adsorbed on TiO2 showed only one absorption peak, at 353 nm. After TiO2 nanoparticles were added to the doum pericarp extract, its absorption intensity decreased over the 440-350 nm range. Finally, the conversion efficiency was 0.012% for the ethanol extract and up to 0.033% for the water extract under the same light intensity [99].

Gu et al. [8] suggested that the absorption properties of natural dyes depend strongly on the types and concentrations of the pigments. In their study, the photoelectric performance of different natural dyes from spinach, pitaya pericarp, orange peel, ginkgo leaf, purple cabbage, and carrot was measured, as shown in Figure 13. The VOC values of these dyes were similar, at about 0.524 V, except for carrot, which showed only 0.276 V.
The fill factors of these DSSCs are mostly higher than 0.5, indicating a good photoelectric energy conversion capability. The short-circuit photocurrent densities (JSC) of the DSSCs based on the different natural dyes follow the order JSC(purple cabbage) > JSC(orange peel) > JSC(spinach) > JSC(ginkgo leaf) > JSC(pitaya pericarp) > JSC(carrot), reaching 0.594, 0.325, 0.152, 0.111, 0.100, and 0.086 mA/cm2, respectively. The purple cabbage dye showed the highest photoelectric conversion efficiency, reaching 0.157% [8].

Figure 13: J-V curves for the DSSCs under standard simulated sunlight [8].

Optimization of the photoanode (TiO2 nanostructure) is necessary for developing high solar efficiency in DSSCs [74]. Importantly, thicker TiO2 layers result in dwindling transmittance and reduce the light intensity available to the pigment dyes; the charge transfer resistance may also increase as the thickness of the TiO2 electrode layers increases [112]. García-Salinas and Ariza [113] sought to optimize the extraction solvent, extraction method, pH, dye precursor, and dye extract stability, focusing on the betalain pigments in bougainvillea and beetroot extracts and the anthocyanins in eggplant extracts; of these, beetroot extract showed 0.47% cell efficiency [113]. Later work demonstrated an improved power conversion of 1.3% using a Kniphofia schemperi root sensitizer with TiO2 NPs biosynthesized at a 3:2 volume ratio, owing to effective surface modification that enhanced absorption of incident light [114, 115].

Norhisamudin et al. [116] fabricated DSSCs using anthocyanin and chlorophyll natural dye extracts from Roselle (Hibiscus sabdariffa) and green tea leaves (Camellia sinensis). Both pigments were extracted using different alcohol-based solvents, namely ethanol, methanol, and a mixed (ethanol + methanol) system, to identify whether the solvent affects the dye extraction. In their study, the mixed methanol-ethanol solvent showed an efficiency improvement. Comparing the Roselle (anthocyanin) and green tea (chlorophyll) dye extracts shows that Roselle has the higher efficiency and photosensitization performance.

Sensitization time and the number of natural dye coatings are other important factors affecting DSSC performance. For example, in the betanin, indigo, and lawsone solar cell systems, sensitization times of 6, 12, 24, 36, and 48 hours were tested; 24 hours was found to be optimal for the betanin and lawsone solar cells, while 36 hours was optimal for the indigo solar cells. The optimal sensitization time for the best performance of a particular dye depends on its rate of anchoring. Lawsone and betanin have higher dipole moments, favoring the dipole-dipole interaction with TiO2; moreover, they possess more favorable functional groups (-COOH and -OH) than indigo (with =CO groups), which enables a higher rate of anchoring. Akin et al. reported the effects of anchoring groups on the photoinduced electron injection dynamics from natural dye molecules to TiO2 nanoparticles: nine different natural dyes with various anchoring groups were extracted from various plants and used as photosensitizers in DSSC applications.
From these extracts, the anthocyanin bearing long hydroxyl and carbonyl chains, with the maximum electron transfer rate (kET), showed the best photosensitization effect with regard to cell output. Although their performance in DSSCs is somewhat lower than or close to that of the metal complexes, these metal-free natural dyes can be treated as a new generation of sensitizers. Upon illumination, the dyes absorb light: an electron in the HOMO is excited to the LUMO and is then injected into the conduction band of TiO2. Compared with physical adsorption, chemical adsorption is a more effective way to enhance the conversion efficiency; usually, -OH, -COOH, C=O, -SO3H, and -PO3H2 groups in the pigment serve as the effective groups that bond with TiO2 to form a chemical adsorption, as shown in Figure 14. This facilitates the transfer of electrons from the dye to TiO2. Accordingly, Gu et al. reported photoelectric conversion efficiencies in the order η(purple cabbage) > η(orange peel) > η(spinach) > η(pitaya pericarp) > η(ginkgo leaf) > η(carrot), reaching 0.157, 0.071, 0.054, 0.031, 0.030, and 0.010%, respectively, which is attributed to the synergy between the absorptive properties and the molecular structure of the natural dyes [8].

Figure 14: Chemical adsorption between TiO2 films and effective groups [8].

It is reported that increasing the dye layer obstructs the charge transfer from the conduction band of the TiO2 surface to the FTO [69]. For this reason, increasing the number of coatings in the betanin-, indigo-, and lawsone-based solar cell configurations proved detrimental to performance, possibly because of dye aggregation. Doping agents are another factor affecting the efficiency of natural plant pigment-based DSSCs. For example, Bekele et al. [114] reported that TiO2 nanoparticles doped with Mg2+ ions and coated on an FTO glass substrate form a Mg2+-TiO2-FTO photoanode. When this photoanode was immersed in methanol-extracted henna (Lawsonia inermis) leaf dye, its light-to-electricity conversion efficiency increased, with the highest JSC rising from 0.66 to 1.28 mA cm-2, a 93% increase over the undoped TiO2 [117].

As seen in Figure 15(a), the short-circuit photocurrent density JSC increases from 0.23 to 0.38 mA cm-2 when the TiO2 thin film is coated by the doctor blade and spin-coating techniques, respectively. The results indicate that the recombination rate increases in the spin-coated TiO2 thin-film photoanodes. From the J-V curve, the highest JSC and VOC of the DSSCs were 0.38 mA cm-2 and 0.41 V, respectively. The maximum η obtained was 0.13% for the spin-coated TiO2 thin-film electrodes and 0.08% for the doctor blade-coated electrodes. Figure 15(b) shows the power versus potential curves: the maximum power (Pmax) of the spin-coated TiO2 thin-film DSSC was 36.4 μW cm-2, while that of the doctor blade-coated electrode was 23.6 μW cm-2 [100].

Figure 15: Photovoltaic performance of spin-coated (blue) and doctor blade-coated (red) TiO2 anodes photosensitized by jabuticaba fruit extract dye (a), and power versus voltage curves of the corresponding spin-coated and doctor blade-coated TiO2 thin-film DSSCs using natural dyes extracted from the jabuticaba fruit (b) [100].
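To recapitulate the comparisons above, the snippet below simply collects the JSC and η values reported by Gu et al. [8] (copied from the text) and reproduces the two rankings by sorting; it is a bookkeeping aid rather than new analysis.

```python
# Jsc (mA/cm^2) and efficiency (%) values quoted in the text for Gu et al. [8].
jsc = {"purple cabbage": 0.594, "orange peel": 0.325, "spinach": 0.152,
       "ginkgo leaf": 0.111, "pitaya pericarp": 0.100, "carrot": 0.086}
eta = {"purple cabbage": 0.157, "orange peel": 0.071, "spinach": 0.054,
       "pitaya pericarp": 0.031, "ginkgo leaf": 0.030, "carrot": 0.010}

# Sort each metric in descending order and print the resulting ranking.
for label, data in (("Jsc", jsc), ("eta", eta)):
    ranked = sorted(data, key=data.get, reverse=True)
    print(f"{label} ranking:", " > ".join(ranked))
```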
## 6. Computational Studies of Natural Plant Pigment-Based DSSCs

Computational calculations on extracts of natural plant-based pigments provide a valuable reference for predicting precise protocols for the structure-photoelectrical property relationships between molecules, for example using the Gaussian 09 package. Such models identify the essential electronic and structural attributes that quantify the molecular prerequisites of certain classes of natural dye responsible for high power conversion efficiency in DSSCs [118, 119].

In detail, the ground state of the dye structure can be optimized using DFT with the B3LYP functional and a 6-31G(d) basis set, while the excited states are calculated using TD-DFT with different functionals, including CAM-B3LYP, MPW1PW91, and PBEPBE, at the same basis set [92]. To clarify this, Maahury and Martoprawiro [120] performed computational calculations on anthocyanin, which is evaluated as a basic reference biomolecule and used as the main photosensitizer in DSSCs [83]. Their geometry optimizations showed that the structures of anthocyanin compounds are not planar, and their single-point calculations for the excited states showed that the computed absorption wavelength was shorter than the experimental data (a difference of between 7.3% and 8.3%). The calculations used DFT with the B3LYP functional and the 6-31G(d) basis set for ground-state optimization and single-point TD-DFT for the excited states [120].

Ghann et al. [98] reported that computational calculations on delphinidin, an anthocyanin derivative found in pomegranate fruit, gave HOMO and LUMO values of -8.71 eV and -6.27 eV, respectively. This makes effective electron transfer of charge possible from the LUMO of the pigment into the conduction band of TiO2. To support this, the HOMO and LUMO surfaces and their orbital energy diagrams are shown in Figures 16(b) and 16(c), respectively; the blue and red regions represent the positive and negative values of the orbitals, respectively. The regeneration of the dye by the redox electrolyte (I-/I3-) couple increases the lifetime of the dye itself, and the narrow band gap of delphinidin, 2.44 eV, increases the intramolecular electronic transition probabilities [98]. From these and other literature studies, the computational protocol for electronic transitions comprises six main steps: (i) engineering the band gap, (ii) the photoabsorption spectrum of the dyes, (iii) adsorption of the dyes onto the anode surface, (iv) the short-circuit current density JSC, (v) the open-circuit photovoltage VOC, and (vi) the photocurrent-photovoltage curve and fill factor FF [121]. Along these lines, Mohankumar et al. [119] studied twelve novel dye molecules developed from D-π-A-based triphenylamine (TPA) to evaluate their suitability for DSSC applications using DFT and TD-DFT; the effects of flavone and isoflavone on the TPA-based dyes were studied using the B3LYP and CAM-B3LYP density functionals combined with the 6-311G(d,p) basis set.

Figure 16: Working principles of DSSCs with delphinidin (a), LUMO (b), and HOMO surface and orbital energy diagram for delphinidin (c) [98].
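As a minimal sketch of the DFT/TD-DFT workflow described above (a B3LYP ground-state calculation with a 6-31G(d)-class basis followed by TD-DFT excited states), the PySCF example below runs the same sequence on a small stand-in molecule. The geometry, basis alias, and state count are assumptions for illustration; the cited studies used Gaussian 09 and full pigment structures such as anthocyanin or delphinidin.

```python
from pyscf import gto, dft, tdscf

# Small stand-in molecule (formaldehyde); a real pigment geometry would replace it.
mol = gto.M(
    atom="O 0 0 0; C 0 0 1.22; H 0 0.94 1.80; H 0 -0.94 1.80",
    basis="6-31g*",  # the 6-31G(d) basis under its common alias
)

# Ground-state DFT with the B3LYP functional.
mf = dft.RKS(mol)
mf.xc = "b3lyp"
mf.kernel()

# HOMO/LUMO energies (hartree) from the occupied/virtual orbital split.
homo = mf.mo_energy[mf.mo_occ > 0][-1]
lumo = mf.mo_energy[mf.mo_occ == 0][0]
print(f"HOMO = {homo:.4f} Ha, LUMO = {lumo:.4f} Ha, gap = {lumo - homo:.4f} Ha")

# TD-DFT excited states on top of the converged ground state.
td = tdscf.TDDFT(mf)
td.nstates = 5
td.kernel()
print("Excitation energies (eV):", td.e * 27.2114)
```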
Ndiaye et al. [122] studied the experimental and computational behavior of the chrysanthemin (cyanidin 3-glucoside) pigment in DSSCs. The theoretical study of chrysanthemin was performed with GAUSSIAN 09. A better energy-level alignment was found for partially deprotonated chrysanthemin molecules, with the excited photoelectron having enough energy to be transferred to the conduction band of the TiO2 semiconductor in DSSCs. Experimentally, an aqueous extract of Roselle (Hibiscus sabdariffa) calyces was considered as the source of chrysanthemin, and extracts with various pH values were tested in DSSCs. Detailed analysis of the HOMO and LUMO of the cyanidin 3-glucoside molecule deprotonated at positions 1, 2, and 3 and bound to the TiO2 surface shows a large electron density on the deprotonated anchor groups, which favors electron transfer from the excited molecule to the semiconductor, as shown in Figure 17 [122].

Figure 17: Cyanidin 3-glucoside structure (a) and labeling of the deprotonation sites (b) [122].

Analysis of the molecular orbitals showed that the probability distribution of electron density at the HOMO and LUMO levels lies predominantly around the NH and C=O groups of the molecule. The two nonbonding electrons on the N atom participate in the delocalization of the π-electrons of the conjugated systems corresponding to the HOMO energy levels (see Figure 18(a)), and the antibonding π* orbitals arise at the LUMO level of indigo, as shown in Figure 18(b). Photoexcitation of the nonbonding electrons from the electron-donor NH group to the antibonding π* orbital of the electron-acceptor C=O group gives rise to the n→π* electronic transitions. The C=O group helps the pigment anchor to TiO2. The electron density map shown in Figure 18(c) indicates the distribution of charge on the indigo molecule (green and red represent electropositive and electronegative regions, respectively). Because the molecule is symmetric, the net dipole moment is negligible, equal to 0.0053 D [69]. Hypericin, a naphthodianthrone and red-colored anthraquinone derivative, is a photosensitive pigment and one of the principal active constituents of St. John's wort (Hypericum perforatum). This pigment exhibited good adsorption onto the semiconductor surface, a high molar absorption coefficient (43700 L mol-1 cm-1), favorable alignment of the energy levels, and a long electron lifetime (17.8 ms) on the TiO2 photoanode surface [123].

Figure 18: Electron density corresponding to the HOMO energy level (a), LUMO energy level (b), and charge distribution of indigo (c); electron density corresponding to the HOMO energy level (d), LUMO energy level (e), and charge distribution of lawsone (f); and alignment of the energy levels (with respect to vacuum) of the materials with respect to each other (g) [69].

The probability distributions of the electron density corresponding to the HOMO and LUMO levels are located on the benzoid and quinoid moieties, respectively (Figures 18(d) and 18(e)). The first absorption peak, at 338 nm, is primarily caused by HOMO-to-LUMO transitions within the C=C (π→π*) and C=O (n→π*) regions of the lawsone quinoidal ring. The absorption peak in the visible region at 410 nm arises from n→π* transitions localized mainly around the oxygen atom of the quinoidal ring. Figure 18(f) shows the distribution of charge on the lawsone molecule.
In this pigment, the carbonyl carbons of the C=O groups are highly electropositive, and the molecule shows a net dipole moment of 5.78 D; the strong electron-withdrawing nature of the C=O groups anchors the lawsone molecule onto TiO2 [69].

Cosensitizing pigments with complementary absorption spectra broadens the absorption band and is an attractive pathway to enhance the efficiency of DSSCs [124]. Ramirez-Perez et al. [104] reported that a predominance of hydroxyl groups on the aromatic skeleton of anthocyanin gives rise to an intense blue color, while a red color is observed with methoxyl functional groups [104]. In brief, lawsone solar cells displayed better performance, with average efficiencies of 0.311±0.034%, than indigo solar cells, at 0.060±0.004%. The betanin/lawsone cosensitized solar cell accordingly showed a higher average efficiency, of 0.793±0.021%, than the 0.655±0.019% obtained for the betanin/indigo cosensitized solar cell. An 11.7% enhancement in efficiency (with respect to betanin) was observed for the betanin/indigo solar cell, whereas a higher enhancement of 25.5% was observed for the betanin/lawsone solar cell. Impedance spectroscopy confirmed that the higher efficiency can be attributed to the longer electron lifetime, 313.8 ms in the betanin/lawsone cosensitized solar cell compared with 291.4 ms in the betanin/indigo solar cell (Figure 19) [69].

Figure 19: Illustration of the device showing the complementary absorption by the cosensitized pigments of betanin and indigo (a) and betanin and lawsone (b) [69].

In a combined computational analysis and experimental verification of photosensitizers and their efficiency, Liu et al. [92] reported simulated absorption spectra of chlorophyll extracted from six different leaves using ethanol as the solvent. To compare with the experimental results, the excited-state properties of chlorophyll were investigated via the TD-DFT method with different functionals at the 6-31G(d) basis set, based on the optimized ground-state structure of chlorophyll. Chlorophyll was chosen because it accounts for the largest proportion of pigment in green plant leaves. The charge difference density results showed the distribution of charge during the light absorption step (Figure 20). For the sixth excited state, the electron density (red) moves into the semiconductor while the hole resides in the porphyrin ring. Previously, chlorophyll, natural porphyrin, and their derivatives had been studied using DFT approaches to explore their spectroscopic properties and future applications in DSSCs [125]. This state is therefore a charge transfer (CT) state, and a similar CT process is found in the ninth excited state, where greater electron migration into the semiconductor benefits electron transport to the external circuit. The calculated excitation energies for the sixth (S6) and ninth (S9) excited states were 2.6161 eV and 2.9989 eV, respectively, confirming electron transfer into the semiconductor during photoexcitation.

Figure 20: Charge difference density (CDD) for chlorophyll/TiO2: S6 (a) and S9 (b) [92].

In summary, both experimental work and computational modeling (relative energy calculations of the HOMO and LUMO of natural plant pigment extracts) allow the electron injection ability of the extracted plant pigments to be elucidated [126].
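The electron lifetimes quoted above are obtained from impedance spectroscopy; a common estimate uses the peak frequency of the mid-frequency response in the Bode plot via τ = 1/(2π·f_peak). The snippet below applies this standard relation; the f_peak values are back-calculated assumptions chosen to illustrate lifetimes of the magnitude reported in [69], not measured data.

```python
import math

# Electron lifetime from the characteristic peak frequency of the
# mid-frequency impedance arc: tau = 1 / (2 * pi * f_peak).
# f_peak values below are illustrative assumptions, not data from [69].
for label, f_peak_hz in (("betanin/lawsone", 0.507), ("betanin/indigo", 0.546)):
    tau_ms = 1000.0 / (2.0 * math.pi * f_peak_hz)
    print(f"{label}: f_peak = {f_peak_hz} Hz -> tau = {tau_ms:.1f} ms")
```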
These investigations also indicate that when the natural plant pigments are highly aggregated on the TiO2 surface, the DSSCs' performance suffers [127, 128].

## 7. Performance Characterization Parameters in DSSCs

### 7.1. Characterization Using Photocurrent Density-Voltage (J-V)

The general performance of assembled DSSCs can be evaluated by different parameters, such as JSC, η, Vmax, Pmax, VOC, and FF. Bekele et al. [114] reported the synthesis of TiO2 nanoparticles at three different volume ratios (2:3, 1:1, and 3:2) in the presence of Kniphofia schemperi root ethanol extract, serving both as a capping and reducing agent and as a natural sensitizer. The 2:3, 1:1, and 3:2 photoelectrodes show VOC values of 63, 48, and 161 mV, respectively. The 3:2 photoelectrode shows the best VOC of the three, an improvement attributable to enhanced light absorption in the presence of the Kniphofia schemperi root sensitizer and to the improved surface morphology of the photoelectrode. This photoelectrode also provides enhanced efficiency (≈1.30%) compared with the other ratios, owing to its small average crystallite size, which enables it to adsorb more dye molecules on its surface [129]. The corresponding JSC values for the different volume ratios were estimated as 1.29×10−3, 6.05×10−3, and 2.46×10−2 mA/cm2, with resultant FF values of 42, 40.3, and 32.8% for the TiO2 (2:3), TiO2 (1:1), and TiO2 (3:2) photoelectrodes, respectively. TiO2 (3:2) outperforms the other two green-prepared photoelectrodes owing to the better catalytic properties of the photoelectrode, achieved by using less extract during synthesis.

Senthamarai et al. [82] reported the green synthesis of TiO2 nanostructure photoelectrodes prepared using the fruit extracts of pineapple, orange, and grapes as reducing and stabilizing agents for DSSC application, with the fruit skin ethanol extract of Murraya koenigii as the sensitizer. The grape-mediated TiO2 photoelectrode shows the maximum solar cell efficiency (1.78%); in the presence of the Murraya koenigii natural sensitizer, the pineapple- and orange-templated TiO2 photoelectrodes show solar cell efficiencies of 1.61% and 1.52%, respectively. The corresponding VOC values for the grape-TiO2, orange-TiO2, and pineapple-TiO2 photoelectrodes were found to be 0.628, 0.626, and 0.576 V, respectively.

Moreover, reports show that ZnO-TiO2-Fe2O3 nanocomposites were synthesized and used as an alternative photoelectrode in the presence of ethanol-extracted Guizotia scabra and Salvia leucantha flower sensitizers [130]; the obtained conversion efficiencies for the ethanol extracts of Guizotia scabra and Salvia leucantha were estimated at 0.0013% and 0.0017%, respectively. According to Cho et al. [131], the ethanol extract of the mixed sweet potato leaf and blueberry sensitizer at a weight concentration of 40% (1:1 volume ratio) was used to coat the TiO2 photoanode, achieving VOC, JSC, FF, and η of 0.61 V, 4.75 mA/cm2, 53%, and 1.57%, respectively. The individual single sensitizers gave 0.645 V, 1.23 mA/cm2, 49%, and 0.391% for sweet potato leaf, and 0.67 V, 0.532 mA/cm2, 61%, and 0.218% for blueberry flower, respectively.
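As a sketch of how the figures of merit quoted throughout this section (JSC, VOC, Pmax, FF, and η) are extracted from a measured J-V sweep, the snippet below processes a synthetic diode-like curve; the curve shape and the AM1.5G input power of 100 mW/cm2 are assumptions for illustration, not data from the cited reports.

```python
import numpy as np

# Synthetic J-V sweep standing in for measured data (diode-like shape).
v = np.linspace(0.0, 0.55, 200)                                            # voltage, V
j = 1.9 * (1.0 - (np.exp(v / 0.06) - 1.0) / (np.exp(0.55 / 0.06) - 1.0))   # mA/cm^2

p = j * v                      # power density, mW/cm^2
j_sc, v_oc = j[0], v[-1]       # short-circuit current density, open-circuit voltage
p_max = p.max()                # maximum power point, Pmax
ff = p_max / (j_sc * v_oc)     # fill factor
p_in = 100.0                   # assumed AM1.5G input power, mW/cm^2
eta = 100.0 * p_max / p_in     # power conversion efficiency, %

print(f"Jsc = {j_sc:.2f} mA/cm^2, Voc = {v_oc:.2f} V")
print(f"FF = {ff:.2f}, Pmax = {p_max:.3f} mW/cm^2, eta = {eta:.3f} %")
```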
Safie et al. [78] reported on the use of mengkulang and rengas wood sensitizers and their mixed ratios (60:40, 40:60, and 50:50) with a TiO2 nanoparticle photoelectrode. The cell performance (VOC, JSC, FF, and η) was 0.53 V, 0.40 mA/cm2, 75.98%, and 0.16% for the mengkulang and 0.50 V, 0.30 mA/cm2, 72.88%, and 0.11% for the rengas wood sensitizer-based DSSCs, while the mixed sensitizers gave 0.54 V, 0.60 mA/cm2, 63.28%, and 0.21% (50:50); 0.53 V, 0.90 mA/cm2, 61.78%, and 0.29% (40:60); and 0.53 V, 0.90 mA/cm2, 62.16%, and 0.30% (60:40). Figures 21(a)-21(c) display the effect of the photoelectrode on the photovoltaic parameters of DSSCs. Figure 21(a) depicts the J-V curves of the green-synthesized TiO2 NP photoelectrodes at different volume ratios with the ethanolic root extract of Kniphofia schemperi, while Figure 21(b) shows the J-V curves of mengkulang and rengas wood and their mixed sensitizers on the TiO2 nanostructured photoanode. When TiO2 nanoparticles are layered with a graphitic carbon nitride structure to form composites, the energy barrier for electron transport decreases and the injection efficiency of photogenerated electrons from the photoanode improves [132].

Figure 21: J-V curves of the root extract of Kniphofia schemperi (a), mengkulang and rengas wood and their mixed sensitizers (b), and cells using different solvents (c) [methanol (i), ethanol (ii), and acetone (iii)] extracted from Costus woodsonii leaves [78, 114, 133].

Najihah and Tan [133] reported methanol, ethanol, and acetone extracts of Costus woodsonii leaves as natural sensitizers for DSSCs. The short-circuit current density, open-circuit voltage, fill factor, and efficiency of the methanol-extracted Costus woodsonii leaf sensitizer-based DSSCs are 0.63 mA/cm2, 0.60 V, 0.61, and 0.23%, respectively; for the ethanol extract, 0.85 mA/cm2, 0.63 V, 0.69, and 0.37%; and, as seen in Figure 21(c), for the acetone extract, 1.35 mA/cm2, 0.57 V, 0.62, and 0.48%, making acetone the best-performing solvent. The report demonstrates the effect of the solvent and, in turn, its influence on the various photovoltaic parameters: the light absorption capability of the sensitizer-coated photoelectrode is directly correlated with the concentration of the natural dye in the extraction solvent [78, 133]. Among the three solvents, the acetone-extracted sensitizer-based DSSCs provide better efficiency than their counterparts.

### 7.2. Characterization Using Incident Photon to Current Conversion Efficiency (IPCE)

The IPCE, also known as the external quantum efficiency, is the percentage of incident photons converted to electric current (collected charge carriers) when the device is operated at short circuit. As reported by Al-Alwani et al. [134], the IPCE of DSSCs depends on the nature of the photoanode as well as the light absorption capacity of the natural sensitizers, as shown in Figures 22(a) and 22(b).
As shown in Figure 22(b), the IPCE of the Kniphofia foliosa ethanolic root extract with green-synthesized TiO2 photoelectrodes at various volume ratios (2:3, 1:1, and 3:2) varied with the volume ratio of the photoanode. The maximum IPCE (≈8.11%) was obtained with the TiO2 photoelectrode formed at the 3:2 volume ratio, located at a wavelength of 340 nm, while the TiO2 (1:1) photoelectrode provides an IPCE of 2.66%, occurring at around 500 nm.

Figure 22: IPCE% curves of Pandanus amaryllifolius leaf (a)- and Kniphofia foliosa root extract (b)-based DSSCs [114, 134].

DSSCs are an efficient photovoltaic technology for wireless sensors and indoor light owing to their low cost and the natural abundance of their materials. Kokkonen et al. [32] have reviewed the possible scaling up of fabrication methods to the industrial manufacturing level for high stability and high photovoltaic efficiency under typical indoor conditions. A significant research effort has been invested in exploring this new generation of photovoltaic devices as alternatives to traditional silicon- (Si-) based solar cells [135], and the efficiency of DSSCs and the associated research challenges have been reviewed in detail [136]. To address such problems, newly discovered materials, such as 2D materials and high-selectivity catalysts, have emerged as promising candidates, identified through machine learning data-driven approaches [137]. Indoor solar cells in particular have a strong positive influence on the ecology of the Internet of Things (IoT), which comprises communication devices, actuators, and remote and distributed sensors. Smart IoT sensors have the potential to perform control functions and mass monitoring driven by an indoor power-gathering system [138, 139].
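The IPCE definition in Section 7.2 maps onto the standard working relation IPCE(%) = 1240 · JSC / (λ · Pin), with JSC in mA/cm2, λ in nm, and Pin in mW/cm2. The snippet below applies it to assumed monochromatic values for illustration; the numbers are not taken from the cited measurements.

```python
# IPCE from the standard relation IPCE(%) = 1240 * Jsc / (lambda * Pin).
wavelength_nm = 500.0   # monochromatic illumination wavelength, nm
p_in = 0.80             # incident power density at this wavelength, mW/cm^2 (assumed)
j_sc = 0.012            # measured short-circuit current density, mA/cm^2 (assumed)

ipce = 100.0 * 1240.0 * j_sc / (wavelength_nm * p_in)
print(f"IPCE at {wavelength_nm:.0f} nm = {ipce:.1f} %")
```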
## 8. Conclusion and Future Outlook

### 8.1. Conclusion

It has been observed from this review that DSSCs and their various components have been fabricated via different protocols. The counterelectrode (CE) and the photoanode of DSSCs have been fabricated via various chemical and green methods. Even though the electrodes prepared via chemical methods provide higher efficiency than those prepared via green methods, the electrodes fabricated via green techniques are cost-effective and environmentally friendly and provide a high surface-area-to-volume ratio, which makes them well suited to harvesting much of the sun's light on their surface. Using the numerous green and medicinal parts of natural plants, such as leaves, roots, stems, barks, and flowers, as photosensitizers is the most preferable option.

### 8.2. Future Outlook

To achieve real utilization of natural pigment-based conversion of solar light energy to electricity in the near future, the scientific community should address and solve the problem of the low efficiency of DSSCs. This could be achieved by improving the major components of the device via numerous modification protocols. To articulate a best-performing device and enhance the efficiency, the device must be assembled with these parameters in mind, enabling enhanced cell performance and efficiency.
--- ## Abstract Production of green energy by using environment friendly and cost-effective components is attracting the attention of the research world and is found to be a promising approach to replace nonrenewable energy sources. Among the green energy sources, dye-sensitized solar cells (DSSCs) are found to be the most alternative way to reduce the energy demand crises in current situation. The efficiency of DSSCs is dependent on numerous factors such as the solvent used for dye extraction, anode and cathode electrodes, and the thickness of the film, electrolyte, dye, and nature of FTO/ITO glasses. The efficiency of synthetic dye-based DSSCs is enhanced as compared to their counterparts. However, it has been found that many of the synthetic sensitizers used in DSSCs are toxic, and some of them are found to cause carcinogenicity in nature by forming a complex agent. Instead, using various parts of green plants such as leaves, roots, steam, peel waste, flowers, various spices, and mixtures of them would be a highly environmentally friendly and good efficient. The present review focuses on and summarizes the efficiency affecting factors, the various categories of natural sensitizers, and solvent effects. Furthermore, the review work assesses the experimentally and computationally obtained values and their progress in development. --- ## Body ## 1. Introduction Energy is an important and basic pillar of life activities across the globe. Basically, it exists in different forms, from burning woods to obtain fire in prehistoric times to producing electricity in modern society. But it has been found that the original sources of energy that people used to harvest in their day-to-day activities have shown signs of deficiency due to the rapid growth in industrialization, overgrowth in population size, the advancement of infrastructure, and humans’ basic need improvement [1]. Due to uprising concerns about the energy crisis, climate change, shortages in fossil fuels, and current environmental issues are motivating the researcher to focus on clean, sustainable, and renewable energy resources that will help to predict future sustainability. Previously, it has been reported that energy produced from nonrenewable products had contributed to almost one-third of global greenhouse gas emissions [2, 3].Nonrenewable energy products from fossil fuels, petroleum liquids, coal, and natural gas have been considered as the dominant source of energy production for the world economy in the past century. However, in view of nonrenewable energy sources such as the fossil fuel crisis, the rising per barrel cost of crude oil, and the rejection of pollution-causing energy sources, sustainable forms of energy are becoming the center of attention worldwide [4–6]. ### 1.1. Renewable Sources of Energy The common conventional sources of energy based on oil, coal, and natural gas have proven to be highly efficient and effective product for economic progress but they might result in damaging the natural ecosystem and human health too. The fast depletion of fossil fuels and climate change issues and renewable energy sourced from solar, wind, geothermal, hydroelectric, biomass, nuclear power, and tide are some of the examples that are available in common throughout the world [7]. Renewable energy sources can provide sustainable energy service based on the use of routinely available starting materials and indigenous resources, which is available within the natural environment with minimum cost [8–10]. 
Solar energy provides a clean, renewable, and cheaper energy source for the human race. These concepts were supported by the world’s energy consumption history projections covering the years from 1990 to 2040. From this projection, more than 60% of energy is dominated by solar energy in the years between 2090 and 2100 [11]. So, Figure 1 describes some common categories of both renewable and nonrenewable energy sources.Figure 1 Education chart of (a) renewable and (b) nonrenewable sources of energy diagram (copyright: Vecton, image ID: 98899382, media type: stock photo). (a)(b)Energy from sunlight is capable of producing heat and light, causes photochemical reactions, and generates electricity. When sunlight strikes the earth’s surface, it provides 3.8 million EJ of energy a year (i.e., collecting the total solar energy in one hour would satisfy human energy demand for one year) [12]. Light energy conversion applications are divided into three categories: photovoltaics for direct conversion of sunlight into electricity, as well as concentrating solar power systems and solar thermal collectors, employing solar thermal energy. All of this energy was modeled in an attempt to produce an efficient power output [11]. Photovoltaic (PV) technology has sparked a lot of interest due to the benefits of lower manufacturing costs and environmental safety. Its panel efficiency strongly depends on the surface temperature of the cell, and its efficiency decreases with increasing temperature [13].Up to now, commercially available PV technologies are based on inorganic materials, which require high costs and highly energy-consuming preparation methods. In addition, several of those materials are toxic and have low natural abundance. Organic PV can be used to avoid those problems. However, the efficiencies of organic-based PV cells are still, at the moment, a long way behind those obtained with purely inorganic-based PV technologies. Hence, the benefit and significance of solar energy is that sunlight can be directly harvested into solar energy with the use of small and tiny PV solar cells [14]. Among silicon-based polymer, quantum dots, dye-sensitized solar cells, and perovskites are some of them and attract the researchers worldwide [15, 16]. Figure 2 demonstrates the classification of solar cells based on the materials on which they are made.Figure 2 Category of solar cells and current trends of development [17]. ## 1.1. Renewable Sources of Energy The common conventional sources of energy based on oil, coal, and natural gas have proven to be highly efficient and effective product for economic progress but they might result in damaging the natural ecosystem and human health too. The fast depletion of fossil fuels and climate change issues and renewable energy sourced from solar, wind, geothermal, hydroelectric, biomass, nuclear power, and tide are some of the examples that are available in common throughout the world [7]. Renewable energy sources can provide sustainable energy service based on the use of routinely available starting materials and indigenous resources, which is available within the natural environment with minimum cost [8–10]. Solar energy provides a clean, renewable, and cheaper energy source for the human race. These concepts were supported by the world’s energy consumption history projections covering the years from 1990 to 2040. From this projection, more than 60% of energy is dominated by solar energy in the years between 2090 and 2100 [11]. 
## 2. Dye-Sensitized Solar Cells (DSSCs)

Before dye-sensitized solar cells (DSSCs), silicon-based solar cells were the most popular and dominant photovoltaic technology [18]; these solid-state junction devices dominated the photovoltaic industry. O'Regan and Grätzel [19] developed DSSCs, a new type of third-generation solar cell, in 1991. DSSCs are also known as a green alternative energy technology owing to their potential applications and cost-effectiveness. Moreover, their fabrication involves green alternative solvents and does not pollute the natural environment with, for example, greenhouse gases, and the underlying resource is more uniformly distributed than other forms of energy. Belonging to the thin-film solar cell group, DSSCs are among the most practical technologies of this generation owing to their long-term stability and environmental friendliness [19, 20]. DSSCs are thus considered one of the most promising next-generation devices for meeting future energy demand and contributing to environmental remediation.
The basic components of a DSSC include a porous semiconductor loaded with sensitizer on a glass substrate (FTO/ITO), a redox couple electrolyte, and a counterelectrode [21, 22]. One modification of the working electrode reported by previous researchers is the addition of metals such as Cr, Zr, Ni, Fe, Cu, and Ag and metal oxides such as CuO, ZnO, and TiO2 to the semiconductor [23, 24]. In addition, a number of factors limit the performance of DSSCs: absorption of a large fraction of the incident solar light by the photoactive layer of the dye-sensitized photoanode, the use of wide light absorption bands, and cosensitization of the photoanode are important for achieving a high-performance, efficient device. Ragoussi and Torres also reported the molecular orbital levels, absorption coefficients, morphology of the layers, and molecular diffusion lengths as the other main factors affecting certified power conversion efficiency [25].

Figures 3(a) and 3(b) illustrate the fundamental operating principle of DSSCs and the resulting charge transfer between the sensitizer and the photoelectrode system. As described in Figure 3(a), sensitizers may achieve charge injection into the photoelectrode by both direct and indirect sensitization, as shown in Figure 3(b), rendering the response more panchromatic [26]; this illustrates the charge collection properties of DSSCs, which in turn alter the photocurrent density, photovoltage, and solar energy conversion efficiency when the cell size is optimized without affecting environmental safety [27]. Although not yet realized in practice, a purely direct sensitization protocol would be desirable, since it could increase DSSC efficiency by eliminating the electron-injection overpotential, i.e., the energy lost to thermalization from the excited state of the dye (D∗) in the indirect process [28].

Figure 3 Schematic representation of the components and basic operating principle of DSSCs [29] (a) and schematic illustration of two similar types of visible light sensitization of TiO2 (b) [28]. (i) Dye sensitization (indirect): (1) excitation of the dye by visible light absorption, (2) electron transfer from the excited state of the dye to the TiO2 CB, (3) recombination, (4) electron transfer to the acceptor, and (5) regeneration of the sensitizer by an electron donor. (ii) Ligand-to-metal charge transfer (LMCT) sensitization (direct): (1) visible light-induced LMCT, (2) recombination, (3) electron transfer to the acceptor, and (4) regeneration of adsorbates by an electron donor. S, D, and A represent the sensitizer (or adsorbate), electron donor, and electron acceptor, respectively (S0: ground state; S∗ and S1: excited states of the sensitizer/adsorbate).

Much scientific work has previously been reported on the fabrication and assembly of DSSCs from their various components, such as the working electrode, counterelectrode, and sensitizers/dyes (natural, metal-free organic, and metal complex/inorganic dyes), and on the corresponding cell performance [30-34]. This interest stems from their good performance under diffuse solar irradiation, low-cost manufacturing, simple fabrication, and environmentally friendly materials.
However, the low energy conversion efficiency and short-term operational stability remain a challenge for commercial use compared with silicon-based solar cells, which have already been commercialized [35]. To the knowledge of the authors, there is no scientific report that summarizes the effects of the various components of the cell. The novelty of this review is its focus on the different types of natural products as green, environmentally friendly, and cost-effective sensitizers for DSSCs. Furthermore, the work summarizes the effect of the solvent on dye extraction and the effect of the different parts of plants, such as roots, flowers, stems, and leaves, which contain various bioactive photosensitive molecules [24, 36]. Hence, the aim of this review is to examine the parameters affecting performance, to present recently achieved solar cell efficiencies, to propose future scientific directions for the use of DSSCs in household applications, and to assess how to optimize the device for industrialization and large-scale commercial production.

## 3. Basic Elements of DSSCs

In the old generation of photoelectrochemical solar cells (PSCs), photoelectrodes were fabricated from bulk semiconductor materials such as Si, GaAs, or CdS. However, such photoelectrodes are strongly affected by photocorrosion, which results in poor stability of the photoelectrochemical cell [37]. Instead, sensitized wide band gap semiconductors derived from metal oxides such as TiO2, ZnO, and niobium oxide, as well as carbon materials, bilayer assemblies, and their composites, have been used as the major photoelectrodes in DSSCs, as shown in Figure 4 [38, 39]. The following sections discuss the basic structure of DSSCs, which, unlike conventional silicon-based solar cells, is composed of distinct layers: the photoanode/photoelectrode/working electrode (WE), the counterelectrode (CE), the electrolyte, and the sensitizer (synthetic/complex or natural dyes) [35, 38, 40, 41].

Figure 4 Band positions of the most common semiconductors [38].

### 3.1. Photoanode

The WE is the most important component, with the function of absorbing radiation. Typically, the electrode consists of a dye-sensitized layer of nanocrystalline semiconductor metal oxide with a wide band gap, transparent enough to pass light to the sensitizer. Many semiconductor materials, in either nano or bulk form, such as TiO2, Al@TiO2, TiO2-Fe, ZnO, SiO2, Zn2SnO4, CeO2, WO3, SrTiO3, and Nb2O5, have been used as scaffold materials in DSSCs [26]. However, reports show that TiO2 and ZnO and their composite/doped photoelectrodes are the most common photoanode materials owing to their wide band gap, low cost, nontoxicity, accessibility, and photoelectrochemical stability. Their preparation methods are simple and achievable with environmentally friendly materials in the presence of green solvents, and they show promisingly high efficiency compared to their counterparts. TiO2 has recently become the most popular metal oxide semiconductor in DSSCs, followed by ZnO and SnO2 [38]. Rajendhiran et al. [42] reported green-synthesized TiO2 nanoparticles prepared by the sol-gel method using Plectranthus amboinicus leaf extract and coated onto an ITO substrate by the doctor blade approach.
The assembled DSSCs exhibited a higher solar-to-electrical energy conversion efficiency, reaching 1.3% with a Rose Bengal organic dye sensitizer, owing to the surface modification of the synthesized nanoparticles. In addition, substrates such as fluorine-doped tin oxide (FTO) and indium-doped tin oxide (ITO) support the TiO2 in the photoanode [9]; owing to the scarcity, rigidity, and brittleness of ITO, low-cost FTO and graphene have been chosen as alternatives, since their characteristic structural defects and rough surfaces help solve problems of short circuits and leakage currents [39, 43].

Low and Lai [44] designed an efficient photoanode from reduced graphene oxide- (rGO-) decorated TiO2 materials. The UV-Vis diffuse reflectance spectra show varying absorption with increasing deposition duration of TiO2 on rGO in the ultraviolet (200-400 nm) and visible (400-700 nm) regions. From the observed spectra, rGO/TiO2 samples with a spinning duration of 30 seconds exhibited optimal light-absorbing ability, as shown in Figure 5(a). As shown in Figure 5(b), the photovoltaic parameters JSC, VOC, FF, and η depended on the spinning duration. The efficiency (η) of the assembled DSSCs increased from 4.74% to 9.98% as the spinning duration increased from 10 to 30 seconds. This is attributed to the modified surface area and photoelectrochemical stability, which allow more dye molecules to adsorb and hence more light to be absorbed on the surface of the composite photoanode. At the optimized spinning duration of 30 seconds, the device achieved its best performance: JSC of 22.01 mA cm-2, VOC of 0.79 V, FF of 57%, and η of 9.98%.

Figure 5 UV-Vis diffuse reflectance spectra (a) and J-V curves of photoanodes based on pure rGO and rGO/TiO2 with spinning durations of 10, 20, 30, 40, and 50 s (b) [44].

Furthermore, Gao et al. [45] developed a nitrogen-doped TiO2/graphene nanofiber (G-T-N) as an alternative green photoelectrode and evaluated the photovoltaic parameters of the device, as shown in Figure 6. While nitrogen doping can prevent the in situ recombination of electron-hole pairs, graphene doping increases the surface area of the TiO2 fibers and the number of dye adsorption sites, so that more electrons are injected into the semiconductor conduction band from the excited state of the dye, thereby improving the photoelectric conversion efficiency. The open circuit voltage (VOC), short circuit current density (JSC), fill factor (FF), and η for the TiO2/graphene and N@TiO2/graphene nanofiber photoelectrode-based DSSCs were 0.66 V, 17.48 mA cm-2, 0.35, and 3.97% and 0.71 V, 15.38 mA cm-2, 0.46, and 5.01%, respectively [46].

Figure 6 IPCE curves (a) and J-V curves of DSSCs with different photoanodes, with the corresponding illustration (b) [45].

### 3.2. Counterelectrode

The counterelectrode (CE, cathode) is where the redox mediator is reduced. It collects electrons from the external circuit and injects them into the electrolyte to catalyze the reduction of I3− to I− in the redox couple for dye regeneration [47]. The primary function of the CE in the DSSC system is to act as a catalyst that completes the process: the oxidized redox couple is reduced by accepting electrons at the surface of the CE, and the oxidized dye is in turn regenerated by collecting electrons via the ionic transport material.
The second role of the CE in DSSCs is to act as the positive electrode, collecting electrons from the external circuit and transmitting them into the cell. Additionally, the CE can act as a mirror, reflecting unabsorbed light back into the cell to enhance the utilization of sunlight [40].

The most commonly used CE material is Pt on a conductive ITO or FTO substrate, owing to its excellent electrocatalytic activity for I3− reduction, high electrical conductivity for efficient electron transport, and high electrochemical stability in the electrolyte system. Most research therefore uses expensive platinum as the CE, but this limits large-scale production. To address this limitation, several materials derived from inorganic compounds, carbonaceous materials, and conductive organic polymers have been investigated as potential alternatives to replace or modify Pt-based cathodes in DSSCs; for example, less expensive copper has been used as the CE in large-scale industrial applications [39]. Huang et al. [48] produced biochar from lotus leaf by one-step pyrolysis as a flexible CE to replace platinum. In their study, with the same photoanode and lotus leaf extract as the photosensitizer, a maximum power conversion efficiency (PCE) of 0.15% was obtained with the biochar CE, compared with 0.36% for the platinum reference; in the same manner, a PCE of 0.13% was obtained when graphite was used as the CE with the lotus leaf extract photosensitizer-modified TiO2-FTO photoanode [48]. It can be concluded that graphite presents feasible potential as an alternative to platinum owing to its affordable cost and performance, having given more than 0.0385% efficiency relative to FTO glass and platinum when using a Strobilanthes cusia photosensitizer [49].

Kumar et al. [50] designed and fabricated a new cost-effective, high-performance CE using a carbon material produced with the organic ligand 2-methyl-8-hydroxyquinolinol (Mq). The carbon-derived Mq CE-based DSSCs showed a short circuit current density of 11.00 mA cm-2, a fill factor of 0.51, and an open circuit voltage of 0.75 V, with a conversion efficiency of 4.25%. A reference Pt CE provided a short circuit current density (Jsc) of 12.40 mA cm-2, a fill factor of 0.68, and an open circuit voltage of 0.69 V, with a conversion efficiency of ≈5.86%.
As supported by Figure 7(a), the lower cell performance of the carbon-derived Mq CE can be attributed to strong electrostatic interactions between the carbon atoms and I− or I3−, with a higher concentration of mediator anions near the carbon surface, which increases the regeneration and recombination rates [50]. Owing to their low surface area, low stability, and low catalytic activity, single-material CEs give lower device performance than composite and doped CEs. To improve on this, Younas et al. [51] prepared a high-mesopore carbon-titanium oxide composite CE (HMC-TiO2) for the first time and investigated the cell photovoltaic parameters: a Jsc of 16.1 mA cm-2, an FF of 68%, and a VOC of 0.808 V, giving a conversion efficiency η of ≈8.77%. As these parameters show, the HMC-TiO2 composites display high electrocatalytic activity and can be regarded as a promising CE. In addition, Song et al. [52] discussed the role of iron pyrite (FeS2), prepared in the presence and absence of basic NaOH solution, as one of the most promising counterelectrode materials for DSSCs. FeS2 CE-based DSSCs without NaOH addition provided a PCE of 4.76%, with a JSC of 10.20 mA cm-2, a VOC of 0.70 V, and an FF of 0.66. With NaOH, the FeS2 CE-based DSSCs had a JSC of 12.08 mA cm-2, a VOC of 0.74 V, an FF of 0.64, and a PCE of 5.78%. As a control, a Pt CE was also investigated and showed a JSC of 11.58 mA cm-2, a VOC of 0.74 V, an FF of 0.69, and a resulting PCE of 5.93%. The improvement in the photovoltaic parameters of the DSSCs, shown in Figure 7(b), indicates that more electrons are generated in the device in the presence of NaOH, consistent with the higher JSC [53].

Figure 7 J-V spectra of HMC-TiO2 (a) and FeS2 (A: without NaOH, B: with NaOH, and C: Pt CE-based DSSCs) (b) [50, 52].

### 3.3. Electrolyte

A good electrolyte should have high electrical and ionic conductivity, good interfacial contact with the nanocrystalline semiconductor and the counterelectrode, no tendency to degrade the dye molecules, transparency to visible light, noncorrosiveness toward the counterelectrode, high thermal and electrochemical stability, a high diffusion coefficient, low vapor pressure, appropriate viscosity, and ease of sealing, all without suppressing charge carrier transport [54, 55]. Liquid electrolytes, solid-state electrolytes, quasisolid electrolytes [30], and water-based electrolytes [56] are the common redox mediators (electrolytes) found in DSSCs. Liquid electrolytes may be organic (comprising a redox couple, a solvent, and additives) or ionic liquids. Quasisolid electrolytes are good candidates for DSSCs owing to their good efficiency and durability, high ionic conductivity, long-term stability, and excellent interfacial contact, comparable to liquid electrolytes. The most important components are redox couples such as I−/I3−, Br−/Br3−, SCN−/(SCN)2, Fe(CN)6 3−/4−, SeCN−/(SeCN)2, and substituted bipyridyl cobalt (III/II) [57], which are directly linked to the VOC of DSSCs.

Owing to its good solubility, fast dye regeneration, low light absorption in the visible region, appropriate redox potential, and very slow recombination between the injected electrons in the nanocrystalline semiconductor and I3−, I−/I3− is the most popular redox couple electrolyte [40]. A good solvent is responsible for the diffusion and dissolution of the I−/I3− ions; common solvents include acetonitrile, ethylene carbonate, propylene carbonate, 3-methoxypropionitrile, and N-methylpyrrolidone. A solvent with a high donor number can increase the VOC and decrease the JSC by lowering the concentration of I3−; the lower I3− concentration slows the recombination rate and thereby increases the VOC. The second type of liquid electrolyte is the ionic liquid electrolyte, based on cations such as pyridinium and imidazolium and anions from the halide or pseudohalide family. These electrolytes show high ionic conductivity, nonvolatility, good chemical and thermal stability at room temperature, and negligible vapor pressure, which are favorable for efficient DSSCs [30, 58]. DSSCs are nevertheless affected by the evaporation and leakage associated with liquid electrolytes.
To overcome these drawbacks, novel solid or quasisolid-state electrolytes, such as hole transport materials, p-type semiconductors, and polymer-based gel electrolytes, have been developed as potential alternatives to volatile liquid electrolytes [59].

A quasisolid electrolyte is a means of solving the poor contact between the photoanode and the hole-transfer material found in solid-state electrolytes. This electrolyte is a composite of a polymer and a liquid electrolyte that can penetrate the photoanode to make good contact. It offers better stability, high electrical conductivity, and, in particular, good interfacial contact compared to the other electrolyte types. However, the quasisolid electrolyte has one particular disadvantage: it depends strongly on the working temperature of the solar cell, and high temperatures cause a phase transformation from the gel state to the solution state. Selvanathan et al. [60] used starch and cellulose derivative polymers as quasisolid electrolytes, contributing to an optimized efficiency of 5.20% [60].

Saaid et al. [61] prepared a quasisolid-state polymer electrolyte by incorporating poly(vinylidene fluoride-co-hexafluoropropylene) (PVdF-HFP) into a propylene carbonate (PC)/1,2-dimethoxyethane (DME)/1-methyl-3-propylimidazolium iodide (MPII) system and examined the dependence of the photovoltaic parameters on the fabricated electrolyte, shown in Figure 8(a). The cell photovoltaic parameters depended on the amount of PVdF-HFP polymer added. Without PVdF-HFP, the Jsc, VOC, FF, and η were 11.24 mA cm-2, 619 mV, 70%, and 4.88%, respectively. After the addition of 0.1, 0.2, 0.3, and 0.4 g of PVdF-HFP, the cells gave Jsc values of 9.53, 9.53, 7.54, and 6.57 mA cm-2, VOC values of 638, 641, 679, and 684 mV, FF values of 67, 66, 64, and 61%, and η values of 4.09, 3.70, 3.27, and 2.73%, respectively. This demonstrates that as the amount of polymer in the cell increases, the performance of the cell gradually decreases. Moreover, Lim et al. [62] designed new quasisolid-state electrolytes using coal fly ash-derived zeolite-X and zeolite-A, as shown in Figure 8(b), achieving a VOC of 0.74 V, a JSC of 13.7 mA/cm2, and an FF of 60% with an η of 6.0%, and 0.73 V, 11.4 mA/cm2, and 60% FF, respectively. The zeolite-X&AF quasisolid-state electrolyte-based DSSCs, however, showed a VOC of 0.72 V, a JSC of 11.1 mA/cm2, an FF of 61%, and an η of 4.8%. The enhancement of the cell photovoltaic parameters in the case of the zeolite-XF12 quasisolid-state electrolyte can be attributed to its high crystallinity, high light harvesting efficiency, reduced resistance at the photoanode/electrolyte interface, and decreased charge recombination rate. As a control, a nanogel polymer electrolyte gave 0.66 V, 10.3 mA/cm2, an FF of 56%, and an η of 3.8%.

Figure 8 J-V curves of PVdF-HFP/PC/DME/MPII with (A) 0.0, (B) 0.1, (C) 0.2, (D) 0.3, and (E) 0.4 g of PVdF-HFP (a) and photocurrent density-photovoltage curves of DSSCs based on nanogel and quasisolid-state electrolytes under 1 sun illumination (AM 1.5G, 100 mW cm-2) (b) [48, 49].

### 3.4. Photosensitizer

Other core components of DSSCs are the photosensitizers, which play a major role in the absorption of solar photons.
That is, dyes harvest the incoming light and inject the photoexcited electrons into the conduction band of the semiconducting material, converting solar energy to electrical energy [63, 64]. This enables renewable power systems with sustainable power management and a reliable, stable network output power distribution [58]. The dye is chemically bonded to the porous surface of the semiconductor material and determines the efficiency and overall performance of the device [47]. Various organic dyes, polymer dyes, and natural dyes have been reported, with considerable cost-effective potential for industrialization [11]. To be effective, a photosensitizer should have a broad and intense absorption spectrum covering the entire visible region, high adsorption affinity to the surface of the semiconducting layer, excellent stability in its oxidized form, low cost, and low threat to the environment. Furthermore, its LUMO level (excited state level) must be higher in energy than the conduction band edge of the semiconductor for efficient electron injection into the conduction band, and its HOMO level (oxidized state level) must be lower in energy than the redox potential of the electrolyte to promote dye regeneration.

The most commonly used photosensitizers fall into three groups: metal complex sensitizers, metal-free organic sensitizers [65], and natural sensitizers. Metal complex sensitizers provide relatively high efficiency and stability because they possess both anchoring and ancillary ligands, and modifications of these two ligands to improve solar cell performance have been reported; these ligands facilitate charge transfer via metal-to-ligand bonds. The principal metal complex photosensitizers are ruthenium-based complexes and their cosensitized configurations [56], owing to their wide absorption range from the visible to the near-infrared region, which gives them superior photon harvesting properties. However, these complexes require multistep synthesis reactions (long synthesis and purification steps), and they contain a heavy metal that is expensive (requiring high production costs), scarce, and toxic. These problems can be overcome by applying metal-free organic dyes in DSSCs instead of metal complex sensitizers. Trihutomo et al. [66] explained that natural dye photosensitizers give lower efficiency than silicon solar cells because of the barrier to electron transfer in the TiO2 semiconductor layer.

A donor-acceptor-substituted conjugated bridge (D-π-A) architecture is used in the design of metal-free organic sensitizers [30, 47]. The properties of a sensitizer vary with the electron-donating ability of the donor part, the electron-accepting ability of the acceptor part, and the electronic characteristics of the π bridge. At present, most of the π-bridge conjugated parts in organic sensitizers are based on oligoene, coumarin, oligothiophene, fluorene, and phenoxazine. The donor part has been synthesized with a dialkylamine or diphenylamine moiety, while a carboxylic acid, cyanoacrylic acid, or rhodanine-3-acetic acid moiety is used for the acceptor part.
As shown in Figure 9 [30], the sensitizer anchors onto the porous network of nanocrystalline TiO2 particles via the acceptor part of the dye molecule. However, metal-free organic sensitizers (organic dyes) have the following disadvantages: strong π-stacked aggregation between D-π-A dye molecules on semiconductor surfaces, which reduces the electron-injection yield from the dyes into the conduction band of the nanocrystalline semiconductor; narrower absorption bands than metal-based sensitizers, which reduces light absorption capability; and low stability, since the sensitizer tends to decay with time, limiting the anode lifetime [30, 67].

Figure 9 Designed structure of a metal-free organic dye [30].

In brief, the 3.2 eV band gap of the TiO2 semiconductor means that it absorbs ultraviolet light (its absorption of visible light is weak). Natural dyes therefore increase the overall sunlight absorption of DSSCs [68]. The light absorption efficiency of TiO2 is also enhanced by cosensitization, which enables better light harvesting across the solar spectrum [69]. Ananthakumar et al. [70] reviewed the energy transfer process from donor to acceptor through the Förster resonance energy transfer (FRET) process for improved absorption. Cosensitization is effectively achieved through the FRET mechanism, in which dipole-dipole coupling of two chromophoric components occurs via an electric field. In this process, absorption of light causes molecular excitation of the donor, and the donor molecule transfers its excitation energy nonradiatively to a nearby acceptor molecule with lower excitation energy through an exchange of virtual photons, as shown in Figure 10 [70].

Figure 10 Schematic diagram of the FRET process (a) and its mechanism (b) [70].

To solve the problems found in both metal complex and metal-free organic dyes, researchers have focused on natural plant pigment-based photosensitizers [71]. Metal-free natural dyes (natural pigments) from plant sources such as fruits, roots, flowers, leaves, and wood, as well as algal and bacterial pigments [72, 73], together with their organic derivatives, have attracted considerable research interest owing to their low cost, simple preparation, natural abundance, nontoxicity, and high molar absorption coefficients [35, 74].
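To make the FRET mechanism described above concrete, the sketch below evaluates the standard Förster expression for the transfer efficiency, E = R0^6/(R0^6 + r^6), where r is the donor-acceptor separation and R0 is the Förster radius; the R0 of 5 nm used here is a placeholder typical of organic chromophore pairs, not a value from the works cited above.

```python
def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    """Foerster transfer efficiency E = R0^6 / (R0^6 + r^6).

    r_nm:  donor-acceptor separation (nm)
    r0_nm: Foerster radius, the distance at which E = 0.5;
           5 nm is a placeholder assumption for illustration.
    """
    return r0_nm**6 / (r0_nm**6 + r_nm**6)

# The 1/r^6 dependence of the dipole-dipole coupling makes transfer
# nearly complete well inside R0 and negligible beyond roughly 2*R0.
for r in (2.0, 5.0, 10.0):
    print(f"r = {r:4.1f} nm -> E = {fret_efficiency(r):.3f}")
```

This steep distance dependence is why cosensitized donor and acceptor dyes must be adsorbed in close proximity on the semiconductor surface for FRET-assisted light harvesting to contribute appreciably.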
An efficient photosensitizer for DSSCs should satisfy several essential requirements [64]:

(i) A high molar extinction coefficient and panchromatic light absorption extending from the visible to the near-infrared.

(ii) Functional groups that anchor the dye to the semiconductor surface; the anthocyanin pigment from Eleiodoxa conferta and Garcinia atroviridis fruit, for example, contains hydroxyl and carboxylic groups that can effectively attach to the surface of a TiO2 film [75].

(iii) Good HOMO/LUMO energy alignment with respect to the redox couple and the conduction band level of the semiconductor, allowing efficient charge injection into the semiconductor and, simultaneously, efficient regeneration of the oxidized dye (a minimal check of this condition is sketched later in this section).

(iv) An electron transfer rate from the dye sensitizer to the semiconductor that is faster than the decay rate of the photosensitizer.

(v) Stability under solar light illumination and continuous light soaking [76-78].

It is important to note that stable natural plant pigments extracted with effective solvents can absorb a broad range of visible light [79, 80]; the two most significant drawbacks of DSSCs are their narrow spectral response and short-term stability. Therefore, in this review, natural plant pigments extracted from different plant parts, such as leaves, roots, stems, barks, peel waste, flowers, various spices, and mixtures of them, with various solvents are discussed, together with their stability and the relevant experimental factors.

#### 3.4.1. Natural Plant Pigment Photosensitizers in DSSCs

The highest efficiency recorded for DSSCs was about 12%, achieved with Ru(II) dyes whose material and structural properties were optimized. However, this efficiency is lower than the efficiencies of first- and second-generation solar cells (Si-based and thin-film solar cells), which reach about 20-30% [11]. A ruthenium-based dye and platinum are the most common photosensitizer and counterelectrode materials, respectively, in DSSCs, the third generation of photovoltaic technologies. However, the high cost, complexity, and toxicity of ruthenium dyes and the scarcity of platinum sources preclude their widespread use in DSSCs [49]. An alternative way to produce cost-effective dyes on a large scale is to extract natural dyes from plant sources. Their colors arise from various pigments that have proven to be efficient photosensitizers. Meanwhile, the colors and their transmittance themselves affect energy generation performance: DSSCs currently being produced show better power generation efficiency as the visible light transmittance decreases, in the order red > green > blue [81]. Extracts of plant pigments have also been reported to act simultaneously as photosensitizers and as reducing agents for nanostructure synthesis, which is useful for photoanode materials in solar devices (e.g., TiO2) [82].

To improve the energy conversion efficiency of natural photosensitizers, blending of different dyes, copigmentation of dyes, acidification of dyes, and other approaches have been pursued, with appreciable performance gains [83]. Based on the types of natural molecules found in plant products, such photosensitizers are classified into the carotenoid, betalain, flavonoid, or chlorophyll structural classes [30, 65, 83, 84].
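Requirement (iii) from the list above can be expressed as two simple inequalities on the frontier orbital energies. The Python sketch below checks them on a common vacuum-scale convention; all numerical levels are illustrative assumptions (approximate literature values for TiO2 and the I−/I3− couple), not data from this review.

```python
def can_sensitize(lumo_ev: float, homo_ev: float,
                  cb_edge_ev: float, redox_ev: float) -> bool:
    """Basic energy-level alignment check for a DSSC sensitizer.

    All energies on the vacuum scale (more negative = deeper):
      - the dye LUMO must lie above the semiconductor conduction band
        edge (driving force for electron injection);
      - the dye HOMO must lie below the electrolyte redox level
        (driving force for regeneration of the oxidized dye).
    """
    injects = lumo_ev > cb_edge_ev
    regenerates = homo_ev < redox_ev
    return injects and regenerates

# Illustrative, approximate levels (assumptions): TiO2 conduction band
# edge ~ -4.0 eV; I-/I3- redox level ~ -4.8 eV; hypothetical dye levels.
print(can_sensitize(lumo_ev=-3.5, homo_ev=-5.4,
                    cb_edge_ev=-4.0, redox_ev=-4.8))  # True: both conditions met
```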
For stable adsorption onto the semiconductor substrate, sensitizers are typically designed with functional groups such as -COOH, -PO3H2, and -B(OH)2 [20]. These biomolecules carry functional groups, such as carboxyl and hydroxyl, that react readily with the surface of nanostructured TiO2, enabling the dye to absorb sunlight; hydroxyl and carboxyl groups in particular bind strongly to the TiO2 surface [85].

Efficiency enhancement has been demonstrated with dyes extracted from fresh purple cabbage (anthocyanin), spinach leaves (chlorophyll), and turmeric stem (betaxanthin), used individually and as a mixture as photosensitizers in DSSCs with nanostructured ZnO-coated FTO substrates as the photoanode. The photon-to-electrical power conversion efficiencies of the purple cabbage, spinach, turmeric, and mixed dyes were 0.1015%, 0.1312%, 0.3045%, and 0.602%, respectively, under the same simulated light conditions. The mixed dye showed stable cell performance with the highest conversion efficiency, owing to absorption over an extensive range of the solar spectrum and well-suited electrochemical responses arising from fast electron transport and lower recombination loss with a longer electron lifetime [86].

(1) Flowers. In DSSCs, the red/purple pigment of various leaves and flowers has been used as a sensitizer. Notably, abundantly available organic dyes are easily extracted from flowers and leaves and are mainly responsible for light absorption in DSSCs [39]. The natural color pigments of such organic dyes derive from the anthocyanin group present in different parts (e.g., flowers, leaves) of the plant. Hibiscus rosa-sinensis, a red-pigmented flower with a high concentration of anthocyanins, has been used as a natural sensitizer in DSSCs. The Malvaviscus penduliflorus flower is closely related to the Hibiscus family; however, research on M. penduliflorus flower-extracted dye in DSSCs is still lacking. The broad absorption of Hibiscus-extracted dye within 400-500 nm can be further enhanced by using a more concentrated dye solution or by operating the sensitization process at elevated temperature [39].

Natural dyes from flowers can decrease the charge transfer resistance, improve the absorbance of light, and extend absorption toward the red region. Efficient DSSCs using natural dyes are therefore less toxic, more easily disposed of, more cost-effective, and more environmentally friendly than synthetic organic dyes, which is considered beneficial for future biosolar cell technology [87]. The performance of such DSSCs can be improved by introducing a scattering layer or interfacial modification in the photoanode while broadening the wavelength range of light absorption, making them suitable for outdoor applications [39]. The presence of a series of conjugated double bonds in flower extracts helps to increase efficiency. Raguram and Rajni [88] demonstrated that a flavanol pigment from the Allamanda blanchetti flower is responsible for red, purple, and blue colors, whereas a carotenoid pigment from the Allamanda cathartica flower is responsible for bright red, orange, and yellow colors and contains a series of conjugated double bonds [88]. Table 1 summarizes flower-based photosensitizers for efficiency improvement in DSSCs; the performance depends strongly on the flower species.

Table 1 Flowers as photosensitizers in DSSCs.
| Plant | Class of biomolecules | Solvent for extraction | Photoanode | Jsc (mA cm-2) | Voc (V) | FF (%) | η (%) | Ref |
|---|---|---|---|---|---|---|---|---|
| Salvia | — | Methanol | TiO2-FTO | 0.168 | 0.461 | 40.0 | 0.152 | [87] |
| Spathodea | — | Methanol | TiO2-FTO | 0.201 | 0.525 | 41.2 | 0.217 | [87] |
| Malvaviscus penduliflorus | — | Ethanol | TiO2/MnO2-FTO | 6.02 | 0.384 | 38 | 0.92 | [39] |
| Allamanda blanchetti | Flavonoids (flavanol) | Ethanol | TiO2-FTO | 4.1366 | 0.4702 | 60 | 1.16 | [88] |
| Allamanda cathartica | Carotenoids (lutein) | Ethanol | TiO2-FTO | 2.1406 | 0.4896 | 28 | 0.30 | [88] |
| Canna-lily red | Anthocyanins | Methanol | TiO2-FTO | 0.44 | 0.57 | 45 | 0.14 | [90] |
| Canna-lily yellow | Anthocyanins | Methanol | TiO2-FTO | 0.43 | 0.56 | 40 | 0.12 | [90] |
| Beta vulgaris L. ssp. f. rubra | Beta carotene | Hot water | TiO2 surface | 0.44 | 0.55 | 51 | 0.41 |  |
| Brassica oleracea L. var. capitata f. rubra | Anthocyanin | Hot water | TiO2 surface | 1.88 | 0.54 | 56 | 1.87 |  |

(2) Leaves. The advantage of mesoporous TiO2 is that its large pore surface allows higher adsorption of dye molecules and facilitates penetration of the electrolyte within the pores. Absorbing light over an extended wavelength range with innovative natural dyes, combined with the increased surface area of a TiO2 nanostructure-based photoanode layer on the glass substrate, improves DSSC technology [74]. Khammee et al. [89] reported a natural pigment photosensitizer extracted from Dimocarpus longan leaves; according to the report, the methanol-extracted pigment was composed of chlorophyll-a, chlorophyll-b, and carotene components [89]. The functional groups of natural plant pigments from leaves can bind to TiO2, which is then responsible for absorbing visible light [54]. Chlorophyll, found in the leaves of most green plants, absorbs light at red, blue, and violet wavelengths and obtains its color by reflecting green. Chlorophyll exhibits two main absorption peaks in the visible region, at wavelengths of 420 and 660 nm [85]. Experimental results show that the absorption peaks of these dyes lie mainly in the visible regions of 400-420 nm and 650-700 nm, so chlorophyll was selected as the reference dye [94]. Chlorophyll and related extract-based photosensitizers are therefore listed in both Tables 1 and 2.

Table 2 Leaves as photosensitizers in DSSCs.

| Plant | Class of extracted dye pigments | Solvent for extraction | Photoanode | Jsc (mA cm-2) | Voc (V) | FF (%) | η (%) | Ref |
|---|---|---|---|---|---|---|---|---|
| Lagerstroemia macrocarpa | Carotenoids, chlorophyll-a, chlorophyll-b | Methanol | TiO2-FTO | 0.092 | 0.807 | 53.71 | 1.138±0.018 | [74] |
| Spinach leaves | Chlorophyll | Acetone | TiO2-FTO | 0.41 | 0.59 | 58.75982 | 0.171253 | [26] |
| Strobilanthes cusia | Chlorophyll-a, chlorophyll-b | Methanol, ethanol, acetone, diethyl ether, dimethyl sulphoxide | TiO2-FTO | 0.0051833 | 0.306 | 46.2 | 0.0385 | [49] |
| Galinsoga parviflora | Chlorophyll group | Distilled water and ethanol | TiO2-FTO | 0.4 (mA) | 0.3 | 46.7 | 1.65 | [91] |
| Amaranthus red | Chlorophyll, betalain | Distilled water, ethanol, acetone | TiO2-FTO | 1.0042 | 0.3547 | 38.64 | 0.14 | [54] |
| Lawsonia inermis | Lawsone, chlorophyll | Distilled water, ethanol, acetone | TiO2-FTO | 0.4236 | 0.5478 | 38.51 | 0.09 | [54] |
| Cordyline fruticosa | Chlorophyll | Ethanol | TiO2 surface | 1.3 (mA) | 0.616 | 60.16 | 0.5 | [85] |
| Euodia meliaefolia (Hance) Benth | Chlorophyll | Ethanol | TiO2-FTO | 2.64 | 0.58 | 70 | 1.08 | [92] |
| Matteuccia struthiopteris (L.) Todaro | Chlorophyll | Ethanol | TiO2-FTO | 0.75 | 0.60 | 72 | 0.32 | [92] |
| Corylus heterophylla Fisch | Chlorophyll | Ethanol | TiO2-FTO | 0.68 | 0.56 | 69 | 0.26 | [92] |
| Filipendula intermedia | Chlorophyll | Ethanol | TiO2-FTO | 0.87 | 0.54 | 74 | 0.34 | [92] |
| Pteridium aquilinum var. latiusculum | Chlorophyll | Ethanol | TiO2-FTO | 0.74 | 0.56 | 73 | 0.30 | [92] |
| Populus L. | Chlorophyll | Ethanol | TiO2-FTO | 1.25 | 0.57 | 37 | 0.27 | [92] |
| Euphorbia sp. | Quercetin | Hot water | TiO2 surface | 0.46 | 0.40 | 51 | 0.30 |  |
| Rubia tinctoria | Alizarin | Hot water | TiO2 surface | 0.65 | 0.48 | 63 | 0.65 |  |
| Morus alba | Cyanine | Hot water | TiO2 surface | 0.44 | 0.45 | 57 | 0.38 |  |
| Reseda lutea | Luteolin | Hot water | TiO2 surface | 0.50 | 0.50 | 62 | 0.52 |  |
| Medicago sativa | Chlorophyll | Hot water | TiO2 surface | 0.33 | 0.55 | 56 | 0.33 |  |
| Aloe barbadensis miller | Anthocyanins | Ethanol | TiO2-FTO | 0.112 | 0.676 | 50.4 | 0.380 | [93] |
| Opuntia ficus-indica | Chlorophyll | Ethanol | TiO2-FTO | 0.241 | 0.642 | 48.0 | 0.740 | [93] |
| Cladode and aloe vera | Anthocyanins and chlorophyll | Ethanol | TiO2-FTO | 0.290 | 0.440 | 40.1 | 0.500 | [93] |
| Lotus leaf | Alkaloid and flavonoid | Ethanol | TiO2-FTO | 14.33 | 0.44 | 23 | 1.42 | [48] |
| Brassica oleracea var. | Anthocyanin | Distilled water, methanol, and acetic acid | TiO2-FTO | 0.49 | 0.43 | 51 | 0.054 | [55] |
| Wrightia tinctoria R.Br. ("Pala indigo" or "dyer's oleander") | Chlorophyll | Cold methanolic extract | TiO2-FTO | 0.53 | 0.51 | 69 | 0.19 | [94] |
| Wrightia tinctoria R.Br. | Chlorophyll | Acidified cold methanolic extract | TiO2-FTO | 0.21 | 0.422 | 66 | 0.06 | [94] |
| Wrightia tinctoria R.Br. | Chlorophyll | Soxhlet extract | TiO2-FTO | 0.49 | 0.495 | 69 | 0.17 | [94] |
| Wrightia tinctoria R.Br. | Chlorophyll | Acidified Soxhlet extract | TiO2-FTO | 0.31 | 0.419 | 65 | 0.08 | [94] |

(3) Fruits. Plant-extracted natural dyes are considered promising owing to their abundance and eco-friendly characteristics. They are environmentally and economically superior to ruthenium-based dyes because they are nontoxic and cheap; however, the conversion efficiency of DSSCs based on natural dyes remains low [95]. Substitution of natural dyes as sensitizers has been shown to be not only economically viable and nontoxic but also effective for enhancing efficiency up to 11.9% [96]. Sensitizers for DSSCs need to fulfill important requirements such as absorption in the visible and near-infrared regions of the solar spectrum and strong chelation to the semiconductor oxide surface. Moreover, the LUMO of the dye should lie at a higher energy level than the conduction band of the semiconductor so that, upon excitation, the dye can inject electrons into the conduction band of the TiO2 [95]. Considering this, Najm et al. [97] used the abundant and cheap Malaysian fruit betel nut (Areca catechu) as a photosensitizer in DSSCs, owing to its content of tannins, polyphenols, gallic acid, catechins, alkaloids, fat, gum, and other minerals. Gallotannic acid, a stable dye, is the main (yellowish) pigment of A. catechu; it is responsible for effective absorption of visible wavelengths and has been used in DSSCs [97, 98]. Fruit extracts and natural extracts from other sources as photosensitizers are summarized in Tables 3 and 4, respectively.

Table 3 Fruits as photosensitizers in DSSCs.
| Plant | Class of extracted dye pigments | Solvent for extraction | Photoanode | Jsc (mA cm-2) | Voc (V) | FF (%) | η (%) | Ref |
|---|---|---|---|---|---|---|---|---|
| Melastoma malabathricum | Anthocyanin | Methanol and trifluoroacetic acid | TiO2 film | 4.49 | 0.42 | 57 | 1.05 | [84] |
| Eleiodoxa conferta | Anthocyanin | Ethanol | TiO2-FTO | 4.63 | 0.37 | 56 | 1.00 | [75] |
| Garcinia atroviridis | Anthocyanin | Ethanol | TiO2-FTO | 2.55 | 0.32 | 63 | 0.51 | [75] |
| Onion peels | Anthocyanin | Distilled water | TiO2-FTO | 0.24 | 0.48 | 46.63 | 0.065 | [26] |
| Red cabbage | Anthocyanin | Distilled water | TiO2-FTO | 0.21 | 0.51 | 46.61 | 0.060 | [26] |
| Areca catechu | Gallotannic acid | Methanol | TiO2 surface | 0.3 | 0.536 | 73.5 | 0.118 | [97] |
| Hylocereus polyrhizus | Anthocyanin | Distilled water, ethanol, and acetic acid | TiO2-FTO | 0.23 (mA) | 0.34 | 63 | 0.024 | [55] |
| Doum palm | Chromophores | Ethanol | TiO2-FTO | 0.005 | 0.37 | 63 | 0.012 | [99] |
| Doum palm | Chromophores | Distilled water | TiO2-FTO | 0.010 | 0.50 | 66 | 0.033 | [99] |
| Linia cauliflora | Anthocyanin | Ethanol | TiO2-ITO | 0.38 | 0.41 | 29 | 0.13 | [100] |
| Phyllanthus reticulatus | Anthocyanin | Methanol | TiO2-FTO | 1.382 | 0.67 | — | 0.69 | [101] |

Table 4 Other natural pigment sources as photosensitizers in DSSCs.

| Source | Class | Solvent for extraction | Photoanode | Jsc (mA cm-2) | Voc (V) | FF (%) | η (%) | Ref |
|---|---|---|---|---|---|---|---|---|
| Juglans regia shell | Juglone | Hot water | TiO2 surface | 0.43 | 0.47 | 56 | 0.38 |  |
| Malabar spinach seeds | — | Distilled water | TiO2-ITO | 510 (μA) | 0.710 | 48.7 | 9.23 | [102] |
| Rhamnus petiolaris seed | Emodin | Hot water | TiO2 surface | 0.20 | 0.50 | 55 | 0.18 |  |
| Iridaea obovata algae | Phycoerythrin | Ethanol | TiO2-FTO | 0.136 | 0.40 | 43 | 0.022 | [103] |
| Delesseria lancifolia algae | Phycoerythrin | Ethanol | TiO2-FTO | 0.243 | 0.40 | 46 | 0.045 | [103] |
| Plocamium hookeri algae | Phycoerythrin | Ethanol | TiO2-FTO | 0.083 | 0.53 | 63 | 0.027 | [103] |
| Mangosteen pericarp (mangosteen peels) | Anthocyanin | Ethanol | TiO2-FTO | 0.38 (mA) | 0.46 | 48 | 0.042 | [55] |
| Ataco vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.06 | 0.48 | 66 | 0.018 | [104] |
| Achiote vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.06 | 0.45 | 50 | 0.013 | [104] |
| Berenjena vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.04 | 0.40 | 56 | 0.008 | [104] |
| Flor de Jamaica vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.382 | 0.478 | 58 | 0.109 | [104] |
| Mora vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.28 | 0.48 | 51 | 0.069 | [104] |
| Mortiño vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.557 | 0.484 | 66.4 | 0.175 | [104] |
| Rabano vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.07 | 0.39 | 55 | 0.015 | [104] |
| Tomate de arbol vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.10 | 0.44 | 52 | 0.023 | [104] |

The narrow spectral response and short-term stability of DSSCs are their two major drawbacks. These limitations are mitigated by using natural plant pigment dyes as effective sensitizers on the photoanode of the device, as these natural pigments improve the efficiency of DSSCs through broad spectral absorption responses. Along these lines, DeSilva et al. investigated promising photosensitizers from Mondo-grass berry and blackberry and observed improved device efficiency and better stability with the Mondo-grass berry dye compared with blackberry. The reason is that Mondo-grass berry contains a mixture of two or more chemical compounds belonging to both the anthocyanin and carotenoid families, as shown by thin-layer chromatography [105].
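The efficiencies tabulated above follow from the standard relation η = Jsc × Voc × FF / Pin. As a consistency check, the sketch below recomputes η for a few unambiguous rows of Table 2, assuming the usual AM 1.5G input power Pin = 100 mW cm-2 (the tables do not state Pin, so this is an assumption).

```python
def pce_percent(jsc_ma_cm2: float, voc_v: float, ff_fraction: float,
                pin_mw_cm2: float = 100.0) -> float:
    """Power conversion efficiency eta (%) = Jsc * Voc * FF / Pin.

    Jsc in mA cm-2, Voc in V, FF as a fraction; Pin = 100 mW cm-2
    corresponds to AM 1.5G illumination (assumed here).
    """
    return 100.0 * jsc_ma_cm2 * voc_v * ff_fraction / pin_mw_cm2

# A few Table 2 rows (ref. [92]) recomputed as a sanity check.
rows = [
    ("Euodia meliaefolia",        2.64, 0.58, 0.70, 1.08),
    ("Matteuccia struthiopteris", 0.75, 0.60, 0.72, 0.32),
    ("Filipendula intermedia",    0.87, 0.54, 0.74, 0.34),
]
for name, jsc, voc, ff, reported in rows:
    print(f"{name}: computed {pce_percent(jsc, voc, ff):.2f}% (reported {reported}%)")
```

The computed values agree with the reported ones within rounding, which supports reading the FF column of the tables as percentages and Pin as the standard 1 sun intensity.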
However, reports showed that TiO2 and ZnO and their composite/doped-based photoelectrodes were found to be the most widely common photoanode materials due to their high band gap energy, inexpensiveness, nontoxicity nature, accessibility, and photoelectrochemical stability. Their methods of preparation are too simple and achievable with environmentally friendly materials in the presence of green solvents and show a promising high efficiency as compared to the counterparts. Recently, TiO2 has become the most popular metal oxide semiconductor in DSSCs after ZnO and SnO2 [38].Rajendhiran et al. [42] have reported the green synthesized TiO2 nanoparticles by the sol-gel method using Plectranthus amboinicus leaf extract, and the prepared nanoparticles have been coated over an ITO substrate by using the doctor blade approach. So, the assembled DSSCs have exhibited higher solar to electrical energy conversion efficiency, reaching 1.3% by using Rose Bengal organic dye sensitizer due to the surface modification of the synthesized nanoparticles. In addition, to support TiO2 in the photoanode, substrates such as fluorine-doped tin oxide (FTO) and indium-doped tin oxide (ITO) [9], due to scarcity, rigidity, and brittle properties of ITO, an alternative with low-cost FTO and graphene, have been chosen due to their unique structural defect with a rough surface that enables it to solve problems in short circuits and leakage current [39, 43].Low and Lai [44] designed an efficient photoanode from reduced graphene oxide- (rGO-) decorated TiO2 materials. It has been found that the UV-Vis diffuse reflection spectra shows an inconstant absorption with increasing duration of TiO2 deposited on rGO in the ultraviolet (from 200 to 400 nm) and visible light (from 400 to 700 nm) regions. From the observed spectra, rGO/TiO2 samples with a spinning duration of 30 seconds exhibited an optimum light-absorbing ability, as shown in Figure 5(a). As shown in Figure 5(b), the performance of the electrochemical parameters such as JSC, VOC, FF, and η was dependent on the spinning durations. The efficiency (η) of the assembled DSSCs increases from 10 to 30-second spinning duration (4.74-9.98%, respectively). This is due to surface area and photoelectrochemical stability modification, which in turn enables us to adsorb more dye molecules, followed by absorbing more light on the surface of the composite photoanode. So, it could be noticed that, after 30 seconds of optimized spinning duration, the best efficiencies of the device were 22.01 mA cm-2, 0.79 V, 57%, 9.98% in JSC, VOC, FF%, and η%, respectively.Figure 5 UV-Vis diffuses reflectance spectra (a) and J-V curves of the photoanode-based pure rGO and rGO/TiO2 with spinning duration of 10 s, 20 s, 30, 40, and 50 s (b) [44]. (a)(b)Furthermore, Gao et al. [45] also developed a nitrogen-doped TiO2/graphene nanofiber (G-T-N) as an alternative green photoelectrode and evaluated the different photovoltaic parameters of the device’s performance, as shown in Figure 6. While nitrogen doping can prevent the in situ recombination of electron-hole pairs, graphene doping first increases the surface area of TiO2 fibers and also increases the dye adsorption active sites, with more electrons injected into the semiconductor conduction band from the excited state of the dye, thereby improving the photoelectric conversion efficiency. 
The open circuit voltage (VOC), short circuit current density (JSC), fill factor (FF), and η value for the TiO2/graphene and N@TiO2/graphene nanofiber photoelectrode-based DSSCs were found to be 0.66, 17.48, 0.35, and 3.97 and 0.71, 15.38, 0.46, and 5.01, respectively [46].Figure 6 IPCE curves (a) and J-V curves of DSSCs with different photoanodes and the illustration in (b) [45]. (a)(b) ## 3.2. Counterelectrode The counterelectrode (CE, cathode) is where the redox mediator reduction occurs. It collects electrons from the external circuit and injects them into the electrolyte to catalyze the reduction of I3− to I− in the redox couple for dye regeneration [47]. The pillar and the primary major function of CE in the DSSC system plays as catalyst to promote the completion of the process, since the oxidized redox couple is reduced by accepting electrons at the surface of the CE, and the oxidized dye is again reduced by collecting electrons via the ionic transport materials. The second ultimate role of CE in DSSCs is to act as a positive electrode; it collects electrons from the external circuit and transmits them into the cell. Additionally, CE is used as a mirror, since it reflects the unabsorbed light from the cell back to the cell to enhance the utilization of sunlight [40].The most commonly used CE material is Pt on a conductive ITO or FTO substrate, owing to its excellent electrocatalytic activity for I3− reduction, high electrical conductivity for efficient electron transport, and high electrochemical stability in the electrolyte system. Hence, most of the research work uses expensive platinum as a CE, but this limits the employability of large-scale production. Therefore, to address these limitations, several materials derived from inorganic compounds, carbonaceous materials, and conductive organic polymers, have been investigated as potential alternatives to replace or modify the Pt-based cathodes in DSSCs. To improve this, less-expensive copper was used as the CE in large-scale industrial applications [39]. Huang et al. [48] have worked on biochar from lotus leaf by one-step pyrolysis as a flexible CE to replace platinum. From their studies at the same photoanode, a maximum value of 0.15% power conversion efficiency (PCE) was produced in the presence of lotus leaf extract as a photosensitizer, while 0.36% of PCE was produced. In the same manner, 0.13% of PCE was produced when graphite was used as CE while lotus leaf extract photosensitizer-modified TiO2-FTO photoanode [48]. It can be concluded that graphite presents feasible potential as an alternative to platinum due to its affordable cost and performance output due to having more than 0.0385% efficiency than FTO glass and platinum by using Strobilanthes cusia photosensitizer [49].Kumar et al. [50] designed and fabricated a new cost-effective, enhanced performance CE using a carbon material produced with the organic ligand 2- methyl-8-hydroxyquinolinol (Mq). The carbon-derived Mq CE-based DSSCs show a short circuit current density of 11.00 mA cm-2, a fill factor of 0.51, and an open circuit voltage of VOC 0.75 V with a conversion efficiency of 4.25%. As a reference, Pt CE was used and provides a short circuit current density (Jsc) of 12.40 mA cm-2, a fill factor of 0.68, and an open circuit voltage of 0.69 V, having a conversion efficiency of ≈5.86%. 
As supported in Figure 7(a), the low cell performance of carbon-derived Mq CE could be attributed to the high electrostatic interactions between the carbon atom and I- or I3- with a higher concentration of mediator anions in proximity to the carbon surface, which results in an increase of the regeneration and recombination rates [50]. Due to their low surface area, low stability, and low catalytic behavior, single material-based CE has lower device performance than composites and doped CE-based DSSCs. To improve this, Younas et al. [51] prepared the high mesopore carbon-titanium oxide composite CE (HMC-TiO2) for the first time and investigated the various cell photovoltaic parameters. The report shows that Jsc of 16.1 mA cm-2, FF of 68%, and VOC of 0.808 V have a conversion η of ≈8.77%. As the different parameters of the DSSC value obtained show, the HMCs-TiO2 composites display high electrocatalytic activity and could be taken as a promising CE. In addition, Song et al. [52] discuss the role of iron pyrite (FeS2), in the presence and absence of NaOH basic solution, as one of the most promising counterelectrode materials for dye-sensitized solar cells. FeS2 CE-based DSSCs without NaOH addition provides a PCE of 4.76%, a JSC of 10.20 mA cm-2 with a VOC of 0.70 V and a FF of 0.66. In the presence of NaOH, the FeS2 CE-based DSSCs had a JSC of 12.08 mA cm-2, a VOC of 0.74 V, a FF of 0.64, and a PCE value of 5.78%. As a control, Pt CE were also investigated and shows JSC of 11.58 mA cm-2, VOC of 0.74 V, FF of 0.69, and resulting PCE of 5.93%. The improvement in the photovoltaic parameters of DSSCs, shown in Figure 7(b), indicates that more electrons are generated in the device due to the presence of NaOH, and this is found to be consistent with JSC [53].Figure 7 J-V spectra of HMC-TiO2 (a) and FeS2; (A: without NaOH, B: with NaOH, and C: Pt CE-based DSSCs) (b) [50, 52]. (a)(b) ## 3.3. Electrolyte A good electrolyte should have high electrical and ionic conductivity, good interfacial contact with the nanocrystalline semiconductor and counterelectrode, not degrade the dye molecules, be transparent to visible light, noncorrosive property to the counterelectrode, high thermal and electrochemical stability, a high diffusion coefficient, low vapor pressure, appropriate viscosity, and ease of sealing, without suppressing charge carrier transport [54, 55]. Liquid electrolytes, solid-state electrolytes, quasisolid electrolytes [30], and water-based electrolytes [56] are common redox mediators (electrolytes) found in DSSCs. Liquid electrolytes are also organic (redox couple, a solvent, and additives) and are characterized by ionic liquid. Quasisolid electrolytes are good candidates for DSSCs due to their optimum efficiency and durability, high ionic conductivity, long-term stability, ionic conductivity, and excellent interfacial contact property like the liquid electrolytes. The most important components are redox couples such as I-/I3-, Br-/Br3-, SCN-/(SCN)2, Fe (CN)63/4, SeCN-/(SeCN)2, and substituted bipyridyl cobalt (III/II) [57], which are directly linked to the VOC of DSSCs.Due to the better solubility, fast dye regeneration process, low light absorption in the visible region, appropriate redox potential, and very slow recombination rate between the nanocrystalline semiconductor injected electrons and I3-, I-/I3- is the most popular redox couple electrolyte [40]. 
Since a good solvent is responsible for the diffusion and dissolution of I-/I3- ions, among these solvents, acrylonitrile, ethylenecarbonate, propylene carbonate, 3-methoxypropionitrile, and N-methylpyrrolidone are common. A solvent with a high donor number can increase the VOC and decrease the JSC by lowering the concentration of I3-. As a result, the lower I3- concentration helps to slow the recombination rate, and as a result, it increases the VOC. The second type of liquid electrolyte is the ionic liquid electrolyte, such as pyridinium, imidazolium, and anions from the halide or pseudohalide family. These electrolytes show high ionic conductivity, nonvolatility, good chemical and thermal stability at room temperature, and a negligible vapor pressure, which are favorable for efficient DSSCs [30, 58]. Since, DSSCs are affected by evaporation and leakage found in liquid electrolyte. To overcome these drawbacks, novel solid or quasisolid state electrolytes, such as hole transportation materials, p-type semiconductors, and polymer-based gel electrolytes, have been developed as potential alternatives to volatile liquid electrolytes [59].A quasisolid electrolyte is a means to solve the poor contact between the photoanode and the hole that transfers material found in solid-state electrolytes. This electrolyte is composed of a composite of a polymer and liquid electrolyte that can penetrate into the photoanode to make a good contact. Interestingly, this has better stability, high electrical conductivity, and especially, good interfacial contact when compared to the other types of electrolyte. However, the quasisolid electrolyte has one particular disadvantage: it is strongly dependent on the working temperature of the solar cell, where high temperatures cause a phase transformation from gel state to solution state. Selvanathan et al. [60] have used starch and cellulose derivative polymers as quasisolid electrolytes and contributing to an optimized efficiency of 5.20% [60].Saaid et al. [61] prepared a quasisolid-state polymer electrolyte by incorporating poly (vinylidene fluoride-co-hexafluoropropylene) (PVdF-HFP) into a propylene carbonate (PC)/1, 2-dimethoxyethane (DME)/1-methyl-3-propylimidazolium iodide (MPII) and dealing with the dependency of photovoltaic parameters on the fabricated electrolyte shown in Figure 8(a). It has been observed that the cell photovoltaic parameters are found to be dependent on the amount of the added PVdF-HFP polymer. Before the addition of any PVdF-HFP polymer, the corresponding Jsc, VOC, FF, and η were found to be 11.24 mA cm-2, 619 mV, 70%, and 4.88%, respectively. While followed by the addition of 0.1, 0.2, 0.3, and 0.4 g of PVdF-HFP polymer, the cell performance was found to be (9.53, 9.53, 7.54, and 6.57) mA cm-2Jsc, (638, 641, 679, and 684) mV of VOC, (67, 66, 64, and 61) % of FF, and (4.09, 3.70, 3.27, and 2.73) % of η, respectively. It was demonstrated that as the amount of polymer in the cell increases, the performance of the cell decreases gradually. Moreover, Lim et al. [62] have also designed a new quasisolid-state electrolyte using coal fly ash-derived zeolite-X and -A as shown in Figure 8(b) and achieves a VOC of 0.74 V, JSC of 13.7 mA/cm2, and FF of 60% with η of 6.0% and 0.73 V, 11.4 mA/cm2, 60% FF, respectively. But it has been found that zeolite-X&AF quasisolid-state electrolyte-based DSSCs show VOC of 0.72 V, JSC of 11.1 mA/cm2, and FF of 61% and η of 4.8%. 
The enhancement in cell photovoltaic parameters in the case of the zeolite-XF12 quasisolid-state electrolyte could be attributed to its highly crystalline nature, high light harvesting efficiency, the reduction of resistance at the photoanode/electrolyte interface, and the decrease in charge recombination rate. As a control, a nanogel polymer was used as an electrolyte and yielded 0.66 V, 10.3 mA/cm2, an FF of 56%, and an η of 3.8%.

Figure 8 J-V curve of PVdF-HFP/PC/DME/MPII (a) with A 0.0; B 0.1; C 0.2; D 0.3; and E 0.4 g of PVdF-HFP and photocurrent density-photovoltage curves of DSSCs based on nanogel and quasisolid-state electrolytes under 1 sun illumination (AM 1.5G, 100 mW cm-2) (b) [48, 49]. (a)(b)

## 3.4. Photosensitizer

Other core components of DSSCs are the photosensitizers, which play a great role in the absorption of solar photons. Dyes harvest the incoming light and inject the photoexcited electrons into the conduction band of the semiconducting material, thereby converting solar energy into electrical energy [63, 64]. This enables renewable power systems, sustainable power management, and a reliable and stable network output power distribution [58]. The dye is chemically bonded to the porous surface of the semiconductor material and determines the efficiency and general performance of the device [47]. The possibilities of some organic dyes, polymer dyes, and natural dyes have been reported, with great relative cost-effective potential for industrialization [11]. To be effective, a photosensitizer should have a broad and intense absorption spectrum that covers the entire visible region, high adsorption affinity to the surface of the semiconducting layer, excellent stability in its oxidized form, low cost, and low environmental impact. Furthermore, its LUMO level, i.e., the excited-state level, must be higher in energy than the conduction band edge of the semiconductor for efficient electron injection into the conduction band, and its HOMO level, i.e., the oxidized-state level, must be lower in energy than the redox potential of the electrolyte to promote dye regeneration.

The most commonly used photosensitizers are categorized into three groups: metal complex sensitizers, metal-free organic sensitizers [65], and natural sensitizers. Metal complex sensitizers provide relatively high efficiency and stability because they possess both anchoring and ancillary ligands, and modifications of these two ligands to improve solar cell performance have been reported; these ligands facilitate charge transfer in metal-to-ligand bonds. The dominant metal complex photosensitizers are ruthenium-based complexes and their cosensitized configurations [56], owing to their wide absorption range, from the visible to the near-infrared region, which gives them superior photon harvesting properties. However, these complexes require multistep synthesis and purification, and they contain a heavy metal that is expensive, scarce, and toxic. These problems can be overcome by applying metal-free organic dyes in DSSCs instead of metal complex sensitizers.
Trihutomo et al. [66] explained that using natural dye as a photosensitizer yields lower efficiency than silicon solar cells, owing to the barrier to electron transfer in the TiO2 semiconductor layer.

A donor-π-conjugated bridge-acceptor (D-π-A) architecture is used in the design of metal-free organic sensitizers [30, 47]. The properties of a sensitizer vary with the electron-donating ability of the donor part and the electron-accepting ability of the acceptor part, as well as with the electronic characteristics of the π bridge. At present, most π-bridge conjugated parts in organic sensitizers are based on oligoene, coumarin, oligothiophene, fluorene, and phenoxazine. The donor part has been synthesized with a dialkylamine or diphenylamine moiety, while a carboxylic acid, cyanoacrylic acid, or rhodanine-3-acetic acid moiety is used for the acceptor part. As shown in Figure 9 [30], the sensitizer anchors onto the porous network of nanocrystalline TiO2 particles via the acceptor part of the dye molecule. However, metal-free organic sensitizers (organic dyes) have the following disadvantages: strong π-stacked aggregation between D-π-A dye molecules on semiconductor surfaces, which reduces the electron-injection yield from the dyes to the conduction band of the nanocrystalline semiconductor; narrower absorption bands compared to metal-based sensitizers, which reduces the light absorption capability; and low stability, due to the sensitizer's tendency to decay with time over a long anode lifetime [30, 67].

Figure 9 Designed structure of a metal-free organic dye [30].

In brief, the 3.2 eV energy band gap of the TiO2 semiconductor means that it absorbs mainly ultraviolet light (i.e., its absorption of visible light is weak). As a result, natural dyes increase the overall sunlight absorption rate of DSSCs [68]. The light-harvesting efficiency is also enhanced by cosensitization, which enables better light harvesting across the solar spectrum [69]. Ananthakumar et al. [70] have also reviewed the energy transfer process from donor to acceptor through the Förster resonance energy transfer (FRET) process for improved absorption. Cosensitization is effectively achieved through the FRET mechanism, in which the dipole-dipole attraction of two chromophoric components occurs via an electric field. In this process, absorption of light causes molecular excitation of the donor, and the excitation energy is transferred nonradiatively to a nearby acceptor molecule having lower excitation energy through the exchange of virtual photons, as shown in Figure 10 [70].

Figure 10 Schematic diagram of the FRET process (a) and its mechanism (b) [70]. (a)(b)

To solve the problems found in both metal-based complex and metal-free organic dyes, researchers have focused on natural plant pigment-based photosensitizers [71]. As a result, metal-free dyes, such as natural dyes (natural pigments) from different plant sources such as fruits, roots, flowers, leaves, wood, algae, and bacterial pigments [72, 73], coupled with their organic derivatives, have attracted considerable research interest owing to their low cost, simple synthesis procedure, abundance in nature, nontoxicity, and high molar absorption coefficient [35, 74].
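The distance dependence at the heart of the FRET mechanism can be made concrete with the standard Förster relation E = R0^6 / (R0^6 + r^6). This is textbook photophysics rather than a result from the reviewed studies, and the Förster radius used below is an illustrative assumption.

```python
# Minimal sketch of the Forster (FRET) distance dependence:
# E = R0^6 / (R0^6 + r^6), where R0 is the Forster radius (assumed here;
# typical organic dye pairs have R0 of a few nanometers).
R0 = 5.0  # nm, assumed Forster radius for a donor/acceptor dye pair


def fret_efficiency(r_nm: float) -> float:
    """Fraction of donor excitations transferred to the acceptor at distance r."""
    return R0**6 / (R0**6 + r_nm**6)


for r in (2.5, 5.0, 7.5, 10.0):
    print(f"r = {r:5.1f} nm  ->  E = {fret_efficiency(r):.3f}")
# Efficiency is ~98% at half R0, exactly 50% at R0, and collapses beyond it,
# which is why cosensitized donor/acceptor dyes must sit close together on
# the semiconductor film for the energy transfer step to be effective.
```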
An efficient photosensitizer for DSSCs should possess several essential requirements [64]: (i) a high molar extinction coefficient and a high panchromatic light absorption ability extending from the visible to the near-infrared; (ii) anchoring groups that attach effectively to the semiconductor; for example, the anthocyanin pigment from Eleiodoxa conferta and Garcinia atroviridis fruit contains hydroxyl and carboxylic groups in the molecule that can effectively attach to the surface of a TiO2 film [75]; (iii) good HOMO/LUMO energy alignment with respect to the redox couple and the conduction band level in the semiconductor, which allows efficient charge injection into the semiconductor and, simultaneously, efficient regeneration of the oxidized dye (a worked check of this rule appears below); (iv) an electron transfer rate from the dye sensitizer to the semiconductor that is faster than the decay rate of the photosensitizer; and (v) stability under solar light illumination and continuous light soaking [76–78]. It is important to note that stable natural plant pigments extracted with effective solvents can absorb a broad range of visible light [79, 80], because the two most significant drawbacks of DSSCs are their narrow spectral response and short-term stability. Therefore, in this review, different natural plant pigments extracted from different plant parts, such as leaves, roots, stems, barks, peel waste, flowers, various spices, and mixtures of them, with various solvents, are discussed along with their stability and various experimental factors.

### 3.4.1. Natural Plant Pigment Photosensitizers in DSSCs

The highest efficiency ever recorded for a DSSC was about 12%, using Ru(II) dyes with optimized material and structural properties. However, this efficiency is low compared to the efficiencies of first- and second-generation solar cells (Si-based and thin-film solar cells, respectively), which reach about 20-30% [11]. A ruthenium-based dye and platinum are the most common materials used as the photosensitizer and counterelectrode, respectively, in the production of DSSCs, the third generation of photovoltaic technologies. However, the high cost, synthetic complexity, and toxicity of ruthenium dyes and the scarcity of platinum sources hinder their use in DSSCs [49]. Thus, an alternative way to produce cost-effective dyes on a large scale is to extract natural dyes from plant sources. Their colors are due to the presence of various pigments that have been proven to be efficient photosensitizers. Meanwhile, the colors and their transmittance could themselves affect the energy generation performance; DSSCs currently being produced show better power generation efficiency as the visible light transmittance decreases, with the power generation efficiency following the order red > green > blue [81]. It is reported that extracts of plant pigments also act simultaneously as photosensitizers and as reducing agents for nanostructure synthesis, which is useful for photoanode activity in solar devices (e.g., TiO2) [82]. In order to improve the energy conversion efficiency of natural photosensitizers, blending of different dyes, copigmentation of dyes, acidification of dyes, and other approaches have been pursued by researchers, resulting in appreciable performance [83]. Based on the types of natural molecules found in plant products, such photosensitizers are classified into the carotenoid, betalain, flavonoid, or chlorophyll structural classes [30, 65, 83, 84].
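Requirement (iii) above reduces to two inequalities that can be checked directly once a dye's frontier orbital energies are known. In the minimal sketch below, the TiO2 conduction band edge (≈ -4.0 eV vs. vacuum) and the I-/I3- redox level (≈ -4.8 eV vs. vacuum) are commonly quoted literature values, and both example dyes are purely hypothetical.

```python
# Minimal sketch of the HOMO/LUMO alignment rules in requirement (iii):
# the dye LUMO must lie above the semiconductor conduction band (injection),
# and the HOMO must lie below the electrolyte redox level (regeneration).
TIO2_CB = -4.0        # eV vs. vacuum, approximate TiO2 conduction band edge
IODIDE_REDOX = -4.8   # eV vs. vacuum, approximate I-/I3- redox level


def check_alignment(name: str, homo: float, lumo: float) -> None:
    injection_ok = lumo > TIO2_CB          # excited electron can enter TiO2
    regeneration_ok = homo < IODIDE_REDOX  # oxidized dye can be re-reduced
    verdict = "suitable" if injection_ok and regeneration_ok else "unsuitable"
    print(f"{name}: injection={injection_ok}, "
          f"regeneration={regeneration_ok} -> {verdict}")


check_alignment("hypothetical dye A", homo=-5.3, lumo=-3.2)  # passes both tests
check_alignment("hypothetical dye B", homo=-4.5, lumo=-3.2)  # HOMO too high
```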
For stable adsorption onto the semiconductor substrate, sensitizers are typically designed with functional groups such as -COOH, -PO3H2, and -B(OH)2 [20]. These biomolecules have functional groups, such as carboxyl and hydroxyl, that can easily react with the surface of nanostructured TiO2, enabling them to absorb sunlight; in particular, hydroxyl and carboxyl functional groups bind strongly to the surface of TiO2 [85].

An enhancement of efficiency has been reported for dyes extracted from fresh purple cabbage (anthocyanin), spinach leaves (chlorophyll), turmeric stem (betaxanthin), and their mixture as photosensitizers, with nanostructured ZnO-coated FTO substrates as the photoanode of the DSSCs. The photon-to-electrical power conversion efficiencies of the purple cabbage, spinach, turmeric, and mixed dyes were found to be 0.1015%, 0.1312%, 0.3045%, and 0.602%, respectively, under the same simulated light conditions. The mixed dye showed the most stable cell performance with the highest conversion efficiency, owing to absorption over an extensive range of the solar spectrum and well-suited electrochemical responses arising from fast electron transport and lower recombination loss with a longer electron lifetime [86].

(1) Flowers. In DSSCs, the red/purple pigment in various leaves and flowers has been used as a sensitizer. Notably, an abundantly available organic dye is easily extracted from flowers and leaves and is mainly responsible for light absorption in DSSCs [39]. The natural color of these pigments originates from the anthocyanin group present in different parts (e.g., flowers, leaves) of the plant. Hibiscus rosa-sinensis, a red pigment-containing flower with a high concentration of anthocyanins, is used as a natural sensitizer in DSSCs. The Malvaviscus penduliflorus flower is closely related to the Hibiscus family; however, research on M. penduliflorus flower-extracted dye in DSSCs is still lacking. The broad absorption of Hibiscus-extracted dye within 400-500 nm can be further enhanced either by using a concentrated dye solution or by operating the sensitization process at an elevated temperature [39]. Natural dyes from flowers can decrease the charge transfer resistance, are helpful for better light absorbance, and show absorption near the red region. Efficient DSSCs using natural dyes are therefore less toxic, easier to dispose of, more cost-effective, and more environmentally friendly compared to organic dyes, which is considered beneficial for future biosolar cell technology [87]. The performance of such DSSCs can be compensated by introducing a scattering layer or interfacial modification in the photoanode, concomitantly broadening the wavelength range of light absorption to make them suitable for outdoor applications [39]. The presence of a series of conjugated double bonds in flower extracts helps to increase the efficiency. Raguram and Rajni [88] demonstrated that a flavanol pigment from the Allamanda blanchetti flower is responsible for red, purple, and blue colors, whereas a carotenoid pigment from the Allamanda cathartica flower is responsible for bright red, orange, and yellow colors and a series of conjugated double bonds [88]. Table 1 summarizes flower-based photosensitizers for efficiency improvement in DSSCs; the performance is highly dependent on the flower type.

Table 1 Flowers as photosensitizers in DSSCs.
| Plant | Class of biomolecules | Solvent for extraction | Photoanode | Jsc (mA cm-2) | Voc (V) | FF (%) | η (%) | Ref |
|---|---|---|---|---|---|---|---|---|
| Salvia | — | Methanol | TiO2-FTO | 0.168 | 0.461 | 40.0 | 0.152 | [87] |
| Spathodea | — | Methanol | TiO2-FTO | 0.201 | 0.525 | 41.2 | 0.217 | [87] |
| Malvaviscus penduliflorus | — | Ethanol | TiO2/MnO2-FTO | 6.02 | 0.38 | 40.38 | 0.92 | [39] |
| Allamanda blanchetti | Flavonoids (flavanol) | Ethanol | TiO2-FTO | 4.1366 | 0.4702 | 60 | 1.16 | [88] |
| Allamanda cathartica | Carotenoids (lutein) | Ethanol | TiO2-FTO | 2.1406 | 0.4896 | 28 | 0.30 | [88] |
| Canna-lily red | Anthocyanins | Methanol | TiO2-FTO | 0.44 | 0.57 | 45 | 0.14 | [90] |
| Canna-lily yellow | Anthocyanins | Methanol | TiO2-FTO | 0.43 | 0.56 | 40 | 0.12 | [90] |
| Beta vulgaris L. ssp. f. rubra | Beta carotene | Hot water | TiO2 surface | 0.44 | 0.55 | 51 | 0.41 | — |
| Brassica oleracea L. var. capitata f. rubra | Anthocyanin | Hot water | TiO2 surface | 1.88 | 0.54 | 56 | 1.87 | — |

(2) Leaves. The advantage of mesoporous TiO2 is that its pores provide a large surface area for higher adsorption of dye molecules and facilitate the penetration of the electrolyte. Absorbing light over an extended range of wavelengths with innovative natural dyes, combined with increasing the surface area of the photoanode through a TiO2 nanostructure-based layer on the glass substrate, improves DSSC technology [74]. Khammee et al. [89] have reported a natural pigment photosensitizer extracted from Dimocarpus longan leaves; according to the report, the methanol-extracted pigment was composed of chlorophyll-a, chlorophyll-b, and carotene components [89]. The functional groups of the natural plant pigments found in leaves can bind with TiO2 and are then responsible for absorbing visible light [54]. Chlorophyll, which is found in the leaves of most green plants, absorbs light at red, blue, and violet wavelengths and obtains its color by reflecting green. Chlorophyll exhibits two main absorption peaks in the visible region, at wavelengths of 420 and 660 nm [85]. Experimental results show that the absorption peaks of these dyes are mainly distributed in the visible regions of 400-420 nm and 650-700 nm, so chlorophyll was selected as the reference dye [94]. Chlorophyll- and other related extract-based photosensitizers are given in Tables 1 and 2.

Table 2 Leaves as photosensitizers in DSSCs.

| Plant | Class of extracted dye pigments | Solvent for extraction | Photoanode | Jsc (mA cm-2) | Voc (V) | FF (%) | η (%) | Ref |
|---|---|---|---|---|---|---|---|---|
| Lagerstroemia macrocarpa | Carotenoids, chlorophyll-a, chlorophyll-b | Methanol | TiO2-FTO | 0.092 | 0.807 | 53.71 | 1.138±0.018 | [74] |
| Spinach leaves | Chlorophyll | Acetone | TiO2-FTO | 0.41 | 0.59 | 58.75982 | 0.171253 | [26] |
| Strobilanthes cusia | Chlorophyll-a, chlorophyll-b | Methanol, ethanol, acetone, diethyl ether, dimethyl sulphoxide | TiO2-FTO | 0.0051833 | 0.306 | 46.2 | 0.0385 | [49] |
| Galinsoga parviflora | Chlorophyll group | Distilled water and ethanol | TiO2-FTO | 0.4 (mA) | 0.3 | 46.7 | 1.65 | [91] |
| Amaranthus red | Chlorophyll, betalain | Distilled water, ethanol, acetone | TiO2-FTO | 1.0042 | 0.3547 | 38.64 | 0.14 | [54] |
| Lawsonia inermis | Lawsone, chlorophyll | Distilled water, ethanol, acetone | TiO2-FTO | 0.4236 | 0.5478 | 38.51 | 0.09 | [54] |
| Cordyline fruticosa | Chlorophyll | Ethanol | TiO2 surface | 1.3 (mA) | 0.616 | 60.16 | 0.5 | [85] |
| Euodia meliaefolia (Hance) Benth | Chlorophyll | Ethanol | TiO2-FTO | 2.64 | 0.58 | 70 | 1.08 | [92] |
| Matteuccia struthiopteris (L.) Todaro | Chlorophyll | Ethanol | TiO2-FTO | 0.75 | 0.60 | 72 | 0.32 | [92] |
| Corylus heterophylla Fisch | Chlorophyll | Ethanol | TiO2-FTO | 0.68 | 0.56 | 69 | 0.26 | [92] |
| Filipendula intermedia | Chlorophyll | Ethanol | TiO2-FTO | 0.87 | 0.54 | 74 | 0.34 | [92] |
| Pteridium aquilinum var. latiusculum | Chlorophyll | Ethanol | TiO2-FTO | 0.74 | 0.56 | 73 | 0.30 | [92] |
| Populus L. | Chlorophyll | Ethanol | TiO2-FTO | 1.25 | 0.57 | 37 | 0.27 | [92] |
| Euphorbia sp. | Quercetin | Hot water | TiO2 surface | 0.46 | 0.40 | 51 | 0.30 | — |
| Rubia tinctoria | Alizarin | Hot water | TiO2 surface | 0.65 | 0.48 | 63 | 0.65 | — |
| Morus alba | Cyanine | Hot water | TiO2 surface | 0.44 | 0.45 | 57 | 0.38 | — |
| Reseda lutea | Luteolin | Hot water | TiO2 surface | 0.50 | 0.50 | 62 | 0.52 | — |
| Medicago sativa | Chlorophyll | Hot water | TiO2 surface | 0.33 | 0.55 | 56 | 0.33 | — |
| Aloe barbadensis miller | Anthocyanins | Ethanol | TiO2-FTO | 0.112 | 0.676 | 50.4 | 0.380 | [93] |
| Opuntia ficus-indica | Chlorophyll | Ethanol | TiO2-FTO | 0.241 | 0.642 | 48.0 | 0.740 | [93] |
| Cladode and aloe vera | Anthocyanins and chlorophyll | Ethanol | TiO2-FTO | 0.290 | 0.440 | 40.1 | 0.500 | [93] |
| Lotus leaf | Alkaloid and flavonoid | Ethanol | TiO2-FTO | 14.33 | 0.44 | 23 | 1.42 | [48] |
| Brassica oleracea var. | Anthocyanin | Distilled water, methanol, and acetic acid | TiO2-FTO | 0.49 | 0.43 | 51 | 0.054 | [55] |
| Wrightia tinctoria R.Br. ("Pala indigo" or "dyer's oleander") | Chlorophyll | Cold methanolic extract | TiO2-FTO | 0.53 | 0.51 | 69 | 0.19 | [94] |
| Wrightia tinctoria R.Br. | Chlorophyll | Acidified cold methanolic extract | TiO2-FTO | 0.21 | 0.422 | 66 | 0.06 | [94] |
| Wrightia tinctoria R.Br. | Chlorophyll | Soxhlet extract | TiO2-FTO | 0.49 | 0.495 | 69 | 0.17 | [94] |
| Wrightia tinctoria R.Br. | Chlorophyll | Acidified Soxhlet extract | TiO2-FTO | 0.31 | 0.419 | 65 | 0.08 | [94] |

(3) Fruits. Plant-extracted natural dyes are considered promising owing to their abundance and eco-friendly characteristics. They are environmentally and economically superior to ruthenium-based dyes because they are nontoxic and cheap; however, the conversion efficiency of dye-sensitized solar cells based on natural dyes is still low [95]. Substitution of natural dyes as sensitizers has been shown to be not only economically viable and nontoxic but also effective for enhancing efficiency, with values reported up to 11.9% [96]. Sensitizers for DSSCs need to fulfill important requirements, such as absorption in the visible and near-infrared regions of the solar spectrum and strong chelation to the semiconductor oxide surface. Moreover, the LUMO of the dye should lie at a higher energy level than the conduction band of the semiconductor so that, upon excitation, the dye can inject electrons into the conduction band of the TiO2 [95]. Considering this, Najm et al. [97] used the abundant and cheap Malaysian fruit betel nut (Areca catechu) as a photosensitizer in DSSCs, owing to its content of tannins, polyphenols, gallic acid, catechins, alkaloids, fat, gum, and other minerals. Gallotannic acid, a stable dye, is the main (yellowish) pigment of A. catechu, is responsible for the effective absorption of visible wavelengths, and is used in DSSCs [97, 98]. Fruit extracts as photosensitizers and natural extracts from other sources are summarized in Tables 3 and 4, respectively.

Table 3 Fruits as photosensitizers in DSSCs.
| Plant | Class of extracted dye pigments | Solvent for extraction | Photoanode | Jsc (mA cm-2) | Voc (V) | FF (%) | η (%) | Ref |
|---|---|---|---|---|---|---|---|---|
| Melastoma malabathricum | Anthocyanin | Methanol and trifluoroacetic acid | TiO2 film | 4.49 | 0.42 | 57 | 1.05 | [84] |
| Eleiodoxa conferta | Anthocyanin | Ethanol | TiO2-FTO | 4.63 | 0.37 | 56 | 1.00 | [75] |
| Garcinia atroviridis | Anthocyanin | Ethanol | TiO2-FTO | 2.55 | 0.32 | 63 | 0.51 | [75] |
| Onion peels | Anthocyanin | Distilled water | TiO2-FTO | 0.24 | 0.48 | 46.63 | 0.065 | [26] |
| Red cabbage | Anthocyanin | Distilled water | TiO2-FTO | 0.21 | 0.51 | 46.61 | 0.060 | [26] |
| Areca catechu | Gallotannic acid | Methanol | TiO2 surface | 0.3 | 0.536 | 73.5 | 0.118 | [97] |
| Hylocereus polyrhizus | Anthocyanin | Distilled water, ethanol, and acetic acid | TiO2-FTO | 0.23 (mA) | 0.34 | 63 | 0.024 | [55] |
| Doum palm | Chromophores | Ethanol | TiO2-FTO | 0.005 | 0.37 | 63 | 0.012 | [99] |
| Doum palm | Chromophores | Distilled water | TiO2-FTO | 0.010 | 0.50 | 66 | 0.033 | [99] |
| Linia cauliflora | Anthocyanin | Ethanol | TiO2-ITO | 0.38 | 0.41 | 29 | 0.13 | [100] |
| Phyllanthus reticulatus | Anthocyanin | Methanol | TiO2-FTO | 1.382 | 0.67 | — | 0.69 | [101] |

Table 4 Other natural pigment sources as photosensitizers in DSSCs.

| Plant | Class | Solvent for extraction | Photoanode | Jsc (mA cm-2) | Voc (V) | FF (%) | η (%) | Ref |
|---|---|---|---|---|---|---|---|---|
| Juglans regia shell | Juglone | Hot water | TiO2 surface | 0.43 | 0.47 | 56 | 0.38 | — |
| Malabar spinach seeds | — | Distilled water | TiO2-ITO | 510 (μA) | 0.710 | 48.7 | 9.23 | [102] |
| Rhamnus petiolaris seed | Emodin | Hot water | TiO2 surface | 0.20 | 0.50 | 55 | 0.18 | — |
| Iridaea obovata algae | Phycoerythrin | Ethanol | TiO2-FTO | 0.136 | 0.40 | 43 | 0.022 | [103] |
| Delesseria lancifolia algae | Phycoerythrin | Ethanol | TiO2-FTO | 0.243 | 0.40 | 46 | 0.045 | [103] |
| Plocamium hookeri algae | Phycoerythrin | Ethanol | TiO2-FTO | 0.083 | 0.53 | 63 | 0.027 | [103] |
| Mangosteen pericarp (mangosteen peels) | Anthocyanin | Ethanol | TiO2-FTO | 0.38 (mA) | 0.46 | 48 | 0.042 | [55] |
| Ataco vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.06 | 0.48 | 66 | 0.018 | [104] |
| Achiote vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.06 | 0.45 | 50 | 0.013 | [104] |
| Berenjena vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.04 | 0.40 | 56 | 0.008 | [104] |
| Flor de Jamaica vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.382 | 0.478 | 58 | 0.109 | [104] |
| Mora vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.28 | 0.48 | 51 | 0.069 | [104] |
| Mortiño vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.557 | 0.484 | 66.4 | 0.175 | [104] |
| Rabano vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.07 | 0.39 | 55 | 0.015 | [104] |
| Tomate de arbol vegetable | Anthocyanins | Ethanol | TiO2-FTO | 0.10 | 0.44 | 52 | 0.023 | [104] |

The narrow spectral response and short-term stability of DSSCs are their two major drawbacks. These limitations are mitigated by using natural plant pigment dyes as effective sensitizers on the photoanode of the device, as the natural pigments improve the efficiency of DSSCs by providing broad spectral absorption responses. To this end, DeSilva et al. investigated photosensitizers from Mondo-grass berry and blackberry. An improvement in device efficiency, with better stability, was observed for the Mondo-grass berry dye compared with blackberry. This is because Mondo-grass berry contains a mixture of two or more chemical compounds belonging to both the anthocyanin and carotenoid families, as proved by thin layer chromatography [105].
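For a quick comparison across Tables 1-4, the sketch below transcribes a representative handful of the tabulated efficiencies and ranks them. The values are copied from the tables above, and the selection of rows is arbitrary.

```python
# Minimal sketch: ranking a few natural-dye cells transcribed from
# Tables 1-4 above by their reported efficiency (only a subset of rows).
cells = [
    ("Allamanda blanchetti (flower)",    1.16),
    ("Lagerstroemia macrocarpa (leaf)",  1.138),
    ("Melastoma malabathricum (fruit)",  1.05),
    ("Eleiodoxa conferta (fruit)",       1.00),
    ("Mortino (vegetable)",              0.175),
    ("Doum palm, water extract (fruit)", 0.033),
]
for plant, eta in sorted(cells, key=lambda row: row[1], reverse=True):
    print(f"{plant:36s} eta = {eta:.3f} %")
# Anthocyanin- and chlorophyll-rich flower, leaf, and fruit extracts cluster
# near 1%, roughly an order of magnitude above the weakest extracts listed.
```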
## 4. Photoinduced Electron Transfer Rate Efficiency in Natural Plant Pigment DSSCs

Anthocyanin coupled with TiO2 is cheap, readily available, and innocuous to the environment, with high economic advantages over other types of photovoltaic devices, but it has yet to become a commercially viable product due to its low conversion efficiency and life span [11]. In addition, TiO2 photoelectrode properties favor natural pigments as sensitizers in DSSCs because the conduction band of the TiO2 photoelectrode coincides well with the excited-state LUMO level of natural pigments (especially anthocyanins) [106]. The interaction between TiO2 and the dye molecule can lead to the transfer of excited electrons from the dye molecules to the conduction band of TiO2 [85], as shown in Figure 11.

Figure 11 Configuration of a traditional dye-sensitized solar cell [107].

For good photovoltaic efficiency of a DSSC, an electron from the electronically excited state of the dye must be injected effortlessly into the conduction band of the semiconductor. The electron transfer kinetics of natural dye molecules can be appraised in terms of photoinduced electron transfer (PET) theory. The theory implies that the logarithm of the electron transfer rate is a quadratic function of the driving force, −ΔG°. The simplified form of the rate constant of electron transfer, kET, is given as follows:

$$k_{ET} = A \exp\left[-\frac{(\Delta G^{\circ} + \lambda)^{2}}{4\lambda RT}\right], \tag{1}$$

where ΔG° is the driving force, λ is the reorganization energy, R is the gas constant, and T is the temperature. In the region where the driving force is smaller than the reorganization energy (the normal region), the electron transfer rate increases as the driving force increases. The electron transfer rate attains a maximum value when the driving force −ΔG° equals λ. When the driving force is greater than λ, inverted-region kinetics are observed, and the electron transfer rate decreases as the driving force increases.

The driving force for electron transfer between a photosensitizer and semiconductor nanoparticles is dictated by the energy difference between the oxidation potential of the photosensitizer and the reduction potential of the semiconductor nanoparticles. The Rehm-Weller equation can be utilized to determine the driving force energy change for the PET process. This equation gives the driving force energy change between a donor (D) and an acceptor (A) as [108]:

$$\Delta G^{\circ} = e\left[E_{Oxi}(D) - E_{Red}(A)\right] - \Delta E^{*}, \tag{2}$$

where e is the unit electrical charge, E_Oxi(D) and E_Red(A) are the oxidation and reduction potentials of the electron donor and acceptor, respectively, and ΔE* is the electronic excitation energy, corresponding to the energy difference between the ground and first excited states of the donor species.

This is followed by regeneration of the dye by the redox mediator, transport of electrons in the mesoporous TiO2 and of redox mediators in the electrolyte, and finally, reduction of the oxidized redox mediator at the counterelectrode, since dyes in DSSCs are adsorbed as a monolayer onto the mesoporous TiO2 electrode [107]. Historically, ruthenium- and volatile organic-based molecular sensitizers were used, but owing to the variety of available pigments as well as environmental friendliness, which avoids the use of expensive rare metals and toxic volatile organics, much research work is now devoted to natural plant-based photosensitizers.
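Equation (1) can be explored numerically to see the normal and inverted regions described above. A minimal sketch follows, assuming an illustrative pre-exponential factor and reorganization energy; neither value is taken from the reviewed studies.

```python
import numpy as np

# Marcus-type rate from Eq. (1): k_ET = A * exp(-(dG + lam)^2 / (4*lam*R*T)).
# A and lam are illustrative assumptions, not values from the cited work.
R = 8.314e-3   # gas constant, kJ mol^-1 K^-1
T = 298.0      # temperature, K
A = 1.0e12     # assumed pre-exponential factor, s^-1
lam = 50.0     # assumed reorganization energy, kJ mol^-1


def k_et(dG: float) -> float:
    """Electron transfer rate constant for a given Delta G (kJ mol^-1)."""
    return A * np.exp(-(dG + lam) ** 2 / (4 * lam * R * T))


for dG in (-10.0, -25.0, -50.0, -75.0, -100.0):
    print(f"dG = {dG:7.1f} kJ/mol  ->  k_ET = {k_et(dG):.3e} s^-1")
# The rate rises as the driving force -dG grows toward lam (normal region),
# peaks at dG = -lam, and falls again beyond it (Marcus inverted region).
```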
Akin et al. [108] have designed and tested DSSCs sensitized by natural dyes containing several pigments with various anchoring groups, such as carbonyl, hydroxyl, and alkyl chains, to clearly understand the photoinduced electron injection kinetics of these natural DSSCs. As a summary, the photoinduced electron transfer mechanism from a plant extract photosensitizer is shown in Figure 12.

Figure 12 Photoinduced electron transfer mechanism in natural dye extracted plant [108].

## 5. Efficiency Optimization of Natural Plant Pigment-Based DSSCs

Ananthakumar et al. [70] showed that when a photoanode is functionalized by organic dyes, it helps absorb more incident light. To increase the efficiency of solar cell devices further, the photoanode has been improved with low-cost transition metal oxide nanomaterials containing quantum dots. In addition, the cosensitization process also enhances the light harvesting efficiency; according to the review, an efficiency of 14% was reported. Hence, a sensitizer supported by a suitable photoanode can successfully help to absorb visible light (enhance the light-harvesting capability) [109].

Al-alwani et al. [85] have optimized three process parameters, namely, the nature of the organic solvent based on its boiling point (ethanol, methanol, and acetonitrile), the pH (4-8), and the extraction temperature (50-90°C), for chlorophyll extraction from Cordyline fruticosa leaves using response surface methodology. The optimal extraction conditions were a pH of 7.99, an extraction temperature of 78.33°C, and a solvent boiling point of 78°C. At this optimum, the extracted pigment was used as a photosensitizer, and a maximum solar conversion η of 0.5% was achieved [85]. Chien and Hsu [110] have reported an optimized anthocyanin photosensitizer extracted from red cabbage (Brassica oleracea var. capitata f. rubra); the best light-to-electricity conversion η was obtained when the pH and the concentration of the anthocyanin extract were 8.0 and 3 mM, respectively, and when the immersion time for fabricating the sensitized TiO2 film was 15 min [111].

IK and Uthman [99] have reported the effect of ethanol- and distilled water-based extraction on a doum palm fruit photosensitizer. From their report, the absorption transition between the dye ground and excited states and the solar energy range absorbed by the dye differ between the two extracts. This difference is due to the chromophores present, the chemical groups responsible for the color of the molecule, that is, its ability to absorb photons. In detail, the doum water extract has two absorption peaks, at 350 nm and 400 nm, while the doum ethanol extract adsorbed on TiO2 showed only one absorption peak, at 353 nm. It can be seen that after TiO2 nanoparticles were added to the doum pericarp extract, its absorption intensity decreased from 440 to 350 nm. Finally, the conversion efficiency for the ethanol extract was 0.012%, while that of the water extract reached 0.033% under the same light intensity [99].

Gu et al. [8] have suggested that the absorption properties of natural dyes are strongly dependent on the types and concentration of pigments. In their study, the photoelectric performance of different natural dyes from spinach, pitaya pericarp, orange peel, ginkgo leaf, purple cabbage, and carrot was measured, as shown in Figure 13. They reported that the VOC of these dyes showed similar values, around 0.524 V, except for carrot, which showed only 0.276 V.
The fill factors of these DSSCs are mostly higher than 0.5, indicating a good photoelectric energy conversion capability. The short-circuit photocurrent densities (JSC) of the DSSCs based on the different natural dyes follow the order JSC(purple cabbage) > JSC(orange peel) > JSC(spinach) > JSC(ginkgo leaf) > JSC(pitaya pericarp) > JSC(carrot), reaching 0.594, 0.325, 0.152, 0.111, 0.100, and 0.086 mA/cm2, respectively. Meanwhile, purple cabbage showed the highest photoelectric conversion efficiency, reaching 0.157% [8].

Figure 13 J-V curves for the DSSCs under standard simulated sunlight [8].

Optimization of the photoanode (TiO2 nanostructure) is necessary for developing high solar efficiency in DSSCs [74]. Notably, thicker TiO2 layers result in dwindling transmittance and reduce the light intensity available for absorption by the pigment dyes; the charge transfer resistance may also increase as the thickness of the TiO2 electrode layer increases [112]. García-Salinas and Ariza [113] sought to optimize the extraction solvent, extraction method, pH, dye precursor, and dye extract stability. They focused on betalain pigments present in bougainvillea and beetroot extracts and on anthocyanins in eggplant extracts; of these, the beetroot extract showed 0.47% cell efficiency [113]. Later, an improved power conversion of 1.3% was demonstrated by using a Kniphofia schemperi root sensitizer in the presence of TiO2 NPs biosynthesized at a (3 : 2) volume ratio, owing to effective surface modification that enabled better absorption of incident light [114, 115].

Norhisamudin et al. [116] have fabricated DSSCs using anthocyanin or chlorophyll natural dye extracts from Roselle (Hibiscus sabdariffa) and green tea leaves (Camellia sinensis). Both pigments were extracted using different alcohol-based solvents, namely, ethanol, methanol, and a mixture (ethanol + methanol), to identify whether the solvent has an effect during dye extraction. According to their study, extraction with the mixed methanol-ethanol solvent showed an efficiency improvement. The comparison between Roselle (anthocyanin) and green tea (chlorophyll) dye extracts shows that Roselle has the higher efficiency and photosensitized performance.

Sensitization time and the number of natural dye coatings are other major factors affecting DSSC performance. For example, in betanin, indigo, and lawsone solar cell systems, sensitization times of 6, 12, 24, 36, and 48 hours were tested. A time of 24 hours was found to be optimal for the betanin and lawsone solar cells, while 36 hours was optimal for the indigo solar cells. The optimal sensitization time for the best performance of a particular dye depends on the rate of dye anchoring. Lawsone and betanin have higher dipole moments, favoring the dipole-dipole interaction with TiO2; moreover, they possess more favorable functional groups (-COOH and -OH) compared to indigo (with C=O groups), which enables a higher rate of anchoring. Akin et al. reported the effects of anchoring groups on the photoinduced electron injection dynamics from natural dye molecules to TiO2 nanoparticles. According to their report, nine different natural dyes having various anchoring groups were extracted from various plants and used as photosensitizers in DSSC applications.
From these extracts, the anthocyanin bearing long hydroxyl and carbonyl chains, with the maximum electron transfer rate (kET), showed the best photosensitization effect with regard to cell output. Although their performance in DSSCs is somewhat lower than or close to that of the metal complexes, these metal-free natural dyes can be treated as a new generation of sensitizers. It was reported that upon illumination, the dyes absorb light; an electron in the HOMO state is excited to the LUMO state and further injected into the conduction band of TiO2. Compared with physical adsorption, chemical adsorption is a more effective way to enhance the conversion efficiency; usually, the -OH, -COOH, C=O, -SO3H, and -PO3H2 groups in the pigment act as the effective groups and bond with TiO2, forming a chemical adsorption, as shown in Figure 14. This facilitates the transfer of electrons from the dye to TiO2. Therefore, according to the report of Gu et al., the photoelectric conversion efficiencies of the dyes followed the order η(purple cabbage) > η(orange peel) > η(spinach) > η(pitaya pericarp) > η(ginkgo leaf) > η(carrot), reaching 0.157, 0.071, 0.054, 0.031, 0.030, and 0.010%, respectively, which is attributed to the synergistic reaction between the absorptive properties and molecular structure of the natural dyes [8].

Figure 14 Chemical adsorption between TiO2 films and effective groups [8].

It is reported that an increase in the dye layer obstructs the charge transfer from the conduction band of the TiO2 surface to the FTO electrode [69]. For the aforementioned reasons, increasing the number of coatings in betanin-, indigo-, and lawsone-based solar cell configurations proved detrimental to the solar cell performance, possibly because of dye aggregation. In addition, doping agents are another factor affecting the efficiency of DSSCs containing natural plant pigment dyes. For example, Bekele et al. [114] have reported that when TiO2 nanoparticles are doped with Mg2+ ions and coated on an FTO glass substrate, they form a Mg2+-TiO2-FTO photoanode. When this photoanode was immersed in methanol-extracted henna (Lawsonia inermis) leaf dye, its light-to-electricity conversion efficiency increased, generating the highest Jsc, from 0.66 to 1.28 mA cm-2, representing a 93% increase over the undoped TiO2 group [117].

As seen from Figure 15(a), the short-circuit photocurrent density JSC increases from 0.23 to 0.38 mA cm-2 when the TiO2 thin film is coated by the doctor blade and spin-coating techniques, respectively. The results indicate a lower recombination rate in the spin-coated TiO2 thin film photoanodes. From the J-V curve, the highest JSC and VOC of the DSSCs were observed at 0.38 mA cm-2 and 0.41 V, respectively. The maximum η obtained was 0.13% for spin-coated TiO2 thin film electrodes and 0.08% for doctor blade-coated electrodes. Figure 15(b) shows the power versus potential curves; the corresponding maximum power (Pmax) obtained from spin-coated TiO2 thin film DSSCs was 36.4 μW cm-2, while the maximum power for the doctor blade-coated electrode was 23.6 μW cm-2 [100].

Figure 15 Photovoltaic performance of spin-coated (blue dotted) and doctor blade-coated (red dotted) TiO2 anodes photosensitized by jabuticaba fruit extract dye (a) and power versus voltage curves of spin-coated (blue curve) and doctor blade-coated (red curve) TiO2 thin film DSSCs using natural dyes extracted from the jabuticaba fruit (b) [100]. (a)(b)
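The Pmax, FF, and η values quoted from Figure 15 all derive from the same maximum-power-point construction on the J-V curve. Below is a minimal sketch of that construction, assuming a synthetic, idealized J-V shape rather than the measured jabuticaba data; only Voc and Jsc are chosen near the values quoted above.

```python
import numpy as np

# Minimal sketch: locating the maximum power point (MPP) on a J-V curve and
# deriving Pmax, FF, and eta, as in Figure 15(b). The diode-law curve shape
# and the parameter m are assumptions, not the measured data.
P_IN = 100.0                     # mW cm^-2, AM 1.5G illumination
voc, jsc = 0.41, 0.38            # V, mA cm^-2 (order of magnitude from the text)
m = 0.05                         # V, assumed curve-shape parameter

v = np.linspace(0.0, voc, 500)                               # voltage sweep
j = jsc * (1 - (np.exp(v / m) - 1) / (np.exp(voc / m) - 1))  # J -> 0 at Voc
p = j * v                                                    # power, mW cm^-2

i_mpp = np.argmax(p)             # index of the maximum power point
p_max = p[i_mpp]
ff = p_max / (jsc * voc)         # fill factor follows from Pmax
eta = p_max / P_IN * 100.0       # efficiency in %

print(f"MPP at V = {v[i_mpp]:.3f} V, J = {j[i_mpp]:.3f} mA cm^-2")
print(f"Pmax = {p_max * 1000:.0f} uW cm^-2, FF = {ff:.2f}, eta = {eta:.3f} %")
```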
## 6. Computational Studies of Natural Plant Pigment-Based DSSCs

Computational calculations on natural plant pigment extracts, for example with the Gaussian 09 package, have provided a valuable reference for establishing precise protocols relating molecular structure to photoelectrical properties. Such models enable the identification of the essential electronic and structural attributes that quantify the molecular prerequisites of the dye classes responsible for high power conversion efficiency in DSSCs [118, 119].

In detail, the ground-state structure of a dye is typically optimized using DFT with the B3LYP functional and a 6-31G(d) basis set, while excited states are calculated using TD-DFT with different functionals, including CAM-B3LYP, MPW1PW91, and PBEPBE, at the same basis set [92]. To clarify this, Maahury and Martoprawiro [120] performed computational calculations on anthocyanin, which is evaluated as a basic reference biomolecule and used as the main photosensitizer in DSSCs [83]. Their geometry optimization calculations showed that the structures of anthocyanin compounds are not planar, and their single-point calculations for the excited states showed that the computed absorption wavelength was shorter than the experimental data (a difference between 7.3% and 8.3%). The calculations used DFT with the B3LYP functional and 6-31G(d) for ground-state optimization and TD-DFT single-point calculations for the excited states [120].

Ghann et al. [98] reported that a computational calculation on delphinidin, an anthocyanin derivative found in pomegranate fruit, resulted in HOMO and LUMO values of -8.71 eV and -6.27 eV, respectively. This makes effective transfer of charge from the LUMO of the pigment into the conduction band of TiO2 possible. To support this, the HOMO and LUMO surfaces and their orbital energy diagrams are shown in Figures 16(b) and 16(c), respectively; the blue and red regions represent the positive and negative values of the orbitals, respectively. The regeneration of the dye by the redox electrolyte (I-/I3-) couple increases the lifetime of the dye itself. Also, the narrow band gap of the delphinidin dye, with a value of 2.44 eV, increases the intramolecular electronic transition probabilities [98]. From these and other literature studies, the electronic transition computational protocol can be summarized in six main steps: (i) engineering band gaps, (ii) the photoabsorption spectrum of the dyes, (iii) adsorption of the dyes onto the anode surface, (iv) the short-circuit current density JSC, (v) the open-circuit photovoltage VOC, and (vi) the photocurrent-photovoltage curve and fill factor FF [121]. On this basis, Mohankumar et al. [119] studied twelve novel dye molecules developed from D-π-A-based triphenylamine (TPA) to evaluate their suitability for DSSC applications using DFT and TD-DFT. The optimization effects of flavone and isoflavone on the TPA-based dyes were studied using the B3LYP and CAM-B3LYP density functionals combined with a 6-311G(d,p) basis set.

Figure 16 Working principles of DSSCs with delphinidin (a), LUMO (b), and HOMO surface and orbital energy diagram for delphinidin (c) [98]. (a)(b)(c)
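As a rough illustration of the B3LYP/6-31G(d) ground-state plus TD-DFT workflow described above, the sketch below uses the open-source PySCF package in place of Gaussian 09, and a water molecule as a tiny stand-in for an actual pigment (real anthocyanins are far larger and costlier to compute).

```python
from pyscf import gto, dft, tddft

# Minimal sketch of the DFT + TD-DFT protocol from the review; PySCF stands
# in for Gaussian 09, and H2O is a placeholder molecule, not a pigment.
mol = gto.M(
    atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",
    basis="6-31g*",              # equivalent to 6-31G(d)
)

mf = dft.RKS(mol)
mf.xc = "b3lyp"
mf.kernel()                      # ground-state SCF

# HOMO/LUMO energies from the occupied/virtual orbital split (closed shell)
HARTREE_TO_EV = 27.2114
homo = mf.mo_energy[mol.nelectron // 2 - 1] * HARTREE_TO_EV
lumo = mf.mo_energy[mol.nelectron // 2] * HARTREE_TO_EV
print(f"HOMO = {homo:.2f} eV, LUMO = {lumo:.2f} eV, gap = {lumo - homo:.2f} eV")

td = tddft.TDDFT(mf)
td.nstates = 5
td.kernel()                      # lowest singlet excitation energies
for i, e in enumerate(td.e, start=1):
    ev = e * HARTREE_TO_EV
    print(f"S{i}: {ev:.2f} eV  ({1239.84 / ev:.0f} nm)")
```

For a real pigment, the same two-step pattern applies; only the geometry, basis set size, and number of requested states change.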
Ndiaye et al. [122] studied the experimental and computational behavior of the chrysanthemin (cyanidin 3-glucoside) pigment in DSSCs. The theoretical study of chrysanthemin was performed with GAUSSIAN 09 simulations. A better energy level alignment was found for partially deprotonated chrysanthemin molecules, with the excited photoelectron having enough energy to be transferred to the conduction band of the TiO2 semiconductor in DSSCs. Experimentally, an aqueous extract of Roselle (Hibiscus sabdariffa) calyces was used as the source of chrysanthemin, and extracts at various pH values were tested in DSSCs. Detailed analysis of the HOMO and LUMO of the cyanidin 3-glucoside molecule deprotonated at positions 1, 2, and 3 and bound to the TiO2 surface shows a large electron density on the deprotonated anchor groups, which favors electron transfer from the excited molecule to the semiconductor, as shown in Figure 17 [122].

Figure 17 Cyanidin 3-glucoside structure (a) and labeling of the deprotonation sites (b) [122]. (a)(b)

The analysis of the molecular orbitals showed that the probability distribution of electron density at the HOMO and LUMO levels is predominantly around the NH and C=O groups in the molecule. The two nonbonding electrons on the N atom participate in the delocalization of the π-electrons of the conjugated systems that correspond to the HOMO energy levels (see Figure 18(a)), and the antibonding π∗ orbitals arise at the LUMO level of indigo, as shown in Figure 18(b). The photoexcitation of the nonbonding electrons from the electron-donating NH group to the antibonding π∗ orbital in the electron-accepting C=O group forms the n⟶π∗ electronic transitions. The C=O group helps the pigment anchor to TiO2. The electron density map shown in Figure 18(c) indicates the distribution of charge on the indigo molecule (green and red represent electropositive and electronegative regions, respectively). As the molecule is symmetric, the net dipole moment is negligible, equal to 0.0053 D [69]. Hypericin is a naphthodianthrone, a red-colored anthraquinone derivative and photosensitive pigment, which is one of the principal active constituents of St. John's wort (Hypericum perforatum). This pigment exhibited good adsorption onto the semiconductor surface, a high molar absorption coefficient (43700 L mol-1 cm-1), and favorable alignment of energy levels, and it provided a long electron lifetime (17.8 ms) on the TiO2 photoanode surface [123].

Figure 18 Electron density corresponding to the HOMO energy level (a), LUMO energy level (b), and charge distributions of indigo (c). Electron density corresponding to the HOMO energy level (d), LUMO energy level (e), charge distributions of lawsone (f), and alignment of the energy levels (with respect to vacuum) of the materials with respect to each other (g) [69]. (a)(b)(c)(d)(e)(f)(g)

The probability distributions of the electron density corresponding to the HOMO and LUMO levels are localized on the benzoid and quinoid moieties, respectively (Figures 18(d) and 18(e)). The first absorption peak at 338 nm is primarily caused by HOMO-to-LUMO transitions within the C=C (π⟶π∗) and C=O (n⟶π∗) regions of the lawsone quinoidal ring. The absorption peak seen in the visible region at 410 nm arises from the n⟶π∗ transitions localized mainly around the oxygen atom of the quinoidal ring. Figure 18(f) shows the distribution of charge on the lawsone molecule.
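The practical value of a high molar absorption coefficient such as hypericin's can be illustrated with the Beer-Lambert law, A = εcl. The coefficient below is the value reported for hypericin; the concentration and path length are illustrative assumptions, not values from [123].

```python
# Minimal sketch: Beer-Lambert absorbance estimate for a dye solution.
epsilon = 43700.0   # L mol^-1 cm^-1 (hypericin, from the text)
conc = 2.0e-5       # mol L^-1, assumed dye concentration
path = 1.0          # cm, standard cuvette path length

absorbance = epsilon * conc * path    # A = eps * c * l
transmittance = 10 ** (-absorbance)   # fraction of light transmitted
print(f"A = {absorbance:.3f}, T = {transmittance * 100:.1f} %")
# A = 0.874, i.e., only ~13% of the incident light passes through:
# a high epsilon lets even dilute dye layers harvest light efficiently.
```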
In this pigment, the carbonyl carbons of the C=O groups are observed to be highly electropositive, giving a net dipole moment of 5.78 D; the strong electron-withdrawing nature of the C=O group anchors the lawsone molecule onto TiO2 [69]. Cosensitization with pigments of complementary absorption spectra broadens the absorption band and is an attractive pathway to enhance the efficiency of DSSCs [124]. Ramirez-Perez et al. [104] reported that the predominance of hydroxyl groups on the aromatic skeleton of anthocyanin gives rise to an intense blue color, while a red color is observed with methoxyl functional groups. In brief, lawsone solar cells displayed better performance, showing average efficiencies of 0.311±0.034%, compared to indigo solar cells with efficiencies of 0.060±0.004%. Accordingly, the betanin/lawsone cosensitized solar cell showed a higher average efficiency of 0.793±0.021%, compared to the 0.655±0.019% obtained for the betanin/indigo cosensitized solar cell. An 11.7% enhancement in efficiency (with respect to betanin) was observed for the betanin/indigo solar cell, whereas a higher enhancement of 25.5% was observed for the betanin/lawsone solar cell. Impedance spectroscopy showed that the higher efficiency can be attributed to the longer electron lifetime of 313.8 ms in the betanin/lawsone cosensitized solar cell, compared to 291.4 ms in the betanin/indigo solar cell (Figure 19) [69].

Figure 19 Illustration of the device showing the complementary absorption by the cosensitized pigments of betanin and indigo (a) and betanin and lawsone (b) [69].

In the computational analysis and experimental verification of photosensitizers and their efficiency performed by Liu et al. [92], absorption spectra were simulated for chlorophyll extracted from six different leaves using ethanol as the solvent. To compare with the experimental results, the excited-state properties of chlorophyll were investigated via the TD-DFT method with different functionals at the 6-31G(d) basis set, based on the optimized ground-state structure of chlorophyll. Chlorophyll was chosen because it accounts for the largest proportion of pigment in green plant leaves. The charge difference density results showed the distribution of charge during the light absorption step (Figure 20). For the sixth excited state, the electron (red in the CDD plots) moves into the semiconductor while the hole resides in the porphyrin ring. Previously, chlorophyll, natural porphyrin, and its derivatives have been studied using DFT approaches to explore their spectroscopic properties and future applications in DSSCs [125]. This state is thus a charge-transfer (CT) state, and a similar CT process is found in the ninth excited state, where more electron migration into the semiconductor benefits electron transport to the external circuit. The calculated excitation energies for the sixth excited state (S6) and the ninth excited state (S9) were 2.6161 eV and 2.9989 eV, respectively. This confirmed the existence of electron transfer into the semiconductor during photoexcitation.

Figure 20 Charge difference density (CDD) for chlorophyll/TiO2: S6 (a) and S9 (b) [92].

In summary, both experiment and computational modeling (relative energy calculations of the HOMO and LUMO of the natural plant pigment extracts) allowed the electron injection ability of the extracted plant pigments to be elucidated [126].
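As a quick arithmetic check on these values, excitation energies convert to absorption wavelengths via λ [nm] = 1239.84/E [eV]; the short sketch below applies this to the S6 and S9 energies quoted above.

```python
# Convert the calculated excitation energies quoted above to wavelengths.
for label, energy_ev in [("S6", 2.6161), ("S9", 2.9989)]:
    print(f"{label}: {energy_ev} eV -> {1239.84 / energy_ev:.0f} nm")
# S6 ~ 474 nm and S9 ~ 413 nm, i.e., both transitions lie in the visible.
```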
These investigations also showed that when natural plant pigments aggregate strongly on the TiO2 surface, the performance of the DSSCs is adversely affected [127, 128].

## 7. Performance Characterization Parameters in DSSCs

### 7.1. Characterization Using Photocurrent Density-Voltage (J-V)

The general performance of assembled DSSCs can be evaluated through parameters such as JSC, η, Vmax, Pmax, VOC, and FF. Bekele et al. [114] reported on the synthesis of TiO2 nanoparticles at three different volume ratios (2 : 3, 1 : 1, and 3 : 2) in the presence of Kniphofia schemperi root ethanol extract, acting both as a capping and reducing agent and as a natural sensitizer. The 2 : 3, 1 : 1, and 3 : 2 photoelectrodes showed VOC values of 63, 48, and 161 mV, respectively. The 3 : 2 photoelectrode shows the best VOC of the three; this improvement can be attributed to improved light absorption in the presence of the Kniphofia schemperi root sensitizer, owing to the improved surface morphology of the photoelectrode. This photoelectrode also provides enhanced efficiency (≈1.30%) compared with the other ratios because of its small average crystallite size, which enables it to adsorb more dye molecules on its surface [129]. The corresponding JSC values for the TiO2 (2 : 3), TiO2 (1 : 1), and TiO2 (3 : 2) photoelectrode-based DSSCs were estimated as 1.29×10−3, 6.05×10−3, and 2.46×10−2 mA/cm2, with resultant FF values of 42%, 40.3%, and 32.8%, respectively. TiO2 (3 : 2) outperforms the other two green-prepared photoelectrodes owing to the better catalytic properties of the photoelectrode, achieved by using less extract during synthesis. Senthamarai et al. [82] reported on TiO2 nanostructure photoelectrodes prepared via the green route using fruit extracts of pineapple, orange, and grapes as reducing and stabilizing agents for DSSC application, with the fruit skin ethanol extract of Murraya koenigii as the sensitizer. The grape-mediated TiO2 photoelectrode shows the maximum solar cell efficiency (1.78%); in the presence of the Murraya koenigii natural sensitizer, the pineapple-templated and orange-templated TiO2 photoelectrodes show efficiencies of 1.61% and 1.52%, respectively. The corresponding VOC values for the grape-TiO2, orange-TiO2, and pineapple-TiO2 photoelectrodes were found to be 0.628, 0.626, and 0.576 V, respectively. Moreover, reports show that ZnO-TiO2-Fe2O3 nanocomposites were synthesized and used as an alternative photoelectrode in the presence of ethanol-extracted Guizotia scabra and Salvia leucantha flower sensitizers [130]; the obtained conversion efficiencies for the ethanol extracts of Guizotia scabra and Salvia leucantha were estimated at 0.0013% and 0.0017%, respectively. According to Cho et al. [131], an ethanol extract of mixed sweet potato leaf and blueberry sensitizer at a weight concentration of 40% (1 : 1 volume ratio) was used to coat the TiO2 photoanode, and VOC, JSC, FF, and η of 0.61 V, 4.75 mA/cm2, 53%, and 1.57% were achieved, respectively. By contrast, the individual sweet potato leaf sensitizer gave 0.645 V, 1.23 mA/cm2, 49%, and 0.391%, and the blueberry flower sensitizer-based DSSCs gave 0.67 V, 0.532 mA/cm2, 61%, and 0.218% of VOC, JSC, FF, and η, respectively.
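The figures of merit quoted throughout this section are linked by FF = Pmax/(JSC VOC) and η = JSC VOC FF/Pin. A small helper, given here as a sketch assuming the standard 1-sun input power Pin = 100 mW/cm², reproduces, for example, the Cho et al. [131] mixed-sensitizer result to within rounding:

```python
# Hedged helper for the J-V figures of merit used in this section.
def fill_factor(p_max, j_sc, v_oc):
    """FF = Pmax / (Jsc * Voc), with Pmax in mW/cm^2, Jsc in mA/cm^2, Voc in V."""
    return p_max / (j_sc * v_oc)

def efficiency(j_sc, v_oc, ff, p_in=100.0):
    """eta(%) = Jsc[mA/cm^2] * Voc[V] * FF / Pin[mW/cm^2] * 100 (Pin assumed 1 sun)."""
    return j_sc * v_oc * ff / p_in * 100.0

# Cho et al. [131] mixed sweet potato leaf / blueberry cell:
# Voc = 0.61 V, Jsc = 4.75 mA/cm^2, FF = 53%
print(f"eta = {efficiency(4.75, 0.61, 0.53):.2f} %")  # ~1.54 %, vs 1.57 % reported
```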
Safie et al. [78] reported on the use of mengkulang and rengas wood sensitizers and their mixed ratios (60 : 40, 40 : 60, and 50 : 50) with a TiO2 nanoparticle photoelectrode. The reported cell performance is summarized below:

| Sensitizer | VOC (V) | JSC (mA/cm2) | FF (%) | η (%) |
|---|---|---|---|---|
| Mengkulang | 0.53 | 0.40 | 75.98 | 0.16 |
| Rengas | 0.50 | 0.30 | 72.88 | 0.11 |
| Mengkulang : rengas, 50 : 50 | 0.54 | 0.60 | 63.28 | 0.21 |
| Mengkulang : rengas, 40 : 60 | 0.53 | 0.90 | 61.78 | 0.29 |
| Mengkulang : rengas, 60 : 40 | 0.53 | 0.90 | 62.16 | 0.30 |

Figures 21(a)–21(c) display the effect of the photoelectrode on the photovoltaic parameters of DSSCs. Figure 21(a) depicts the J-V curves of the green-synthesized TiO2 NP photoelectrodes at different volume ratios in the presence of the ethanolic root extract of Kniphofia schemperi, while Figure 21(b) shows the J-V curves of the mengkulang and rengas wood and mixed sensitizers with the TiO2 nanostructured photoanode. When TiO2 nanoparticles are layered with a graphitic carbon nitride structure to form composites, the energy barrier for electron transport decreases and the injection efficiency of photogenerated electrons from the photoanode improves [132].

Figure 21 J-V curves of the root extract of Kniphofia schemperi (a), mengkulang and rengas wood and their mixed sensitizers (b), and cells with different solvents (c) [methanol (i), ethanol (ii), and acetone (iii)] extracted from Costus woodsonii leaves [78, 114, 133].

Najihah and Tan [133] reported on methanol, ethanol, and acetone extracts of Costus woodsonii leaves as natural sensitizers for DSSCs. The short-circuit current density, open-circuit voltage, fill factor, and efficiency were 0.63 mA/cm2, 0.60 V, 0.61, and 0.23% for the methanol-extracted sensitizer and 0.85 mA/cm2, 0.63 V, 0.69, and 0.37% for the ethanol-extracted sensitizer, respectively. In addition, as can be seen in Figure 21(c), the acetone-extracted sensitizer gave 1.35 mA/cm2, 0.57 V, 0.62, and 0.48%, making acetone the best-performing solvent. The report demonstrates the effect of the solvent and, in turn, its influence on the various photovoltaic parameters: the light absorption capability of the sensitizer-coated photoelectrode is directly correlated with the concentration of the natural dye in the extraction solvent [78, 133]. Among the three solvents, the acetone-extracted sensitizer-based DSSCs provide the best efficiency relative to their counterparts.

### 7.2. Characterization Using Incident Photon to Current Conversion Efficiency (IPCE)

The IPCE, also known as the external quantum efficiency, is the percentage of incident photons converted to electric current (collected charge carriers) when the device is operated at short circuit.
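Numerically, the IPCE at a given wavelength is commonly evaluated as IPCE(%) = 1240 JSC(λ)/(λ Pin) × 100, with JSC in mA/cm², λ in nm, and Pin in mW/cm². A minimal sketch follows; the numbers in the usage line are illustrative, not measured values from the cited works:

```python
def ipce_percent(j_sc, wavelength_nm, p_in):
    """IPCE(%) for monochromatic Jsc [mA/cm^2], wavelength [nm], Pin [mW/cm^2]."""
    return 1240.0 * j_sc / (wavelength_nm * p_in) * 100.0

# Illustrative values only: 0.05 mA/cm^2 at 340 nm under 1 mW/cm^2
print(f"IPCE = {ipce_percent(0.05, 340.0, 1.0):.1f} %")  # ~18.2 %
```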
As reported by Al-Alwani et al. [134], the incident photon to current conversion efficiency of DSSCs depends on the nature of the photoanode as well as on the light absorption capacity of the natural sensitizers, as shown in Figures 22(a) and 22(b). As supported by Figure 22(b), the IPCE value of the Kniphofia foliosa ethanolic root extract in the presence of green-synthesized TiO2 photoelectrodes at various volume ratios (2 : 3, 1 : 1, and 3 : 2) varied with the volume ratio of the photoanode. The maximum IPCE (≈8.11%) was obtained with the TiO2 photoelectrode formed at the 3 : 2 volume ratio, located at a wavelength of 340 nm, while the TiO2 (1 : 1) photoelectrode provided an IPCE of 2.66% at around 500 nm.

Figure 22 IPCE% curves of Pandanus amaryllifolius leaf (a) and Kniphofia foliosa root extract (b) based DSSCs [114, 134].

DSSCs are an efficient photovoltaic technology for wireless sensors and indoor light harvesting owing to their low cost and the natural abundance of their materials. Kokkonen et al. [32] have reviewed the scaling up of fabrication methods to the industrial manufacturing level for high stability and high photovoltaic efficiency under typical indoor conditions. Hence, a significant research effort has been invested in exploring a new generation of photovoltaic devices as alternatives to traditional silicon- (Si-) based solar cells [135]. Moreover, the efficiency of DSSCs and the associated research challenges have been thoroughly reviewed [136]. To address these problems, new materials such as 2D materials and high-selectivity catalysts have recently emerged as promising candidates, identified through data-driven machine learning approaches [137]. Indoor solar cells in particular have a strong positive influence on the ecosystem of the Internet of Things (IoT), which comprises communication devices, actuators, and remote and distributed sensors. Particularly, smart IoT sensors have the potential to perform control functions and mass monitoring driven by indoor power-harvesting systems [138, 139].
## 8. Conclusion and Future Outlook

### 8.1. Conclusion

It has been observed from this review that DSSCs and their various components can be fabricated via different protocols. The CE and the photoanode of DSSCs have been fabricated via various chemical and green methods. Although the electrodes of DSSCs prepared via chemical methods provide higher efficiency than those prepared via green methods, the electrodes fabricated via green techniques are cost-effective and environmentally friendly, and they provide a high surface-area-to-volume ratio, which allows them to harvest more of the sun's light on their surface. Using the numerous green and medicinal parts of natural plants, such as leaves, roots, stems, barks, and flowers, as photosensitizers is therefore the most preferable option.

### 8.2. Future Outlook

In order to achieve the real utilization of natural pigment-based conversion of solar light energy to electricity in the near future, the scientific community should address the comparatively low efficiency of DSSCs. This could be achieved by improving the major components of the device via numerous modification protocols. To articulate a best-performing device and enhance its efficiency, the device must be assembled with the performance parameters discussed above in mind, enabling enhanced cell performance and efficiency.
--- *Source: 1024100-2022-10-14.xml*
2022
# Combined Influence of Hall Current and Soret Effect on Chemically Reacting Magnetomicropolar Fluid Flow from Radiative Rotating Vertical Surface with Variable Suction in Slip-Flow Regime

**Authors:** Preeti Jain

**Journal:** International Scholarly Research Notices (2014)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2014/102413

---

## Abstract

An analysis is presented of the effects of Hall current and the Soret effect on unsteady hydromagnetic natural convection of a micropolar fluid in a rotating frame of reference in the slip-flow regime. A uniform magnetic field acts perpendicular to the porous surface, which absorbs the micropolar fluid with a variable suction velocity. The effects of heat absorption, chemical reaction, and thermal radiation are discussed, and the Rosseland approximation is used to describe the radiative heat flux in the energy equation. The entire system rotates with uniform angular velocity Ω about an axis normal to the plate. The nonlinear coupled partial differential equations are solved by perturbation techniques. In order to gain physical insight, the numerical results for the translational velocity, microrotation, fluid temperature, and species concentration are discussed and explained graphically for the different physical parameters entering the analysis. The results for the skin-friction coefficient, the couple stress coefficient, the Nusselt number, and the Sherwood number are also discussed with the help of figures for various values of the pertinent flow parameters.

---

## Body

## 1. Introduction

Micropolar fluids are a subset of micromorphic fluids. They consist of randomly oriented particles suspended in a viscous medium, which can undergo rotations that affect the hydrodynamics of the flow, making them distinctly non-Newtonian. They constitute an important branch of non-Newtonian fluid dynamics in which microrotation effects as well as microinertia are exhibited. Modelling and analysis of the dynamics of micropolar fluids has been a field of very active research owing to their application in a number of processes that occur in the chemical, pharmaceutical, and food industries. Such applications include the extrusion of polymer fluids, solidification of liquid crystals, cooling of a metallic plate in a bath, animal blood, exotic lubricants, and colloidal and suspension solutions, for which the classical Navier-Stokes theory is inadequate. The essence of the theory of micropolar fluids lies in the extension of the constitutive equations for Newtonian fluids so that more complex fluids can be described. In this theory, rigid particles contained in a small fluid volume element are limited to rotation about the centre of the volume element, described by the microrotation vector. It is well known that heterogeneous mixtures, such as ferrofluids, colloidal fluids, most slurries, and suspensions, as well as some liquids with polymer additives, behave differently from Newtonian fluids. The main difference is that these types of fluids have a microstructure, exhibit microrotational effects, and can support surface and body couples, which are not present in the theory of Newtonian fluids. In order to study such fluids, Eringen [1] developed the theory of microfluids, which includes the effects of local rotary inertia, couple stresses, and inertial spin.
This theory is expected to be successful in analyzing the behavior of non-Newtonian fluids. Eringen [2] also developed the theory of micropolar fluids for the case where only microrotational effects and microrotational inertia exist, and he [3] extended the theory to thermomicropolar fluids and derived the constitutive laws for fluids with microstructure. An excellent review of micropolar fluids and their applications was given by Ariman et al. [4]. According to Lukaszewicz [5], micropolar fluids represent those fluids which consist of randomly oriented particles suspended in a viscous medium. Several authors have studied the characteristics of boundary layer flows of micropolar fluids under different boundary conditions. Takhar and Soundalgekar [6, 7] studied the flow and heat transfer of a micropolar fluid past a porous plate. Further, they [8, 9] discussed these problems for a continuously moving porous plate. Experimental and analytical investigations of free convection flows are often carried out, since in many situations in technology and nature one continually encounters masses of fluid rising freely in an extensive medium due to buoyancy effects. Gorla et al. [10, 11] investigated natural convection from a heated vertical plate in a micropolar fluid. The problem of flow and heat transfer for a micropolar fluid past a porous plate embedded in a porous medium is of great use in engineering studies such as oil exploration and thermal insulation. Raptis and Takhar [12] and Kim [13] considered micropolar fluid flow through a porous medium. All the above-mentioned studies are limited to applications where radiative heat transfer is negligible. The role of thermal radiation in flow and heat transfer processes is of great relevance in the design of many advanced energy conversion systems operating at high temperatures. Thermal radiation within these systems is usually the result of emission by hot walls and the working fluid. Nuclear power plants, gas turbines, and the various propulsion devices for aircraft, missiles, satellites, and space vehicles are examples of such engineering areas. Perdikis and Raptis [14] illustrated the heat transfer of a micropolar fluid in the presence of radiation. Raptis [15] studied the effect of radiation on the flow of a micropolar fluid past a continuously moving plate. Elbashbeshy and Bazid [16] and Kim and Fedorov [17] have reported on radiation effects on the mixed convection flow of a micropolar fluid. Makinde [18] examined the transient free convection interaction with thermal radiation of an absorbing-emitting fluid along a moving vertical permeable plate. Rahman and Sattar [19] studied transient convective flow of a micropolar fluid past a continuously moving porous plate in the presence of radiation. Moreover, when radiative heat transfer takes place, the fluid involved can be electrically conducting, being ionized owing to the high operating temperature. Accordingly, it is of interest to examine the effect of the magnetic field on the flow. Thermal radiation effects on hydromagnetic natural convection flow with heat and mass transfer play an important role in manufacturing processes in industry, for the design of fins, glass production, steel rolling, casting and levitation, furnace design, and so forth.
The process of fusing metals in an electrical furnace by applying a magnetic field, and the process of cooling the first wall inside a nuclear reactor containment vessel, where the hot plasma is isolated from the wall by applying a magnetic field, are examples of fields where thermal radiation and magnetohydrodynamics (MHD) are correlated. This fact was taken into consideration by Abd-El Aziz [20] in his study on micropolar fluids. Raptis and Massalas [21] studied magnetohydrodynamic flow past a plate in the presence of radiation. The rotating flow of an electrically conducting fluid in the presence of a magnetic field is important in geophysical problems. Investigation of hydromagnetic natural convection flow in a rotating medium is of considerable importance due to its applications in various areas of geophysics, astrophysics, and fluid engineering, namely, the maintenance and secular variations of Earth's magnetic field due to the motion of Earth's liquid core, the internal rotation rate of the sun, the structure of magnetic stars, solar and planetary dynamo problems, turbomachines, rotating MHD generators, rotating drum separators for liquid metal MHD applications, and so forth. It may be noted that the Coriolis and magnetic forces are comparable in magnitude, and the Coriolis force induces a secondary flow in the flow field. Changes that take place in the rotation suggest the possible importance of hydromagnetic spin-up. Taking into consideration the importance of such studies, unsteady hydromagnetic natural convection flow past a moving plate in a rotating medium has been studied by a number of researchers. Mention may be made of the studies of Singh [22], Raptis and Singh [23], Tokis [24], Nanousis [25], and Singh et al. [26]. The problem of spin-up in magnetohydrodynamic rotating fluids has been discussed under varied conditions by Takhar et al. [27]. The study of heat and mass transfer due to chemical reaction is also very important because of its occurrence in most branches of science and technology. Processes involving mass transfer effects are important in chemical processing equipment, which is designed to draw high-value products from cheaper raw materials with the involvement of chemical reactions. Ibrahim and Makinde [28] investigated the radiation effect on chemically reactive MHD boundary layer flow with heat and mass transfer past a porous vertical flat plate. Babu and Satya Narayan [29] examined chemical reaction and thermal radiation effects on MHD convective flow in a porous medium in the presence of suction. Das [30] and Sivaiah [31] investigated the effect of chemical reaction and thermal radiation on the heat and mass transfer flow of an MHD micropolar fluid in a rotating frame of reference. Convection problems associated with heat sources within fluid-saturated porous media are of great practical significance in geophysics and energy-related problems, such as the recovery of petroleum resources, cooling of underground electric cables, storage of nuclear waste materials, groundwater pollution, fiber and granular insulation, chemical catalytic reactors, and the environmental impact of buried heat-generating waste. Bakr et al. [32, 33] presented an analysis of MHD free convection and mass transfer adjacent to a moving vertical plate for a micropolar fluid in a rotating frame of reference in the presence of heat generation/absorption and a chemical reaction using a perturbation technique.
Babu and Narayana [34] analyzed unsteady free convection heat and mass transfer flow of a micropolar fluid through a porous medium with variable permeability, bounded by a semi-infinite vertical plate, in the presence of heat generation, thermal radiation, and a first-order chemical reaction. In all of these studies, the effect of the Hall current was not considered. Current developments in magnetohydrodynamic applications are toward strong magnetic fields (so that the influence of the electromagnetic force is noticeable) and toward low gas densities (such as in space flight and in nuclear fusion research). Under these conditions, the Hall current becomes important. The rotating flow of an electrically conducting fluid in the presence of a magnetic field is encountered in cosmic fluid dynamics, medicine, and biology; applications in biomedical engineering include cardiac MRI and ECG. MHD was pioneered by Cowling [35], who emphasized that when the strength of the applied magnetic field is sufficiently large, Ohm's law needs to be modified to include the Hall current. The Hall effect is due to the sideways magnetic force on the drifting free charges; the electric field has to have a component transverse to the direction of the current density to balance this force. In many works on plasma physics, little attention is paid to the effect caused by the Hall current. However, the Hall effect cannot be completely ignored if the strength of the magnetic field is high and the number density of electrons is small, as it is responsible for changing the flow pattern of an ionized gas. The Hall effect results in the development of an additional potential difference between opposite surfaces of a conductor, for which a current is induced perpendicular to both the electric and magnetic fields; this current is termed the Hall current. Deka [36], Takhar et al. [37], Saha et al. [38], and Ahmed et al. [39] have presented model studies on the effect of the Hall current on MHD convection flow because of its possible application in MHD generators. Preeti and Chaudhary [40] analyzed the unsteady hydromagnetic flow of a viscoelastic fluid from a radiative vertical porous plate, taking the effects of Hall current and mass transfer into account. Kinyanjui et al. [41] studied the heat and mass transfer in unsteady free convection flow with radiation absorption past an impulsively started infinite vertical porous plate subjected to a strong magnetic field, including the Hall effect. Takhar et al. [42] investigated the simultaneous effects of Hall current and free stream velocity on magnetohydrodynamic flow over a moving plate in a rotating fluid. Recently, Seth et al. [43] investigated the problem of unsteady MHD free convective flow past an impulsively started vertical plate with ramped temperature immersed in a porous medium, with rotation, heat absorption, and the Hall effect taken into account. When heat and mass transfer occur simultaneously in a moving fluid, the relations between the fluxes and the driving potentials are important. It has been found that an energy flux can be generated not only by temperature gradients but by composition gradients as well.
The energy flux caused by a composition gradient is called the Dufour or diffusion-thermo effect. Mass fluxes can likewise be caused by temperature gradients; this is called the Soret or thermal diffusion effect. That is, if two regions in a mixture are maintained at different temperatures so that there is a flux of heat, it is found that a concentration gradient is set up, and in a binary mixture one kind of molecule tends to travel toward the hot region and the other kind toward the cold region; this is called the "Soret effect." The Dufour effect is neglected in this study because it is of a smaller order of magnitude than thermal radiation, which exerts a stronger effect on the energy flux. The Soret (thermal diffusion) effect has been utilized for isotope separation in mixtures of gases with very light molecular weight (H2, He) and medium molecular weight (N2, air), and it was found to be of a magnitude that cannot be neglected, given its practical applications in engineering and the sciences. Soret effects due to natural convection between heated inclined plates have been investigated by Raju et al. [44]. M. G. Reddy and N. B. Reddy [45] investigated Soret and Dufour effects on steady MHD free convective flow past an infinite plate. Mohamed [46] studied unsteady MHD flow over a vertical moving porous plate with heat generation and the Soret effect. In many practical engineering applications, the particles adjacent to a solid surface no longer take the velocity of the surface: a particle at the surface has a finite tangential velocity and "slips" along the surface. This flow regime is called the slip-flow regime, and this effect cannot be neglected. The fluid slippage phenomenon at solid boundaries appears in many applications, such as microchannels or nanochannels, in applications where a thin film of light oil is attached to moving plates, or when the surface is coated with a special coating such as a thick monolayer of hydrophobic octadecyltrichlorosilane, for example, in the lubrication of mechanical devices, where a thin film of lubricant is attached to surfaces slipping over one another or the surfaces are coated to minimize friction between them [47]. Chaudhary and Jain [48] examined the effects of radiation on the hydromagnetic free convection flow set up due to temperature as well as species concentration of an electrically conducting micropolar fluid past a vertical porous plate through a porous medium in the slip-flow regime. Chaudhary and Sharma [49, 50] studied the free convection flow past a vertical porous plate with variable suction in the slip-flow regime. Das et al. [51] considered the magnetohydrodynamic unsteady flow of a viscous stratified fluid through a porous medium past a porous flat moving plate in the slip-flow regime with a heat source. Singh and Kumar [52] presented the fluctuating heat and mass transfer in unsteady free convection flow of a radiating and reacting fluid past a vertical porous plate in the slip-flow regime using perturbation analysis. Kumar and Chand [53] studied the effect of slip conditions and the Hall current on the unsteady MHD flow of a viscoelastic fluid past an infinite vertical porous plate through a porous medium.
Recently, Oahimire et al. [54] investigated the effects of thermal diffusion and thermal radiation on unsteady heat and mass transfer in free convective MHD micropolar fluid flow bounded by a semi-infinite vertical plate in the slip-flow regime under the action of a transverse magnetic field with suction. To the best of our knowledge, considerably less work has been done concerning the combined effects of the Hall current and the Soret effect on chemically reacting magnetomicropolar fluid flow incorporating the effect of rotation in the slip-flow regime in the presence of radiation and heat absorption. The results are in accordance with physical reality, which validates the correctness of the work presented here.

## 2. Mathematical Formulation of the Problem

Consider the unsteady hydromagnetic flow of an incompressible, viscous, and electrically conducting micropolar fluid past an infinite vertical permeable plate embedded in a uniform porous medium in the slip-flow regime and in a rotating system, taking Hall current, thermal radiation, the Soret effect, and chemical reaction into account. The coordinate system is chosen such that the $x^*$-axis is taken along the porous plate in the vertically upward direction, the $y^*$-axis along the width of the plate, and the $z^*$-axis normal to the plane of the plate in the fluid, as shown in Figure 1. Since the plate is infinite in extent in the $x^*$- and $y^*$-directions, all physical quantities are independent of $x^*$ and $y^*$ and are functions of $z^*$ and $t^*$ only; that is, $\partial u^*/\partial x^* = \partial u^*/\partial y^* = \partial v^*/\partial x^* = \partial v^*/\partial y^* = 0$, and so forth.

Figure 1 Geometry and coordinate system of the problem.

A magnetic field of uniform strength $B_0$ is applied in a direction parallel to the $z^*$-axis, which is perpendicular to the flow direction. It is assumed that the induced magnetic field generated by the fluid motion is negligible in comparison to the applied one. This assumption is justified because the magnetic Reynolds number is very small for liquid metals and partially ionized fluids, which are commonly used in industrial applications [55]. It is assumed that there is no applied or polarization voltage, so the effect of polarization of the fluid is negligible; this corresponds to the case where no energy is added to or extracted from the fluid by electrical means. The entire system rotates with an angular velocity Ω about the normal to the plate. It is assumed here that the hole size of the porous plate is significantly larger than the characteristic microscopic length scale of the porous medium. The fluid is considered to be a gray, absorbing-emitting but nonscattering medium, and the Rosseland approximation is used to describe the radiative heat flux. The radiative heat flux in the $x^*$-direction is considered negligible in comparison with that in the $z^*$-direction. When the strength of the magnetic field is very large, the generalized Ohm's law in the absence of an electric field takes the form

$$\vec{J} + \frac{\omega_e \tau_e}{B_0}\,\vec{J}\times\vec{H} = \sigma\left(\mu_e \vec{V}\times\vec{H} + \frac{1}{e n_e}\nabla P_e\right). \tag{1}$$
Under the assumption that the electron pressure (for a weakly ionized gas), the thermoelectric pressure, and ion-slip effects are negligible, the above equation becomes

$$j_x = \frac{\sigma \mu_e H_0}{1+m^2}\left(mv - u\right), \qquad j_z = \frac{\sigma \mu_e H_0}{1+m^2}\left(mu + v\right), \tag{2}$$

where $u$ is the $x$-component of $\vec{V}$, $v$ is the $y$-component of $\vec{V}$, and $m = \omega_e \tau_e$ is the Hall parameter.

The suction velocity is assumed to be $w^* = -w_0(1 + \varepsilon A e^{\delta^* t^*})$, where $\varepsilon$ and $\varepsilon A$ are small values less than unity and $w_0$ is the scale of the suction velocity, which is a nonzero positive constant. The negative sign indicates that the suction is towards the plate. The fluid properties are assumed to be constant except that the influence of the density variation with temperature and concentration has been considered in the body-force term. There is a first-order chemical reaction between the diffusing species and the fluid.

With the foregoing assumptions, the governing equations under the Boussinesq approximation can be written in a Cartesian frame of reference as follows.

Continuity:
$$\frac{\partial w^*}{\partial z^*} = 0. \tag{3}$$

Linear momentum:
$$\frac{\partial u^*}{\partial t^*} + w^*\frac{\partial u^*}{\partial z^*} - 2\Omega v^* = (\nu+\nu_r)\frac{\partial^2 u^*}{\partial z^{*2}} + g\beta_T(T-T_\infty) + g\beta_C(C^*-C_\infty^*) - \frac{\nu u^*}{K^*} - \nu_r\frac{\partial \omega_2^*}{\partial z^*} + \frac{\sigma\mu_e^2 H_0^2\,(mv^*-u^*)}{\rho(1+m^2)},$$
$$\frac{\partial v^*}{\partial t^*} + w^*\frac{\partial v^*}{\partial z^*} + 2\Omega u^* = (\nu+\nu_r)\frac{\partial^2 v^*}{\partial z^{*2}} - \frac{\nu v^*}{K^*} + \nu_r\frac{\partial \omega_1^*}{\partial z^*} - \frac{\sigma\mu_e^2 H_0^2\,(mu^*+v^*)}{\rho(1+m^2)}. \tag{4}$$

Angular momentum:
$$\frac{\partial \omega_1^*}{\partial t^*} + w^*\frac{\partial \omega_1^*}{\partial z^*} = \frac{\Lambda}{\rho j}\frac{\partial^2 \omega_1^*}{\partial z^{*2}}, \qquad \frac{\partial \omega_2^*}{\partial t^*} + w^*\frac{\partial \omega_2^*}{\partial z^*} = \frac{\Lambda}{\rho j}\frac{\partial^2 \omega_2^*}{\partial z^{*2}}. \tag{5}$$

Energy:
$$\frac{\partial T}{\partial t^*} + w^*\frac{\partial T}{\partial z^*} = \frac{k}{\rho C_p}\frac{\partial^2 T}{\partial z^{*2}} - \frac{Q^*}{\rho C_p}(T-T_\infty) - \frac{1}{\rho C_p}\frac{\partial q_r}{\partial z^*}. \tag{6}$$

Mass transfer:
$$\frac{\partial C^*}{\partial t^*} + w^*\frac{\partial C^*}{\partial z^*} = D_m\frac{\partial^2 C^*}{\partial z^{*2}} + \frac{D_m K_t}{T_m}\frac{\partial^2 T}{\partial z^{*2}} - R_C(C^*-C_\infty^*). \tag{7}$$

The initial and boundary conditions suggested by the physics of the problem are
$$u^* = v^* = 0, \quad \omega_1^* = \omega_2^* = 0, \quad T = T_\infty, \quad C^* = C_\infty^* \quad \text{for } t^* \le 0, \tag{8}$$
$$u^* = U_r + L^*\frac{\partial u^*}{\partial z^*}, \quad v^* = 0, \quad \omega_1^* = -\frac{1}{2}\frac{\partial v^*}{\partial z^*}, \quad \omega_2^* = \frac{1}{2}\frac{\partial u^*}{\partial z^*}, \quad T = T_w, \quad C^* = C_w^* \quad \text{at } z^* = 0,$$
$$u^* \to 0, \quad v^* \to 0, \quad \omega_1^* \to 0, \quad \omega_2^* \to 0, \quad T \to T_\infty, \quad C^* \to C_\infty^* \quad \text{as } z^* \to \infty, \qquad \text{for } t^* > 0. \tag{9}$$

The boundary conditions for the microrotation components $\omega_1^*$ and $\omega_2^*$ describe their relationship with the surface stress. In the boundary condition (9) the plate is in uniform motion and is subjected to variable suction and a slip boundary condition. In the parameter $L^* = \left((2-m_1)/m_1\right)L$, $L$ is the molecular mean free path and $m_1$ is the tangential momentum accommodation coefficient. All the physical variables are defined in the nomenclature.

Integration of the continuity equation (3) for variable suction velocity normal to the plate gives
$$w^* = -w_0\left(1 + \varepsilon A e^{\delta^* t^*}\right), \tag{10}$$
where $w_0$ represents the normal velocity at the plate, which is positive for suction and negative for blowing. The radiative heat flux term, using the Rosseland approximation, is given by
$$q_r = -\frac{4\sigma^*}{3 a_R}\frac{\partial T^4}{\partial z^*}. \tag{11}$$
We assume that the temperature differences within the flow are such that $T^4$ may be expressed as a linear function of the temperature $T$. This is accomplished by expanding $T^4$ in a Taylor series about $T_\infty$ and neglecting higher-order terms:
$$T^4 \simeq 4T_\infty^3 T - 3T_\infty^4. \tag{12}$$
By using (11) and (12), (6) gives
$$\frac{\partial T}{\partial t^*} + w^*\frac{\partial T}{\partial z^*} = \frac{k}{\rho C_p}\frac{\partial^2 T}{\partial z^{*2}} - \frac{Q^*}{\rho C_p}(T-T_\infty) + \frac{16\sigma^* T_\infty^3}{3\rho C_p a_R}\frac{\partial^2 T}{\partial z^{*2}}. \tag{13}$$

Proceeding with the analysis, we introduce the following dimensionless variables:
$$u = \frac{u^*}{U_r}, \quad v = \frac{v^*}{U_r}, \quad z = \frac{z^* U_r}{\nu}, \quad t = \frac{t^* U_r^2}{\nu}, \quad \delta = \frac{\delta^* \nu}{U_r^2}, \quad \omega_1 = \frac{\omega_1^* \nu}{U_r^2}, \quad \omega_2 = \frac{\omega_2^* \nu}{U_r^2},$$
$$\mathrm{Gr} = \frac{\nu g \beta_T (T_w - T_\infty)}{U_r^3}, \quad \mathrm{Gm} = \frac{\nu g \beta_C (C_w^* - C_\infty^*)}{U_r^3}, \quad R = \frac{2\Omega\nu}{U_r^2}, \quad S = \frac{w_0}{U_r}, \quad \Delta = \frac{\nu_r}{\nu},$$
$$\theta = \frac{T - T_\infty}{T_w - T_\infty}, \quad C = \frac{C^* - C_\infty^*}{C_w^* - C_\infty^*}, \quad K = \frac{K^* U_r^2}{\nu^2}, \quad M = \frac{\mu_e H_0}{U_r}\sqrt{\frac{\sigma\nu}{\rho}}, \quad \lambda = \frac{\Lambda}{\mu j}, \quad \Pr = \frac{\mu C_p}{k},$$
$$\mathrm{Sc} = \frac{\nu}{D_m}, \quad F = \frac{4T_\infty^3 \sigma^*}{k a_R}, \quad Q = \frac{Q^* \nu^2}{U_r^2 k}, \quad \mathrm{Sr} = \frac{D_m K_t (T_w - T_\infty)}{T_m (C_w^* - C_\infty^*)\,\nu}, \quad \alpha = \frac{R_C \nu}{U_r^2}, \quad h = \frac{L^* U_r}{\nu}. \tag{14}$$

In view of (14), the governing equations (4), (5), (7), and (13) reduce to the following dimensionless form:
$$\frac{\partial u}{\partial t} - S\left(1+\varepsilon A e^{\delta t}\right)\frac{\partial u}{\partial z} - Rv = (1+\Delta)\frac{\partial^2 u}{\partial z^2} + \mathrm{Gr}\,\theta + \mathrm{Gm}\,C - \left(\frac{M^2}{1+m^2} + \frac{1}{K}\right)u - \Delta\frac{\partial \omega_2}{\partial z} + \frac{mM^2}{1+m^2}v, \tag{15}$$
$$\frac{\partial v}{\partial t} - S\left(1+\varepsilon A e^{\delta t}\right)\frac{\partial v}{\partial z} + Ru = (1+\Delta)\frac{\partial^2 v}{\partial z^2} - \left(\frac{M^2}{1+m^2} + \frac{1}{K}\right)v + \Delta\frac{\partial \omega_1}{\partial z} - \frac{mM^2}{1+m^2}u, \tag{16}$$
$$\frac{\partial \omega_1}{\partial t} - S\left(1+\varepsilon A e^{\delta t}\right)\frac{\partial \omega_1}{\partial z} = \lambda\frac{\partial^2 \omega_1}{\partial z^2}, \tag{17}$$
$$\frac{\partial \omega_2}{\partial t} - S\left(1+\varepsilon A e^{\delta t}\right)\frac{\partial \omega_2}{\partial z} = \lambda\frac{\partial^2 \omega_2}{\partial z^2}, \tag{18}$$
$$\frac{\partial \theta}{\partial t} - S\left(1+\varepsilon A e^{\delta t}\right)\frac{\partial \theta}{\partial z} = \frac{1}{\Pr}\left(1+\frac{4F}{3}\right)\frac{\partial^2 \theta}{\partial z^2} - \frac{Q}{\Pr}\,\theta, \tag{19}$$
$$\frac{\partial C}{\partial t} - S\left(1+\varepsilon A e^{\delta t}\right)\frac{\partial C}{\partial z} = \frac{1}{\mathrm{Sc}}\frac{\partial^2 C}{\partial z^2} + \mathrm{Sr}\,\frac{\partial^2 \theta}{\partial z^2} - \alpha C. \tag{20}$$

The boundary conditions (8)-(9), in view of (14), take the dimensionless form
$$u = v = 0, \quad \omega_1 = \omega_2 = 0, \quad \theta = 0, \quad C = 0 \quad \text{for } t \le 0,$$
$$u = 1 + h\frac{\partial u}{\partial z}, \quad v = 0, \quad \omega_1 = -\frac{1}{2}\frac{\partial v}{\partial z}, \quad \omega_2 = \frac{1}{2}\frac{\partial u}{\partial z}, \quad \theta = 1, \quad C = 1 \quad \text{at } z = 0,$$
$$u \to 0, \quad \omega_1 \to 0, \quad \omega_2 \to 0, \quad \theta \to 0, \quad C \to 0 \quad \text{as } z \to \infty, \qquad \text{for } t > 0. \tag{21}$$

To simplify (15)–(18), we write the fluid velocity and angular velocity in the complex form $V = u + iv$, $\omega = \omega_1 + i\omega_2$, obtaining
$$\frac{\partial V}{\partial t} - S\left(1+\varepsilon A e^{\delta t}\right)\frac{\partial V}{\partial z} + iRV = (1+\Delta)\frac{\partial^2 V}{\partial z^2} + \mathrm{Gr}\,\theta + \mathrm{Gm}\,C - \left(\frac{M^2}{1+m^2} + \frac{1}{K}\right)V - i\Delta\frac{\partial \omega}{\partial z} - \frac{imM^2}{1+m^2}V,$$
$$\frac{\partial \omega}{\partial t} - S\left(1+\varepsilon A e^{\delta t}\right)\frac{\partial \omega}{\partial z} = \lambda\frac{\partial^2 \omega}{\partial z^2}. \tag{22}$$

The associated boundary conditions (21) become
$$V = 0, \quad \omega = 0, \quad \theta = 0, \quad C = 0 \quad \text{for } t \le 0,$$
$$V = 1 + h\frac{\partial V}{\partial z}, \quad \omega = \frac{i}{2}\frac{\partial V}{\partial z}, \quad \theta = 1, \quad C = 1 \quad \text{at } z = 0,$$
$$V \to 0, \quad \omega \to 0, \quad \theta \to 0, \quad C \to 0 \quad \text{as } z \to \infty, \qquad \text{for } t > 0. \tag{23}$$
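As a numerical illustration of the definitions in (14), the sketch below evaluates a few of the dimensionless groups from assumed, air-like physical properties; all property values are illustrative and are not taken from the paper.

```python
# Illustrative evaluation of some dimensionless groups defined in (14).
nu  = 1.5e-5    # kinematic viscosity (m^2/s), assumed air-like value
rho = 1.2       # density (kg/m^3), assumed
Cp  = 1005.0    # specific heat (J/(kg K)), assumed
k   = 0.026     # thermal conductivity (W/(m K)), assumed
Dm  = 2.0e-5    # mass diffusivity (m^2/s), assumed
w0, Ur = 0.01, 1.0   # suction velocity scale and reference velocity (m/s)

Pr = (rho * nu) * Cp / k    # Prandtl number mu*Cp/k, with mu = rho*nu
Sc = nu / Dm                # Schmidt number
S  = w0 / Ur                # suction parameter
print(f"Pr = {Pr:.2f}, Sc = {Sc:.2f}, S = {S:.3f}")   # Pr ~ 0.70, as for air
```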
## 3. Analytical Solution of the Problem

In order to reduce the above system of partial differential equations to a system of ordinary differential equations in dimensionless form, we represent the translational velocity $V$, microrotation $\omega$, temperature $\theta$, and concentration $C$ as
$$V(z,t) = V_0(z) + \varepsilon e^{\delta t}V_1(z) + O(\varepsilon^2), \quad \omega(z,t) = \omega_0(z) + \varepsilon e^{\delta t}\omega_1(z) + O(\varepsilon^2),$$
$$\theta(z,t) = \theta_0(z) + \varepsilon e^{\delta t}\theta_1(z) + O(\varepsilon^2), \quad C(z,t) = C_0(z) + \varepsilon e^{\delta t}C_1(z) + O(\varepsilon^2). \tag{24}$$

By substituting (24) into (19), (20), and (22)-(23), equating the harmonic and nonharmonic terms, and neglecting terms of $O(\varepsilon^2)$, we obtain the following pairs of equations for $(V_0, \omega_0, \theta_0, C_0)$ and $(V_1, \omega_1, \theta_1, C_1)$.

Zero-order equations:
$$(1+\Delta)V_0'' + SV_0' - a_1 V_0 + \mathrm{Gr}\,\theta_0 + \mathrm{Gm}\,C_0 + i\Delta\omega_0' = 0,$$
$$\lambda\omega_0'' + S\omega_0' = 0,$$
$$(3+4F)\theta_0'' + 3\Pr S\,\theta_0' - 3Q\theta_0 = 0,$$
$$C_0'' + S\,\mathrm{Sc}\,C_0' - \alpha\,\mathrm{Sc}\,C_0 = -\mathrm{Sr}\,\mathrm{Sc}\,\theta_0''. \tag{25}$$

First-order equations:
$$(1+\Delta)V_1'' + SV_1' - a_2 V_1 + \mathrm{Gr}\,\theta_1 + \mathrm{Gm}\,C_1 + SAV_0' + i\Delta\omega_1' = 0,$$
$$\lambda\omega_1'' + S\omega_1' - \delta\omega_1 = -SA\,\omega_0',$$
$$(3+4F)\theta_1'' + 3\Pr S\,\theta_1' - 3(Q + \Pr\delta)\theta_1 = -3\Pr SA\,\theta_0',$$
$$C_1'' + S\,\mathrm{Sc}\,C_1' - \mathrm{Sc}(\alpha+\delta)C_1 = -S\,\mathrm{Sc}\,A\,C_0' - \mathrm{Sr}\,\mathrm{Sc}\,\theta_1''. \tag{26}$$

Here the prime denotes differentiation with respect to $z$.

The corresponding boundary conditions can be written as
$$V_0 = 1 + h\frac{\partial V_0}{\partial z}, \quad V_1 = h\frac{\partial V_1}{\partial z}, \quad \omega_0 = \frac{i}{2}\frac{\partial V_0}{\partial z}, \quad \omega_1 = \frac{i}{2}\frac{\partial V_1}{\partial z}, \quad \theta_0 = 1, \quad \theta_1 = 0, \quad C_0 = 1, \quad C_1 = 0 \quad \text{at } z = 0,$$
$$V_0 \to 0, \quad V_1 \to 0, \quad \omega_0 \to 0, \quad \omega_1 \to 0, \quad \theta_0 \to 0, \quad \theta_1 \to 0, \quad C_0 \to 0, \quad C_1 \to 0 \quad \text{as } z \to \infty. \tag{27}$$

Solving (25)-(26) subject to the boundary conditions (27), we obtain the translational velocity $V$, microrotation $\omega$, temperature $\theta$, and concentration $C$ as
$$V(z,t) = B_{11}e^{-r_5 z} + B_8 e^{-r_1 z} + B_9 e^{-r_3 z} + B_{10}e^{-(S/\lambda)z} + \varepsilon e^{\delta t}\Big(B_{20}e^{-r_7 z} + B_{13}e^{-r_1 z} + B_{14}e^{-r_2 z} + B_{15}e^{-r_4 z} + B_{16}e^{-r_3 z} + B_{17}e^{-r_5 z} + B_{18}e^{-(S/\lambda)z} + B_{19}e^{-r_6 z}\Big), \tag{28}$$
$$\omega(z,t) = D_1 e^{-(S/\lambda)z} + \varepsilon e^{\delta t}\left(D_2 e^{-r_6 z} + B_{12}e^{-(S/\lambda)z}\right), \tag{29}$$
$$\theta(z,t) = e^{-r_1 z} + \varepsilon e^{\delta t}B_1\left(e^{-r_1 z} - e^{-r_2 z}\right), \tag{30}$$
$$C(z,t) = B_3 e^{-r_3 z} + B_2 e^{-r_1 z} + \varepsilon e^{\delta t}\left(B_4 e^{-r_3 z} + B_5 e^{-r_1 z} + B_6 e^{-r_2 z} + B_7 e^{-r_4 z}\right). \tag{31}$$

The exponential indices and the coefficients appearing in (28)–(31) are given in the appendix.

In technological applications, the wall shear stress, the wall couple stress, and the heat and mass transfer rates are often of great interest. Skin friction is caused by viscous drag in the boundary layer around the plate. The skin friction coefficient $C_f$ at the wall in dimensionless form is given by
$$C_f = \frac{\tau_w^*\big|_{z^*=0}}{\rho U_r^2} = (1+\Delta)\left(1+\frac{i}{2}\right)\left.\frac{\partial V}{\partial z}\right|_{z=0} \tag{32}$$
$$= -(1+\Delta)\left(1+\frac{i}{2}\right)\left[B_{11}r_5 + B_8 r_1 + B_9 r_3 + B_{10}\frac{S}{\lambda} + \varepsilon e^{\delta t}\left(B_{20}r_7 + B_{13}r_1 + B_{14}r_2 + B_{15}r_4 + B_{16}r_3 + B_{17}r_5 + B_{18}\frac{S}{\lambda} + B_{19}r_6\right)\right]. \tag{33}$$

The couple stress coefficient $C_m$ at the plate is defined by
$$M_w = \Lambda\left.\frac{\partial \omega^*}{\partial z^*}\right|_{z^*=0}, \tag{34}$$
and in dimensionless form it is given by
$$C_m = \frac{M_w}{\mu j U_r} = \left(1+\frac{\Delta}{2}\right)\left.\frac{\partial \omega}{\partial z}\right|_{z=0} = -\left(1+\frac{\Delta}{2}\right)\left[D_1\frac{S}{\lambda} + \varepsilon e^{\delta t}\left(D_2 r_6 + B_{12}\frac{S}{\lambda}\right)\right]. \tag{35}$$

Knowing the temperature field, it is of interest to study the effect of free convection and thermal radiation on the rate of heat transfer, which is given by
$$q_w^* = -\left[k\frac{\partial T}{\partial z^*} + \frac{4\sigma^*}{3a_R}\frac{\partial T^4}{\partial z^*}\right]_{z^*=0}. \tag{36}$$
Using $T^4 \simeq 4T_\infty^3 T - 3T_\infty^4$, the above equation becomes
$$q_w^* = -k(T_w - T_\infty)\frac{U_r}{\nu}\left(1+\frac{4F}{3}\right)\left.\frac{\partial \theta}{\partial z}\right|_{z=0}. \tag{37}$$
The rate of heat transfer between the fluid and the plate is studied in terms of the nondimensional Nusselt number, given by
$$\mathrm{Nu} = \frac{x q_w^*}{k(T_w - T_\infty)} = -\mathrm{Re}_x\left(1+\frac{4F}{3}\right)\left.\frac{\partial \theta}{\partial z}\right|_{z=0}, \tag{38}$$
where $\mathrm{Re}_x = U_r x/\nu$ is the local Reynolds number, so that
$$\mathrm{Nu}\,\mathrm{Re}_x^{-1} = \left(1+\frac{4F}{3}\right)\left[r_1 + \varepsilon e^{\delta t}B_1(r_1 - r_2)\right]. \tag{39}$$
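The surface-gradient formulas above are straightforward to evaluate once the exponential indices are known. For the temperature field, substituting $\theta_0 = e^{-r_1 z}$ into the zero-order energy equation in (25) gives $(3+4F)r_1^2 - 3\Pr S\,r_1 - 3Q = 0$, whose positive root ensures decay as $z \to \infty$. The sketch below is a hedged illustration; it computes only $r_1$ and the leading-order (i.e., $\varepsilon \to 0$) Nusselt number $\mathrm{Nu}\,\mathrm{Re}_x^{-1} \approx (1 + 4F/3)\,r_1$, using representative values from the figure legends.

```python
import numpy as np

def r1(Pr, S, F, Q):
    # Positive root of (3 + 4F) r^2 - 3 Pr S r - 3 Q = 0, so theta_0 decays.
    a = 3.0 + 4.0 * F
    return (3*Pr*S + np.sqrt((3*Pr*S)**2 + 12*Q*a)) / (2*a)

Pr, S, F, Q = 0.71, 0.5, 2.0, 1.0   # representative figure-legend values
r = r1(Pr, S, F, Q)
print(f"r1 = {r:.4f}")
print("ODE residual:", (3 + 4*F)*r**2 - 3*Pr*S*r - 3*Q)   # ~0, confirms the root
print(f"leading-order Nu/Re_x = {(1 + 4*F/3) * r:.4f}")
```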
The local mass flux and the local Sherwood number are defined, respectively, by
$$j_w = -D_m\left.\frac{\partial C^*}{\partial z^*}\right|_{z^*=0}, \qquad \mathrm{Sh}_x = \frac{j_w x}{D_m(C_w^* - C_\infty^*)} = -\mathrm{Re}_x\left.\frac{\partial C}{\partial z}\right|_{z=0},$$
$$\mathrm{Sh}_x\,\mathrm{Re}_x^{-1} = B_3 r_3 + B_2 r_1 + \varepsilon e^{\delta t}\left(B_4 r_3 + B_5 r_1 + B_6 r_2 + B_7 r_4\right). \tag{40}$$

## 4. Results and Discussion

In the preceding sections, the governing equations along with the boundary conditions were solved analytically using perturbation techniques. The effects of the main controlling parameters, as they appear in the governing equations, are discussed for the temperature $\theta$, concentration $C$, translational velocity $V$, microrotation $\omega$, skin friction $C_f$, Nusselt number, and Sherwood number. In order to gain physical insight into the problem, the above physical quantities are computed numerically and displayed graphically. In all of the calculations we have chosen $\varepsilon = 0.01$, $\delta = 0.1$, $t = 1$, and $A = 1$, while Pr, S, F, Q, Sr, Sc, M, m, Gr, Gm, R, h, K, Δ, and λ are varied over the ranges listed in the figure legends.

The numerical values of the fluid temperature $\theta$ computed from the analytical solution (30) are illustrated graphically versus the boundary layer coordinate $z$ in Figure 2 for various values of the Prandtl number (Pr), suction parameter (S), heat absorption parameter (Q), and radiation parameter (F). The values of the Prandtl number are chosen as Pr = 0.71, 0.025, and 7.0, which physically correspond to air, mercury, and water at 25°C and one atmosphere; Pr = 11.62 corresponds to water at 4°C. It is inferred that the temperature falls more rapidly for water than for air, which is physically true; thus the thermal boundary layer thins quickly for large values of the Prandtl number. The thickness of the thermal boundary layer is greatest for Pr = 0.025 (mercury), followed by Pr = 0.71 (air) and Pr = 7 (water), and is lowest for Pr = 11.62 (water at 4°C); that is, an increase in Prandtl number results in a decrease of temperature. The reason underlying such behavior is that Pr signifies the relative effects of viscosity to thermal conductivity: fluids with smaller Prandtl numbers possess higher thermal conductivity, and therefore heat is able to diffuse away from the surface faster than at higher values of Pr. This results in a reduction of the thermal boundary layer thickness. The fluid temperature $\theta$ also decreases with an increase of the heat absorption parameter (Q) and the suction parameter (S). The temperature decreases with an increase in the heat absorption parameter because when heat is absorbed the buoyancy forces decrease the temperature profiles. The effect of the thermal radiation parameter (F) is to enhance the fluid temperature throughout the boundary layer region. This is consistent with the fact that thermal radiation provides an additional means to diffuse energy, because the thermal radiation parameter is $F = 4T_\infty^3\sigma^*/(k a_R)$, and therefore an increase in F implies a decrease in the Rosseland mean absorption coefficient $a_R$ for fixed values of $T_\infty$ and $k$. Thus it is pointed out that radiation should be minimized to achieve cooling at a faster rate. The temperature profiles attain their maximum value at the wall, decrease exponentially with $z$, and finally tend to zero as $z \to \infty$.
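The monotone decay just described can be reproduced directly from the leading-order term of (30): with the positive root $r_1$ defined as above, $\theta_0(z) = e^{-r_1 z}$ decays faster as Pr increases. The following is a small hedged sketch that evaluates only the zero-order term, neglecting the $O(\varepsilon)$ correction:

```python
import numpy as np

a = lambda F: 3.0 + 4.0 * F
r1 = lambda Pr, S, F, Q: (3*Pr*S + np.sqrt((3*Pr*S)**2 + 12*Q*a(F))) / (2*a(F))

z = np.linspace(0.0, 6.0, 4)
for Pr in (0.025, 0.71, 7.0, 11.62):   # mercury, air, water, water at 4 C
    theta0 = np.exp(-r1(Pr, 0.5, 2.0, 1.0) * z)
    print(f"Pr = {Pr:6.3f}: theta_0 =", np.round(theta0, 3))
# Larger Pr gives larger r1, hence a thinner thermal boundary layer,
# consistent with the Figure 2 trends described above.
```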
The wall and far-field values thus match the boundary conditions, which validates the analytical temperature solution.

Figure 2: Temperature profiles for different values of Prandtl number (Pr), suction parameter (S), heat absorption parameter (Q), and radiation parameter (F), taking A = 3, t = 1, and δ = 0.1.

Graphical results of the concentration profiles C for different values of the Schmidt number (Sc) and chemical reaction parameter (α) are displayed in Figure 3(a). The Schmidt numbers are chosen to represent the most common diffusing chemical species of interest: Sc = 0.22 (hydrogen), Sc = 0.3 (helium), Sc = 0.6 (water vapor), Sc = 0.94 (carbon dioxide), and Sc = 2.62 (propylbenzene) at 25°C and one atmosphere. A comparison of the curves shows that the concentration decreases at all points in the flow field as the Schmidt number increases, because smaller values of Sc correspond to a larger molecular diffusivity (D); mass diffusion thus tends to raise the species concentration, and the heavier diffusing species retard the concentration distribution more strongly. It is also interesting that the concentration profiles fall slowly and steadily for hydrogen (Sc = 0.22) and helium (Sc = 0.30) but very rapidly for water vapor (Sc = 0.6) and propylbenzene (Sc = 2.62). Physically, water vapor can be used to maintain a normal concentration field, whereas hydrogen can be used to maintain an effective concentration field. Similar effects are seen when the chemical reaction parameter (α) is increased: the concentration profiles decrease rapidly because the concentration boundary layer thins as α grows, the reaction consuming the chemical species and thereby lowering the concentration. Thus the diffusion rates can be altered substantially by a chemical reaction.

Figure 3: (a) Concentration profiles showing the variation with Schmidt number (Sc) and chemical reaction parameter (α), taking Pr = 0.71, S = 0.01, F = 2, Q = 1, δ = 0.1, t = 1, and A = 3. (b) Concentration profiles showing the variation with heat absorption parameter (Q) and Soret number (Sr), taking Pr = 0.71, S = 0.01, F = 2, Sc = 0.22, δ = 0.1, t = 1, and A = 3. (c) Concentration profiles showing the variation with suction parameter (S) and radiation parameter (F), taking Pr = 0.71, Sr = 0.5, Sc = 0.3, Q = 1, δ = 0.1, t = 1, and A = 3.

The effects of the heat absorption parameter (Q) and Soret number (Sr) on the concentration profiles across the boundary layer are displayed in Figure 3(b). The concentration boundary layer is suppressed as the heat absorption parameter and Soret number increase; the profiles fall rapidly with increasing Soret number and thereafter recover toward zero as z → ∞. Figure 3(c) shows the effects of the radiation parameter (F) and suction parameter (S) on the species concentration: suction diminishes the concentration distribution, whereas the reverse is observed for increasing radiation parameter. In Figures 3(a)-3(c) the concentration profiles attain their maximum at the wall, decrease exponentially with z, and finally tend to zero as z → ∞, with decay rates that can be read off the analytical solution, as illustrated below.
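Specifically, the homogeneous part of the zero-order concentration equation in (25) has the decaying root $r_3$; taking S = 0.01 as in Figure 3, and α = 0.1 as an illustrative value (α = 0.2 in the last entry),

$$r_3=\frac{S\,\mathrm{Sc}+\sqrt{S^2\mathrm{Sc}^2+4\,\alpha\,\mathrm{Sc}}}{2},\qquad r_3\big|_{\mathrm{Sc}=0.22}\approx 0.15,\qquad r_3\big|_{\mathrm{Sc}=2.62}\approx 0.53,\qquad r_3\big|_{\mathrm{Sc}=0.22,\ \alpha=0.2}\approx 0.21,$$

so a heavier diffusing species (larger Sc) or a faster destructive reaction (larger α) steepens the exponential decay of C, exactly the behavior seen in Figure 3(a).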
The wall and far-field behavior agrees with the boundary conditions given in (23), and these figures thus provide a check on the analytical solution for the concentration field.

The microrotation profiles (ω) against the spanwise coordinate z, incorporating the various parameters influencing the flow field, are demonstrated in Figures 4(a)-4(h). The profiles attain a distinct maximum near the surface of the plate and decay steadily with increasing boundary layer coordinate z toward the free-stream value. Figure 4(a) shows the influence of the Prandtl number (Pr), suction parameter (S), and radiation parameter (F) on the microrotation profiles. The microrotation (ω) decreases with increasing Pr; physically, a larger Prandtl number means viscous effects dominate thermal diffusion, the fluid behaves as if thicker, and the motion is retarded. The figure further indicates that the microrotation decreases with an increase in the suction parameter (S), because suction decelerates the fluid particles through the porous wall and thereby restrains the growth of the momentum, thermal, and concentration boundary layers, reflecting the usual fact that suction stabilizes boundary layer growth. The profiles are enhanced by an increase in the radiation parameter (F): as the intensity of heat generated through thermal radiation rises, the bonds holding the fluid particles together are more easily broken and the fluid motion increases.

Figure 4: (a) Microrotation profiles showing the variation of Prandtl number (Pr), suction parameter (S), and radiation parameter (F), taking Q = 0.4, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (b) Microrotation profiles showing the variation of heat absorption parameter (Q), taking Pr = 0.71, S = 0.5, F = 2, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (c) Microrotation profiles showing the variation of magnetic parameter (M) and Hall parameter (m), taking Pr = 0.71, S = 0.5, F = 2, Q = 0.4, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (d) Microrotation profiles showing the variation of Grashof number (Gr) and modified Grashof number (Gm), taking Pr = 0.71, S = 0.5, F = 2, Q = 0.4, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (e) Microrotation profiles showing the variation of rotational parameter (R), taking Pr = 0.71, S = 0.5, F = 2, Q = 0.4, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (f) Microrotation profiles showing the variation of viscosity ratio (Δ) and material parameter (λ), taking Pr = 0.71, S = 0.5, F = 2, Q = 0.4, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, and K = 0.6. (g) Microrotation profiles showing the variation of slip parameter (h) and permeability parameter (K), taking Pr = 0.71, S = 0.5, F = 2, Q = 0.4, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, Δ = 0.2, and λ = 0.8.
(h) Microrotation profiles showing the variation of Soret number (Sr), Schmidt number (Sc), and chemical reaction parameter (α), taking Pr = 0.71, S = 0.5, F = 2, Q = 0.4, δ = 0.1, t = 1, A = 3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, and Δ = 0.2.

From Figure 4(b) it is seen that the microrotation profiles decrease with an increase in the heat absorption parameter (Q). Figure 4(c) elucidates the influence of the magnetic parameter (M) and Hall parameter (m): the profiles increase when the magnetic parameter and the Hall parameter are increased. The profile corresponding to m = 0 reveals that microelements close to the wall are unable to rotate, so ω is very small there. Figure 4(d) demonstrates the effect of the thermal and concentration buoyancy forces, that is, of the Grashof number (Gr) and modified Grashof number (Gm), on the microrotation profiles. A negative Grashof number (Gr < 0) physically corresponds to heating of the plate, while a positive value (Gr > 0) represents cooling of the plate. Comparison of the curves shows that an increase in the thermal Grashof number increases the velocity through an enhancement of the buoyancy forces: Gr measures the strength of the thermal buoyancy force relative to the viscous hydrodynamic force, so a larger Gr signifies weaker viscous effects in the momentum equation and hence a higher velocity. The comparison likewise shows that the velocity increases with Gm, which measures the concentration buoyancy force relative to the viscous hydrodynamic force; as expected, the fluid velocity increases and its peak becomes more pronounced as the species buoyancy force grows. The profiles attain a maximum near the wall and then decrease rapidly toward the free-stream value, which gives further confidence in the accuracy of the solution (29).

For various values of the rotational parameter (R), the microrotation profiles across the boundary layer are shown in Figure 4(e); rotation tends to decrease the microrotation. Figure 4(f) presents the effect of the viscosity ratio (Δ) and material parameter (λ) on ω. The magnitude of the microrotation is greater for a Newtonian fluid (Δ = 0), with the other parameters fixed, than for micropolar fluids (Δ ≠ 0), and it decreases with an increase in the material parameter (λ) and the viscosity ratio (Δ).

Rarefaction effects that give rise to slip flow become significant when the molecular mean free path is comparable to the characteristic length of the system. The microrotation profiles in Figure 4(g) incorporate the influence of the rarefaction parameter (h) and permeability parameter (K). An increase in the rarefaction parameter decreases the magnitude of the microrotation, while the curves for different values of the permeability parameter (K) show that the profiles increase with increasing K.
This behavior is expected: increasing the permeability enlarges the pores of the porous medium, which reduces the drag force and hence increases the magnitude of the microrotation profiles.

Microrotation profiles showing the variation of the Soret number (Sr), Schmidt number (Sc), and destructive chemical reaction parameter (α) are presented in Figure 4(h). The influence of Sr, Sc, and α is to reduce the magnitude of the microrotation. Comparison of the curves indicates that the magnitude of the microrotation profiles is greatest for helium (He: Sc = 0.3), smaller for carbon dioxide (CO2: Sc = 0.94), and lowest for propylbenzene (C9H12: Sc = 2.62); physically, a larger Schmidt number corresponds to a denser fluid. The figure also shows that these profiles decrease during a destructive reaction (α > 0).

Figures 5(a)-5(h) illustrate the behavior of the translational velocity (V) against the boundary layer coordinate z for the various parameters governing the flow field. For different values of the Prandtl number (Pr), suction parameter (S), and radiation parameter (F), the velocity profiles across the boundary layer are shown in Figure 5(a). The translational velocity clearly decreases with increasing Pr: since the Prandtl number is the ratio of kinematic viscosity to thermal diffusivity, as Pr increases the kinematic viscosity dominates the thermal diffusivity and the flow velocity drops. Moreover, with an increase in the suction parameter (S), the velocity first increases in the region adjacent to the plate and then decreases on moving away from it, showing that suction has a stabilizing effect on the flow field. The figure also shows that radiation (F) tends to accelerate the translational velocity throughout the boundary layer region; physically, stronger radiation accompanies higher temperature, and the velocity rises accordingly. The velocity distribution attains its maximum in the neighborhood of the wall and then decreases toward the free-stream value. The effect of the heat absorption parameter on the translational velocity (V) is depicted in Figure 5(b): the velocity is reduced in the presence of heat absorption (Q).

Figure 5: (a) Velocity profiles showing the variation of Prandtl number (Pr), suction parameter (S), and radiation parameter (F), taking Q = 0.5, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (b) Velocity profiles showing the variation of heat absorption parameter (Q), taking Pr = 0.71, S = 0.5, F = 1, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (c) Velocity profiles showing the variation of magnetic parameter (M) and Hall parameter (m), taking Pr = 0.71, S = 0.5, F = 1, Q = 0.5, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (d) Velocity profiles showing the variation of Grashof number (Gr) and modified Grashof number (Gm), taking Pr = 0.71, S = 0.5, F = 1, Q = 0.5, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8.
(e) Velocity profiles showing the variation of rotational parameter (R), taking Pr = 0.71, S = 0.5, F = 1, Q = 0.5, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (f) Velocity profiles showing the variation of Soret number (Sr), Schmidt number (Sc), and chemical reaction parameter (α), taking Pr = 0.71, S = 0.5, F = 1, Q = 0.5, δ = 0.1, t = 1, A = 3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (g) Velocity profiles showing the variation of viscosity ratio (Δ) and permeability parameter (K), taking Pr = 0.71, S = 0.5, F = 2, Q = 0.5, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, and λ = 0.1. (h) Velocity profiles showing the variation of slip parameter (h) and material parameter (λ), taking Pr = 0.71, S = 0.5, F = 2, Q = 0.5, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, Δ = 0.2, and K = 0.6.

Figure 5(c) incorporates the influence of the magnetic parameter (M) and Hall parameter (m) on the translational velocity profiles (V). As expected, the transverse magnetic field retards the fluid motion, in excellent agreement with the physical fact that a transverse magnetic field in an electrically conducting fluid generates a resistive Lorentz force, similar to a drag force, which decelerates the flow; the magnetic field is thus an effective regulatory mechanism for the flow regime. From this figure it is also found that the Hall current (m) tends to accelerate the fluid throughout the boundary layer region, consistent with the fact that Hall currents induce a secondary flow in the flow field; a short calculation following this discussion makes the mechanism explicit.

The combined effect of the thermal and concentration buoyancy forces on the translational velocity is depicted in Figure 5(d). With an increase in the Grashof number (Gr) and modified Grashof number (Gm), which measure the thermal and concentration buoyancy forces, there is substantial growth of the momentum boundary layer, for the same reasons as explained earlier in this section. Figure 5(e) depicts the effect of the rotational parameter (R): rotation tends to retard the fluid velocity throughout the flow field, because the Coriolis force is dominant in the region near the axis of rotation.

The variation of the translational velocity for different values of the Soret number (Sr), Schmidt number (Sc), and chemical reaction parameter (α) is displayed in Figure 5(f). The velocity of the flow field decreases with increasing Schmidt number and Soret number, and it also decreases during a destructive reaction (α > 0).

Figure 5(g) depicts the influence of the viscosity ratio (Δ) and permeability parameter (K) on the translational velocity (V). The velocity increases with increasing K, whereas an increasing viscosity ratio (Δ) enhances the total viscosity of the flow: Δ is directly proportional to the vortex viscosity, which makes the fluid more viscous, weakens the convection currents, and hence reduces the velocity, in good agreement with physical reality.
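The accelerating role of the Hall current noted for Figure 5(c) can be made explicit from the momentum equations (15)-(16) of Section 3: collecting the magnetic terms acting on the complex velocity $V = u + iv$ gives

$$-\frac{M^2}{1+m^2}\,V-\frac{i\,mM^2}{1+m^2}\,V=-\frac{M^2\,(1+i\,m)}{1+m^2}\,V,$$

an effective Lorentz drag of modulus $M^2/\sqrt{1+m^2}$, which decreases monotonically as the Hall parameter m grows; a stronger Hall current therefore weakens the magnetic retardation and the velocity rises.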
Figure 5(h) incorporates the effect of the slip (rarefaction) parameter (h) and material parameter (λ) on the translational velocity (V). An increase in the rarefaction parameter enhances the flow field inside the boundary layer, a behavior readily understood from the velocity slip condition at the surface, (23). The case h = 0 corresponds to the no-slip condition, which here reduces to a plate moving with constant velocity in the longitudinal direction. The effect is most visible in the region near the plate, after which the velocity falls slowly and steadily to its free-stream value as z → ∞. Finally, the velocity decreases with increasing material parameter (λ). In Figures 5(a)-5(h) the velocity becomes maximal in the vicinity of the plate, decreases away from it, and attains its asymptotic value far from the plate.

The numerical values of the Nusselt number computed from the analytical expression (39) are plotted against time (t) in Figure 6 for various values of the Prandtl number (Pr), suction parameter (S), heat absorption parameter (Q), and radiation parameter (F). The Prandtl number, suction parameter, heat absorption parameter, and radiation parameter all enhance the rate of heat transfer at the surface of the plate, for the reasons explained earlier; the rate of heat transfer is higher for water (Pr = 7.0) than for air (Pr = 0.71).

Figure 6: Nusselt number for different values of Prandtl number (Pr), suction parameter (S), heat absorption parameter (Q), and radiation parameter (F), taking A = 3, t = 1, and δ = 0.1.

Figures 7(a)-7(c) display the concentration gradient $-C'(0)$ at the porous plate against time (t). The Sherwood number increases with an increase in the Schmidt number (Sc), chemical reaction parameter (α), Soret number (Sr), suction parameter (S), heat absorption parameter (Q), and radiation parameter (F); as time progresses the Sherwood number remains essentially unaltered.

Figure 7: (a) Sherwood number showing the variation with Schmidt number (Sc) and chemical reaction parameter (α), taking Pr = 0.71, S = 0.01, F = 2, Q = 1, δ = 0.1, t = 1, and A = 3. (b) Sherwood number showing the variation with heat absorption parameter (Q) and Soret number (Sr), taking Pr = 0.71, S = 0.01, F = 2, Sc = 0.22, δ = 0.1, t = 1, and A = 3. (c) Sherwood number showing the variation with suction parameter (S) and radiation parameter (F), taking Pr = 0.71, Sr = 0.5, Sc = 0.3, Q = 1, δ = 0.1, t = 1, and A = 3.

The variation of the couple stress coefficient ($C_m$) with the various parameters is displayed against time (t) in Figures 8(a)-8(h). Figure 8(a) shows that the couple stress coefficient decreases with increasing radiation parameter (F) and suction parameter (S), while it increases with increasing Prandtl number (Pr). The effect of the heat absorption parameter (Q) on $C_m$ is shown in Figure 8(b): the couple stress coefficient is enhanced as Q rises.
From Figures 8(c)-8(f) it is apparent that increasing the magnetic parameter (M), Hall parameter (m), Grashof number (Gr), modified Grashof number (Gm), rotational parameter (R), and viscosity ratio (Δ) decreases the couple stress coefficient, whereas the reverse is found on increasing the material parameter (λ). Figure 8(g) shows substantial growth of the couple stress coefficient with increasing slip parameter (h), while the reverse happens for increasing permeability parameter (K). Finally, the Schmidt number (Sc), Soret number (Sr), and chemical reaction parameter (α) tend to increase the couple stress coefficient, as is clearly visible in Figure 8(h). Throughout Figures 8(a)-8(h), the couple stress coefficient ($C_m$) is enhanced as time progresses, whereas Figures 9(a)-9(h) show that the skin friction coefficient ($C_f$) is suppressed with increasing time (t); this slow drift is quantified at the end of this section.

Figure 8: (a) Couple stress showing the variation of Prandtl number (Pr), suction parameter (S), and radiation parameter (F), taking Q = 0.4, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (b) Couple stress showing the variation of heat absorption parameter (Q), taking Pr = 0.71, S = 0.5, F = 2, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (c) Couple stress showing the variation of magnetic parameter (M) and Hall parameter (m), taking Pr = 0.71, S = 0.5, F = 2, Q = 0.4, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (d) Couple stress showing the variation of Grashof number (Gr) and modified Grashof number (Gm), taking Pr = 0.71, S = 0.5, F = 2, Q = 0.4, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (e) Couple stress showing the variation of rotational parameter (R), taking Pr = 0.71, S = 0.5, F = 2, Q = 0.4, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (f) Couple stress showing the variation of viscosity ratio (Δ) and material parameter (λ), taking Pr = 0.71, S = 0.5, F = 2, Q = 0.4, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, and K = 0.6. (g) Couple stress showing the variation of slip parameter (h) and permeability parameter (K), taking Pr = 0.71, S = 0.5, F = 2, Q = 0.4, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, Δ = 0.2, and λ = 0.8. (h) Couple stress showing the variation of Soret number (Sr), Schmidt number (Sc), and chemical reaction parameter (α), taking Pr = 0.71, S = 0.5, F = 2, Q = 0.4, δ = 0.1, t = 1, A = 3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8.

Figure 9: (a) Skin friction profiles showing the variation of Prandtl number (Pr), suction parameter (S), and radiation parameter (F), taking Q = 0.5, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8.
(b) Skin friction profiles showing the variation of heat absorption parameter (Q), taking Pr = 0.71, S = 0.5, F = 1, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (c) Skin friction profiles showing the variation of magnetic parameter (M) and Hall parameter (m), taking Pr = 0.71, S = 0.5, F = 1, Q = 0.5, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (d) Skin friction profiles showing the variation of Grashof number (Gr) and modified Grashof number (Gm), taking Pr = 0.71, S = 0.5, F = 1, Q = 0.5, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (e) Skin friction profiles showing the variation of rotational parameter (R), taking Pr = 0.71, S = 0.5, F = 1, Q = 0.5, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (f) Skin friction profiles showing the variation of Soret number (Sr), Schmidt number (Sc), and chemical reaction parameter (α), taking Pr = 0.71, S = 0.5, F = 1, Q = 0.5, δ = 0.1, t = 1, A = 3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8. (g) Skin friction profiles showing the variation of viscosity ratio (Δ) and permeability parameter (K), taking Pr = 0.71, S = 0.5, F = 2, Q = 0.5, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, and λ = 0.1. (h) Skin friction profiles showing the variation of slip parameter (h) and material parameter (λ), taking Pr = 0.71, S = 0.5, F = 2, Q = 0.5, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, Δ = 0.2, and K = 0.6.

The skin friction characterizes the frictional drag at the solid surface, so the numerical values of the skin friction coefficient ($C_f$) computed from (33) are presented in Figures 9(a)-9(h) for different values of F, S, Pr, Q, M, m, Gr, Gm, R, Sc, Sr, α, Δ, λ, K, and h. The skin friction coefficient increases with increasing radiation parameter, while it decreases with increasing suction parameter, Prandtl number, and heat absorption parameter, as depicted in Figures 9(a) and 9(b). Figure 9(c) shows that the skin friction coefficient is reduced by an increase in magnetic field strength, as expected, since the applied magnetic field impedes the flow and thus reduces the surface friction force, while the Hall parameter tends to increase the skin friction. Figure 9(d) demonstrates the growth of the skin friction for increasing thermal buoyancy parameter (Gr) and modified Grashof number (Gm), because a stronger buoyancy effect in mixed convection accelerates the fluid and thereby increases the friction factor. The opposite trend is observed for the rotational parameter: $C_f$ decreases as R increases. The influence of the Schmidt number (Sc), Soret number (Sr), and chemical reaction parameter (α) on the skin friction coefficient is exhibited in Figure 9(f); all these parameters tend to retard the surface friction forces. Finally, Figures 9(g) and 9(h) exhibit significant growth of $C_f$ with increasing viscosity ratio, permeability parameter, and slip parameter, while the reverse happens with increasing material parameter.
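The slow drift of the wall coefficients with time in Figures 8 and 9 can be traced to the common unsteady factor $\varepsilon e^{\delta t}$ appearing in (33), (35), (39), and (40). With the values ε = 0.01 and δ = 0.1 used throughout,

$$\varepsilon e^{\delta t}\Big|_{t=1}\approx 0.011,\qquad \varepsilon e^{\delta t}\Big|_{t=5}\approx 0.016,$$

so the unsteady corrections remain at the percent level and grow only gradually, which is consistent with $C_m$ being enhanced and $C_f$ suppressed only slowly in time, and with the Sherwood number appearing essentially unaltered.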
## 5. Conclusion

The governing equations were solved analytically using a perturbation technique, and the effects of the various parameters on the temperature θ, concentration C, translational velocity V, microrotation ω, skin friction $C_f$, Nusselt number, and Sherwood number were examined. From the present calculations we arrive at the following findings.

(i) Thermal radiation tends to enhance the fluid temperature, whereas the fluid temperature decreases with an increase of Prandtl number, suction parameter, and heat absorption parameter.

(ii) The species concentration decreases at all points of the flow field with an increase in Schmidt number, chemical reaction parameter, heat absorption parameter, Soret number, and suction parameter, but is enhanced with an increase in radiation parameter; the Sherwood number shows the reverse trends.

(iii) Thermal radiation, magnetic parameter, Hall parameter, and permeability parameter tend to enhance the microrotation distribution, whereas these parameters have the reverse effect on the couple stress coefficient.

(iv) Microrotation profiles decrease with an increase in Prandtl number, material parameter, slip parameter, Soret number, Schmidt number, and chemical reaction parameter, whereas these parameters have the reverse effect on the couple stress coefficient.

(v) Microrotation profiles and the couple stress coefficient both decrease with an increase in suction parameter, rotation parameter, and viscosity ratio.

(vi) Thermal radiation parameter, permeability parameter, and slip parameter tend to enhance the translational velocity throughout the boundary layer region as well as the skin friction coefficient.

(vii) Prandtl number, magnetic parameter, suction parameter, rotation parameter, Soret number, Schmidt number, chemical reaction parameter, and material parameter tend to retard both the translational velocity and the skin friction coefficient, while the viscosity ratio retards the velocity but enhances the skin friction coefficient.

(viii) The slip parameter increases the translational velocity but decreases the magnitude of the microrotation profiles.

(ix) Thermal radiation parameter, Prandtl number, suction parameter, and heat absorption parameter tend to enhance the dimensionless rate of heat transfer, that is, the Nusselt number.

---
*Source: 102413-2014-10-30.xml*
In view of (14), the governing equations (4)–(7) and (13) reduce to the following dimensionless form: (15) ∂ u ∂ t - S 1 + ε A e δ t ∂ u ∂ z - R v = 1 + Δ ∂ 2 u ∂ z 2 + Gr θ + Gm ϕ - M 2 1 + m 2 + 1 k u - Δ ∂ ω 2 ∂ z + m M 2 1 + m 2 v , (16) ∂ v ∂ t - S 1 + ε A e δ t ∂ v ∂ z + R u = 1 + Δ ∂ 2 v ∂ z 2 - M 2 1 + m 2 + 1 k v + Δ ∂ ω 1 ∂ z - m M 2 1 + m 2 u , (17) ∂ ω 1 ∂ t - S 1 + ε A e δ t ∂ ω 1 ∂ z = λ ∂ 2 ω 1 ∂ z 2 , (18) ∂ ω 2 ∂ t - S 1 + ε A e δ t ∂ ω 2 ∂ z = λ ∂ 2 ω 2 ∂ z 2 , (19) ∂ θ ∂ t - S 1 + ε A e δ t ∂ θ ∂ z = 1 Pr ⁡ 1 + 4 F 3 ∂ 2 θ ∂ z 2 - Q Pr ⁡ θ , (20) ∂ C ∂ t - S 1 + ε A e δ t ∂ C ∂ z = 1 Sc ∂ 2 C ∂ z 2 + Sr ∂ 2 C ∂ z 2 - α C . The boundary conditions (8)-(9) in view of (14) are then given by the following dimensionless form: (21) u = v = 0 , ω 1 = ω 2 = 0 , θ = 0 , C = 0 kkkkkkkkkkkkkkkkkkkkkkkkkkkkk for t ≤ 0 u = 1 + h ∂ u ∂ z , v = 0 , ω 1 = - 1 2 ∂ v ∂ z , ω 2 = 1 2 ∂ u ∂ z , θ = 1 , C = 1 kkkkkkkkkkkkkkkkkkkkkkkkk at z = 0 u ⟶ 0 , ω 1 ⟶ 0 , ω 2 ⟶ 0 , θ ⟶ 0 , C ⟶ 0 kkkkkkkkkkkkkkkkkkkk as z ⟶ ∞ kkkkkkkkkkkkkkkkkkkkkkk for t > 0 . To simplify (15)–(18), we substitute the fluid velocity and angular velocity in the complex form as V = u + i v,  ω = ω 1 + i ω 2 and we get (22) ∂ V ∂ t - S 1 + ε A e δ t ∂ V ∂ z + i R V = 1 + Δ ∂ 2 V ∂ z 2 + Gr θ + Gm ϕ - M 2 1 + m 2 + 1 k V - i Δ ∂ ω ∂ z - i m M 2 1 + m 2 V , ∂ ω ∂ t - S 1 + ε A e δ t ∂ ω ∂ z = λ ∂ 2 ω ∂ z 2 . The associated boundary conditions (21) become (23) V = 0 , ω = 0 , θ = 0 , C = 0 k k k k k k k k k k k k i k k k k k k k k k k for t ≤ 0 V = 1 + h ∂ u ∂ z , ω = i 2 ∂ V ∂ z , θ = 1 , C = 1 kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkki at z = 0 V ⟶ 0 , ω ⟶ 0 , θ ⟶ 0 , C ⟶ 0 k k k k k k k k k k k k k k k k k k i k k k k k k k k k k as z ⟶ ∞ kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk for t > 0 . ## 3. Analytical Solution of the Problem In order to reduce the above system of partial differential equations to a system of ordinary differential equations in dimensionless form, we represent the translational velocityV, microrotation velocity ω, temperature θ, and concentration C as (24) V z , t = V 0 z + ε e δ t V 1 z + O ε 2 , ω z , t = ω 0 z + ε e δ t ω 1 z + O ε 2 , θ z , t = θ 0 z + ε e δ t θ 1 z + O ε 2 , C z , t = C 0 z + ε e δ t C 1 z + O ε 2 . By substituting the above equations (24) into (19), (20), (22)-(23) and equating the harmonic and nonharmonic terms and neglecting the higher-order terms of O ( ε 2 ), we obtain the following pairs of equations for ( V 0 , ω 0 , θ 0 , C 0 ) and ( V 1 , ω 1 , θ 1 , C 1 ).Zero-order equations are:(25) 1 + Δ V 0 ′ ′ + S V 0 ′ - a 1 V 0 + Gr θ 0 + Gm C 0 + i Δ ω 0 ′ = 0 , λ ω 0 ′ ′ + S ω 0 ′ = 0 , 3 + 4 F θ 0 ′ ′ + 3 Pr ⁡ S θ 0 ′ - 3 Q θ 0 = 0 , C 0 ′′ + S Sc C 0 ′ - α Sc C 0 = - Sr θ 0 ′ ′ .First-order equations are:(26) 1 + Δ V 1 ′ ′ + S V 1 ′ - a 2 V 1 + Gr θ 1 + Gm C 1 + A V 0 ′ + i Δ ω 1 ′ = 0 , λ ω 1 ′ ′ + S ω 1 ′ - δ ω 1 = - S A ω 0 ′ , 3 + 4 F θ 1 ′ ′ + 3 Pr ⁡ S θ 1 ′ - 3 Q + Pr ⁡ δ θ 1 = - 3 Pr ⁡ S A θ 0 ′ , C 1 ′ ′ + S Sc C 1 ′ - Sc α + δ C 1 = - S Sc A C 0 ′ - SrSc θ 1 ′ ′ . 
The prime denotes differentiation with respect to y.The corresponding boundary conditions can be written as(27) V 0 = 1 + h ∂ V 0 ∂ z , V 1 = h ∂ V 1 ∂ z , ω 0 = i 2 ∂ V 0 ∂ z , ω 1 = i 2 ∂ V 1 ∂ z , θ 0 = 1 , θ 1 = 0 , C 0 = 1 , C 1 = 0 at z = 0 V 0 ⟶ 0 , V 1 ⟶ 0 , ω 0 ⟶ 0 , ω 1 ⟶ 0 , θ 0 ⟶ 0 , θ 1 ⟶ 0 , C 0 ⟶ 0 , C 1 ⟶ 0 k k k k k k k k k k k as z ⟶ ∞ .Solving (25)-(26) satisfying the boundary conditions (27) we obtain the expression for translational velocity V, microrotation velocity ω, temperature θ, and concentration C as (28) V z , t = B 11 e - r 5 z + B 8 e - r 1 z + B 9 e - r 3 z + B 10 e - S / λ z + ε e δ t B 20 e - r 7 z + B 13 e - r 1 z + B 17 e - r 5 z + B 18 e - S / λ z + B 19 e - r 6 z + B 14 e - r 2 z + B 15 e - r 4 z + B 16 e - r 3 z + B 17 e - r 5 z + B 18 e - S / λ z + B 19 e - r 6 z , (29) ω z , t = D 1 e - S / λ z + ε e δ t D 2 e - r 6 z + B 12 e - S / λ z , (30) θ z , t = e - r 1 z + ε e δ t B 1 e - r 1 z - e - r 2 z , (31) C z , t = B 3 e - r 3 z + B 2 e - r 1 z + ε e δ t B 4 e - r 3 z + B 5 e - r 1 z + B 6 e - r 2 z + B 7 e - r 4 z . The exponential indices and the coefficients appearing in (28)–(31) are given in the appendix.In technological applications, the wall shear stress, the wall couple stress, and the heat and mass transfer rate are often of great interest. Skin friction is caused by viscous drag in the boundary layer around the plate. The skin friction coefficient( C f ) at the wall in dimensionless form is given by (32) C f = τ w * z * = 0 ρ U r 2 = 1 + Δ 1 + i 2 ∂ V ∂ z z = 0 (33) = - 1 + Δ 1 + i 2 × B 11 r 5 + B 8 r 1 + B 9 r 3 + B 10 S λ + B 17 r 5 + B 18 S λ + B 19 r 6 + ε e δ t B 20 r 7 + B 13 r 1 + B 17 r 5 + B 18 S λ + B 19 r 6 + B 14 r 2 + B 15 r 4 + B 16 r 3 + B 17 r 5 + B 18 S λ + B 19 r 6 . The couple stress coefficient ( C m ) at the plate is defined by (34) M w = Λ ∂ ω * ∂ z * z * = 0 and in the dimensionless form it is given by (35) C m = M w μ j U r = 1 + Δ 2 ∂ ω ∂ z z = 0 = 1 + Δ 2 ∂ ω 1 ∂ z z = 0 + i ∂ ω 2 ∂ z z = 0 = - 1 + Δ 2 D 1 S λ + ε e δ t D 2 r 6 + B 12 S λ . Knowing the temperature field, it is interesting to study the effect of the free convection and thermal radiation on the rate of heat transfer and this is given by (36) q w * = - k ∂ T ∂ z * - 4 σ * 3 a R ∂ T 4 ∂ z * z * = 0 . Using T 4 ≃ 4 T ∞ 3 T - 3 T ∞ 4 the above equation becomes (37) q w * = - k T w - T ∞ U r ν 1 + 4 F 3 ∂ θ ∂ z z = 0 . The rate of heat transfer between the fluid and the plate is studied in terms of nondimensional Nusselt number, which is given by (38) Nu = x q w * k T w - T ∞ = - Re x 1 + 4 F 3 ∂ θ ∂ z z = 0 , where R e x = U r x / ν is the local Reynolds number (39) Nu Re x - 1 = 1 + 4 F 3 ∂ θ ∂ z z = 0 = 1 + 4 F 3 r 1 + ε e δ t B 1 r 1 - r 2 . The definitions of the local mass flux and the local Sherwood number are, respectively, given by (40) j w = - D m ∂ C * ∂ z * z * = 0 , Sh x = j w x D m C w * - C ∞ * = - Re x ∂ C ∂ z z = 0 , Sh x Re x - 1 = ∂ C ∂ z z = 0 = B 3 r 3 + B 2 r 1 + ε e δ t B 4 r 3 + B 5 r 1 + B 6 r 2 + B 7 r 4 . ## 4. Results and Discussion In the preceding sections, the governing equations along with the boundary conditions are solved analytically employing the perturbation techniques. The effects of main controlling parameters as they appear in the governing equations are discussed on the temperatureθ, concentration C, translational velocity V, microrotation ω, skin-friction C f, Nusselt number, and Sherwood number. 
## 4. Results and Discussion

In the preceding sections, the governing equations along with the boundary conditions were solved analytically using the perturbation technique. The effects of the main controlling parameters on the temperature θ, concentration C, translational velocity V, microrotation ω, skin friction C_f, Nusselt number, and Sherwood number are discussed here. In order to gain physical insight into the problem, the above physical quantities are computed numerically and displayed graphically. Throughout the calculations we have chosen ε = 0.01, δ = 0.1, t = 1, and A = 1, while Pr, S, F, Q, Sr, Sc, M, m, Gr, Gm, R, h, K, Δ, and λ are varied over the ranges listed in the figure legends.

The numerical values of the fluid temperature θ computed from the analytical solution (30) are plotted against the boundary layer coordinate z in Figure 2 for various values of the Prandtl number (Pr), suction parameter (S), heat absorption parameter (Q), and radiation parameter (F). The values of the Prandtl number are chosen as Pr = 0.71, 0.025, and 7.0, which physically correspond to air, mercury, and water at 25°C and one atmosphere; Pr = 11.62 corresponds to water at 4°C. The temperature falls more rapidly for water than for air, which is physically expected: the thermal boundary layer thins quickly for large values of the Prandtl number. The thermal boundary layer is thickest for Pr = 0.025 (mercury), thinner for Pr = 0.71 (air) and Pr = 7 (water), and thinnest for Pr = 11.62 (water at 4°C); that is, an increase in Prandtl number results in a decrease in temperature. The reason for this behavior is that Pr measures the relative effect of viscosity to thermal conductivity; fluids with smaller Prandtl numbers have higher thermal conductivity, so heat diffuses away from the surface faster than at higher values of Pr, which reduces the thermal boundary layer thickness. The fluid temperature θ also decreases with an increase in the heat absorption parameter (Q) and the suction parameter (S); when heat is absorbed, the buoyancy forces weaken and the temperature profiles are lowered. The effect of the thermal radiation parameter (F) is to enhance the fluid temperature throughout the boundary layer region. This is consistent with the fact that thermal radiation provides an additional means to diffuse energy: since F = 4σ*T∞³/(k a_R), an increase in F implies a decrease in the Rosseland mean absorption coefficient a_R for fixed values of T∞ and k. It follows that radiation should be minimized to speed up the cooling process. The temperature profiles attain their maximum value at the wall, decrease exponentially with z, and finally tend to zero as z → ∞, in agreement with the boundary conditions, which supports the correctness of the analytical temperature solution.

Figure 2 Temperature profiles for different values of Prandtl number (Pr), suction parameter (S), heat absorption parameter (Q), and radiation parameter (F) taking A = 3, t = 1, and δ = 0.1.

Graphical results of the concentration profiles C for different values of the Schmidt number (Sc) and the chemical reaction parameter (α) are displayed in Figure 3(a). The values of the Schmidt number are chosen to represent the most common diffusing chemical species of interest: Sc = 0.22 (hydrogen), Sc = 0.3 (helium), Sc = 0.6 (water vapor), Sc = 0.94 (carbon dioxide), and Sc = 2.62 (propylbenzene) at 25°C and one atmosphere.
A comparison of the curves in the figure shows that the concentration distribution decreases at all points in the flow field with an increase in the Schmidt number, because smaller values of Sc correspond to larger chemical molecular diffusivity (D); mass diffusion thus tends to enhance the species concentration, and heavier diffusing species have a greater retarding effect on the concentration distribution. Furthermore, it is interesting to note that the concentration profiles fall slowly and steadily for hydrogen (Sc = 0.22) and helium (Sc = 0.30) but fall very rapidly for water vapor (Sc = 0.6) and propylbenzene (Sc = 2.62). Physically, water vapor can be used for maintaining a normal concentration field, whereas hydrogen can be used for maintaining an effective concentration field. Similar effects are seen when the chemical reaction parameter (α) is increased: the concentration profiles decrease rapidly because the solutal boundary layer thins as α increases, the chemical species being consumed by the reaction. Thus the diffusion rates can be substantially altered by chemical reaction.

Figure 3 (a) Concentration profiles showing the variation with Schmidt number (Sc) and chemical reaction parameter (α) taking Pr = 0.71, S = 0.01, F = 2, Q = 1, δ = 0.1, t = 1, and A = 3. (b) Concentration profiles showing the variation with heat absorption parameter (Q) and Soret number (Sr) taking Pr = 0.71, S = 0.01, F = 2, Sc = 0.22, δ = 0.1, t = 1, and A = 3. (c) Concentration profiles showing the variation with suction parameter (S) and radiation parameter (F) taking Pr = 0.71, Sr = 0.5, Sc = 0.3, Q = 1, δ = 0.1, t = 1, and A = 3.

The effects of the heat absorption parameter (Q) and the Soret number (Sr) on the concentration profiles across the boundary layer are displayed in Figure 3(b). The results show that the concentration boundary layer is suppressed by an increase in the heat absorption parameter and the Soret number. The profiles fall rapidly with an increase in Soret number and thereafter recover and tend to zero as z → ∞. Figure 3(c) shows the effects of the radiation parameter (F) and the suction parameter (S) on the species concentration profiles: the presence of suction diminishes the concentration distribution, whereas the reverse is observed for increasing values of the radiation parameter. In Figures 3(a)–3(c) the concentration profiles attain their maximum value at the wall, decrease exponentially with z, and finally tend to zero as z → ∞, in good agreement with the boundary conditions (23); these figures therefore also provide a check on the analytical solution for the concentration field.

The microrotation profiles (ω) against the spanwise coordinate z, incorporating the effects of the various parameters influencing the flow field, are presented in Figures 4(a)–4(h). These profiles attain a distinctive maximum value near the surface of the plate and decrease steadily with increasing boundary layer coordinate z toward the free stream value. Figure 4(a) shows the influence of the Prandtl number (Pr), suction parameter (S), and radiation parameter (F) on the microrotation profiles; ω decreases with increasing Pr.
Physically, this is because an increase in Prandtl number increases the effective viscosity of the fluid, so the fluid becomes thicker and its motion slows. The figure further indicates that the microrotation profiles decrease with an increase in the suction parameter (S), because suction decelerates the fluid particles through the porous wall and hence reduces the growth of the momentum boundary layer as well as the thermal and concentration boundary layers, reflecting the usual fact that suction stabilizes boundary layer growth. The profiles are enhanced by an increase in the radiation parameter (F): when the intensity of heat generated through thermal radiation increases, the bonds holding the fluid particles together are more easily broken and the fluid velocity increases.

Figure 4 Microrotation profiles showing the variation of (a) Prandtl number (Pr), suction parameter (S), and radiation parameter (F); (b) heat absorption parameter (Q); (c) magnetic parameter (M) and Hall parameter (m); (d) Grashof number (Gr) and modified Grashof number (Gm); (e) rotational parameter (R); (f) viscosity ratio (Δ) and material parameter (λ); (g) slip parameter (h) and permeability parameter (K); (h) Soret parameter (Sr), Schmidt number (Sc), and chemical reaction parameter (α). Unless varied in a given panel, the parameters are fixed at Pr = 0.71, S = 0.5, F = 2, Q = 0.4, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8.

From Figure 4(b) it is seen that the microrotation profiles decrease with an increase in the heat absorption parameter (Q). Figure 4(c) elucidates the influence of the magnetic parameter (M) and the Hall parameter (m) on the microrotation profiles (ω); these profiles increase when the magnetic parameter and the Hall current parameter are increased. The profiles corresponding to m = 0 reveal that microelements close to the wall are unable to rotate; hence ω is very small there.
Figure 4(d) demonstrates the effect of the thermal and concentration buoyancy forces, that is, the Grashof number (Gr) and the modified Grashof number (Gm), on the microrotation profiles. Physically, a negative Grashof number (Gr < 0) corresponds to heating of the plate, while a positive value (Gr > 0) represents cooling of the plate. Comparison of the curves shows that an increase in the thermal Grashof number leads to an increase in velocity owing to an enhancement of the buoyancy forces. Gr signifies the relative strength of the thermal buoyancy force to the viscous hydrodynamic force, so an increase in Grashof number indicates smaller viscous effects in the momentum equation and consequently an increase in the velocity profiles. The comparison of the curves further illustrates that the velocity increases with increasing Gm, the modified Grashof number representing the relative strength of the concentration buoyancy force to the viscous hydrodynamic force. As expected, the fluid velocity increases and the peak value becomes more distinctive as the species buoyancy force increases. The profiles attain a maximum value near the wall and then decrease rapidly toward the free stream value, which lends confidence to the accuracy of the solution given by (29).

For various values of the rotational parameter (R), the microrotation profiles across the boundary layer are shown in Figure 4(e); rotation tends to decrease the microrotation profiles. Figure 4(f) presents the effect of the viscosity ratio (Δ) and the material parameter (λ) on ω. The magnitude of the microrotation is greater for a Newtonian fluid (Δ = 0) with the given parameters than for micropolar fluids (Δ ≠ 0); moreover, the magnitude of the microrotation profiles decreases with an increase in the material parameter (λ) and the viscosity ratio (Δ).

Rarefaction effects that give rise to slip flow become significant when the molecular mean free path is comparable to the characteristic length of the system. The microrotation profiles in Figure 4(g) incorporate the influence of the rarefaction parameter (h) and the permeability parameter (K). An increase in the rarefaction parameter decreases the magnitude of the microrotation profiles, while the curves for different values of the permeability parameter (K) show that the profiles increase with increasing K. This behavior is expected: increasing the permeability enlarges the pores inside the porous medium, so the drag force decreases and the magnitude of the microrotation profiles increases.

Microrotation profiles showing the variation of the Soret parameter (Sr), Schmidt number (Sc), and chemical reaction parameter (α) are presented in Figure 4(h). The influence of Sr, Sc, and α is to reduce the magnitude of the microrotation profiles. Comparison of the curves indicates that the magnitude of the microrotation profiles is greatest for helium (He: Sc = 0.3), smaller for carbon dioxide (CO2: Sc = 0.94), and smallest for propylbenzene (C9H10: Sc = 2.62); physically, for large values of the Schmidt number the fluid is effectively denser.
This figure also shows that the profiles decrease during a destructive reaction (α > 0).

Figures 5(a)–5(h) illustrate the behavior of the translational velocity (V) versus the boundary layer coordinate z for the various parameters governing the flow field. For various values of the Prandtl number (Pr), suction parameter (S), and radiation parameter (F), the translational velocity profiles across the boundary layer are shown in Figure 5(a). The translational velocity clearly decreases with increasing Pr: since the Prandtl number is the ratio of kinematic viscosity to thermal diffusivity, as Pr increases the kinematic viscosity dominates the thermal diffusivity, which slows the flow field. Moreover, with an increase in the suction parameter (S) the velocity first increases in the region adjacent to the plate and then decreases on moving away from it, showing that suction has a stabilizing effect on the flow field. The figure also shows that radiation (F) tends to accelerate the translational velocity throughout the boundary layer region; physically, higher radiation accompanies higher temperature, and the velocity rises accordingly. The velocity distribution attains its maximum value in the neighborhood of the wall and then decreases toward the free stream value. The effect of the heat absorption parameter on the translational velocity (V) is depicted in Figure 5(b); the velocity is reduced in the presence of heat absorption (Q).

Figure 5 Velocity profiles showing the variation of (a) Prandtl number (Pr), suction parameter (S), and radiation parameter (F); (b) heat absorption parameter (Q); (c) magnetic parameter (M) and Hall parameter (m); (d) Grashof number (Gr) and modified Grashof number (Gm); (e) rotational parameter (R); (f) Soret parameter (Sr), Schmidt number (Sc), and chemical reaction parameter (α); (g) viscosity ratio (Δ) and permeability parameter (K); (h) slip parameter (h) and material parameter (λ). Unless varied in a given panel, the parameters are fixed at Pr = 0.71, S = 0.5, F = 1 (F = 2 in panels (g) and (h)), Q = 0.5, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8 (λ = 0.1 in panel (g)).
Figure 5(c) incorporates the influence of the magnetic parameter (M) and the Hall parameter (m) on the translational velocity profiles (V). As expected, the application of the transverse magnetic field retards the fluid motion. This agrees with the physical fact that a transverse magnetic field in an electrically conducting fluid generates a resistive Lorentz force, similar to a drag force, which decelerates the flow; the magnetic field is thus an effective regulatory mechanism for the flow regime. From this figure it is also found that Hall currents (m) tend to accelerate the fluid velocity throughout the boundary layer region, consistent with the fact that Hall currents induce flow in the flow field.

The combined effect of the thermal and concentration buoyancy forces on the translational velocity is depicted in Figure 5(d). With an increase in the Grashof number (Gr) and the modified Grashof number (Gm), which measure the thermal and concentration buoyancy forces, there is a substantial growth of the momentum boundary layer, for the same reasons explained earlier in this section. Figure 5(e) depicts the effect of the rotational parameter (R) on the fluid velocity; rotation tends to retard the fluid velocity throughout the flow field, because the Coriolis force is dominant in the region near the axis of rotation.

The variation of the translational velocity profiles for different values of the Soret parameter (Sr), Schmidt number (Sc), and chemical reaction parameter (α) is displayed in Figure 5(f). The comparison of the curves shows that the velocity of the flow field decreases with an increase in Schmidt number and Soret number. It is also observed that the velocity decreases during a destructive reaction (α > 0).

Figure 5(g) depicts the influence of the viscosity ratio (Δ) and the permeability parameter (K) on the translational velocity (V). The velocity increases with increasing values of K, while an increasing viscosity ratio (Δ) enhances the total viscosity of the flow: Δ is directly proportional to the vortex viscosity, which makes the fluid more viscous, weakens the convection currents, and hence decreases the velocity. This is in good agreement with physical reality.

Figure 5(h) incorporates the effect of the slip or rarefaction parameter (h) and the material parameter (λ) on the translational velocity (V). An increase in the rarefaction parameter results in an enhancement of the flow field inside the boundary layer. This behavior is readily understood from the velocity slip condition at the surface (23); the case h = 0 corresponds to the no-slip condition, for which the problem reduces to a plate moving with constant velocity in the longitudinal direction. The effects are more visible in the region near the plate, beyond which the velocity falls slowly and steadily to its free stream value as z → ∞. Lastly, the velocity decreases with increasing material parameter (λ).
In Figures 5(a)–5(h) we observe that the velocity attains its maximum in the vicinity of the plate, then decreases away from the plate, and finally takes its asymptotic value far from the plate.

The numerical values of the Nusselt number computed from the analytical solution (39) are presented graphically versus time (t) in Figure 6 for various values of the Prandtl number (Pr), suction parameter (S), heat absorption parameter (Q), and radiation parameter (F). The Prandtl number, suction parameter, heat absorption parameter, and radiation parameter all enhance the rate of heat transfer at the surface of the plate, for the reasons explained earlier in the text. The rate of heat transfer is higher for water (Pr = 7.0) than for air (Pr = 0.71).

Figure 6 Nusselt number for different values of Prandtl number (Pr), suction parameter (S), heat absorption parameter (Q), and radiation parameter (F) taking A = 3, t = 1, and δ = 0.1.

Figures 7(a)–7(c) display the concentration gradient −C′(0) at the porous plate versus time (t). These figures show that the Sherwood number increases with an increase in the Schmidt number (Sc), chemical reaction parameter (α), Soret number (Sr), suction parameter (S), heat absorption parameter (Q), and radiation parameter (F). As time progresses the Sherwood number remains unaltered.

Figure 7 Sherwood number showing the variation of (a) Schmidt number (Sc) and chemical reaction parameter (α) taking Pr = 0.71, S = 0.01, F = 2, Q = 1, δ = 0.1, t = 1, and A = 3; (b) heat absorption parameter (Q) and Soret number (Sr) taking Pr = 0.71, S = 0.01, F = 2, Sc = 0.22, δ = 0.1, t = 1, and A = 3; (c) suction parameter (S) and radiation parameter (F) taking Pr = 0.71, Sr = 0.5, Sc = 0.3, Q = 1, δ = 0.1, t = 1, and A = 3.

The variation of the couple stress coefficient (C_m) with the various parameters is displayed versus time (t) in Figures 8(a)–8(h). Figure 8(a) shows that the couple stress coefficient decreases with increasing values of the radiation parameter (F) and suction parameter (S), while it increases with increasing values of the Prandtl number (Pr). The effect of the heat absorption parameter (Q) on C_m is shown in Figure 8(b); the couple stress coefficient is enhanced by a rise in Q. From Figures 8(c)–8(f) it is apparent that increasing values of the magnetic parameter (M), Hall parameter (m), Grashof number (Gr), modified Grashof number (Gm), rotational parameter (R), and viscosity ratio (Δ) decrease the couple stress coefficient, whereas the reverse effect is found for increasing values of the material parameter (λ). Figure 8(g) shows a substantial growth of the couple stress coefficient with increasing values of the slip parameter (h), while the reverse happens for increasing values of the permeability parameter (K). Finally, the Schmidt number (Sc), Soret number (Sr), and chemical reaction parameter (α) tend to increase the couple stress coefficient, as is clearly visible in Figure 8(h).
From Figures 8(a)–8(h) it is apparent that the couple stress coefficient (C_m) is enhanced as time progresses, whereas Figures 9(a)–9(h) show that the skin friction coefficient (C_f) is suppressed for increasing values of time (t).

Figure 8 Couple stress showing the variation of (a) Prandtl number (Pr), suction parameter (S), and radiation parameter (F); (b) heat absorption parameter (Q); (c) magnetic parameter (M) and Hall parameter (m); (d) Grashof number (Gr) and modified Grashof number (Gm); (e) rotational parameter (R); (f) viscosity ratio (Δ) and material parameter (λ); (g) slip parameter (h) and permeability parameter (K); (h) Soret parameter (Sr), Schmidt number (Sc), and chemical reaction parameter (α). Unless varied in a given panel, the parameters are fixed at Pr = 0.71, S = 0.5, F = 2, Q = 0.4, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8.
Figure 9 Skin friction profiles showing the variation of (a) Prandtl number (Pr), suction parameter (S), and radiation parameter (F); (b) heat absorption parameter (Q); (c) magnetic parameter (M) and Hall parameter (m); (d) Grashof number (Gr) and modified Grashof number (Gm); (e) rotational parameter (R); (f) Soret parameter (Sr), Schmidt number (Sc), and chemical reaction parameter (α); (g) viscosity ratio (Δ) and permeability parameter (K); (h) slip parameter (h) and material parameter (λ). Unless varied in a given panel, the parameters are fixed at Pr = 0.71, S = 0.5, F = 1 (F = 2 in panels (g) and (h)), Q = 0.5, α = 0.1, δ = 0.1, t = 1, A = 3, Sr = 0.5, Sc = 0.3, M = 2, m = 0.5, Gr = 10, Gm = 10, R = 0.3, h = 0.1, K = 0.6, Δ = 0.2, and λ = 0.8 (λ = 0.1 in panel (g)).

The skin friction characterizes the frictional drag at the solid surface, so the numerical values of the skin friction coefficient (C_f) computed from (33) are presented in Figures 9(a)–9(h) for different values of F, S, Pr, Q, M, m, Gr, Gm, R, Sc, Sr, α, Δ, λ, K, and h. The skin friction coefficient increases with increasing values of the radiation parameter, while it decreases with an increase in the suction parameter, Prandtl number, and heat absorption parameter, as depicted in Figures 9(a) and 9(b). Figure 9(c) shows that the skin friction coefficient is reduced by an increase in magnetic field strength, as expected, since the applied magnetic field tends to impede the flow and thus reduces the surface friction force, while the Hall parameter tends to increase the skin friction. Figure 9(d) demonstrates the growth of the skin friction for increasing values of the thermal buoyancy parameter (Gr) and the modified Grashof number (Gm), because an increase in the buoyancy effect in mixed convection flow accelerates the fluid and increases the friction factor. An opposite trend is observed for increasing values of the rotational parameter; that is, C_f decreases with an increase in R. The influence of the Schmidt number (Sc), Soret number (Sr), and chemical reaction parameter (α) on the skin friction coefficient is exhibited in Figure 9(f); all these parameters tend to retard the surface friction forces. Finally, Figures 9(g) and 9(h) exhibit a significant growth of C_f with increasing values of the viscosity ratio, permeability parameter, and slip parameter, while the reverse happens with increasing material parameter.

## 5. Conclusion

The governing equations were solved analytically using a perturbation technique, and the effects of the various parameters on the temperature θ, concentration C, translational velocity V, microrotation ω, skin friction C_f, Nusselt number, and Sherwood number were examined. From the present calculations, we arrive at the following findings. (i) Thermal radiation tends to enhance the fluid temperature, whereas the fluid temperature decreases with an increase in the Prandtl number, suction parameter, and heat absorption parameter.
(ii) The species concentration profiles decrease at all points in the flow field with an increase in the Schmidt number, chemical reaction parameter, heat absorption parameter, Soret number, and suction parameter but are enhanced with an increase in the radiation parameter, while these parameters show the reverse trend for the Sherwood number. (iii) Thermal radiation, the magnetic parameter, the Hall parameter, and the permeability parameter tend to enhance the microrotation distribution, whereas these parameters have the reverse effect on the couple stress coefficient. (iv) The microrotation profiles decrease with an increase in the Prandtl number, material parameter, slip parameter, Soret number, Schmidt number, and chemical reaction parameter, whereas these parameters have the reverse effect on the couple stress coefficient. (v) The microrotation profiles and the couple stress coefficient decrease with an increase in the suction parameter, rotation parameter, and viscosity ratio. (vi) The thermal radiation parameter, permeability parameter, and slip parameter tend to enhance the translational velocity profiles throughout the boundary layer region as well as the skin friction coefficient. (vii) The Prandtl number, magnetic parameter, suction parameter, rotation parameter, Soret number, Schmidt number, chemical reaction parameter, and material parameter tend to reduce both the translational velocity profiles and the skin friction coefficient, while the viscosity ratio reduces the velocity but enhances the skin friction coefficient. (viii) The slip parameter increases both the translational velocity profiles and the skin friction coefficient. (ix) The thermal radiation parameter, Prandtl number, suction parameter, and heat absorption parameter tend to enhance the dimensionless rate of heat transfer, that is, the Nusselt number.

---
*Source: 102413-2014-10-30.xml*
2014
# Challenges and Their Practices in Adoption of Hybrid Cloud Computing: An Analytical Hierarchy Approach

**Authors:** Siffat Ullah Khan; Habib Ullah Khan; Naeem Ullah; Rafiq Ahmad Khan
**Journal:** Security and Communication Networks (2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1024139

---

## Abstract

Cloud computing adoption provides various advantages for companies. In particular, the hybrid cloud shares the advantages of both public and private cloud technologies, because it combines the private in-house cloud with the public on-demand cloud. In order to benefit from the opportunities provided by the hybrid cloud, organizations need to adopt or develop novel capabilities. Maturity models have proved to be an exceptional and easily available method for evaluating and improving capabilities. However, there is a dire need for a robust framework that helps client organizations in the adoption and assessment of hybrid cloud. This research paper therefore presents a taxonomy of the challenging factors faced by client organizations in the adoption of hybrid cloud. The taxonomy is based on the results of an empirical analysis carried out with the analytical hierarchy process (AHP) method. From the literature review and the empirical study, in total 13 challenging factors are recognized and grouped into four categories: “Lack of Inclination,” “Lack of Readiness,” “Lack of Adoption,” and “Lack of Satisfaction.” The AHP technique is executed to prioritize the identified factors and their groups. In this way, we found that “Lack of Adoption” and “Lack of Satisfaction” are the most significant groups among the identified challenging factors. The AHP findings also show that “public cloud security concern” and “achieving QoS” are the top-ranking factors confronted by client organizations in the adoption of the hybrid cloud mechanism, because their global weight (0.201) is greater than that of all the other reported challenging factors. We also identified 46 practices for addressing the identified challenges. The taxonomy developed in this study offers a comprehensive structure for dealing with hybrid cloud computing issues, which is essential for the success and advancement of client and vendor organizations in hybrid cloud computing relationships.

---

## Body

## 1. Introduction

Recently, the cloud computing mechanism has grown very rapidly, and it has many unique features such as elasticity, resource pooling, on-demand support, and broad network access [1, 2]. Technology-assisted learning is becoming more common, with most educational institutions across the globe using learning management systems, content management systems, virtual networks, and virtual machines to enhance student learning [3]. Educational institutions are nowadays even using private clouds to improve the student experience [3]. Cloud computing acquires some of the features of cluster computing, distributed computing, and grid computing but still has its own unique features [4, 5]. “Users of a cloud service only use the volume of IT resources they need, and only pay for the volume of IT resources they use” [6]. In the field of IT, cloud computing has brought a revolution, providing concepts different from the traditional IT environment [7]. Organizations of all sizes (small, medium, and large) have adopted and are investing in cloud computing-related techniques [8].
SMEs embrace cloud computing because it provides IT resources cost-effectively [9]. Cloud infrastructure deployment models include the public cloud, private cloud, hybrid cloud, and community cloud [10]. Typically, the service model of the cloud consists of “software as a service” (SaaS), “platform as a service” (PaaS), and “infrastructure as a service” (IaaS) [9]. The decision as to which model is suitable for a particular organization depends on various factors. “Hybrid cloud deployment model has proved more significant, both in terms of better economic aspects and business agility” [11, 12]. The National Institute of Standards and Technology (NIST) defines the hybrid cloud as “a combination of public and private clouds bound together by either standardized or proprietary technology that enables data and application portability.” The adoption of new technology requires many changes within the organization [13, 14].

The traditional cloud computing task offloading algorithm consumes abundant energy in task scheduling, which results in a longer average task waiting time [15]. For this reason, a cloud computing task offloading algorithm based on dynamic multiobjective evolution is proposed by the authors of [15]. In order to ensure the parallel completion of multiple tasks, the dynamic multiobjective evolution method is used to construct the cloud computing task scheduling model and complete cloud computing task scheduling [15]. Then, based on the calculated effectiveness and validity of the energy consumption, the initial operation distribution and offloading priority are determined, and the time and cost of task offloading are calculated according to the ranking of the task offloading priorities; cloud computing tasks are then distributed with minimum time and minimum cost as the goal. Hamouda et al. [16] proposed a reconfigurable formal model of the hybrid cloud architecture and then utilized instantiations of this model, simulation, and real-time execution runs to estimate different performance metrics related to fault detection and self-recovery strategies in the hybrid cloud.

The literature reveals that the theories and models developed by scholars mainly focus on factors that affect technology acceptance [17]. This paper is an extended version of our previous study [18]. Here, we review the latest work performed in the field of hybrid cloud computing and recognize the various challenging factors faced by client organizations during the adoption of cloud computing; we also identify practices for addressing these challenges. The primary research questions answered in this paper are the following:

RQ1: What are the challenging factors to be avoided by client organizations in adopting hybrid cloud computing, as identified in the literature and industrial survey?

RQ2: How can the identified challenging factors be prioritized via the AHP technique?

RQ3: What are the practices to be adopted by vendor organizations to develop effective relationships with client organizations in the adoption of the hybrid cloud mechanism, as described in the literature and industrial survey?

RQ4: What would be a taxonomy of the identified factors that could assist the stakeholders (clients and vendors) in developing an efficient partnership with each other in this domain?

This paper is organized as follows: Section 2 provides a background to cloud computing. The research process and methodology are described in Section 3.
In Section 4, findings from the SLRs, the empirical study, and the analytical hierarchy process (AHP) approach are presented and analyzed. In Section 5, a discussion of the study is presented. The research description is provided in Section 6. Section 7 explains the limitations of the research, followed by the conclusion and future work in Section 8.

## 2. Materials and Methods

Cloud computing has emerged as the fifth generation of computing and has brought a revolution in the way computation is performed. “Cloud computing doesn’t limit to the grid, parallel, and distributed computing but it involves the power of such paradigms at any level to form a resource pool” [19]. Various stakeholders, such as clients, developers, engineers, executives, academicians, and architects, define cloud computing differently [20]. “Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” [21]. Gartner [22] defines it as follows: “Cloud computing is a style of computing where massively scalable IT-related capabilities are provided as a service across the Internet to multiple external customers.” Forrester [23] states that “cloud computing is a standardized IT capability (services, software, or infrastructure) delivered via Internet technologies in a pay-per-use, self-service way.”

### 2.1. Cloud Computing Service Model

In the software industry, big players such as Microsoft, as well as other Internet technology heavyweights including Google and Amazon, are advancing the development of cloud services. Software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) are the three primary types of cloud computing services. “In SaaS computer applications are accessed over the Internet rather than being installed on a local computing device or in a local data center” [24]. DeskAway, Dropbox, SkyDrive (Windows Live), Mozy, Google Docs, Pixlr, Zoho Invoice, and CRM on-demand are some well-known SaaS examples. PaaS offers an online platform for the creation and operation of applications by software developers [6]. Force.com, Microsoft Windows Azure, and Google App Engine are some examples of PaaS. IaaS provides, as a service, hardware such as storage, network, and servers, along with relevant software such as the OS, file system, and virtualization technologies [25]. Joyent, EC2, Zimory, ElasticHosts, Amazon S3, Rackspace, and GoGrid are examples of IaaS service providers.

### 2.2. Cloud Computing Deployment Model

Cloud computing services and technologies are deployed in different types of delivery models based on their characteristics and intent, as well as the distinction between user classes [26]. Public, private, community, and hybrid clouds are the cloud deployment types. A public cloud is one in which the cloud infrastructure and computational services are made accessible over the Internet to the general public. It is operated and managed by a cloud company that provides customers with cloud services. A private or internal cloud is one in which a single entity manages the cloud infrastructure and computing environment exclusively. It may be operated by the company or a third party and may be held within or outside the data center of the organization.
A private cloud can give the enterprise greater control over the infrastructure, computing resources, and cloud customers than a public cloud. A community cloud is shared among several organizations and serves a particular community. It is to some extent similar to the private cloud, except that instead of a single entity, the technology and computing services are restricted to two or more organizations with shared privacy, protection, and regulatory considerations. Hybrid clouds are more complex than the other deployment models; they are a combination of public and private clouds bound together by either standardized or proprietary technology that enables data and application portability [21].

### 2.3. Hybrid Cloud Related Work

The majority of current research sheds light on different aspects of the hybrid cloud. For example, Ristova et al. [27] discuss the hybrid cloud and its utilization in the midmarket and propose a method for mass customization and its association in the cloud environment. Khadilkar et al. [28] propose a solution for data security and regulatory compliance in a hybrid cloud computing environment. Amrohi and Khadilkar [29] state that organizations utilizing the hybrid cloud can take advantage of both the public cloud and the private cloud. Heckel [30] provides some of the basic ideas of cloud computing and also discusses the technological requirements for establishing a hybrid cloud environment. Nepal et al. [31] provide a solution for secure data storage in hybrid cloud deployments. According to Javadi et al. [32], “a scalable hybrid cloud infrastructure as well as resource provisioning policies assure QoS targets of the users.” Tanimoto et al. [33] propose an enterprise data management method for a hybrid cloud configuration. According to Judith et al. [34], “if a few developers in a company use a public cloud service to prototype a new application that is completely disconnected from the private cloud or the data center, the company does not have a hybrid environment, but if a company uses a public development platform that sends data to a private cloud or a data center–based application, the cloud is hybrid.” According to Weinman [35], “under the right conditions, hybrid clouds can optimize costs while still exploiting the benefits of public clouds such as geographic dispersion and business agility.” A cloud-based security company (Trend Micro) indicated via an empirical survey that “public cloud services fail to meet the IT and business requirements of some of the business organizations.” Alternatively, the “safer option,” the private cloud, requires significant infrastructure and operations development along with new skill sets for the IT staff. Although there are ways of balancing each of these concerns, this will ultimately lead to a hybrid of these environments, along with an array of other noncloud environments. Khan and Ullah [18] “surveyed storage and server decision-makers at North American, Asian Pacific, and European enterprises and found that various hybrid cloud implementations were the preferred approach.”
## 3. Research Methodology

The proposed research methodology is presented in Figure 1 and consists of the following three phases.

Figure 1 Research methodology.

### 3.1. SLR Conduction

Stage 1: identifying the challenges faced by client organizations, and their practices, in the adoption of hybrid cloud computing. In stage 1, two systematic literature reviews (SLRs) were conducted to extract the relevant data: one to identify the challenges faced by client organizations in the adoption of hybrid cloud computing [18, 36] and another to identify practical solutions for these challenges. We followed the SLR approach because an SLR differs from an ordinary literature review and requires more time and effort to complete [37–39]. We studied several SLRs [37–39] for guidance. We initially developed the SLR protocol, which was validated and has been published [36]. The SLR1 protocol was then implemented, and the findings of SLR1 have been published [18]. Through SLR1, we identified 12 challenges in this domain. Among these challenges, 8 were considered critical on the basis of their high frequency. For these critical challenges, we then conducted SLR2 and identified 46 practices from a sample of 90 papers.

### 3.2. Empirical Study Conduction

Stage 2: validating the findings of the SLRs and identifying new challenges faced by client organizations, and their practices, in the adoption of hybrid cloud computing. In stage 2, a survey of 42 hybrid cloud computing experts was conducted to verify the results of the SLRs and to recognize other significant challenges and their practices. An empirical survey refers to experimental research that gathers qualitative and quantitative data from a sample of a population [35]. Empirical surveys are the most commonly used tool for collecting implicit data on an issue [40]. A similar approach was followed by other researchers [41–43].

### 3.3. Application of AHP

Stage 3: prioritizing the identified challenges with their respective categories. For the purpose of prioritizing the listed challenges and their corresponding categories, the analytical hierarchy process (AHP) approach is used.
AHP was developed by Saaty [44] and is a popular classical multiple-criteria decision-making (MCDM) method. Such an approach to ranking and prioritizing given variables is accurate and precise. The main aim of this study is to rank and prioritize the hybrid cloud computing challenges faced by client organizations; classical AHP is therefore well suited to the analysis of the data obtained through the survey. In addition, AHP has previously been used to cope with complex decision-making issues in numerous other research areas. The steps of the AHP application are presented in Figure 2. AHP's three major stages are as follows.

Figure 2 Phases of AHP.

#### 3.3.1. Decomposition of a Complex Decision Problem to a Simple Hierarchy Structure

Here, the decision-making problem is decomposed into related decision elements [45, 46]. The hierarchical structure of the problem is divided into at least three levels: level 1 presents the goal of the problem, level 2 gives the challenges, and level 3 presents the subchallenges, as depicted in Figure 3.

Figure 3 Hierarchical structure of problem.

#### 3.3.2. Survey Regarding the Pairwise Comparison

In order to apply the aforementioned AHP approach to the prioritization and categorization of the challenges, we conducted a survey with the senior members of the Software Engineering Research Group, University of Malakand (SERGUOM). In total, 8 respondents gave positive feedback and were therefore included in the second phase of the questionnaire survey. The Supplementary Material provides the questionnaire of the second survey sample. The data were obtained from only 8 survey participants, and this small sample may threaten the later results of this study; however, AHP is a subjective methodology and can accommodate small data samples [45, 46]. Other researchers [47–51] have adopted a similar strategy with relatively small sample sizes.

#### 3.3.3. Pairwise Comparisons

To calculate the priority weights of the identified challenges, a pairwise comparison of these challenges was conducted. At each level, the challenges were compared based on their degree of impact and on the criteria specified at the level above [52]. The comparison criteria [C] = {C_x | x = 1, 2, ..., n} yield the evaluation matrix A = (a_xy) (x, y = 1, 2, ..., n), whose entries represent the normalized relative weights, as shown in equation (1), where a_yx = 1/a_xy and a_xy > 0:

$$A=\begin{pmatrix}1 & a_{12} & \cdots & a_{1n}\\ a_{21} & 1 & \cdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \cdots & 1\end{pmatrix}.\tag{1}$$

To indicate the degree of importance in the adoption challenges faced by client organizations, we illustrate the pairwise comparison of two challenging factors, CH1 and CH2. For example, if CH1 is five degrees more important than CH2, then CH2 is scored 1/5 relative to CH1, as shown in Tables 1 and 2. Applying the same principle, we performed the pairwise comparisons for all identified challenging factors and their categories in Section 4; a small computational illustration is given below.

Table 1 Example of pairwise comparison.

| | CH1 | CH2 |
|---|---|---|
| CH1 | 1 | 5 |
| CH2 | 1/5 | 1 |
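As a concrete illustration of the comparison step, the following is a minimal Python sketch of ours (not part of the original study) that derives priority weights from the 2 × 2 example of Table 1 using the column-normalization and row-averaging procedure listed after Table 2:

```python
import numpy as np

# Pairwise comparison matrix from Table 1: CH1 is judged five times as important as CH2.
C = np.array([[1.0, 5.0],
              [1.0 / 5.0, 1.0]])

# Normalize each column by its sum, then average across each row to get the priority weights.
normalized = C / C.sum(axis=0)
w = normalized.mean(axis=1)

print(dict(zip(["CH1", "CH2"], w.round(3))))   # {'CH1': 0.833, 'CH2': 0.167}
```

For a perfectly consistent 2 × 2 matrix such as this one, the resulting weights reproduce the elicited 5 : 1 judgment exactly.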
Table 2 Description of intensity scale.

| Description | Significance intensity |
|---|---|
| Equally important | 1 |
| Moderately important | 3 |
| Strongly more important | 5 |
| Very strongly more important | 7 |
| Extremely more important | 9 |
| Intermediate values | 2, 4, 6, 8 |

In order to assess the rank of the identified challenges with the corresponding categories, the standard 9-point comparison scale was applied, as depicted in Table 2. The priority weight is determined from the pairwise comparison matrixes as follows: (1) C refers to the pairwise comparison of the recognized challenging factors; (2) the normalized matrix [C] divides each element of every column by the sum of its column; (3) the priority weight [W] is the average of each row of the normalized matrix [C].

#### 3.3.4. Checking the Consistency for Pairwise Comparison Matrix

Shameem et al. [45] mention that the pairwise comparison matrix in AHP should be consistent, which can be checked using the consistency index (CI) and consistency ratio (CR) given in the following equations:

$$\mathrm{CI}=\frac{\lambda_{\max}-n}{n-1},\tag{2}$$

$$\mathrm{CR}=\frac{\mathrm{CI}}{\mathrm{RI}}.\tag{3}$$

The principal eigenvalue λmax can be estimated by multiplying the priority weight vector W by the column sums of the comparison matrix (see Section 4), where n is the total number of identified challenges in the given pairwise comparison matrix. RI in (3) is the random consistency index, whose value varies with the size of the matrix (see Table 3). The permissible CR value goes up to 0.10, and the challenge priority vector is acceptable only if the CR value is less than 0.10. If the CR value is not within this range, the process must be repeated to improve the degree of consistency. Section 4 of this paper presents the estimated CR value for each comparison matrix.

Table 3 RI value with respect to matrix size.

| Size of matrix | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| RI | 0 | 0 | 0.58 | 0.9 | 1.12 | 1.24 | 1.32 | 1.41 | 1.45 | 1.49 |
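To make the consistency check of (2) and (3) concrete, the sketch below extends the previous example to a hypothetical 3 × 3 judgment matrix (invented for illustration, not taken from the survey data) and computes λmax, CI, and CR using the RI values of Table 3:

```python
import numpy as np

# Random consistency index RI, indexed by matrix size (Table 3).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.9, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(C):
    """Return (lambda_max, CI, CR) for a pairwise comparison matrix C."""
    n = C.shape[0]
    w = (C / C.sum(axis=0)).mean(axis=1)    # priority weights: column normalization + row average
    lam_max = float((C @ w / w).mean())     # estimate of the principal eigenvalue
    CI = (lam_max - n) / (n - 1)
    CR = CI / RI[n] if RI[n] > 0 else 0.0   # RI = 0 for n <= 2, where CR is defined as 0
    return lam_max, CI, CR

# Hypothetical 3 x 3 judgments on the 9-point scale (illustration only).
C = np.array([[1.0,   3.0, 5.0],
              [1 / 3., 1.0, 2.0],
              [1 / 5., 1 / 2., 1.0]])
lam_max, CI, CR = consistency_ratio(C)
print(f"lambda_max = {lam_max:.3f}, CI = {CI:.3f}, CR = {CR:.3f}")
# Prints CR of about 0.003, well below the 0.10 threshold.
```

Since the CR falls below the 0.10 threshold, this example priority vector would be accepted; a matrix exceeding the threshold would have to be re-elicited, as noted above.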
## 4. Results

### 4.1. List of Challenging Factors Identified via SLR

Table 4 displays the list of challenges found via SLR1 that are regarded as the main roadblocks to the adoption of hybrid cloud. Our results show that "public cloud security concern" tops all the challenges, being cited in 58% of the studies.
“This is because in hybrid cloud data security risk is high as some of the data is exposed to public cloud from the private cloud” [53]. According to Li et al. [54], “this challenge relates to keeping the amount of sensitive data that is exposed to the public machines.” Balasubramanian and Murugaiyan [55] argue that “hybrid cloud model transfers selective data between public and private clouds.” According to Wang et al. [56], “data externalization towards services deployed on the public cloud creates security problems coming from the data issued by public cloud services.”

Table 4 List of challenging factors identified through SLR1.

| Public hybrid cloud computing challenges | Freq | % | References |
|---|---|---|---|
| Public cloud security concern | 69 | 58 | [1], [2], [9], [13], [17], [18], [20], [23], [24], [27], [30], [33], [34], [35], [39], [41], [42], [43], [44], [45], [47], [50], [53], [54], [55], [56], [58], [59], [61], [63], [65], [66], [67], [68], [71], [72], [73], [76], [78], [79], [80], [81], [82], [83], [84], [85], [86], [87], [89], [91], [92], [93], [95], [96], [97], [98], [99], [100], [101], [103], [104], [105], [106], [110], [112], [114], [115], [116], [120] |
| Effective management issue | 34 | 28 | [1], [5], [8], [9], [15], [17], [18], [27], [28], [31], [37], [38], [39], [40], [44], [50], [56], [66], [70], [74], [77], [78], [80], [84], [86], [88], [89], [102], [104], [105], [106], [107], [113], [120] |
| Integration complexity | 27 | 23 | [5], [6], [17], [18], [32], [56], [58], [60], [65], [66], [69], [72], [73], [77], [78], [79], [80], [86], [89], [92], [96], [100], [103], [105], [113], [116], [117] |
| Achieving QoS | 15 | 13 | [4], [5], [11], [14], [23], [41], [57], [61], [65], [94], [106], [107], [108], [111], [119] |
| Components partitioning | 15 | 13 | [3], [9], [12], [16], [20], [21], [52], [57], [60], [62], [68], [70], [109], [115], [118] |
| Lack of trust | 14 | 12 | [7], [13], [19], [24], [30], [39], [54], [64], [71], [72], [85], [92], [114], [116] |
| SLA assurance | 14 | 12 | [16], [17], [25], [26], [30], [39], [41], [47], [48], [61], [66], [69], [80], [85] |
| Task scheduling and execution | 13 | 11 | [4], [10], [22], [29], [36], [49], [51], [52], [70], [74], [75], [90], [93] |
| Appropriate cloud offering | 5 | 4 | [38], [46], [86], [88], [102] |
| Data searching | 1 | 0.8 | [114] |
| Cost driven scheduling of services | 1 | 0.8 | [121] |
| Lack of sharing of resources across multiple concerns | 1 | 0.8 | [122] |

The results also show that the “effective management issue” (28%) is the second most frequently cited challenge. This is because moving from a public cloud environment to a hybrid cloud requires effective management of the cloud infrastructure [57]. According to Bhadauria et al. [58], “the risk of outsourced services going out of control is high in a hybrid cloud environment and key management becomes a difficult task in such situations.”

“Integration complexity” was listed at the 3rd position among the most frequently cited challenges in our findings (23%). The technical integration of a hybrid cloud is perceived to be much more difficult and a significant barrier to adoption: “Integration of one or more public and private clouds into a hybrid system can be more challenging than integrating on-premises systems” [59].
According to Javidi et al. [60], “a mechanism for integrating private and public clouds is one of the major issues that need to be addressed for realizing hybrid cloud computing infrastructure.”

The “quality of service (QoS)” is another challenge in hybrid cloud adoption, which we have already discussed in the literature section. Jian and Sheng [61] determined that “different components of the hybrid infrastructure provide different QoS guarantees, efficient policies to integrate public and private cloud. However, to assure QoS target of the users remain a challenging job.”

“Lack of trust has been reported as a challenge in the adoption of hybrid” [62]. Noor et al. [63] argue that “due to the fact that data owners and cloud storage providers are dispersed across distinct global sites, due to which it becomes difficult to establish trust between the client and public cloud provider in the hybrid cloud environment.”

Following the structure established by Shameem et al. [45], the identified challenges were further mapped into four categories, as shown in Figure 4.

Figure 4 Categorization of hybrid cloud computing challenges.

### 4.2. Empirical Investigation

In order to empirically verify the results of the SLR, we performed an online questionnaire survey; this section contains the findings of our analysis of the data obtained. The questionnaire covered demographic information and the hybrid cloud adoption challenges identified through the SLR, and it consisted of three sections. Every section also contained open-ended questions for the purpose of extracting any further challenges beyond those identified via SLR1. A seven-point Likert scale, i.e., “extremely agree (EA),” “moderately agree (MA),” “slightly agree (SA),” “not sure (NS),” “slightly disagree (SD),” “moderately disagree (MD),” and “extremely disagree (ED),” was applied to elicit the views of the respondents about the identified challenges.

For data collection, a request was posted in different LinkedIn groups having more than fifty thousand members in total across the globe (see Table 5). Further, we also sent requests to different companies in Pakistan utilizing cloud services, as shown in Table 6, to participate in the questionnaire survey. In total, 60 experts responded to our invitation, expressing their willingness through e-mail to participate. The questionnaire was shared with these experts after receiving their consent for participation, and 33 participants took part in the survey. Among the filled questionnaires, 3 were rejected because they did not meet our predefined quality criteria. Accordingly, 30 responses were selected as the final sample and used for the analysis, giving a response rate of 50%.

Table 5 List of LinkedIn online cloud professionals.

| S. no. | Group name (available online at LinkedIn website) | Total members (at the time of request) | Date (request posted) |
|---|---|---|---|
| 1 | Canada Cloud Network | 681 | 14 April 2015 |
| 2 | Cloud Architect and Professionals Network | 6,354 | 14 April 2015 |
| 3 | Conversations on Cloud Computing | 10,132 | 14 April 2015 |
| 4 | Cloud Computing Best Practices | 7,950 | 14 April 2015 |
| 5 | Hybrid Cloud User Group | 66 | 15 April 2015 |
| 6 | SAP Cloud Computing (private, public, or hybrid) | 1,531 | 15 April 2015 |
| 7 | TalkinCloud | 1,010 | 15 April 2015 |
| 8 | Windows Azure & Microsoft Cloud | 10,467 | 16 April 2015 |
| 9 | Cloud Computing, Microsoft UK | 11,088 | 16 April 2015 |
| 10 | IEEE Cloud Computing | 5,719 | 16 April 2015 |

Table 6 List of some famous software development companies in Pakistan.
| S. no. | Software company name | Address | Date of request sent |
|---|---|---|---|
| 1 | Subhash Educational Complex | Arbab Road, Peshawar, KPK | 14 April 2015 |
| 2 | Haq Institute of Computer and IT | Main Ghazi Road, Lahore | 14 April 2015 |
| 3 | PurePush | Abasyn Street 6, I/9, Islamabad, Pakistan | 14 April 2015 |
| 4 | DatumSquare IT Services Pvt. Ltd. | Software Technology Park, I-9/3, Islamabad | 14 April 2015 |
| 5 | Macrosoft Pakistan | Abu Bakar Block, New Garden Town, Lahore, Pakistan | 15 April 2015 |
| 6 | Xavor Corporation | Masood Anwari Rd, Cavalry Extension, Cavalry Ground, Lahore | 15 April 2015 |
| 7 | Xerox Soft (Pvt.) Ltd. | Deans Trade Center, Peshawar | 15 April 2015 |
| 8 | Ovex technologies | 1st Floor, KSL Complex, Software Technology Park, Plot No. 156, I-9/3 Industrial Area, Islamabad | 16 April 2015 |
| 9 | Techlogix | Empress Road, Lahore, Pakistan | 16 April 2015 |

Table 7 shows the list of challenges in the adoption of hybrid cloud that were identified and validated via the empirical study, together with the distribution of responses on the 7-point Likert scale for each challenge.

Table 7 List of challenges identified via empirical study (expert perception, n = 30).

| S. no. | Identified challenging factors with categorization | EA | MA | SA | Positive % | SD | MD | ED | Negative % | NS | Neutral % |
|---|---|---|---|---|---|---|---|---|---|---|---|
| C1 | Category: lack of Inclination | 20 | 1 | 2 | 86 | 2 | 1 | 1 | 14 | 3 | 10 |
| CH1 | Effective management issue | 19 | 6 | 4 | 97 | 0 | 0 | 0 | 0 | 1 | 3 |
| CH2 | SLA assurance | 20 | 6 | 2 | 93 | 0 | 0 | 0 | 0 | 2 | 7 |
| CH3 | Cost-driven scheduling of service | 10 | 8 | 3 | 70 | 3 | 2 | 1 | 23 | 3 | 10 |
| C2 | Category: lack of Readiness | 21 | 2 | 3 | 89 | 0 | 1 | 1 | 4 | 2 | 7 |
| CH4 | Task scheduling and execution | 19 | 3 | 6 | 93 | 0 | 0 | 0 | 0 | 2 | 7 |
| CH5 | Data searching | 18 | 7 | 3 | 93 | 0 | 1 | 0 | 3 | 1 | 3 |
| CH6 | Integration complexity | 20 | 3 | 3 | 86 | 2 | 0 | 0 | 7 | 2 | 7 |
| CH7 | Components partitioning | 19 | 2 | 4 | 83 | 1 | 0 | 0 | 3 | 4 | 14 |
| C3 | Category: lack of Adoption | 22 | 2 | 2 | 88 | 1 | 1 | 1 | 9 | 1 | 3 |
| CH8 | Public cloud security concern | 23 | 2 | 4 | 97 | 0 | 1 | 0 | 2 | 0 | 0 |
| CH9 | Lack of trust | 24 | 3 | 1 | 93 | 0 | 0 | 0 | 0 | 2 | 7 |
| CH10 | Lack of sharing resources across multiple clients | 9 | 7 | 6 | 73 | 2 | 1 | 3 | 27 | 2 | 7 |
| C4 | Category: lack of Satisfaction | 20 | 3 | 2 | 83 | 2 | 0 | 0 | 7 | 3 | 10 |
| CH11 | Achieving QoS | 20 | 6 | 3 | 97 | 0 | 0 | 0 | 0 | 1 | 3 |
| CH12 | Appropriate cloud offering | 16 | 11 | 0 | 90 | 0 | 0 | 0 | 0 | 3 | 10 |
| CH13 | Delays in response time | 10 | 6 | 4 | 66 | 2 | 1 | 3 | 20 | 4 | 14 |

Table 7 shows that ≥80% of the respondents agreed that “public cloud security concern” (90%), “lack of trust” (87%), and “integration complexity” (83%) are the biggest challenges found in this domain. More than 60% of the respondents agreed on “QoS” (67%), “SLA (service level agreement) assurance” (67%), “task scheduling challenges” (63%), “effective management” (63%), and “component partitioning” (63%); we suggest implementing efficient policies for these challenges. Although there is little evidence in the literature for the “data searching” challenge, 60% of the respondents agreed that it is also a challenging factor in the embracing of hybrid cloud.
Whenever similar values occurred, we assigned an average rank and then approximated the value of the next rank. We further noted that rankings for cited challenges across both data sets were not the same. We used the Spearman’s rank correlation test (formula) and identified a value of R = 0.54, reflecting the frequency of a challenge’s reference in the SLR correlation with its citation frequency among survey participants. This allows a relative appraisal for the similarity of importance for each SLR challenge compared to survey results.Table 8 Crossway comparison of public hybrid cloud computing challenges identified through SLR1 and questionnaire survey. S. no.Public hybrid cloud computing challengesOccurrences in SLR1 (N = 121)Positive agreement (%) in the questionnaire survey (N = 33)dd2%Rank%Rank1Public cloud security concern581971.5−0.50.252Effective management issue282971.50.50.253Integration complexity233869−6364Achieving QoS134.5971.52.56.255Components partitioning134.58310−5.530.256Lack of trust126.5934.5247SLA assurance126.5934.5248Task scheduling and execution118934.53.512.259Appropriate cloud offering499081110Data searching0.810.5934.563611Cost driven scheduling of services0.810.57012−1.52.2512Lack of sharing of resources across multiple concerns0.810.57311−0.50.25n = 12Ʃd2 = 132.5(4)Spearman rank correlationR=1−6Σd2n3−n=1−6×132.5123−12=1−7951728−12=1−7951716=1−0.46=0.54. ### 4.4. Application of AHP The AHP method and its implementation are highlighted briefly in the following steps:Step 1: decomposition of a complex-decision problem into a simple hierarchical structure.Based on the corresponding categorization from Figure4, the hierarchical structure is built and is shown in Figure 5. In the first level, the fundamental purpose of this analysis is presented; the corresponding challenging factors with the categories are offered at levels 2 to 3 of Figure 5, respectively.Step 2: development of the pairwise comparison matrix and calculation of the prioritization AL weight.For each group of these challenges with its corresponding categories, the generation of pairwise comparison matrix was performed on the basis of data collected via AHP. All details for such a comparison (“Lack of Inclination,” “Lack of Readiness,” “Lack of Adoption,” and “Lack of Satisfaction”) are given in Tables9–12.Likewise, in Table13, the results for the stated pairwise comparison matrix categories are produced. We also used the normalized matrix comparison for the purpose of calculating the weights of these challenging factors.The normalized values of each challenging factor are determined by dividing its value by the number of corresponding columns. The complete details of normalized matrix of each category (“Lack of Inclination,” “Lack of Readiness,” “Lack of Adoption,” and “Lack of Satisfaction”) are provided in Tables9, 14, 15, and 16, respectively. Table 17 shows the findings of the normalized matrix comparison groups.For each challenge, the weight value (W) is determined from the average number of the normalized values of the respective rows. For example, the weight values given in Table18 show that CH1 is the most significant challenging factor in the “Lack of Inclination” category because it has high value as compared to the other challenging factors of the same category. 
### 4.4. Application of AHP

The AHP method and its implementation are briefly highlighted in the following steps.

Step 1: decomposition of the complex decision problem into a simple hierarchical structure. Based on the categorization from Figure 4, the hierarchical structure is built as shown in Figure 5. The first level presents the fundamental purpose of this analysis; the categories and their corresponding challenging factors are presented at levels 2 and 3 of Figure 5, respectively.

Step 2: development of the pairwise comparison matrices and calculation of the priority weights. For each group of challenges and its corresponding category, the pairwise comparison matrix was generated on the basis of the data collected via the AHP survey. The details of these comparisons (“Lack of Inclination,” “Lack of Readiness,” “Lack of Adoption,” and “Lack of Satisfaction”) are given in Tables 9–12, and Table 13 gives the pairwise comparison matrix between the categories themselves. We then used the normalized comparison matrices to calculate the weights of the challenging factors: the normalized value of each entry is obtained by dividing it by the sum of its column. The normalized matrices of the categories are provided in Tables 9, 14, 15, and 16, and Table 17 shows the normalized matrix for the categories of challenging factors. For each challenge, the weight value (W) is the average of the normalized values of its row. For example, the weights given in Table 14 show that CH1 is the most significant challenging factor in the “Lack of Inclination” category because it has the highest value compared to the other factors of the same category, whereas CH3 is the least significant factor because it has the lowest weight value.

Step 3: checking of consistency.

Figure 5 Hierarchical structure of challenging factors.

Table 9 Normalized matrix of “Lack of Adoption” category.

| S. no. | CH8 | CH9 | CH10 | Weight (W) |
|---|---|---|---|---|
| CH8 | 0.61 | 0.63 | 0.36 | 0.53 |
| CH9 | 0.30 | 0.32 | 0.57 | 0.40 |
| CH10 | 0.09 | 0.05 | 0.07 | 0.07 |

λmax = 3.067, CI = 0.034, RI = 0.58, CR = 0.06 < 0.1.

Table 10 Pairwise matrix comparison for the challenging factors of “Lack of Readiness” category.

| S. no. | CH4 | CH5 | CH6 | CH7 |
|---|---|---|---|---|
| CH4 | 1 | 2 | 3 | 6 |
| CH5 | 1/2 | 1 | 3 | 5 |
| CH6 | 1/3 | 1/3 | 1 | 4 |
| CH7 | 1/6 | 1/5 | 1/4 | 1 |
| Sum | 2.00 | 3.58 | 7.25 | 16.00 |

Table 11 Pairwise matrix comparison for the challenging factors of “Lack of Adoption” category.

| S. no. | CH8 | CH9 | CH10 |
|---|---|---|---|
| CH8 | 1 | 2 | 5 |
| CH9 | 1/2 | 1 | 8 |
| CH10 | 1/5 | 1/8 | 1 |
| Sum | 1.64 | 3.16 | 14.00 |

Table 12 Pairwise matrix comparison for the challenging factors of “Lack of Satisfaction” category.

| S. no. | CH11 | CH12 | CH13 |
|---|---|---|---|
| CH11 | 1 | 3 | 8 |
| CH12 | 1/3 | 1 | 6 |
| CH13 | 1/8 | 1/6 | 1 |
| Sum | 1.45 | 4.17 | 15.00 |

Table 13 Pairwise matrix comparison between categories of challenging factors.

| S. no. | Lack of Inclination | Lack of Readiness | Lack of Adoption | Lack of Satisfaction |
|---|---|---|---|---|
| Lack of Inclination | 1 | 1/2 | 1/3 | 1/3 |
| Lack of Readiness | 2 | 1 | 1/4 | 1/4 |
| Lack of Adoption | 1/3 | 1/3 | 1 | 4 |
| Lack of Satisfaction | 1/6 | 1/5 | 1/4 | 1 |
| Sum | 2.00 | 3.58 | 7.25 | 16.00 |

Table 14 Normalized matrix of “Lack of Inclination” category.

| S. no. | CH1 | CH2 | CH3 | Weight (W) |
|---|---|---|---|---|
| CH1 | 0.68 | 0.72 | 0.50 | 0.63 |
| CH2 | 0.22 | 0.24 | 0.43 | 0.30 |
| CH3 | 0.10 | 0.04 | 0.07 | 0.07 |

λmax = 3.099, CI = 0.049, RI = 0.58, CR = 0.09 < 0.1.

Table 15 Normalized matrix of “Lack of Readiness” category.

| S. no. | CH4 | CH5 | CH6 | CH7 | Weight (W) |
|---|---|---|---|---|---|
| CH4 | 0.50 | 0.56 | 0.41 | 0.38 | 0.46 |
| CH5 | 0.25 | 0.28 | 0.41 | 0.31 | 0.31 |
| CH6 | 0.17 | 0.09 | 0.14 | 0.25 | 0.16 |
| CH7 | 0.09 | 0.07 | 0.03 | 0.06 | 0.06 |

Table 16 Normalized matrix of “Lack of Adoption” category.

| S. no. | CH8 | CH9 | CH10 | Weight (W) |
|---|---|---|---|---|
| CH8 | 0.61 | 0.63 | 0.36 | 0.53 |
| CH9 | 0.30 | 0.32 | 0.57 | 0.40 |
| CH10 | 0.09 | 0.05 | 0.07 | 0.07 |

λmax = 3.067, CI = 0.034, RI = 0.58, CR = 0.06 < 0.1.

Table 17 Normalized matrix for the categories of challenging factors.

| S. no. | Lack of Inclination | Lack of Readiness | Lack of Adoption | Lack of Satisfaction | Weight (W) |
|---|---|---|---|---|---|
| Lack of Inclination | 0.11 | 0.05 | 0.13 | 0.13 | 0.10 |
| Lack of Readiness | 0.22 | 0.11 | 0.10 | 0.10 | 0.13 |
| Lack of Adoption | 0.33 | 0.42 | 0.39 | 0.39 | 0.38 |
| Lack of Satisfaction | 0.33 | 0.42 | 0.39 | 0.39 | 0.38 |

λmax = 4.199, CI = 0.040, RI = 0.9, CR = 0.04 < 0.1.

Table 18 Pairwise matrix comparison for the extracted challenging factors of “Lack of Inclination” category.

| S. no. | CH1 | CH2 | CH3 |
|---|---|---|---|
| CH1 | 1 | 3 | 7 |
| CH2 | 1/3 | 1 | 6 |
| CH3 | 1/7 | 1/6 | 1 |
| Sum | 1.47 | 4.17 | 14.00 |

The degree of consistency of a category can be measured via the parameters discussed in Section 3. Taking the “Lack of Readiness” category as an example, with the comparison matrix of Table 10 and the weight vector of Table 15,

$$\lambda_{max} = \sum \left( \Sigma C_j \times W_j \right), \tag{5}$$

where ΣC_j is the sum of column j of the comparison matrix [C] (Table 10) and W_j is the corresponding weight (Table 15). Thus

$$\lambda_{max} = 2.00 \times 0.46 + 3.58 \times 0.31 + 7.25 \times 0.16 + 16.00 \times 0.06 = 0.92 + 1.1098 + 1.16 + 0.96 = 4.1498, \tag{6}$$

$$CI = \frac{\lambda_{max} - n}{n - 1} = \frac{4.1498 - 4}{4 - 1} = 0.049, \qquad RI = 0.9, \qquad CR = \frac{CI}{RI} = \frac{0.049}{0.9} = 0.054 < 0.1,$$

so the consistency is acceptable.

Table 19 Summarized list of the ranked challenging factors.

| Categories | Category weight | Challenging factors | Local weight | Local ranking | Global weight | Priority |
|---|---|---|---|---|---|---|
| Lack of Inclination | 0.10 | CH1 | 0.63 | 1 | 0.063 | 5 |
| | | CH2 | 0.30 | 2 | 0.030 | 8 |
| | | CH3 | 0.07 | 3 | 0.070 | 4 |
| Lack of Readiness | 0.13 | CH4 | 0.46 | 1 | 0.060 | 6 |
| | | CH5 | 0.31 | 2 | 0.040 | 7 |
| | | CH6 | 0.16 | 3 | 0.020 | 11 |
| | | CH7 | 0.06 | 4 | 0.078 | 3 |
| Lack of Adoption | 0.38 | CH8 | 0.53 | 1 | 0.201 | 1 |
| | | CH9 | 0.40 | 2 | 0.152 | 2 |
| | | CH10 | 0.07 | 3 | 0.026 | 10 |
| Lack of Satisfaction | 0.38 | CH11 | 0.53 | 1 | 0.201 | 1 |
| | | CH12 | 0.40 | 2 | 0.152 | 2 |
| | | CH13 | 0.07 | 3 | 0.026 | 9 |

The result indicates that the CR value is lower than 0.1, which is within the acceptable range.
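The arithmetic of equations (5) and (6) can be verified in a few lines (a sketch of ours; the column sums come from Table 10 and the rounded weights from Table 15):

```python
col_sums = [2.00, 3.58, 7.25, 16.00]   # column sums of Table 10
W = [0.46, 0.31, 0.16, 0.06]           # weight vector from Table 15

lam_max = sum(c * w for c, w in zip(col_sums, W))  # equation (5)
n = 4
CI = (lam_max - n) / (n - 1)           # equation (2)
CR = CI / 0.9                          # RI = 0.9 for a 4x4 matrix (Table 3)
print(round(lam_max, 4), round(CI, 3), round(CR, 3))  # 4.1498 0.05 0.055
assert CR < 0.10                       # the judgments are acceptably consistent
```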
For the challenging factors in all the other corresponding categories, the same consistency-checking method is used, and the estimated CR values are given in Tables 9, 10, 14, 16, and 18.

Step 4: provision of local and global rankings for the challenging factors and their categories. For each of the stated challenges, the local weight (LW) and global weight (GW) are presented in Table 19, where the local weight reflects the importance of a challenging factor within its own category, while the global weight shows its priority across all 13 identified challenging factors. The LW was determined by performing the pairwise comparison of the challenging factors within each category (see step 3). For example, Table 19 shows that the LW of CH1 (0.63) is the highest weight in the “Lack of Inclination” category; therefore, CH1 is also the top-ranked challenging factor in that category. Likewise, we calculated the GW of each reported challenging factor by multiplying its LW by the weight of its category. For example, the GW of challenging factor CH1 = 0.10 × 0.63 = 0.063, where 0.10 is the weight of its category (Lack of Inclination) and 0.63 is its LW. The same process is repeated for the remaining challenging factors, and their GWs are calculated as presented in Table 19.

Step 5: finalized prioritization of the challenging factors. The final priority of the challenging factors is based on their GWs, as presented in Table 19: factors with the highest GW across all categories are given top priority. In Table 19, CH8 (public cloud security concern) and CH11 (achieving QoS) are considered the top-ranking challenging factors because their GW value (0.201) is the highest compared to the other factors. We further noted that CH12 (appropriate cloud offering) is the second-highest challenging factor that could adversely affect the adoption of hybrid cloud computing from the client organizations' perspective.
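Steps 4 and 5 reduce to one multiplication and one sort. The following sketch (ours, using the “Lack of Inclination” values quoted above) reproduces the 0.063 global weight of CH1 from Table 19:

```python
# Category weight and local weights for "Lack of Inclination" (Table 19).
category_weight = 0.10
local_weights = {"CH1": 0.63, "CH2": 0.30, "CH3": 0.07}

# Step 4: global weight = category weight x local weight.
global_weights = {ch: round(category_weight * lw, 3)
                  for ch, lw in local_weights.items()}
print(global_weights)   # {'CH1': 0.063, 'CH2': 0.03, 'CH3': 0.007}

# Step 5: prioritize factors by descending global weight.
priority = sorted(global_weights, key=global_weights.get, reverse=True)
print(priority)         # ['CH1', 'CH2', 'CH3']
```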
### 4.5. Practices for Critical Challenges Identified through SLR2 and Empirical Study

In this section, a list of practices is presented for each critical challenge (CC) identified through SLR1. We considered 8 challenges to be critical because they have high frequencies in both the SLR1 and the empirical study findings. In total, we identified 46 practices for the 8 CCs via SLR2 and the questionnaire survey with 30 experts. Under each challenge, we present the relevant practices for addressing it; in Tables 20–27, CC denotes a critical challenge and CCP denotes a practice for addressing that challenge.

Table 20 Practices for addressing critical challenge “SLA assurance” (CC#1).

| S. no. | Practices for addressing SLA assurance | Frequency in SLR2 (N = 90) | Positive % (survey) |
|---|---|---|---|
| CCP#1.1 | Ensure that the maximum availability of services provided by cloud providers and the duration of the contract period are explicitly defined in the SLA. | 2 | 98 |
| CCP#1.2 | Define explicitly in the SLA the terms and conditions regarding the security of the clients' data. | 8 | 81 |
| CCP#1.3 | Keep the clients aware of where the processes are running or where the data is stored to ensure the security of the clients' data. | 2 | 96 |
| CCP#1.4 | To mitigate the risk of a cloud provider failure, specify reversion strategies in the SLA. These put cloud customers in a much stronger position when renegotiating a cloud service contract, because the customers know that they could readily switch from the provider if needed. | 2 | 94 |
| CCP#1.5 | Perform third-party auditing regularly to monitor the cloud service provider's compliance with the agreed terms. | 4 | 92 |
| CCP#1.6 | Ensure that service level agreements state the contingency plans in case of a breakdown of the system. | 6 | 97 |
| CCP#1.7 | An on-premises gateway should be used in the hybrid cloud for controlling the applications and data that flow from each part to the other. | 6 | 84 |
| CCP#1.8 | Categorize the data into two parts, i.e., sensitive and nonsensitive. Place the sensitive data on the on-premises side (private cloud), whereas nonsensitive data should be kept in the public cloud. | 16 | 100 |
Accordingly, 30 responses were selected as the final sample and used for the analysis, showing a response rate of 50%.Table 5 List of LinkedIn online cloud professionals. S. no.Group name (available online at LinkedIn website)Total members (at the time of request)Date (request posted)1Canada Cloud Network68114 April 20152Cloud Architect and Professionals Network6,35414 April 20153Conversations on Cloud Computing10,13214 April 20154Cloud Computing Best Practices7,95014 April 20155Hybrid Cloud User Group6615 April 20156SAP Cloud Computing (private, public, or hybrid)1,53115 April 20157TalkinCloud1,01015 April 20158Windows Azure & Microsoft Cloud10,46716 April 20159Cloud Computing, Microsoft UK11,08816 April 201510IEEE Cloud Computing5,71916 April 2015Table 6 List of some famous software development companies in Pakistan. S. no.Software company nameAddressDate of request sent1Subhash Educational ComplexArbab Road, Peshawar, KPK14 April 20152Haq Institute of Computer and ITMain Ghazi Road, Lahore14 April 20153PurePushAbasyn Street 6, I/9, Islamabad, Pakistan14 April 20154DatumSquare IT Services Pvt. Ltd.Software Technology Park, I-9/3, Islamabad14 April 20155Macrosoft PakistanAbu Bakar Block, New Garden Town, Lahore, Pakistan15 April 20156Xavor CorporationMasood Anwari Rd, Cavalry Extension, Cavalry Ground, Lahore15 April 20157Xerox Soft (Pvt.) Ltd.Deans Trade Center, Peshawar15 April 20158Ovex technologies1st Floor, KSL Complex, Software Technology Park, Plot No. 156, I-9/3 Industrial Area, Islamabad16 April 20159TechlogixEmpress Road, Lahore, Pakistan16 April 2015Table7 shows the list of the challenges in the adoption of hybrid cloud that were identified/validated via empirical study. This table also depicts the various options through 7-point Likert scale for each of the aforementioned challenges and the graphical representation.Table 7 List of challenges identified via empirical study. S. no.Identified challenging factors with categorizationExpert perception (n = 30)PositiveNegativeNeutralEAMASA%SDMDED%NS%C1Category: lack of Inclination20128621114310CH1Effective management issue196497000013CH2SLA assurance206293000027CH3Cost-driven scheduling of service10837032123310C2Category: lack of Readiness212389011427CH4Task scheduling and execution193693000027CH5Data searching187393010313CH6Integration complexity203386200727CH7Components partitioning1924831003414C3Category: lack of Adoption222288111913CH8Public cloud security concern232497010200CH9Lack of trust243193000027CH10Lack of sharing resources across multiple clients976732132727C4Category: lack of Satisfaction2032832007310CH11Achieving QoS206397000013CH12Appropriate cloud offering16110900000310CH13Delays in response time10646621320414Table7 shows that ≥ 80% of the respondents agreed that “public cloud security concern” (90%), “lack of trust” (87%), and “integration complexity” (83%) are the highest challenges found in such a domain.About >60% respondents agreed about “QoS” (67%), “SLA (service level agreement) assurance” (67%), “task scheduling challenges” (63%), “effective management” (63%), and “component partitioning” (63%). We suggest efficient policies implementation for these challenges. Although, there is little evidence in literature revealing “data searching” challenge, 60% of the respondents agreed that it is also a challenging factor or challenge in the embracing of hybrid cloud. 
no.CH1CH2CH3CH1137CH21/316CH31/71/61Sum1.474.1714.00We are able to measure the degree of consistency for the corresponding category, i.e., “Lack of Inclination” via the below parameters, discussed in Section3:(5)λmax=∑∑Cj×W,where ΣCj is the summation of columns from matrix [C], which is depicted in Table 10, and W is the weight vector (see Table 15).(6)λmax=2.00×0.46±3.58×0.31±7.25×0.16±16.00×0.06,λmax=0.92+1.1098±1.16+0.96,λmax=4.1498,Consistency IndexCI=λmax−nn−1=4.1498−44−1=0.14983,Random IndexRI=0.9,Consistency RatioCR=CIRI=0.0490.9=0.058<0.1consitency is OK.Table 19 Summarized list for the challenging factors being ranked. CategoriesCategories weightChallenging factorsLocal weightLocal rankingGlobal weightPriorityLack of Inclination0.10CH10.6310.0635CH20.3020.0308CH30.0730.0704Lack of Readiness0.13CH40.4610.0606CH50.3120.0407CH60.1630.02011CH70.0640.0783Lack of Adoption0.38CH80.5310.2011CH90.4020.1522CH100.0730.02610Lack of Satisfaction0.38CH110.5310.2011CH120.4020.1522CH130.0730.0269The result given indicates that the CR value is lower than 0.1, which is the appropriate CR value. For such challenging factors in all the other corresponding categories, the same accuracy method is used, and the estimation of the CR value is given in Tables9, 10, 14, 16, and 18.Step 4: provision of local and global ranking for challenging factors with their categories.For each of the stated challenges, the local weight (LW) and global weight (GW) are presented in Table19, where the local weight reflects the importance of the related challenging factor in its own specified category, while the global weight shows the specified priority for a factor across all the 13 challenging factors identified.The LW was determined by performing a pairwise comparison for each of the challenging factors and the category (see step 3). For example, Table19 shows that the LW of CH1 (0.63) is found to be the highest weight in the “Lack of Inclination” category; therefore, CH1 is also the uppermost ranked, prioritized challenging factor in the “Lack of Inclination” category.Likewise, by multiplication of their LW and that of their respective groups, we calculated the GW of the reported challenging factors. For example, GW of challenging factor CH1 = 0.10 × 0.63 = 0.063, where 0.10 is the weight of its category (Lack of Inclination) and 0.63 is its LW. The same process is repeated for the remaining challenging factors, and their GW is calculated, respectively, as presented in Table19.Step 5: finalized prioritization of these challenging factors.The finalized priority of these challenge factors is, typically, based on the GW for each of the challenge factors, and the same is presented in Table19. On top priority, challenging factors that have higher GW in all categories are taken into account.In Table19, CH8 (public cloud security concern) and CH11 (achieving QoS) are considered as the uppermost ranking challenging factors due to their GW value (0.201), which is the highest value as compared to the other factors.We further noted that CH12 (appropriate cloud offering) is found to be the secondly highest common challenging factor that could adversely affect the receipt of hybrid cloud computing activities from client organizations’ perspective. ## 4.5. Practices for Critical Challenges Identified through SLR 2 and Empirical Study In this section, a list of practices is presented for each CC (critical challenge) identified through SLR1. 
## 5. Discussion

### 5.1. RQ1 (Challenges Faced by Client Organizations in the Adoption of Hybrid Cloud Computing)

We closely analyzed 120 articles and extracted a total of 13 challenging factors that could adversely affect the adoption of hybrid cloud computing; Section 4 addresses and summarizes these factors in detail. Following the principles of the scheme introduced by Shameem et al. [45], the reported challenging factors were further classified and presented as a theoretical model. An empirical questionnaire survey of 30 experts, comprising hybrid cloud computing professionals as well as academic researchers, validated the results of SLR1.

### 5.2. RQ2 (Prioritization Process for Hybrid Cloud Challenging Factors)

For the prioritization process, the AHP approach was selected because, as a classical multiple-criteria decision-making (MCDM) method introduced by Saaty [44], it plays a key role in solving this type of problem and provides an accurate and precise ranking of the given variables. In this paper, we used AHP to prioritize the challenging factors faced by client organizations in the adoption of hybrid cloud computing. Table 19 indicates that CH8 (public cloud security concern) and CH11 (achieving QoS) are the highest-ranking challenging factors faced by client organizations because their GW (0.201) is higher than that of all the other reported challenging factors.

### 5.3. RQ3 (Practices for the Identified Critical Challenges Faced by Client Organizations in the Adoption of Hybrid Cloud Computing)
RQ3 (Practices for the Identified Critical Challenges Faced by Client Organizations in the Adoption of Hybrid Cloud Computing) For such a purpose, we performed a second systematic literature review, SLR2, and extracted a total of 46 practices from a sample of 90 papers. These practices are discussed and summarized in detail in Section4 (Tables 20–27). By conducting a questionnaire survey with 30 experts from hybrid cloud computing professionals and academic scholars, the results of SLR2 were empirically checked.Table 21 Practices for addressing critical challenge “effective management issue.” CC#2. Effective management issueSLR2Questionnaire surveyS. no.Practices for addressing effective management issueFrequency of SLR2 (N = 90)Positive %CCP#2.1Use management tools developed by several working groups like open grid forum, open cloud computing interface (OCCI), and storage network industry association (SNIA) to monitor the performance of both internal and external resources.287CCP#2.2Create a plan for release and deployment management that is appropriate for using and living in cloud settings191CCP#2.3Place a strong service portfolio management for continual service improvement process.187CCP#2.4Set a plan for capacity management (business capacity management, service capacity management, and component capacity management) to improve performance relating to both services and resources.188CCP#2.5Implement tools like Ansible, CFEngine, Chep, Elastra and RightScale Puppet, and Salt for addressing configuration and change management to control the lifecycle of all changes which will assist in enabling beneficial changes to be made with minimum disruption to IT services.187CCP#2.6Keep backups of applications and data on on-premises servers and storage devices to avoid data loss and time delays in case of failures in the cloud platform.497CCP#2.7Consider a cost-effective model to decide which task is economical on the cloud or internal resources.384CCP#2.8Perform efficient planning and implementation strategies before moving to the hybrid cloud.297Table 22 Practices for addressing critical challenge “integration complexity.” CC#3. Integration complexitySLR2Questionnaire surveyS. no.Practices for addressing integration complexityFrequency of practices via SLR2 (N = 90)Positive %CCP#3.1Use the available infrastructures such as Eucalyptus, OpenNebula, and open source software frameworks, in order to assist integration (front end integration, data integration, and process integration) in hybrid cloud.387CCP#3.2Use standard API (application programming interface) to integrate applications and data between the private clouds and the public clouds.597CCP#3.3Adopt technologies such as information integration, enterprise application integration, and enterprise service bus for effective integration.390CCP#3.4Establish integration mechanism to be controlled dynamically in response to changes in business requirements with the passage of time.179CCP#3.5Select form number of vendors offering solutions for data integration including companies such as Dell Boomi, IBM, Informatica, Pervasive Software, Liaison Technologies, and Talend.184Table 23 Practices for addressing critical challenge “achieving QoS.” CC#4. Achieving QoSSLR2Questionnaire surveyS. 
Table 22. Practices for addressing critical challenge “integration complexity” (CC#3).

| S. no. | Practices for addressing integration complexity | Frequency in SLR2 (N = 90) | Positive % (survey) |
|---|---|---|---|
| CCP#3.1 | Use available infrastructures such as Eucalyptus, OpenNebula, and open-source software frameworks in order to assist integration (front-end integration, data integration, and process integration) in the hybrid cloud. | 3 | 87 |
| CCP#3.2 | Use standard APIs (application programming interfaces) to integrate applications and data between the private clouds and the public clouds. | 5 | 97 |
| CCP#3.3 | Adopt technologies such as information integration, enterprise application integration, and the enterprise service bus for effective integration. | 3 | 90 |
| CCP#3.4 | Establish an integration mechanism that can be controlled dynamically in response to changes in business requirements over time. | 1 | 79 |
| CCP#3.5 | Select from the number of vendors offering data integration solutions, including companies such as Dell Boomi, IBM, Informatica, Pervasive Software, Liaison Technologies, and Talend. | 1 | 84 |

Table 23. Practices for addressing critical challenge “achieving QoS” (CC#4).

| S. no. | Practices for addressing QoS | Frequency in SLR2 (N = 90) | Positive % (survey) |
|---|---|---|---|
| CCP#4.1 | Select a cloud provider that can offer improved services in the following QoS parameters/attributes: price, offered load, job deadline constraint, energy consumption of the integrated infrastructure, security, etc. | 1 | 97 |
| CCP#4.2 | Ensure that access to the internal infrastructure is only possible through secure communications. | 3 | 94 |
| CCP#4.3 | Follow secure communication protocols (such as Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL)) when communicating with endpoint applications and databases. | 1 | 97 |
| CCP#4.4 | Select a public cloud provider that can offer the capacity needed by the internal cloud and execute dynamically. | 1 | 93 |
| CCP#4.5 | Select a cloud provider that can ensure a high degree of availability of services at all times. | 2 | 97 |
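CCP#4.3 prescribes TLS-protected channels when communicating with endpoint applications and databases. The sketch below shows one way to enforce this with Python's standard library; the endpoint URL is a placeholder, and pinning a minimum protocol version is an additional hardening choice, not a requirement stated in the table.

```python
# Minimal sketch of CCP#4.3: enforce TLS with certificate verification.
import ssl
import urllib.request

# create_default_context() enables certificate and hostname verification.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse older protocols

# Placeholder endpoint; replace with the actual service URL.
with urllib.request.urlopen("https://example.com/api/health",
                            context=context) as response:
    print(response.status)  # 200 if the TLS handshake and request succeed
```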
Table 24. Practices for addressing critical challenge “component partitioning” (CC#5).

| S. no. | Practices for addressing component partitioning | Frequency in SLR2 (N = 90) | Positive % (survey) |
|---|---|---|---|
| CCP#5.1 | In order to distribute an application over a hybrid cloud, keep the following parameters in mind: (i) data disclosure risk, (ii) resource allocation cost, and (iii) private cloud load. | 1 | 100 |
| CCP#5.2 | To migrate some application components from the private cloud to the public cloud in a hybrid cloud environment, implement migration progress management functions such as Pacer, which can accurately predict the migration time and coordinate the migrations of multiple application components. | 2 | 87 |
| CCP#5.3 | Divide the workload to be executed across local and public clouds so that workloads can move among resource pools, which results in a well-designed cloud environment. | 1 | 94 |
| CCP#5.4 | Replicate some part of the data to the public side so as to enable the distribution of the computation. | 1 | 88 |
| CCP#5.5 | Consider a sensitivity-aware data partitioning mechanism such as Sedic, which guarantees that no sensitive data is exposed to the public cloud. | 1 | 87 |

Table 25. Practices for addressing critical challenge “lack of trust” (CC#6).

| S. no. | Practices for addressing lack of trust | Frequency in SLR2 (N = 90) | Positive % (survey) |
|---|---|---|---|
| CCP#6.1 | Establish trustworthy relationships with cloud service providers through service level agreements (SLAs). | 3 | 97 |
| CCP#6.2 | Ensure the provision of security at different levels, i.e., how cloud providers implement, deploy, and manage security. | 1 | 97 |
| CCP#6.3 | Keep in mind that clients remain ultimately responsible for compliance and protection of their critical data, even if the workload has moved to the cloud. | 1 | 94 |
| CCP#6.4 | Use the services of a broker in order to negotiate trust relationships with cloud providers. | 4 | 90 |
| CCP#6.5 | Verify what certifications the cloud providers have in place that can ensure their service quality. | 3 | 84 |

Table 26. Practices for addressing critical challenge “public cloud security concern” (CC#7).

| S. no. | Practices for addressing public cloud security concern | Frequency in SLR2 (N = 90) | Positive % (survey) |
|---|---|---|---|
| CCP#7.1 | Cloud security should be controlled by the client organization and not by the cloud vendor. | 2 | 97 |
| CCP#7.2 | Provide effective authentication for users based on access control rights: only users who are authorized to access the private cloud can be directed to the private cloud (they can also access the public cloud); all other users can be directed to the public cloud and can access the public cloud only. | 8 | 81 |
| CCP#7.3 | Client organizations should use a third-party tool to enhance security. | 2 | 97 |
| CCP#7.4 | Client organizations should utilize their private (own) resources as much as possible and outsource a minimum of tasks to the public cloud to maximize security. | 2 | 94 |
| CCP#7.5 | Client organizations should carefully manage virtual images in a hybrid environment using tools such as firewalls, IDS/IPS, and log inspection. | 4 | 93 |
| CCP#7.6 | Data should be encrypted by the client before being outsourced to the cloud. | 6 | 97 |
| CCP#7.7 | An on-premises gateway should be used in a hybrid cloud for controlling the applications and data that flow from each part to the other. | 6 | 84 |
| CCP#7.8 | Categorize the data into two parts, i.e., sensitive and nonsensitive; place the sensitive data on the on-premises side (private cloud), whereas nonsensitive data should be kept in the public cloud. | 16 | 100 |
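CCP#7.6 (encrypt data on the client before outsourcing) and CCP#7.8 (keep sensitive data on the private side) can be combined in a single client-side storage routine. The sketch below uses the third-party `cryptography` package for symmetric encryption; `is_sensitive`, `store_private`, and `store_public` are hypothetical stand-ins for an organization's own policy and storage APIs.

```python
# Minimal sketch of CCP#7.6/CCP#7.8: classify records, keep sensitive data
# on-premises, and encrypt anything sent to the public cloud.
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, keep this key on the private side
cipher = Fernet(key)

def is_sensitive(record: dict) -> bool:
    """Hypothetical classifier; a real one would follow the data policy."""
    return "ssn" in record or "medical" in record.get("tags", [])

def store_private(record: dict) -> None:
    print("kept on-premises:", record)        # stand-in for a private-cloud call

def store_public(payload: bytes) -> None:
    print("sent to public cloud:", payload[:16])  # stand-in for a public-cloud call

def store(record: dict) -> None:
    if is_sensitive(record):
        store_private(record)                 # CCP#7.8: sensitive data stays private
    else:
        store_public(cipher.encrypt(repr(record).encode()))  # CCP#7.6: encrypt first

store({"name": "A. Client", "ssn": "000-00-0000"})
store({"name": "A. Client", "tags": ["newsletter"]})
```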
Table 27. Practices for addressing critical challenge “task scheduling and execution” (CC#8).

| S. no. | Practices for addressing task scheduling and execution | Frequency in SLR2 (N = 90) | Positive % (survey) |
|---|---|---|---|
| CCP#8.1 | Use an efficient scheduling mechanism/algorithm to enable efficient utilization of the on-premises resources and to minimize the task outsourcing cost, while also meeting the task completion time requirements. Such scheduling algorithms include Hybrid Cloud Optimized Cost (HCOC), Deadline-Markov Decision Process (Deadline-MDP), and Heterogeneous Earliest Finish Time (HEFT), based on resource discovery, filtering, selection, and task submission. | 1 | 87 |
| CCP#8.2 | Execute part of the application on the public cloud to achieve output within the deadline, as public cloud resources have much higher processing power than private cloud resources; executing the whole application on the public cloud, on the other hand, would be costly. | 4 | 80 |
| CCP#8.3 | Consider the capacity of the communication channels in the hybrid cloud, because it impacts the cost of workflow execution. | 1 | 97 |
| CCP#8.4 | Implement a workflow management system such as CWMS (Cloud Workflow Management System) to increase productivity and efficiency. | 1 | 97 |
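CCP#8.1 and CCP#8.2 describe deadline-aware scheduling that outsources only as much work as needed to meet the completion time. The greedy sketch below illustrates the idea; it is not an implementation of HCOC, Deadline-MDP, or HEFT, and all speeds and costs are illustrative assumptions.

```python
# Minimal greedy sketch of deadline-aware task placement (CCP#8.1/CCP#8.2).
# Not HCOC, Deadline-MDP, or HEFT; all figures are illustrative assumptions.

ON_PREM_SPEED = 1.0    # work units per hour on the private cloud
CLOUD_SPEED = 4.0      # assumed faster public-cloud resources
CLOUD_COST = 2.5       # assumed cost per work unit outsourced

def split_work(total_units: float, deadline_hours: float):
    """Return (units kept on-premises, units outsourced, outsourcing cost)."""
    # Keep as much work internal as the deadline allows; outsource the rest,
    # with both sides executing in parallel.
    local_units = min(total_units, ON_PREM_SPEED * deadline_hours)
    cloud_units = total_units - local_units
    if cloud_units > CLOUD_SPEED * deadline_hours:
        raise ValueError("deadline infeasible even with the public cloud")
    return local_units, cloud_units, cloud_units * CLOUD_COST

print(split_work(total_units=10, deadline_hours=4))
# -> (4.0, 6.0, 15.0): 4 units run internally, 6 are outsourced for cost 15.0
```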
### 5.4. RQ4 (Taxonomy for the Challenging Factors)

Figure 6 shows the taxonomy of the challenging factors, generated by measuring both the LW (local weight) value and the GW (global weight) value of each challenging factor and its corresponding category based on the AHP prioritization. The figure shows that “Lack of Adoption” (0.38) and “Lack of Satisfaction” (0.38) are considered the most prioritized categories by the survey experts; the weights of these categories are the highest compared with the other categories.

Figure 6. Analytical hierarchy process (AHP) based prioritization taxonomy of challenging factors with their categories.

Further, we note that CH8 (public cloud security concern) and CH11 (achieving QoS) are the highest-ranking challenging factors, also listed in these categories, because their GW value (0.201) is higher than those of all the other challenging factors mentioned.
## 6. Summarizing the Research Questions

This research aims to deliver a taxonomy, on the basis of the AHP technique, for the challenging factors faced by client organizations in hybrid cloud computing. The conclusions of this article help to effectively navigate the practices of hybrid cloud computing. Table 28 gives a description of the research questions.

Table 28. Summary of the research questions.

| RQ | Research question | Description |
|---|---|---|
| RQ1 | What are the challenging factors, as described in the literature and industrial survey, to be avoided by client organizations in the adoption of hybrid cloud computing? | All the challenging factors are presented in Table 4. |
| RQ2 | How could the identified challenging factors be prioritized using the AHP approach? | The AHP approach is followed to prioritize the identified challenging factors. Details are presented in Section 4.4; the summarized list is presented in Table 19. |
| RQ3 | What are the practices, as identified in the literature and industrial survey, to be followed by vendor organizations to build a successful relationship with client organizations in the adoption of hybrid cloud computing? | The practices for the identified challenges are presented in Tables 20–27. |
| RQ4 | What would be the taxonomy of the identified challenging factors that could assist in a successful relationship between client and vendor organizations in the adoption of hybrid cloud computing? | The taxonomy is developed by categorizing the challenges into four main categories, “Lack of Inclination,” “Lack of Readiness,” “Lack of Adoption,” and “Lack of Satisfaction” (Figure 4), and prioritizing them using the AHP technique. It highlights the local weight, which shows the priority order of each challenging factor within its category, and the global weight, which shows the effect of a particular challenge on the overall study objective. Furthermore, the taxonomy provides a robust framework that could help practitioners and researchers handle the major issues of hybrid cloud computing activities. |

## 7. Research Limitations

We adopted the SLR approach for the identification of challenging factors; consequently, there is a chance that we might have missed some relevant paper(s) for inclusion in the final selection for the extraction of the relevant challenging factors.
However, this is not a systematic omission, as other researchers have conducted the same process for the identification and categorization of factors/variables in other domains [47–49, 64].

Due to the shortage of time and resources, the chosen sample size for this study was 30 (i.e., n = 30), so we cannot claim generalized results. Nonetheless, other scholars in the software engineering domain have performed similar studies with the same sample size [65, 66].

As construct validity refers to testing the accuracy of the appraisal on the basis of the provided variables, the literature discussed here describes the challenging factors and their practices, which were tested empirically using an online survey strategy. The findings of this empirical study show that the given challenges and practices are linked to the findings of SLR1 and SLR2, which confirms the accuracy of the appraisal scale we chose.

Similarly, internal validity is the assessment of a particular study’s findings and interpretation. In this respect, we conducted a pilot study with the members of SERGUOM, which offers an appropriate degree of internal validity.

External validity concerns the generalizability of a research article’s findings. In this study, the majority of the survey respondents belonged to Pakistan, which poses a challenge to external validity when generalizing the findings to other countries. There are, however, still some participants from other countries and, above all, we have not found any substantial variations between the findings from the SLRs and the industrial survey; thus, we are confident that the results can be generalized from the data sample.

In addition, the majority of the respondents were experienced practitioners in the field, so we conclude that they gave adequate input on the basis of their understanding of the challenging factors and their practices.

## 8. Conclusion and Future Work

We were inspired by the importance of hybrid cloud computing activities to build a taxonomy focused on prioritizing the challenging factors that could pose risks for hybrid cloud computing adoption by client organizations. The revealed results deliver the key areas which need to be resolved before hybrid cloud computing practices are launched. SLR1 was performed to classify the challenging factors, and the results of the literature reviews were confirmed by empirical research.

In total, 13 challenging factors were found via the literature review and further classified into four main categories: “Lack of Inclination,” “Lack of Readiness,” “Lack of Adoption,” and “Lack of Satisfaction” (Figure 4). In addition, the AHP technique was implemented to prioritize these challenging factors and their categories. The findings of the AHP technique reveal that “Lack of Adoption” and “Lack of Satisfaction” are the most significant categories and that CH8 (public cloud security concern) and CH11 (achieving QoS) are the highest-ranking challenging factors, also listed in these categories, because their GW value (0.201) is higher than the values of all the other challenging factors mentioned.

This study provides a knowledge base in the domain of hybrid cloud computing for professionals and academic scholars.
It also offers state-of-the-art work in this field that may be considered valuable input to academic studies. In summary, this study contributes a thorough overview of the perspectives of experts in hybrid cloud computing to illustrate various facets of the field of cloud computing. The future goal of this research work is to produce a robust model that could help client organizations analyze their current capabilities in hybrid cloud computing and include best practices for further improvements.

Our ultimate aim is to develop a hybrid cloud adoption assessment model (HCAAM). This paper contributes two components of the HCAAM, i.e., the identification of challenges in the adoption of hybrid cloud and of practices for these challenges via SLR and questionnaire survey. The final outcome of the research is the development of the HCAAM. The proposed model will be developed based on the results drawn from the systematic literature reviews and survey and will be supported by case studies, which will provide a more comprehensive theoretical and practical assessment of an organization’s maturity in the context of hybrid cloud adoption.

---
# Challenges and Their Practices in Adoption of Hybrid Cloud Computing: An Analytical Hierarchy Approach

**Authors:** Siffat Ullah Khan; Habib Ullah Khan; Naeem Ullah; Rafiq Ahmad Khan

**Journal:** Security and Communication Networks (2021)

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2021/1024139
---

## Abstract

Cloud computing adoption provides various advantages for companies. In particular, the hybrid cloud shares the advantages of both the public and private cloud technologies because it combines the private in-house cloud with the public on-demand cloud. In order to benefit from the opportunities provided by the hybrid cloud, organizations need to adopt or develop novel capabilities. Maturity models have proved to be an exceptional and easily available method for evaluating and improving capabilities. However, there is a dire need for a robust framework that helps client organizations in the adoption and assessment of hybrid cloud. Therefore, this research paper aims to present a taxonomy of the challenging factors faced by client organizations in the adoption of hybrid cloud. The taxonomy is presented on the basis of the results obtained from the empirical analysis and the execution of the analytical hierarchy process (AHP) method. From the review of the literature and the empirical study, in total 13 challenging factors were recognized and plotted into four groups: “Lack of Inclination,” “Lack of Readiness,” “Lack of Adoption,” and “Lack of Satisfaction.” The AHP technique was executed to prioritize the identified factors and their groups. In this way, we found that “Lack of Adoption” and “Lack of Satisfaction” are the most significant groups among the identified challenging factors. Findings from AHP also show that “public cloud security concern” and “achieving QoS” are the top-ranking factors confronted in the adoption of the hybrid cloud mechanism by client organizations, because their global weight (0.201) is greater than those of all the other reported challenging factors. We also identified 46 practices to address the identified challenges. The taxonomy developed in this study offers a comprehensive structure for dealing with hybrid cloud computing issues, which is essential for the success and advancement of client and vendor organizations in hybrid cloud computing relationships.

---

## Body

## 1. Introduction

Recently, the cloud computing mechanism has grown very rapidly, and it has many unique features such as elasticity, resource pooling, on-demand support, and wide network access [1, 2]. Technology-assisted learning is becoming more common, with most educational institutions across the globe using learning management systems, content management systems, virtual networks, and virtual machines to enhance student learning [3]. In this day and age, educational institutions are even using private clouds to improve the student experience [3]. Cloud computing acquires some of the features of cluster computing, distributed computing, and grid computing but still has its own unique features [4, 5]. “Users of a cloud service only use the volume of IT resources they need, and only pay for the volume of IT resources they use” [6]. In the field of IT, cloud computing brings a revolution and provides concepts different from the traditional IT environment [7]. Many organizations of all sizes (small, medium, and large) have adopted and are spending on cloud computing-related techniques [8]. SMEs embrace cloud computing because it cost-effectively provides IT resources [9]. Cloud infrastructure deployment models include the public cloud, private cloud, hybrid cloud, and community cloud [10]. Typically, the service model of the cloud consists of “software as a service” (SaaS), “platform as a service” (PaaS), and “infrastructure as a service” (IaaS) [9].
The decision as to which model is suitable for a particular organization depends on various factors. “Hybrid cloud deployment model has proved more significant, both in terms of better economic aspects and business agility” [11, 12]. The National Institute of Standards and Technology (NIST) defines hybrid cloud as “a combination of public and private clouds bound together by either standardized or proprietary technology that enables data and application portability.” The adoption of new technology requires many changes within the organization [13, 14].

The traditional cloud computing task offloading algorithm consumes abundant energy in task scheduling, which results in a longer average task waiting time [15]. For this reason, a cloud computing task offloading algorithm based on dynamic multiobjective evolution was proposed by the authors of [15]. In order to ensure the parallel completion of multiple tasks, the dynamic multiobjective evolution method is used to construct the cloud computing task scheduling model and complete the cloud computing task scheduling [15]. Then, based on the calculated effectiveness and validity of energy consumption to complete the initial operation distribution and offloading priority, the time and cost of task offloading are calculated according to the ranking results of the task offloading priority; the cloud computing tasks are distributed with minimum time and minimum cost as the goal. Hamouda et al. [16] proposed a reconfigurable formal model of the hybrid cloud architecture and then utilized instantiations of this model, simulation, and real-time execution runs to estimate different performance metrics related to fault detection and self-recovery strategies in the hybrid cloud.

The literature reveals that the theories and models developed by scholars mainly focus on factors that affect technology acceptance [17]. This is the extended version of our previous study [18]. In this paper, we review the latest work performed in the field of hybrid cloud computing and recognize the various challenging factors faced by client organizations during the adoption of cloud computing, along with practices for these challenges. The primary research questions that are answered in this paper are the following:

RQ1: What are the challenging factors to be avoided by client organizations in adopting hybrid cloud computing, as identified in the literature and industrial survey?

RQ2: How could the defined challenging variables be prioritized via the AHP strategy?

RQ3: What are the practices to be adopted by vendor organizations to develop effective relationships with client organizations in the adoption of the hybrid cloud mechanism, as described in the literature and industrial survey?

RQ4: What would be the taxonomy for the identified factors that could assist the stakeholders (clients and vendors) in developing an efficient partnership with each other in this domain?

This paper is organized as follows: Section 2 provides a background to cloud computing. The research process and methodology are described in Section 3. In Section 4, the findings from the SLRs, the empirical study, and the analytical hierarchy process (AHP) approach are presented and analyzed. In Section 5, a discussion of the study is presented. The research description is provided in Section 6. Section 7 explains the limitations of the research, followed by the conclusion and future work in Section 8.
## 2. Materials and Methods

Cloud computing emerges as the fifth generation of computing, which brings a revolution in the way of computation. “Cloud computing doesn’t limit to the grid, parallel, and distributed computing but it involves the power of such paradigms at any level to form a resource pool” [19]. Various stakeholders, such as clients, developers, engineers, executives, academicians, and architects, define cloud computing differently [20]. “Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” [21]. Gartner [22] defines cloud computing as “a style of computing where massively scalable IT-related capabilities are provided as a service across the Internet to multiple external customers.” Forrester [23] states that “cloud computing is a standardized IT capability (services, software, or infrastructure) delivered via Internet technologies in a pay-per-use, self-service way.”

### 2.1. Cloud Computing Service Model

In the software industry, big players such as Microsoft, as well as other Internet technology heavyweights, including Google and Amazon, are advancing the development of cloud services. Software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) are the three primary types of cloud computing services. “In SaaS computer applications are accessed over the Internet rather than being installed on a local computing device or in a local data center” [24]. DeskAway, Dropbox, SkyDrive (Windows Live), Mozy, Google Docs, Pixlr, Zoho Invoice, and CRM on-demand are some of the well-known SaaS examples. PaaS offers an online platform for the creation and operation of applications by software developers [6]. Force.com, Microsoft Windows Azure, and Google App Engine are some examples of PaaS. IaaS offers hardware such as storage, network, and servers, together with relevant software such as the OS, file system, and virtualization technologies, as a service [25]. Joyent, EC2, Zimory, ElasticHosts, Amazon S3, Rackspace, and GoGrid are examples of IaaS service providers.

### 2.2. Cloud Computing Deployment Model

In the different types of deployment models, cloud computing services and technologies are deployed based on their characteristics and intent, as well as the distinction between user classes [26]. Public, private, community, and hybrid cloud are the cloud deployment types.

A public cloud is one in which the cloud infrastructure and computational services are made accessible over the Internet to the general public. It is operated and managed by a cloud company that provides customers with cloud services.

A private or internal cloud is one in which a single entity manages the cloud infrastructure and computing environment exclusively. It may be operated by the company or a third party and may be held within or outside the data center of the organization. A private cloud has the ability to give the enterprise greater control over the infrastructure, computing resources, and cloud customers than a public cloud.

A community cloud is shared by and serves a particular community of many organizations.
It is to some extent similar to the private cloud, except that, instead of a single entity, the technology and computing services are restricted to two or more organizations with shared privacy, protection, and regulatory considerations.

Hybrid clouds are more complex than the other deployment models; they are a combination of public and private clouds bound together by either standardized or proprietary technology that enables data and application portability [21].

### 2.3. Hybrid Cloud Related Work

The majority of current research sheds light on different aspects of the hybrid cloud. For example, Ristova et al. [27] discuss the hybrid cloud and its utilization in the midmarket and propose a method for mass customization and its association in cloud environments. Khadilkar et al. [28] propose a solution for data security and regulatory compliance in a hybrid cloud computing environment. Amrohi and Khadilkar [29] state that organizations utilizing the hybrid cloud can take advantage of both the public cloud and the private cloud. Heckel [30] provides some of the basic ideas of cloud computing and also discusses the technological requirements for establishing a hybrid cloud environment. Nepalp et al. [31] provide a solution for secure data storage in hybrid cloud deployments. According to Javadi et al. [32], “a scalable hybrid cloud infrastructure as well as resource provisioning policies assure QoS targets of the users.” Tanimoto et al. [33] propose an enterprise data management method for a hybrid cloud configuration. According to Judith et al. [34], “if a few developers in a company use a public cloud service to prototype a new application that is completely disconnected from the private cloud or the data center, the company does not have a hybrid environment, but if a company uses a public development platform that sends data to a private cloud or a data center–based application, the cloud is hybrid.” According to Weinman [35], “under the right conditions, hybrid clouds can optimize costs while still exploiting the benefits of public clouds such as geographic dispersion and business agility.”

A cloud-based security company (Trend Micro) indicated via an empirical survey that the “public cloud services fail to meet the IT and business requirements of some of the business organizations.” Alternatively, the “safer option,” the private cloud, requires significant infrastructure and operations development along with new skill sets required of its IT staff. Although there are ways of balancing each of these concerns, this will ultimately lead to a hybrid of these environments, along with an array of other noncloud environments.

Khan and Ullah [18] “surveyed storage and server decision-makers at North American, Asian Pacific, and European enterprises and found that various hybrid cloud implementations were the preferred approach.”
## 3. Research Methodology

The proposed research methodology is presented in Figure 1 and consists of the following three phases.

Figure 1. Research methodology.

### 3.1. SLR Conduction

Stage 1: identifying the challenges faced by client organizations, and their practices, in the adoption of hybrid cloud computing.

In stage 1, two systematic literature reviews (SLRs) were conducted to extract the relevant data: one for the purpose of identifying the challenges faced by client organizations in the adoption of hybrid cloud computing [18, 36] and another to identify practical solutions for these challenges. We followed the SLR approach because an SLR differs from an ordinary literature review and requires more time as well as effort to complete [37–39]. We studied several SLRs [37–39] for guidance. We initially developed the SLR protocol, which was validated and has been published [36]. The SLR1 protocol was then implemented, and the findings of SLR1 have been published [18]. Through SLR1, we identified 12 challenges in this domain. Among these challenges, 8 were considered critical on the basis of their high frequency. For these critical challenges, we then conducted SLR2 and identified 46 practices from a sample of 90 papers.

### 3.2. Empirical Study Conduction

Stage 2: validating the findings of the SLRs and finding further challenges faced by client organizations, and their practices, in the adoption of hybrid cloud computing.

In stage 2, a survey of 42 hybrid cloud computing experts was conducted to verify the results of the SLRs and to recognize other significant challenges and their practices. An empirical survey refers to experimental research which gathers data based on qualitative and quantitative descriptions from a sample of a population [35]. For the collection of implicit data on an issue, the empirical survey is the most commonly used tool [40]. A similar approach was followed by other researchers [41–43].

### 3.3. Application of AHP

Stage 3: prioritizing the identified challenges with their respective categories.

For the purpose of prioritizing the listed challenges and their corresponding categories, the analytical hierarchy process (AHP) approach is used.
AHP was developed by Saaty [44] and is a popular classical multiple-criteria decision-making (MCDM) method. Typically, this approach to ranking and prioritizing given variables is accurate and precise. The main aim of this study is to rank and prioritize the hybrid cloud computing challenges faced by client organizations; classical AHP is therefore ideally suited for the analysis of the data obtained via the survey form. In addition, AHP has previously been utilized to cope with complex decision-making issues in numerous other research areas. In Figure 2, the steps for the application of AHP are presented. AHP’s three major stages are as follows.

Figure 2. Phases of AHP.

#### 3.3.1. Decomposition of a Complex Decision Problem to a Simple Hierarchy Structure

Here, the decision-making problem is decomposed into related decision-making elements [45, 46]. At least three levels are used to divide the hierarchical structure of the question: at level 1, the goal of the problem is presented; level 2 gives the challenges; similarly, subchallenges are presented at level 3, as depicted in Figure 3.

Figure 3. Hierarchical structure of the problem.

#### 3.3.2. Survey Regarding the Pairwise Comparison

In order to incorporate the aforementioned AHP approach for the prioritization and corresponding categorization of challenges, we conducted a survey with the senior members of the Software Engineering Research Group, University of Malakand (SERGUOM). In total, 8 respondents gave positive feedback, and so they were included to take part in the second phase of the questionnaire survey. The Supplementary Material provides the questionnaire of the second survey sample. The data were obtained from 8 survey participants, and this small sample may threaten the validity of the later results; however, AHP is a subjective methodology and can also accommodate small data samples [45, 46]. Other researchers [47–51] have adopted a similar strategy with relatively small sample sizes.

#### 3.3.3. Pairwise Comparisons

To calculate the priority weights of the identified challenges, a pairwise comparison of these challenges was conducted. At each level, the comparison of these challenges was performed based on their degree of impact and on the criteria specified at the upper level [52]. The comparison criteria $[C] = \{C_x \mid x = 1, 2, \ldots, n\}$ were used to build the evaluation matrix $A = [a_{xy}]$ ($x, y = 1, 2, \ldots, n$), which presents the normalized relative weights as shown in equation (1), where $a_{yx} = 1/a_{xy}$ and $a_{xy} > 0$:

$$A = \begin{pmatrix} 1 & a_{12} & \cdots & a_{1n} \\ a_{21} & 1 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & 1 \end{pmatrix} \qquad (1)$$

As an indication of their degree of importance for the introduction of challenges faced by client organizations in hybrid cloud computing adoption, we further clarify the pairwise comparison of two enlisted challenging factors, CH1 and CH2. For example, if CH1 is five degrees greater than CH2, then, as shown in Tables 1 and 2, CH2 is assigned 1/5 relative to CH1. Applying the same principle, we performed in Section 4 the pairwise comparison of matrices for all the identified challenging factors and their categorization.

Table 1. Example of a pairwise comparison.

| S. No | CH1 | CH2 |
|---|---|---|
| CH1 | 1 | 5 |
| CH2 | 1/5 | 1 |

Table 2. Description of the intensity scale.

| Description | Significance intensity |
|---|---|
| Equally important | 1 |
| Moderately important | 3 |
| Strongly more important | 5 |
| Very strongly more important | 7 |
| Extremely more important | 9 |
| Intermediate values | 2, 4, 6, 8 |

In order to assess the rank of the identified challenges with the corresponding categories, the standard 9-point comparison scale was applied, as depicted in Table 2. The priority weights are determined from the pairwise comparison matrices as follows:

(1) [C] refers to the pairwise comparison matrix of the recognized challenging factors.

(2) The normalized matrix [C] is obtained by dividing each element of every column by the sum of that column.

(3) The priority weight vector [W] is computed as the average of each row of the normalized matrix [C].
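Steps (1)–(3) map directly onto a few lines of linear algebra. The following sketch, assuming NumPy is available, computes the priority weights for the two-challenge example of Table 1.

```python
# Minimal sketch of the three priority-weight steps, using the Table 1 example.
import numpy as np

# Step 1: pairwise comparison matrix [C] (CH1 is five degrees greater than CH2).
C = np.array([[1.0, 5.0],
              [1/5, 1.0]])

# Step 2: normalize each element by the sum of its column.
normalized = C / C.sum(axis=0)

# Step 3: priority weights [W] are the row averages of the normalized matrix.
W = normalized.mean(axis=1)
print(W.round(3))  # -> [0.833 0.167]: CH1 carries about 83% of the priority
```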
#### 3.3.4. Checking the Consistency of the Pairwise Comparison Matrix

Shameem et al. [45] mention that the pairwise comparison matrix in AHP should be consistent, which can be checked using the consistency index (CI) and consistency ratio (CR) given in the following equations:

$$\mathrm{CI} = \frac{\lambda_{\max} - n}{n - 1} \qquad (2)$$

$$\mathrm{CR} = \frac{\mathrm{CI}}{\mathrm{RI}} \qquad (3)$$

The value $\lambda_{\max}$ is the principal eigenvalue, which can be determined by multiplying the given weight vector W by the column sums of the comparison matrix (see Section 4); n is the total number of identified challenges in the given pairwise comparison matrix. RI in (3) is the random consistency index, whose value varies with the size of the matrix (see Table 3). The permissible CR value goes up to 0.10, and a challenge priority vector is acceptable only if the CR value is less than 0.10. Further, if the given CR value is not within the appropriate range, the same process must be repeated to enhance the degree of consistency. Section 4 of this paper presents the estimated CR value for each comparison matrix.

Table 3. RI value with respect to matrix size.

| Size of matrix | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| RI | 0 | 0 | 0.58 | 0.9 | 1.12 | 1.24 | 1.32 | 1.41 | 1.45 | 1.49 |
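Continuing the sketch above, equations (2) and (3) can be checked numerically. In the code below, the 3×3 comparison matrix is a hypothetical example; λmax is estimated from the column sums and the weight vector W as described in the text, and the RI value comes from Table 3.

```python
# Minimal sketch of the consistency check (equations (2) and (3)).
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix for three challenges.
C = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
n = C.shape[0]

W = (C / C.sum(axis=0)).mean(axis=1)   # priority weights, as in steps (1)-(3)
lam_max = float(C.sum(axis=0) @ W)     # lambda_max from column sums and W

CI = (lam_max - n) / (n - 1)           # equation (2)
RI = {1: 0, 2: 0, 3: 0.58, 4: 0.9, 5: 1.12}[n]  # Table 3 values
CR = CI / RI                           # equation (3)
print(f"CR = {CR:.3f}")                # -> CR = 0.048, below the 0.10 threshold
```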
## 4. Results

### 4.1. List of Challenging Factors Identified via SLR

Table 4 displays the challenges found via SLR1 that are regarded as the main roadblocks in the adoption of hybrid cloud. Our results show that "public cloud security concern" tops all the challenges at 58%.
“This is because in hybrid cloud data security risk is high as some of the data is exposed to public cloud from the private cloud” [53]. According to Li et al. [54], “this challenge relates to keeping the amount of sensitive data that is exposed to the public machines.” Balasubramanian and Murugaiyan [55] argue that “hybrid cloud model transfers selective data between public and private clouds.” According to Wang et al. [56], “data externalization towards services deployed on the public cloud creates security problems coming from the data issued by public cloud services.”

Table 4: List of challenging factors identified through SLR1.

| Public hybrid cloud computing challenges | Freq | % | References |
|---|---|---|---|
| Public cloud security concern | 69 | 58 | [1], [2], [9], [13], [17], [18], [20], [23], [24], [27], [30], [33], [34], [35], [39], [41], [42], [43], [44], [45], [47], [50], [53], [54], [55], [56], [58], [59], [61], [63], [65], [66], [67], [68], [71], [72], [73], [76], [78], [79], [80], [81], [82], [83], [84], [85], [86], [87], [89], [91], [92], [93], [95], [96], [97], [98], [99], [100], [101], [103], [104], [105], [106], [110], [112], [114], [115], [116], [120] |
| Effective management issue | 34 | 28 | [1], [5], [8], [9], [15], [17], [18], [27], [28], [31], [37], [38], [39], [40], [44], [50], [56], [66], [70], [74], [77], [78], [80], [84], [86], [88], [89], [102], [104], [105], [106], [107], [113], [120] |
| Integration complexity | 27 | 23 | [5], [6], [17], [18], [32], [56], [58], [60], [65], [66], [69], [72], [73], [77], [78], [79], [80], [86], [89], [92], [96], [100], [103], [105], [113], [116], [117] |
| Achieving QoS | 15 | 13 | [4], [5], [11], [14], [23], [41], [57], [61], [65], [94], [106], [107], [108], [111], [119] |
| Components partitioning | 15 | 13 | [3], [9], [12], [16], [20], [21], [52], [57], [60], [62], [68], [70], [109], [115], [118] |
| Lack of trust | 14 | 12 | [7], [13], [19], [24], [30], [39], [54], [64], [71], [72], [85], [92], [114], [116] |
| SLA assurance | 14 | 12 | [16], [17], [25], [26], [30], [39], [41], [47], [48], [61], [66], [69], [80], [85] |
| Task scheduling and execution | 13 | 11 | [4], [10], [22], [29], [36], [49], [51], [52], [70], [74], [75], [90], [93] |
| Appropriate cloud offering | 5 | 4 | [38], [46], [86], [88], [102] |
| Data searching | 1 | 0.8 | [114] |
| Cost driven scheduling of services | 1 | 0.8 | [121] |
| Lack of sharing of resources across multiple concerns | 1 | 0.8 | [122] |
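The percentage column can be recomputed from the frequency counts; a short illustrative check in Python, assuming (as stated in Section 5.1) the 120 closely analyzed articles as the denominator:

```python
# "%" column of Table 4: citation frequency over the 120 SLR1 articles.
frequencies = {
    "Public cloud security concern": 69,
    "Effective management issue": 34,
    "Integration complexity": 27,
    "Data searching": 1,
}
for challenge, freq in frequencies.items():
    print(f"{challenge}: {freq / 120 * 100:.1f}%")  # 57.5, 28.3, 22.5, 0.8
```

These values match the printed 58%, 28%, 23%, and 0.8% after rounding.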
The results also show that the "effective management issue" (28%) is the second most frequently cited challenge. This is because moving from a public cloud environment to a hybrid cloud requires effective management of the cloud infrastructure in the hybrid environment [57]. According to Bhadauria et al. [58], “the risk of outsourced services going out of control is high in a hybrid cloud environment and key management becomes a difficult task in such situations.”

"Integration complexity" ranked third among the most cited challenges (23%). Technical integration of a hybrid cloud is perceived to be considerably more difficult and is a significant barrier to adoption. “Integration of one or more public and private clouds into a hybrid system can be more challenging than integrating on-premises systems” [59]. According to Javidi et al. [60], “a mechanism for integrating private and public clouds is one of the major issues that need to be addressed for realizing hybrid cloud computing infrastructure.”

"Quality of service (QoS)" is another challenge in hybrid cloud adoption, as already discussed in the literature review. Jian and Sheng [61] determined that “different components of the hybrid infrastructure provide different QoS guarantees, efficient policies to integrate public and private cloud. However, to assure QoS target of the users remain a challenging job.”

“Lack of trust has been reported as a challenge in the adoption of hybrid” [62]. Noor et al. [63] argue that “due to the fact that data owners and cloud storage providers are dispersed across distinct global sites, due to which it becomes difficult to establish trust between the client and public cloud provider in the hybrid cloud environment.”

Following the structure established by Shameem et al. [45], the identified challenges were further mapped into four categories, as shown in Figure 4.

Figure 4: Categorization of hybrid cloud computing challenges.

### 4.2. Empirical Investigation

To empirically verify the results of the SLR, we performed an online questionnaire survey; this section presents the findings from the evaluated data. The questionnaire covered demographic information and the hybrid cloud adoption challenges identified through the SLR, and it consisted of three sections. Each section also contained open-ended questions intended to elicit challenges beyond those identified via SLR1. A seven-point Likert scale, i.e., "extremely agree (EA)," "moderately agree (MA)," "slightly agree (SA)," "not sure (NS)," "slightly disagree (SD)," "moderately disagree (MD)," and "extremely disagree (ED)," was used to capture the respondents' views on the identified challenges.

For data collection, a request was posted in different LinkedIn groups with more than fifty thousand members in total across the globe (see Table 5). Further, we also sent participation requests to different companies in Pakistan that utilize cloud services, as shown in Table 6. In total, 60 experts responded to our invitation by e-mail and expressed their willingness to participate; the questionnaire was shared with these experts after receiving their consent. A total of 33 participants took part in the survey. Among the filled questionnaires, 3 were rejected because they did not meet our predefined quality criteria. Accordingly, 30 responses were selected as the final sample and used for the analysis, giving a response rate of 50%.

Table 5: List of LinkedIn online cloud professional groups.

| S. no. | Group name (available online at the LinkedIn website) | Total members (at the time of request) | Date (request posted) |
|---|---|---|---|
| 1 | Canada Cloud Network | 681 | 14 April 2015 |
| 2 | Cloud Architect and Professionals Network | 6,354 | 14 April 2015 |
| 3 | Conversations on Cloud Computing | 10,132 | 14 April 2015 |
| 4 | Cloud Computing Best Practices | 7,950 | 14 April 2015 |
| 5 | Hybrid Cloud User Group | 66 | 15 April 2015 |
| 6 | SAP Cloud Computing (private, public, or hybrid) | 1,531 | 15 April 2015 |
| 7 | TalkinCloud | 1,010 | 15 April 2015 |
| 8 | Windows Azure & Microsoft Cloud | 10,467 | 16 April 2015 |
| 9 | Cloud Computing, Microsoft UK | 11,088 | 16 April 2015 |
| 10 | IEEE Cloud Computing | 5,719 | 16 April 2015 |
Table 6: List of some well-known software development companies in Pakistan.

| S. no. | Software company name | Address | Date of request sent |
|---|---|---|---|
| 1 | Subhash Educational Complex | Arbab Road, Peshawar, KPK | 14 April 2015 |
| 2 | Haq Institute of Computer and IT | Main Ghazi Road, Lahore | 14 April 2015 |
| 3 | PurePush | Abasyn Street 6, I/9, Islamabad, Pakistan | 14 April 2015 |
| 4 | DatumSquare IT Services Pvt. Ltd. | Software Technology Park, I-9/3, Islamabad | 14 April 2015 |
| 5 | Macrosoft Pakistan | Abu Bakar Block, New Garden Town, Lahore, Pakistan | 15 April 2015 |
| 6 | Xavor Corporation | Masood Anwari Rd, Cavalry Extension, Cavalry Ground, Lahore | 15 April 2015 |
| 7 | Xerox Soft (Pvt.) Ltd. | Deans Trade Center, Peshawar | 15 April 2015 |
| 8 | Ovex Technologies | 1st Floor, KSL Complex, Software Technology Park, Plot No. 156, I-9/3 Industrial Area, Islamabad | 16 April 2015 |
| 9 | Techlogix | Empress Road, Lahore, Pakistan | 16 April 2015 |

Table 7 lists the challenges in the adoption of hybrid cloud that were identified and validated via the empirical study, together with the distribution of responses on the 7-point Likert scale for each challenge.

Table 7: List of challenges identified via the empirical study (expert perception, n = 30).

| S. no. | Identified challenging factor (with categorization) | EA | MA | SA | Positive % | SD | MD | ED | Negative % | NS | Neutral % |
|---|---|---|---|---|---|---|---|---|---|---|---|
| C1 | Category: Lack of Inclination | 20 | 1 | 2 | 86 | 2 | 1 | 1 | 14 | 3 | 10 |
| CH1 | Effective management issue | 19 | 6 | 4 | 97 | 0 | 0 | 0 | 0 | 1 | 3 |
| CH2 | SLA assurance | 20 | 6 | 2 | 93 | 0 | 0 | 0 | 0 | 2 | 7 |
| CH3 | Cost-driven scheduling of service | 10 | 8 | 3 | 70 | 3 | 2 | 1 | 23 | 3 | 10 |
| C2 | Category: Lack of Readiness | 21 | 2 | 3 | 89 | 0 | 1 | 1 | 4 | 2 | 7 |
| CH4 | Task scheduling and execution | 19 | 3 | 6 | 93 | 0 | 0 | 0 | 0 | 2 | 7 |
| CH5 | Data searching | 18 | 7 | 3 | 93 | 0 | 1 | 0 | 3 | 1 | 3 |
| CH6 | Integration complexity | 20 | 3 | 3 | 86 | 2 | 0 | 0 | 7 | 2 | 7 |
| CH7 | Components partitioning | 19 | 2 | 4 | 83 | 1 | 0 | 0 | 3 | 4 | 14 |
| C3 | Category: Lack of Adoption | 22 | 2 | 2 | 88 | 1 | 1 | 1 | 9 | 1 | 3 |
| CH8 | Public cloud security concern | 23 | 2 | 4 | 97 | 0 | 1 | 0 | 2 | 0 | 0 |
| CH9 | Lack of trust | 24 | 3 | 1 | 93 | 0 | 0 | 0 | 0 | 2 | 7 |
| CH10 | Lack of sharing resources across multiple clients | 9 | 7 | 6 | 73 | 2 | 1 | 3 | 27 | 2 | 7 |
| C4 | Category: Lack of Satisfaction | 20 | 3 | 2 | 83 | 2 | 0 | 0 | 7 | 3 | 10 |
| CH11 | Achieving QoS | 20 | 6 | 3 | 97 | 0 | 0 | 0 | 0 | 1 | 3 |
| CH12 | Appropriate cloud offering | 16 | 11 | 0 | 90 | 0 | 0 | 0 | 0 | 3 | 10 |
| CH13 | Delays in response time | 10 | 6 | 4 | 66 | 2 | 1 | 3 | 20 | 4 | 14 |

Table 7 shows that ≥80% of the respondents agreed that "public cloud security concern" (90%), "lack of trust" (87%), and "integration complexity" (83%) are the foremost challenges in this domain. More than 60% of the respondents agreed on "QoS" (67%), "SLA (service level agreement) assurance" (67%), "task scheduling challenges" (63%), "effective management" (63%), and "component partitioning" (63%); we suggest implementing efficient policies for these challenges. Although there is little evidence in the literature for the "data searching" challenge, 60% of the respondents agreed that it, too, is a challenging factor in the embracing of hybrid cloud, and 53% of the respondents agreed about "appropriate cloud offering." In addition, the survey participants were invited to share their insights on any other challenges not identified through the SLR; however, the respondents put forward no new challenge or recommendation.
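The positive, negative, and neutral percentages of Table 7 follow directly from the response counts; for example, for CH1 (a minimal sketch in Python):

```python
# Positive agreement for CH1 (effective management issue) in Table 7:
# "extremely", "moderately", and "slightly agree" out of n = 30 experts.
ea, ma, sa, n = 19, 6, 4, 30
positive_pct = round((ea + ma + sa) / n * 100)
print(positive_pct)  # 97, as reported for CH1
```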
### 4.3. Crossway Comparison of Public Hybrid Cloud Computing Challenges Identified through SLR1 and Questionnaire Survey

We validated the public hybrid cloud computing challenges discovered via SLR1 with a follow-up questionnaire survey and then undertook a comparative analysis of both data sets, intended to bring out the similarities and disparities between the SLR1 and survey outcomes (see Table 8). Table 8 compares both data sets using only the positive values from the questionnaire survey. The lowest ranks were assigned to the highest values; whenever equal values occurred, we assigned an average rank and adjusted the value of the next rank accordingly. We noted that the rankings of the cited challenges differ between the two data sets. Using Spearman's rank correlation test, we obtained R = 0.54, indicating a positive correlation between how frequently a challenge is cited in SLR1 and how strongly survey participants endorse it; this allows a relative appraisal of the importance of each SLR challenge against the survey results.

Table 8: Crossway comparison of public hybrid cloud computing challenges identified through SLR1 (N = 121 occurrences) and the questionnaire survey (N = 33, positive agreement).

| S. no. | Public hybrid cloud computing challenges | SLR1 % | SLR1 rank | Survey % | Survey rank | d | d² |
|---|---|---|---|---|---|---|---|
| 1 | Public cloud security concern | 58 | 1 | 97 | 1.5 | −0.5 | 0.25 |
| 2 | Effective management issue | 28 | 2 | 97 | 1.5 | 0.5 | 0.25 |
| 3 | Integration complexity | 23 | 3 | 86 | 9 | −6 | 36 |
| 4 | Achieving QoS | 13 | 4.5 | 97 | 1.5 | 2.5 | 6.25 |
| 5 | Components partitioning | 13 | 4.5 | 83 | 10 | −5.5 | 30.25 |
| 6 | Lack of trust | 12 | 6.5 | 93 | 4.5 | 2 | 4 |
| 7 | SLA assurance | 12 | 6.5 | 93 | 4.5 | 2 | 4 |
| 8 | Task scheduling and execution | 11 | 8 | 93 | 4.5 | 3.5 | 12.25 |
| 9 | Appropriate cloud offering | 4 | 9 | 90 | 8 | 1 | 1 |
| 10 | Data searching | 0.8 | 10.5 | 93 | 4.5 | 6 | 36 |
| 11 | Cost driven scheduling of services | 0.8 | 10.5 | 70 | 12 | −1.5 | 2.25 |
| 12 | Lack of sharing of resources across multiple concerns | 0.8 | 10.5 | 73 | 11 | −0.5 | 0.25 |

with n = 12 and Σd² = 132.5. The Spearman rank correlation is computed as

$$R = 1 - \frac{6\sum d^2}{n^3 - n} = 1 - \frac{6 \times 132.5}{12^3 - 12} = 1 - \frac{795}{1728 - 12} = 1 - \frac{795}{1716} = 1 - 0.46 = 0.54. \quad (4)$$
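Equation (4) can be verified directly from the values reported in Table 8 (a quick illustrative check in Python):

```python
# Spearman's rank correlation from Table 8: n = 12 challenges, sum of
# squared rank differences as reported in the table.
n, sum_d2 = 12, 132.5
R = 1 - (6 * sum_d2) / (n**3 - n)  # 1 - 795/1716
print(round(R, 2))                 # 0.54
```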
### 4.4. Application of AHP

The AHP method and its implementation are summarized in the following steps.

Step 1: decomposition of the complex decision problem into a simple hierarchical structure. Based on the categorization from Figure 4, the hierarchical structure is built as shown in Figure 5: the first level presents the fundamental purpose of the analysis, and the categories with their corresponding challenging factors are presented at levels 2 and 3 of Figure 5, respectively.

Figure 5: Hierarchical structure of the challenging factors.

Step 2: development of the pairwise comparison matrices and calculation of the priority weights. For each group of challenges and its corresponding category, the pairwise comparison matrices were generated from the data collected via the AHP survey. The details of these comparisons for the categories "Lack of Inclination," "Lack of Readiness," "Lack of Adoption," and "Lack of Satisfaction" are given in Tables 18 and 10–12, and Table 13 presents the pairwise comparison matrix between the categories themselves. We then used normalized comparison matrices to calculate the weights of the challenging factors: each entry is normalized by dividing it by the sum of its column. The complete normalized matrices are provided in Tables 9 and 14–16, and Table 17 shows the normalized comparison matrix of the categories. For each challenge, the weight value (W) is the average of the normalized values in its row. For example, the weights in Table 14 show that CH1 is the most significant challenging factor in the "Lack of Inclination" category because it has the highest value among the factors of that category, whereas CH3 is the least significant because it has the lowest weight value.

Step 3: checking of consistency.

Table 9: Normalized matrix of the "Lack of Adoption" category.

| S. no. | CH8 | CH9 | CH10 | Weight (W) |
|---|---|---|---|---|
| CH8 | 0.61 | 0.63 | 0.36 | 0.53 |
| CH9 | 0.30 | 0.32 | 0.57 | 0.40 |
| CH10 | 0.09 | 0.05 | 0.07 | 0.07 |

λmax = 3.067, CI = 0.034, RI = 0.58, CR = 0.06 < 0.1.

Table 10: Pairwise comparison matrix for the challenging factors of the "Lack of Readiness" category.

| S. no. | CH4 | CH5 | CH6 | CH7 |
|---|---|---|---|---|
| CH4 | 1 | 2 | 3 | 6 |
| CH5 | 1/2 | 1 | 3 | 5 |
| CH6 | 1/3 | 1/3 | 1 | 4 |
| CH7 | 1/6 | 1/5 | 1/4 | 1 |
| Sum | 2.00 | 3.58 | 7.25 | 16.00 |

Table 11: Pairwise comparison matrix for the challenging factors of the "Lack of Adoption" category.

| S. no. | CH8 | CH9 | CH10 |
|---|---|---|---|
| CH8 | 1 | 2 | 5 |
| CH9 | 1/2 | 1 | 8 |
| CH10 | 1/5 | 1/8 | 1 |
| Sum | 1.64 | 3.16 | 14.00 |

Table 12: Pairwise comparison matrix for the challenging factors of the "Lack of Satisfaction" category.

| S. no. | CH11 | CH12 | CH13 |
|---|---|---|---|
| CH11 | 1 | 3 | 8 |
| CH12 | 1/3 | 1 | 6 |
| CH13 | 1/8 | 1/6 | 1 |
| Sum | 1.45 | 4.17 | 15.00 |

Table 13: Pairwise comparison matrix between the categories of challenging factors.

| S. no. | Lack of Inclination | Lack of Readiness | Lack of Adoption | Lack of Satisfaction |
|---|---|---|---|---|
| Lack of Inclination | 1 | 1/2 | 1/3 | 1/3 |
| Lack of Readiness | 2 | 1 | 1/4 | 1/4 |
| Lack of Adoption | 1/3 | 1/3 | 1 | 4 |
| Lack of Satisfaction | 1/6 | 1/5 | 1/4 | 1 |
| Sum | 2.00 | 3.58 | 7.25 | 16.00 |

Table 14: Normalized matrix of the "Lack of Inclination" category.

| S. no. | CH1 | CH2 | CH3 | Weight (W) |
|---|---|---|---|---|
| CH1 | 0.68 | 0.72 | 0.50 | 0.63 |
| CH2 | 0.22 | 0.24 | 0.43 | 0.30 |
| CH3 | 0.10 | 0.04 | 0.07 | 0.07 |

λmax = 3.099, CI = 0.049, RI = 0.58, CR = 0.09 < 0.1.

Table 15: Normalized matrix of the "Lack of Readiness" category.

| S. no. | CH4 | CH5 | CH6 | CH7 | Weight (W) |
|---|---|---|---|---|---|
| CH4 | 0.50 | 0.56 | 0.41 | 0.38 | 0.46 |
| CH5 | 0.25 | 0.28 | 0.41 | 0.31 | 0.31 |
| CH6 | 0.17 | 0.09 | 0.14 | 0.25 | 0.16 |
| CH7 | 0.09 | 0.07 | 0.03 | 0.06 | 0.06 |

Table 16: Normalized matrix of the "Lack of Satisfaction" category.

| S. no. | CH11 | CH12 | CH13 | Weight (W) |
|---|---|---|---|---|
| CH11 | 0.61 | 0.63 | 0.36 | 0.53 |
| CH12 | 0.30 | 0.32 | 0.57 | 0.40 |
| CH13 | 0.09 | 0.05 | 0.07 | 0.07 |

λmax = 3.067, CI = 0.034, RI = 0.58, CR = 0.06 < 0.1.

Table 17: Normalized matrix for the categories of challenging factors.

| S. no. | Lack of Inclination | Lack of Readiness | Lack of Adoption | Lack of Satisfaction | Weight (W) |
|---|---|---|---|---|---|
| Lack of Inclination | 0.11 | 0.05 | 0.13 | 0.13 | 0.10 |
| Lack of Readiness | 0.22 | 0.11 | 0.10 | 0.10 | 0.13 |
| Lack of Adoption | 0.33 | 0.42 | 0.39 | 0.39 | 0.38 |
| Lack of Satisfaction | 0.33 | 0.42 | 0.39 | 0.39 | 0.38 |

λmax = 4.199, CI = 0.040, RI = 0.9, CR = 0.04 < 0.1.

Table 18: Pairwise comparison matrix for the extracted challenging factors of the "Lack of Inclination" category.

| S. no. | CH1 | CH2 | CH3 |
|---|---|---|---|
| CH1 | 1 | 3 | 7 |
| CH2 | 1/3 | 1 | 6 |
| CH3 | 1/7 | 1/6 | 1 |
| Sum | 1.47 | 4.17 | 14.00 |

We can measure the degree of consistency for a given category, here "Lack of Readiness," via the parameters discussed in Section 3:

$$\lambda_{\max} = \sum_{j=1}^{n} \Big(\sum C_j\Big) W_j, \quad (5)$$

where ΣC_j is the sum of column j of the comparison matrix [C] (Table 10) and W is the weight vector (Table 15). Thus

$$\lambda_{\max} = 2.00 \times 0.46 + 3.58 \times 0.31 + 7.25 \times 0.16 + 16.00 \times 0.06 = 0.92 + 1.1098 + 1.16 + 0.96 = 4.1498, \quad (6)$$

$$CI = \frac{\lambda_{\max} - n}{n - 1} = \frac{4.1498 - 4}{4 - 1} = 0.0499, \qquad RI = 0.9, \qquad CR = \frac{CI}{RI} = \frac{0.0499}{0.9} = 0.055 < 0.1,$$

so the consistency is acceptable.

Table 19: Summarized list of the ranked challenging factors.

| Category | Category weight | Challenging factor | Local weight | Local rank | Global weight | Priority |
|---|---|---|---|---|---|---|
| Lack of Inclination | 0.10 | CH1 | 0.63 | 1 | 0.063 | 5 |
| | | CH2 | 0.30 | 2 | 0.030 | 8 |
| | | CH3 | 0.07 | 3 | 0.007 | 13 |
| Lack of Readiness | 0.13 | CH4 | 0.46 | 1 | 0.060 | 6 |
| | | CH5 | 0.31 | 2 | 0.040 | 7 |
| | | CH6 | 0.16 | 3 | 0.020 | 11 |
| | | CH7 | 0.06 | 4 | 0.008 | 12 |
| Lack of Adoption | 0.38 | CH8 | 0.53 | 1 | 0.201 | 1 |
| | | CH9 | 0.40 | 2 | 0.152 | 2 |
| | | CH10 | 0.07 | 3 | 0.026 | 10 |
| Lack of Satisfaction | 0.38 | CH11 | 0.53 | 1 | 0.201 | 1 |
| | | CH12 | 0.40 | 2 | 0.152 | 2 |
| | | CH13 | 0.07 | 3 | 0.026 | 9 |

The result indicates that the CR value is lower than 0.1 and thus within the acceptable range.
The same consistency check is applied to the challenging factors of all the other categories; the corresponding CR estimates are reported with Tables 9, 14, 16, and 17.

Step 4: provision of local and global rankings for the challenging factors and their categories. For each of the stated challenges, the local weight (LW) and global weight (GW) are presented in Table 19. The local weight reflects the importance of a challenging factor within its own category, while the global weight gives its priority across all 13 identified challenging factors. The LW was determined through the pairwise comparisons within each category (see Step 3). For example, Table 19 shows that the LW of CH1 (0.63) is the highest in the "Lack of Inclination" category; CH1 is therefore the top-ranked challenging factor in that category. Likewise, the GW of each reported challenging factor is calculated by multiplying its LW by the weight of its category. For example, the GW of CH1 = 0.10 × 0.63 = 0.063, where 0.10 is the weight of its category (Lack of Inclination) and 0.63 is its LW. The same calculation yields the GW of each remaining challenging factor, as presented in Table 19.

Step 5: final prioritization of the challenging factors. The final priority of the challenging factors is based on their GW values, as presented in Table 19: factors with a higher GW are given higher priority across all categories. In Table 19, CH8 (public cloud security concern) and CH11 (achieving QoS) are the top-ranked challenging factors, with the highest GW value (0.201). We further note that CH9 (lack of trust) and CH12 (appropriate cloud offering) share the second-highest GW (0.152) among the challenging factors that could adversely affect the adoption of hybrid cloud computing from the client organizations' perspective.
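The global weights and final ranking of Table 19 follow mechanically from the category weights and local weights; a minimal sketch in Python, with the values taken from Table 19:

```python
# Global weight (GW) of each challenge = category weight x local weight (LW).
categories = {
    "Lack of Inclination":  (0.10, {"CH1": 0.63, "CH2": 0.30, "CH3": 0.07}),
    "Lack of Readiness":    (0.13, {"CH4": 0.46, "CH5": 0.31, "CH6": 0.16, "CH7": 0.06}),
    "Lack of Adoption":     (0.38, {"CH8": 0.53, "CH9": 0.40, "CH10": 0.07}),
    "Lack of Satisfaction": (0.38, {"CH11": 0.53, "CH12": 0.40, "CH13": 0.07}),
}
gw = {ch: round(cat_w * lw, 3)
      for cat_w, local in categories.values()
      for ch, lw in local.items()}

# Challenges with the highest global weight come first in the final ranking.
for ch, weight in sorted(gw.items(), key=lambda kv: -kv[1]):
    print(ch, weight)  # CH8 and CH11 lead with ~0.201, then CH9 and CH12 at ~0.152
```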
### 4.5. Practices for Critical Challenges Identified through SLR2 and Empirical Study

In this section, a list of practices is presented for each critical challenge (CC) identified through SLR1. We consider 8 challenges critical because they show high frequencies in both the SLR1 and the empirical study findings. In total we compiled 46 practices for the 8 CCs, identified via SLR2 and the questionnaire surveys with 30 experts. Under each challenge we present the practices that address it; in Tables 20–27, CC denotes a critical challenge and CCP a practice for addressing it.

Table 20: Practices for addressing the critical challenge "SLA assurance" (CC#1).

| S. no. | Practices for addressing SLA assurance | Frequency in SLR2 (N = 90) | Positive % |
|---|---|---|---|
| CCP#1.1 | Ensure that the maximum availability of services provided by cloud providers and the duration of the contract period are explicitly defined in the SLA. | 2 | 98 |
| CCP#1.2 | Define explicitly in the SLA the terms and conditions regarding the security of the clients' data. | 8 | 81 |
| CCP#1.3 | Keep clients aware of where processes run and where data is stored, to assure the security of the clients' data. | 2 | 96 |
| CCP#1.4 | To mitigate the risk of a cloud provider failure, specify reversion strategies in the SLA; these put cloud customers in a much stronger position when renegotiating a cloud service contract, because they know they could readily switch providers if needed. | 2 | 94 |
| CCP#1.5 | Perform third-party audits regularly to monitor the cloud service provider's compliance with the agreed terms. | 4 | 92 |
| CCP#1.6 | Ensure that service level agreements state the contingency plans in case of a breakdown of the system. | 6 | 97 |
| CCP#1.7 | Use an on-premises gateway in the hybrid cloud to control the applications and data that flow between the two parts. | 6 | 84 |
| CCP#1.8 | Categorize the data into two parts, sensitive and nonsensitive; place sensitive data on the on-premises side (private cloud) and keep nonsensitive data in the public cloud. | 16 | 100 |
## 5. Discussion

### 5.1. RQ1 (Challenges Faced by Client Organizations in the Adoption of Hybrid Cloud Computing)

We closely analyzed 120 articles and extracted a total of 13 challenging factors that could adversely affect the adoption of hybrid cloud computing; Section 4 addresses and summarizes these factors in detail. Following the principles introduced by Shameem et al. [45], the reported challenging factors were further classified and presented as a theoretical model. An empirical questionnaire study with 30 experts, comprising hybrid cloud computing professionals as well as academic researchers, validates the results of SLR1.

### 5.2. RQ2 (Prioritization Process for Hybrid Cloud Challenging Factors)

For the prioritization we selected the AHP approach, introduced by Saaty [44], because its classical multiple-criteria decision-making (MCDM) nature makes it well suited to this type of problem and yields a precise ranking of the given variables. In this paper we use AHP to prioritize the factors that client organizations face in the adoption of hybrid cloud computing. Table 19 indicates that CH8 (public cloud security concern) and CH11 (achieving QoS) are the highest-ranking challenging factors faced by client organizations in the adoption of hybrid cloud computing, because their GW (0.201) is higher than that of all the other reported challenging factors.
### 5.3. RQ3 (Practices for the Identified Critical Challenges Faced by Client Organizations in the Adoption of Hybrid Cloud Computing)

To address this question, we performed a second systematic literature review, SLR2, and extracted a total of 46 practices from a sample of 90 papers. These practices are discussed and summarized in detail in Section 4 (Tables 20–27). The results of SLR2 were empirically checked through a questionnaire survey with 30 experts, comprising hybrid cloud computing professionals and academic scholars.

Table 21: Practices for addressing the critical challenge "effective management issue" (CC#2).

| S. no. | Practices for addressing effective management issue | Frequency in SLR2 (N = 90) | Positive % |
|---|---|---|---|
| CCP#2.1 | Use management tools developed by working groups such as the Open Grid Forum, the Open Cloud Computing Interface (OCCI), and the Storage Network Industry Association (SNIA) to monitor the performance of both internal and external resources. | 2 | 87 |
| CCP#2.2 | Create a plan for release and deployment management that is appropriate for operating in cloud settings. | 1 | 91 |
| CCP#2.3 | Put strong service portfolio management in place for a continual service improvement process. | 1 | 87 |
| CCP#2.4 | Set a plan for capacity management (business, service, and component capacity management) to improve performance of both services and resources. | 1 | 88 |
| CCP#2.5 | Implement tools such as Ansible, CFEngine, Chef, Elastra, RightScale, Puppet, and Salt for configuration and change management, to control the lifecycle of all changes and enable beneficial changes with minimum disruption to IT services. | 1 | 87 |
| CCP#2.6 | Keep backups of applications and data on on-premises servers and storage devices to avoid data loss and time delays in case of failures in the cloud platform. | 4 | 97 |
| CCP#2.7 | Use a cost-effectiveness model to decide which tasks are more economical on the cloud or on internal resources. | 3 | 84 |
| CCP#2.8 | Perform efficient planning and devise implementation strategies before moving to the hybrid cloud. | 2 | 97 |

Table 22: Practices for addressing the critical challenge "integration complexity" (CC#3).

| S. no. | Practices for addressing integration complexity | Frequency in SLR2 (N = 90) | Positive % |
|---|---|---|---|
| CCP#3.1 | Use available infrastructures such as Eucalyptus, OpenNebula, and open source software frameworks to assist integration (front-end, data, and process integration) in the hybrid cloud. | 3 | 87 |
| CCP#3.2 | Use standard APIs (application programming interfaces) to integrate applications and data between the private and public clouds. | 5 | 97 |
| CCP#3.3 | Adopt technologies such as information integration, enterprise application integration, and the enterprise service bus for effective integration. | 3 | 90 |
| CCP#3.4 | Establish an integration mechanism that can be controlled dynamically in response to changing business requirements over time. | 1 | 79 |
| CCP#3.5 | Select from the number of vendors offering data integration solutions, including companies such as Dell Boomi, IBM, Informatica, Pervasive Software, Liaison Technologies, and Talend. | 1 | 84 |
Table 23: Practices for addressing the critical challenge "achieving QoS" (CC#4).

| S. no. | Practices for addressing QoS | Frequency in SLR2 (N = 90) | Positive % |
|---|---|---|---|
| CCP#4.1 | Select a cloud provider that offers improved services on the following QoS parameters/attributes: price, offered load, job deadline constraints, energy consumption of the integrated infrastructure, security, etc. | 1 | 97 |
| CCP#4.2 | Ensure that access to the internal infrastructure is possible only through secure communications. | 3 | 94 |
| CCP#4.3 | Follow secure communication protocols (such as Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL)) when communicating with endpoint applications and databases. | 1 | 97 |
| CCP#4.4 | Select a public cloud provider that can offer the capacity needed by the internal cloud and scale dynamically. | 1 | 93 |
| CCP#4.5 | Select a cloud provider that can ensure a high degree of service availability at all times. | 2 | 97 |

Table 24: Practices for addressing the critical challenge "component partitioning" (CC#5).

| S. no. | Practices for addressing component partitioning | Frequency in SLR2 (N = 90) | Positive % |
|---|---|---|---|
| CCP#5.1 | When distributing an application over a hybrid cloud, keep the following parameters in mind: (i) data disclosure risk; (ii) resource allocation cost; (iii) private cloud load. | 1 | 100 |
| CCP#5.2 | When migrating application components from the private to the public cloud in a hybrid environment, implement migration progress management functions such as Pacer, which can accurately predict migration time and coordinate the migration of multiple application components. | 2 | 87 |
| CCP#5.3 | Divide the workload across local and public clouds so that workloads can move among resource pools, resulting in a well-designed cloud environment. | 1 | 94 |
| CCP#5.4 | Replicate part of the data to the public side to enable distribution of the computation. | 1 | 88 |
| CCP#5.5 | Consider a sensitivity-aware data partitioning mechanism such as Sedic, which guarantees that no sensitive data is exposed to the public cloud. | 1 | 87 |

Table 25: Practices for addressing the critical challenge "lack of trust" (CC#6).

| S. no. | Practices for addressing lack of trust | Frequency in SLR2 (N = 90) | Positive % |
|---|---|---|---|
| CCP#6.1 | Establish trustworthy relationships with cloud service providers through the service level agreement (SLA). | 3 | 97 |
| CCP#6.2 | Ensure the provision of security at different levels, i.e., how cloud providers implement, deploy, and manage security. | 1 | 97 |
| CCP#6.3 | Keep in mind that clients remain ultimately responsible for compliance and protection of their critical data, even when a workload has moved to the cloud. | 1 | 94 |
| CCP#6.4 | Use the services of a broker to negotiate trust relationships with cloud providers. | 4 | 90 |
| CCP#6.5 | Verify what certifications the cloud providers hold that can assure the providers' service quality. | 3 | 84 |

Table 26: Practices for addressing the critical challenge "public cloud security concern" (CC#7).

| S. no. | Practices for addressing cloud security concern | Frequency in SLR2 (N = 90) | Positive % |
|---|---|---|---|
| CCP#7.1 | Cloud security should be controlled by the client organization and not by the cloud vendor. | 2 | 97 |
| CCP#7.2 | Provide effective authentication for users based on access control rights: only users authorized to access the private cloud are directed to it (and may also access the public cloud), while the remaining users are directed to the public cloud and may access it only. | 8 | 81 |
| CCP#7.3 | Client organizations should use a third-party tool to enhance security. | 2 | 97 |
| CCP#7.4 | Client organizations should utilize their own private resources as much as possible and outsource a minimum of tasks to the public cloud to maximize security. | 2 | 94 |
| CCP#7.5 | Client organizations should carefully manage virtual images in a hybrid environment using tools such as firewalls, IDS/IPS, and log inspection. | 4 | 93 |
| CCP#7.6 | Data should be encrypted by the client before being outsourced to cloud computing. | 6 | 97 |
| CCP#7.7 | An on-premises gateway should be used in a hybrid cloud to control the applications and data that flow between the two parts. | 6 | 84 |
| CCP#7.8 | Categorize the data into two parts, sensitive and nonsensitive; place sensitive data on the on-premises side (private cloud) and keep nonsensitive data in the public cloud. | 16 | 100 |

Table 27: Practices for addressing the critical challenge "task scheduling and execution" (CC#8).

| S. no. | Practices for addressing task scheduling and execution | Frequency in SLR2 (N = 90) | Positive % |
|---|---|---|---|
| CCP#8.1 | Use an efficient scheduling mechanism/algorithm to enable efficient utilization of the on-premises resources and to minimize the task outsourcing cost while meeting the task completion time requirements. Such scheduling algorithms include Hybrid Cloud Optimized Cost (HCOC), Deadline-Markov Decision Process (MDP), and Heterogeneous Earliest Finish Time (HEFT), based on resource discovery, filtering, selection, and task submission. | 1 | 87 |
| CCP#8.2 | Execute part of the application on the public cloud to produce output within the deadline, since public cloud resources have much higher processing power than private cloud resources; executing the whole application on the public cloud, on the other hand, would be costly. | 4 | 80 |
| CCP#8.3 | Consider the capacity of the communication channels in the hybrid cloud, because it affects the cost of workflow execution. | 1 | 97 |
| CCP#8.4 | Implement a workflow management system such as CWMS (Cloud Workflow Management System) to increase productivity and efficiency. | 1 | 97 |

### 5.4. RQ4 (Taxonomy for the Challenging Factors)

Figure 6 shows the taxonomy of the challenging factors, generated from both the LW and GW values of each challenging factor and its corresponding category on the basis of the AHP prioritization. The figure shows that "Lack of Adoption" (0.38) and "Lack of Satisfaction" (0.38) are the categories most prioritized by the survey experts; their weights are the highest among all categories.

Figure 6: Analytical hierarchy process (AHP) based prioritization taxonomy of the challenging factors and their categories.

We further note that CH8 (public cloud security concern) and CH11 (achieving QoS), which belong to these categories, are the highest-ranking challenging factors, with a GW value (0.201) higher than that of all other reported challenging factors.
Following the principles introduced by Shameem et al. [45], the reported challenging factors were further classified and presented as a theoretical model. A questionnaire survey with 30 experts, drawn from hybrid cloud computing professionals as well as academic researchers, empirically validates the results of SLR1.

## 5.2. RQ2 (Prioritization Process for Hybrid Cloud Challenging Factors)

For the prioritization process we selected the AHP approach, a classical multiple-criteria decision-making (MCDM) method introduced by Saaty [44], because it is well suited to this type of problem and ranks and prioritizes the given variables accurately and precisely. In this paper, we use AHP to prioritize the factors faced by client organizations in the adoption of hybrid cloud computing. Table 19 indicates that CH8 (public cloud security concern) and CH11 (achieving QoS) are the highest-ranking challenging factors faced by client organizations in the adoption of hybrid cloud computing, because their GW (0.201) is higher than those of all the other reported challenging factors.

## 5.3. RQ3 (Practices for the Identified Critical Challenges Faced by Client Organizations in the Adoption of Hybrid Cloud Computing)

For this purpose, we performed a second systematic literature review, SLR2, and extracted a total of 46 practices from a sample of 90 papers. These practices are discussed and summarized in detail in Section 4 (Tables 20–27). The results of SLR2 were empirically checked through a questionnaire survey with 30 experts drawn from hybrid cloud computing professionals and academic scholars.
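To make the AHP weighting underlying RQ2, and the taxonomy below, concrete, here is a minimal Python sketch of deriving local priority weights from a pairwise comparison matrix by the geometric-mean method, together with Saaty's consistency check. The 3×3 judgment matrix is made up for illustration; it is not the survey data of this study.

```python
import math

def ahp_weights(P):
    """Local priority weights from a pairwise comparison matrix P
    (P[i][j] says how much more important criterion i is than j),
    via the geometric-mean method."""
    n = len(P)
    gm = [math.prod(row) ** (1.0 / n) for row in P]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(P, w):
    """Saaty's CR = CI / RI; values below 0.10 are conventionally acceptable."""
    n = len(P)
    lam = sum(sum(P[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # random-index values for small n
    return ci / ri

# Illustrative comparison of three challenging factors (made-up judgments).
P = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w = ahp_weights(P)
print([round(x, 3) for x in w])           # local weights (LW), sum to 1
print(round(consistency_ratio(P, w), 3))  # well below the 0.10 threshold
```

In the taxonomy of Section 5.4, a factor's global weight (GW) is its local weight multiplied by the weight of its parent category.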
## 5.4. RQ4 (Taxonomy for the Challenging Factors)
Figure 6 shows the taxonomy of the challenging factors, generated by measuring both the local weight (LW) and the global weight (GW) of each challenging factor within its corresponding category, based on the AHP prioritization. The figure shows that “Lack of Adoption” (0.38) and “Lack of Satisfaction” (0.38) are considered the most prioritized categories by the survey experts; the weights of these categories are the highest among all categories.

Figure 6. Analytical hierarchy process (AHP) based prioritization taxonomy of challenging factors with their categories.

Further, we note that CH8 (public cloud security concern) and CH11 (achieving QoS) are the highest-ranking challenging factors, also listed in these categories, because their GW value (0.201) is higher than those of all other challenging factors.

## 6. Summarizing the Research Questions

This research aims to deliver a taxonomy, on the basis of the AHP technique, for the challenging factors faced by client organizations in hybrid cloud computing. The conclusions of this article help practitioners navigate hybrid cloud computing practices effectively. Table 28 describes the research questions.

Table 28. Summary of the research questions.

| S. no. | Research question | Description |
| --- | --- | --- |
| RQ1 | What are the challenging factors, as described in the literature and industrial survey, to be avoided by client organizations in the adoption of hybrid cloud computing? | All the challenging factors are presented in Table 4. |
| RQ2 | How could the identified challenging factors be prioritized using the AHP approach? | The AHP approach is followed to prioritize the identified challenging factors. Details are presented in Section 4.4. The summarized list is presented in Table 19. |
| RQ3 | What are the practices, as identified in the literature and industrial survey, to be followed by vendor organizations to build a successful relationship with client organizations in the adoption of hybrid cloud computing? | The practices for the identified challenges are presented in Tables 20–27. |
| RQ4 | What would be the taxonomy of the identified challenging factors that could assist in the successful relationship between client and vendor organizations in the adoption of hybrid cloud computing? | The taxonomy of the identified challenging factors is developed by categorizing the challenges into four main categories, “Lack of Inclination,” “Lack of Readiness,” “Lack of Adoption,” and “Lack of Satisfaction” (Figure 4), and prioritizing them using the AHP technique. The basic purpose of this taxonomy is to highlight the local weight, which shows the priority order of each challenging factor within its category, and the global weight, which shows the effect of a particular challenge on the overall study objective. Furthermore, the taxonomy provides a robust framework that could help practitioners and researchers handle the major issues of hybrid cloud computing activities. |

## 7. Research Limitations

We adopted the SLR for the identification of challenging factors; consequently, there is a chance that we might have missed some relevant paper(s) for inclusion in the final selection for extraction of the relevant challenging factors.
However, this is not a systematic omission, as other researchers have conducted the same process for the identification and categorization of factors/variables in other domains [47–49, 64].

Owing to the shortage of time and resources, the sample size chosen for the study was 30 (i.e., n = 30), so we cannot claim generalized results. Nonetheless, other scholars from the software engineering domain have performed similar studies with the same sample size [65, 66].

Construct validity refers to testing the accuracy of the appraisal on the basis of the provided variables. The literature discussed here describes the challenging factors and their practices, which were tested empirically using an online survey strategy. The findings of this empirical study show that the given challenges and practices are linked to the findings of SLR1 and SLR2, which supports the accuracy of the appraisal scale we chose.

Similarly, internal validity concerns the assessment of a particular study’s findings and their interpretation. In this respect, we conducted a pilot study with members of SERGUOM, which offers an appropriate degree of internal validity.

External validity concerns the generalizability of a research article’s findings. In this study, the majority of the survey respondents were based in Pakistan, which poses a challenge to generalizing the findings to other countries. There are, however, still some participants from other countries and, above all, we have not found any substantial variation between the findings of the SLR and the industrial survey; thus, we are confident that the results can be generalized from the data sample. In addition, the majority of the respondents were experienced practitioners in the field, so we conclude that they gave adequate input on the basis of their understanding of the challenging factors and their practices.

## 8. Conclusion and Future Work

The importance of hybrid cloud computing activities inspired us to build a taxonomy focused on prioritizing the challenging factors that could pose risks for hybrid cloud computing adoption by client organizations. The results highlight the key areas that need to be resolved before hybrid cloud computing practices are launched. SLR1 was performed to classify the challenging factors, and the results of the literature review were confirmed by empirical research.

In total, 13 challenging factors were found via the literature review and further classified into four main categories: “Lack of Inclination,” “Lack of Readiness,” “Lack of Adoption,” and “Lack of Satisfaction” (Figure 4). In addition, the AHP technique was implemented to prioritize these challenging factors and their categories. The findings of the AHP technique reveal that “Lack of Adoption” and “Lack of Satisfaction” are the most significant categories and that CH8 (public cloud security concern) and CH11 (achieving QoS) are the highest-ranking challenging factors, also listed in these categories, because their GW value (0.201) is higher than the values of all the other challenging factors. This study provides a knowledge base in the domain of hybrid cloud computing for professionals and academic scholars.
It also offers state-of-the-art work in this field that may serve as valuable input to academic studies. In summary, this study contributes a thorough overview of the perspectives of experts in hybrid cloud computing, illustrating various facets of the field. The future goal of this research is to produce a robust model that could help client organizations analyze their current capabilities in hybrid cloud computing and adopt best practices for further improvement.

Our ultimate aim is to develop a hybrid cloud adoption assessment model (HCAAM). This paper contributes two components of the HCAAM, i.e., the identification of challenges in the adoption of hybrid cloud and of practices for these challenges, via SLR and questionnaire survey. The final outcome of the research is the development of the HCAAM. The proposed model will be developed based on the results drawn from the systematic literature reviews and the survey, and will be supported by case studies, which will provide a more comprehensive theoretical and practical assessment of an organization’s maturity in the context of hybrid cloud adoption.

---

*Source: 1024139-2021-09-03.xml*
2021
# Tropical Cryptography Based on Multiple Exponentiation Problem of Matrices

**Authors:** Huawei Huang; Chunhua Li
**Journal:** Security and Communication Networks (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1024161

---

## Abstract

Because there is no multiplication of numbers in tropical algebra and the problem of solving systems of polynomial equations in tropical algebra is NP-hard, several public key cryptosystems based on tropical semirings have been proposed in recent years. However, most of them have defects. This paper proposes new public key cryptosystems based on tropical matrices. Their security relies on the difficulty of finding the multiple exponentiation of tropical matrices, given the product of the matrix powers, when the generating matrix of the subsemiring is hidden. This problem is a generalization of the discrete logarithm problem, but in general it cannot be reduced to the discrete logarithm problem or the hidden subgroup problem in polynomial time. Since the generating matrix of the commutative subsemirings used is hidden and the public key matrices are products of more than two unknown matrices, the cryptosystems can resist the KU attack and other known attacks. The cryptosystems based on the multiple exponentiation problem can be considered potential postquantum cryptosystems.

---

## Body

## 1. Introduction

Contemporary public key cryptography relies mainly on two computational problems: the integer factorization problem and the discrete logarithm problem. For example, the Diffie–Hellman key exchange protocol and the ElGamal encryption scheme are based on the discrete logarithm problem [1, 2]. Shor proposed a quantum algorithm which can solve the integer factorization problem and the discrete logarithm problem in polynomial time on a quantum computer [3]. It is therefore a research focus of cryptography to develop other new cryptosystems. Traditional cryptosystems are based on various commutative rings, such as finite fields, residue class rings, and the integer ring [4–8]. Many cryptologists hope to find other algebraic structures on which to build new public key cryptosystems.

In 2007, Maze, Monico, and Rosenthal proposed one of the first cryptosystems based on semigroups and semirings [9], using some ideas from [10] as well as from their previous article [11]. However, it was then broken by Steinwandt et al. [12]. Atani published a cryptosystem using semimodules over factor semirings [13]. Durcheva applied some idempotent semirings to construct cryptographic protocols [14]. A survey on the semigroup action problem and its cryptographic applications was given by Goel, Gupta, and Dass [15].

Grigoriev and Shpilrain proved that the problem of solving systems of min-plus polynomial equations in tropical algebra is NP-hard and suggested using a min-plus (tropical) semiring to design public key cryptosystems [16]. An obvious advantage of using tropical algebras as platforms is unparalleled efficiency, because in tropical schemes one does not have to perform any multiplications of numbers, since tropical multiplication is the usual addition. But “tropical powers” of an element exhibit some patterns, even if such an element is a matrix over a tropical algebra. This weakness was exploited by Kotov and Ushakov to arrange a fairly successful attack on one of Grigoriev and Shpilrain’s schemes [17].
In 2019, Grigoriev and Shpilrain improved the original scheme and proposed public key cryptosystems based on the semidirect product of the tropical matrix semiring [18]. However, some attacks on the improved protocols were recently suggested by Rudy and Monico [19], Isaac and Kahrobaei [20], and Muanalifah and Sergeev [21]. In order to remedy Grigoriev–Shpilrain’s protocols, Muanalifah and Sergeev suggested modifications that use two classes of commuting matrices in tropical algebra [22]. But the authors also pointed out that their modifications cannot resist the generalized KU attack, since the user’s secret matrix can still be expressed in the linear form of the power of the basic elementary matrix.

Our contribution: This paper provides a new public key cryptosystem based on tropical matrices. The security of the cryptosystem relies on the difficulty of the problem of finding the multiple exponentiation of tropical matrices, which is a class of semigroup action problem proposed by Maze in [11]. The multiple exponentiation problem is also a generalization of the discrete logarithm problem. However, the problem generally cannot be reduced to the discrete logarithm problem or the hidden subgroup problem in polynomial time. Since the generating matrix of the commutative subsemirings used is hidden and the public key matrices are products of more than two unknown matrices, the cryptosystems can resist the KU attack and other known attacks. It seems that our cryptosystems based on the multiple exponentiation problem can be considered potential postquantum cryptosystems.

The remainder of this paper is organized as follows. In Section 2, some preliminaries on tropical semirings are given. In Section 3, we define the multiple exponentiation problem of tropical matrices. In Section 4, a key exchange protocol and a public key encryption scheme based on the multiple exponentiation problem are presented. Finally, in Section 5, the possible attacks, parameter selection, and efficiency of the cryptosystems are discussed.

## 2. Preliminaries

Notation. In this paper, matrices are generally denoted by capital letters. In order to facilitate future reference, frequently used notations are listed below with their meanings.

- $\mathbb{Z}^{+}$ is the set of all non-negative integers;
- $\mathbb{Z}^{+}[x]$ is the polynomial semiring over $\mathbb{Z}^{+}$;
- $M_n(\mathbb{Z}^{+})$ is the set of all $n\times n$ matrices over $\mathbb{Z}^{+}$;
- $\mathbb{Z}^{+}[C]$ is the set of all polynomials of a matrix $C\in M_n(\mathbb{Z}^{+})$;
- $Z$ is the tropical semiring of integers $\mathbb{Z}\cup\{\infty\}$;
- $Z[x]$ is the tropical polynomial semiring over $Z$;
- $M_k(Z)$ is the set of all $k\times k$ tropical matrices over $Z$;
- $Z[N]$ is the set of all tropical polynomials of a tropical matrix $N\in M_k(Z)$;
- $\vec{H}$ is the vector $(H_1,H_2,\dots,H_n)$, where $H_i\in Z[N]$.

### 2.1. Tropical Semiring over Integer

A semiring is an algebraic structure similar to a ring, but without the requirement that each element must have an additive inverse.

Definition 1 (see [23]). A nonempty set $\mathcal{R}$ with two binary operations $+$ and $\cdot$ is called a semiring if (1) $(\mathcal{R},+)$ is a commutative monoid with identity element 0; (2) $(\mathcal{R},\cdot)$ is a monoid with identity element $1\neq 0$; (3) both distributive laws hold in $\mathcal{R}$; (4) $a\cdot 0=0\cdot a=0$ for all $a\in\mathcal{R}$. If a semiring’s multiplication is commutative, then it is called a commutative semiring.

Definition 2 (see [16]). The tropical commutative semiring of integers is the set $Z=\mathbb{Z}\cup\{\infty\}$ with the two operations
$$x\oplus y=\min(x,y),\qquad x\otimes y=x+y. \tag{1}$$
The special element $\infty$ satisfies the equations
$$\infty\oplus x=x,\qquad \infty\otimes x=\infty. \tag{2}$$
It is straightforward to see that $(Z,\oplus,\otimes)$ is a commutative semiring. In fact, $\infty$ is the identity element of $(Z,\oplus)$ and 0 is the identity element of $(Z,\otimes)$.
Just as in the classical case, we define the set of all tropical polynomials over $Z$ in the indeterminate $x$. Let
$$Z[x]=\{a_n\otimes x^n\oplus a_{n-1}\otimes x^{n-1}\oplus\cdots\oplus a_1\otimes x\oplus a_0 \mid a_i\in Z \text{ and } n\geq 0\}. \tag{3}$$
The tropical polynomial $\oplus$ and $\otimes$ operations in $Z[x]$ are similar to classical polynomial addition and multiplication; however, every “$+$” calculation has to be substituted by the $\oplus$ operation of $Z$, and every “$\cdot$” calculation by the $\otimes$ operation of $Z$. It is easy to verify that $Z[x]$ is a commutative semiring with respect to the tropical polynomial $\oplus$ and $\otimes$ operations.

### 2.2. Tropical Matrix Semiring over Integer

$M_k(Z)$ denotes the set of all $k\times k$ matrices over $Z$. We can also define the tropical matrix $\oplus$ and $\otimes$ operations. To perform the $A\oplus B$ operation, the elements $m_{ij}$ of the resulting matrix $M$ are set equal to $a_{ij}\oplus b_{ij}$. The tropical matrix $\otimes$ operation is similar to the usual matrix multiplication; however, every “$+$” calculation has to be substituted by the $\oplus$ operation of $Z$, and every “$\cdot$” calculation by the $\otimes$ operation of $Z$. $M_k(Z)$ is a noncommutative semiring with respect to the tropical matrix $\oplus$ and $\otimes$ operations.

Example 1.
$$\begin{pmatrix}4&-5\\27&0\end{pmatrix}\oplus\begin{pmatrix}10&3\\1&9\end{pmatrix}=\begin{pmatrix}4&-5\\1&0\end{pmatrix},\quad
\begin{pmatrix}4&-5\\27&0\end{pmatrix}\otimes\begin{pmatrix}10&3\\1&9\end{pmatrix}=\begin{pmatrix}-4&4\\1&9\end{pmatrix},\quad
\begin{pmatrix}10&3\\1&9\end{pmatrix}\otimes\begin{pmatrix}4&-5\\27&0\end{pmatrix}=\begin{pmatrix}14&3\\5&-4\end{pmatrix}. \tag{4}$$

The role of the identity matrix $I$ is played by the matrix that has “0”s on the diagonal and $\infty$ elsewhere. Similarly, a scalar matrix is a matrix with an element $\lambda\in Z$ on the diagonal and $\infty$ elsewhere. Such a matrix commutes with any other square matrix (of the same size). Multiplying a matrix by a scalar amounts to multiplying it by the corresponding scalar matrix.

Example 2.
$$3\otimes\begin{pmatrix}4&-5\\27&0\end{pmatrix}=\begin{pmatrix}3&\infty\\\infty&3\end{pmatrix}\otimes\begin{pmatrix}4&-5\\27&0\end{pmatrix}=\begin{pmatrix}7&-2\\30&3\end{pmatrix}. \tag{5}$$

Tropical diagonal matrices, then, have something on the diagonal and $\infty$ elsewhere. Note that, in contrast to the “classical” situation, it is rather rare for a “tropical” matrix to be invertible. More specifically, the only invertible tropical matrices are those obtained from a diagonal matrix by permuting rows and/or columns.

For a matrix $N\in M_k(Z)$, denote
$$Z[N]=\{a_n\otimes N^n\oplus a_{n-1}\otimes N^{n-1}\oplus\cdots\oplus a_1\otimes N\oplus a_0\otimes I \mid a_i\in Z\text{ and } n\geq 0\}, \tag{6}$$
where $N^m$ means $N\otimes\cdots\otimes N$ ($m$ times). It is clear that $Z[N]$ is a commutative subsemiring of $M_k(Z)$ with respect to tropical matrix addition and multiplication.
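As a quick sanity check of these definitions, here is a minimal Python sketch of the tropical matrix operations just defined; it reproduces Example 1, with `INF` standing in for $\infty$. This is an illustrative sketch, not part of the scheme specification.

```python
import math

INF = math.inf  # plays the role of the tropical "infinity"

def trop_add(A, B):
    """Tropical matrix addition: entrywise minimum."""
    return [[min(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def trop_mul(A, B):
    """Tropical matrix multiplication: min-plus product."""
    k = len(A)
    return [[min(A[i][t] + B[t][j] for t in range(k)) for j in range(k)]
            for i in range(k)]

A = [[4, -5], [27, 0]]
B = [[10, 3], [1, 9]]
print(trop_add(A, B))   # [[4, -5], [1, 0]]   -- as in Example 1
print(trop_mul(A, B))   # [[-4, 4], [1, 9]]
print(trop_mul(B, A))   # [[14, 3], [5, -4]]  -- the tropical product is noncommutative
```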
## 3. Multiple Exponentiation Problem of Tropical Matrices

### 3.1. Companion Matrix of a Polynomial over the Integer Ring $\mathbb{Z}$

Let $f_0,f_1,\dots,f_{n-1}$ be non-negative integers and $f_0>0$. The companion matrix of the monic polynomial $f(x)=-f_0-f_1x-\cdots-f_{n-1}x^{n-1}+x^n$ is given by the $n\times n$ matrix
$$C=\begin{pmatrix}0&0&\cdots&0&f_0\\1&0&\cdots&0&f_1\\0&1&\cdots&0&f_2\\\vdots&\vdots&\ddots&\vdots&\vdots\\0&0&\cdots&1&f_{n-1}\end{pmatrix}. \tag{7}$$
Note that the entries of $C$ are all non-negative. Denote
$$\mathbb{Z}^{+}[C]=\{p(C)\mid p(x)\in\mathbb{Z}^{+}[x]\}. \tag{8}$$
It is easy to verify that $\mathbb{Z}^{+}[C]$ is a commutative subsemiring of $M_n(\mathbb{Z})$.

### 3.2. The Matrix Semigroup $\mathbb{Z}^{+}[C]$ Acting on $Z[N]^n$

Let $f_0,f_1,\dots,f_{n-1}$ be non-negative integers and $f_0>0$. Let $C$ be the companion matrix of the polynomial $f(x)=-f_0-f_1x-\cdots-f_{n-1}x^{n-1}+x^n$. Let $A=(a_{ij})_{n\times n}\in\mathbb{Z}^{+}[C]$, $N\in M_k(Z)$, and $\vec{H}=(H_1,H_2,\dots,H_n)\in Z[N]^n$. Consider an action $*$ of the multiplicative semigroup $\mathbb{Z}^{+}[C]$ on the Cartesian product $Z[N]^n$ as below:
$$A*\vec{H}=A*(H_1,H_2,\dots,H_n)=\left(\prod_{i=1}^{n}H_i^{a_{1i}},\prod_{i=1}^{n}H_i^{a_{2i}},\dots,\prod_{i=1}^{n}H_i^{a_{ni}}\right), \tag{9}$$
where $H_i^{a_{ji}}$ means $H_i\otimes\cdots\otimes H_i$ ($a_{ji}$ times). By the commutativity of $Z[N]$, it is easy to prove that “$*$” is a semigroup action of $\mathbb{Z}^{+}[C]$ on $Z[N]^n$. In fact, a similar semigroup action was first defined by Maze in [11], where the action of $\mathbb{Z}[C]$ on the group direct product $G^n$ was considered.

Example 3. Let $f(x)=x^2-2$. The companion matrix of $f(x)$ is $C=\begin{pmatrix}0&2\\1&0\end{pmatrix}$. Let $N\in M_3(Z)$ be as follows: (10) N = 6230238775518647. Let $\vec{H}=(H_1,H_2)\in Z[N]^2$ be as follows:
$$H_1=\begin{pmatrix}38&43&36\\67&20&68\\14&42&38\end{pmatrix},\qquad H_2=\begin{pmatrix}25&38&38\\57&15&63\\16&32&25\end{pmatrix}. \tag{11}$$
Let $A=3E+5C=\begin{pmatrix}3&10\\5&3\end{pmatrix}$. Then,
$$A*\vec{H}=A*(H_1,H_2)=\left(H_1^{3}\otimes H_2^{10},\;H_1^{5}\otimes H_2^{3}\right)=\left(\begin{pmatrix}275&233&281\\252&210&258\\269&227&275\end{pmatrix},\begin{pmatrix}202&168&211\\187&145&193\\189&162&202\end{pmatrix}\right). \tag{12}$$

### 3.3. Multiple Exponentiation Problem of Tropical Matrices

Definition 3. Let $f_0,f_1,\dots,f_{n-1}$ be non-negative integers and $f_0>0$. Let $C$ be the companion matrix of the polynomial $f(x)=-f_0-f_1x-\cdots-f_{n-1}x^{n-1}+x^n$. Let $N\in M_k(Z)$ and $\vec{H}=(H_1,H_2,\dots,H_n)\in Z[N]^n$. Suppose that $\vec{U}=A*\vec{H}$, where $A\in\mathbb{Z}^{+}[C]$. The multiple exponentiation problem of tropical matrices is to find $A\in\mathbb{Z}^{+}[C]$, given $f(x)$, $\vec{H}$, and $\vec{U}$. (Note that $N$ is unknown.) For simplicity, we abbreviate the problem to “ME problem.”

Example 4.
Given $f(x)=x^2-2$, $\vec{H}=(H_1,H_2)$ as in Example 3, and
$$\vec{U}=\left(\begin{pmatrix}260&218&266\\237&195&243\\254&212&260\end{pmatrix},\begin{pmatrix}138&123&136\\142&100&148\\114&117&138\end{pmatrix}\right), \tag{13}$$
we try to find $A\in\mathbb{Z}^{+}[C]$ such that $\vec{U}=A*\vec{H}$. Finding such an $A$ is equivalent to finding $a_0,a_1\in\mathbb{Z}^{+}$ such that $\vec{U}=(a_0E+a_1C)*\vec{H}$. Since $a_0E+a_1C=\begin{pmatrix}a_0&2a_1\\a_1&a_0\end{pmatrix}$, we have $\vec{U}=\left(H_1^{a_0}H_2^{2a_1},H_1^{a_1}H_2^{a_0}\right)$; that is,
$$H_1^{a_0}\otimes H_2^{2a_1}=U_1,\qquad H_1^{a_1}\otimes H_2^{a_0}=U_2. \tag{14}$$

As we know, most results in ordinary algebra do not hold in tropical algebra; therefore, certain properties of ordinary matrices, such as the determinant, eigenvalues, and the Cayley–Hamilton theorem, cannot be used. But if $H_1\in\langle H_2\rangle$ or $H_2\in\langle H_1\rangle$, we can reduce the problem to the discrete logarithm problem.

Proposition 1. Suppose $H_1\in\langle H_2\rangle$ or $H_2\in\langle H_1\rangle$; then the ME problem in Example 4 can be reduced to the discrete logarithm problem in polynomial time.

Proof. Let $\vec{U}=(U_1,U_2)$. Then
$$H_1^{a_0}H_2^{2a_1}=U_1,\qquad H_1^{a_1}H_2^{a_0}=U_2. \tag{15}$$
Suppose $H_2\in\langle H_1\rangle$. By solving a discrete logarithm problem in $\langle H_1\rangle$, we can get a positive integer $m$ such that $H_2=H_1^{m}$. So equations (15) are equivalent to
$$H_1^{a_0+2ma_1}=U_1,\qquad H_1^{a_1+ma_0}=U_2. \tag{16}$$
In this case, $U_1,U_2\in\langle H_1\rangle$. By solving two discrete logarithm problems in $\langle H_1\rangle$, we can get two positive integers $t_1,t_2$ such that $U_1=H_1^{t_1}$ and $U_2=H_1^{t_2}$. Therefore,
$$a_0+2ma_1=t_1,\qquad a_1+ma_0=t_2. \tag{17}$$
It is clear that we can obtain $a_0,a_1$ by solving a system of linear equations.

Proposition 2. If there exists a component $H_i$ of $\vec{H}$ such that $H_j\in\langle H_i\rangle$ for all $j\neq i$ ($i,j=1,2,\dots,n$), then the ME problem can be reduced to the discrete logarithm problem in polynomial time.

If $H_1\notin\langle H_2\rangle$ and $H_2\notin\langle H_1\rangle$, the problem of finding $a_0,a_1$ from equation (14) cannot be reduced to the discrete logarithm problem. In fact, in Example 4 these conditions are indeed satisfied. In order to resist other potential methods of solving the ME problem, we stress the condition that $N$ is unknown. Since the matrix $N$ is unknown, the attacker cannot express $H_1$ and $H_2$ as polynomials of $N$. (Even if $N$ were known, we have not found any effective method to find $a_0$ and $a_1$.)

Remark 1. Assume that $a_0,a_1\in[0,s-1]$. Then in the example the total number of steps to solve the ME problem by brute-force attack is $s^2$. Generally, assume that $A=\sum_{i=0}^{n-1}a_iC^{i}$ with $a_i\in[0,s-1]$. Then the total number of steps to solve the ME problem by brute-force attack is $s^n$.
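To make the action (9) and the brute-force bound of Remark 1 concrete, the following is a minimal Python sketch for $f(x)=x^2-2$, mirroring the setting of Example 4. The toy sizes and the random generation of $N$ and $\vec{H}$ are illustrative assumptions, not values from the paper; the sketch recovers $(a_0,a_1)$ by exhaustive search over $[0,s-1]^2$, i.e., in $s^2$ steps.

```python
import math
import random
from functools import reduce

INF = math.inf

def t_mul(A, B):
    """Tropical (min-plus) matrix product."""
    k = len(A)
    return [[min(A[i][t] + B[t][j] for t in range(k)) for j in range(k)]
            for i in range(k)]

def t_eye(k):
    """Tropical identity: 0 on the diagonal, infinity elsewhere."""
    return [[0 if i == j else INF for j in range(k)] for i in range(k)]

def t_pow(M, e):
    """Tropical matrix power (square-and-multiply), e >= 0."""
    R, base = t_eye(len(M)), M
    while e:
        if e & 1:
            R = t_mul(R, base)
        base = t_mul(base, base)
        e >>= 1
    return R

def act(A, H):
    """The action (9): component j of A*H is H_1^{a_j1} (x) ... (x) H_n^{a_jn}."""
    k = len(H[0])
    return [reduce(t_mul, (t_pow(H[i], row[i]) for i in range(len(H))), t_eye(k))
            for row in A]

# Toy instance: f(x) = x^2 - 2, so a0*E + a1*C = [[a0, 2*a1], [a1, a0]].
k, s = 3, 8
random.seed(1)
N = [[random.randint(0, 20) for _ in range(k)] for _ in range(k)]  # secret

def poly_of_N(d0, d1):
    """H = (d1 (x) N) (+) (d0 (x) I): a degree-1 tropical polynomial of N."""
    return [[min(d1 + N[i][j], d0 if i == j else INF) for j in range(k)]
            for i in range(k)]

H = [poly_of_N(5, 2), poly_of_N(11, 3)]   # commuting elements of Z[N]
a0, a1 = 3, 5                             # the secret exponent pair
U = act([[a0, 2 * a1], [a1, a0]], H)

# Brute force over [0, s-1]^2 -- s^2 candidates, as in Remark 1.
found = [(x0, x1) for x0 in range(s) for x1 in range(s)
         if act([[x0, 2 * x1], [x1, x0]], H) == U]
print(found)  # the true pair (3, 5) is among the (possibly several) solutions
```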
## 4. Public Key Cryptosystems Based on Tropical Matrices

In this section, we give a key exchange protocol similar to the Diffie–Hellman protocol and a public key encryption scheme similar to the ElGamal encryption scheme.

### 4.1. Key Exchange Protocol Based on Tropical Matrices

Let $f_0,f_1,\dots,f_{n-1}$ be non-negative integers and $f_0>0$. Let $C$ be the companion matrix of the polynomial $f(x)=-f_0-f_1x-\cdots-f_{n-1}x^{n-1}+x^n$, and let $\vec{H}\in Z[N]^n$ be such that no component $H_i$ of $\vec{H}$ satisfies $H_j\in\langle H_i\rangle$ for all $j\neq i$ ($i,j=1,2,\dots,n$). The public parameters of the protocol are $f(x)$ and $\vec{H}$. The key exchange protocol based on tropical matrices is the following.

#### 4.1.1.
Protocol 4.1.1.

(1) Alice selects at random $n$ private integers $a_0,a_1,\dots,a_{n-1}$ in $[0,s-1]$ and computes $A=a_{n-1}C^{n-1}+\cdots+a_1C+a_0E=(a_{ij})_{n\times n}$. Bob selects at random $n$ private integers $b_0,b_1,\dots,b_{n-1}$ in $[0,s-1]$ and computes
$$B=b_{n-1}C^{n-1}+\cdots+b_1C+b_0E=(b_{ij})_{n\times n}. \tag{18}$$

(2) Alice computes $\vec{U}=A*\vec{H}$ and sends Bob the matrix vector $\vec{U}$. Bob computes $\vec{V}=B*\vec{H}$ and sends Alice the matrix vector $\vec{V}$.

(3) Alice computes
$$K_{\mathrm{Alice}}=A*\vec{V}=A*(B*\vec{H})=(A\cdot B)*\vec{H}, \tag{19}$$
and Bob computes
$$K_{\mathrm{Bob}}=B*\vec{U}=B*(A*\vec{H})=(B\cdot A)*\vec{H}, \tag{20}$$
where “$\cdot$” is the matrix multiplication in $\mathbb{Z}^{+}[C]$.

Since $\mathbb{Z}^{+}[C]$ is commutative, we have $A\cdot B=B\cdot A$ and $K_{\mathrm{Alice}}=K_{\mathrm{Bob}}$. So Alice and Bob share a common secret key.

Definition 4. Let $f_0,f_1,\dots,f_{n-1}$ be non-negative integers and $f_0>0$. Let $C$ be the companion matrix of the polynomial $f(x)=-f_0-f_1x-\cdots-f_{n-1}x^{n-1}+x^n$. Let $N\in M_k(Z)$ and $\vec{H}\in Z[N]^n$ such that no component $H_i$ of $\vec{H}$ satisfies $H_j\in\langle H_i\rangle$ for all $j\neq i$. Suppose that $\vec{U}=A*\vec{H}$ and $\vec{V}=B*\vec{H}$, where $A,B\in\mathbb{Z}^{+}[C]$. The computational ME problem is to find the matrix vector $\vec{K}$ such that $\vec{K}=(AB)*\vec{H}$, given $f(x)$, $\vec{H}$, $\vec{U}$, and $\vec{V}$. For simplicity, we abbreviate the problem to “CME problem.”

Proposition 3. An algorithm that solves the ME problem can be used to solve the CME problem.

Proposition 4. Finding the common secret key from the public information of Protocol 4.1.1 is equivalent to solving the CME problem.

### 4.2. Public Key Encryption Scheme Based on Tropical Matrices

#### 4.2.1. Scheme 4.2.1

Key generation. Let $f_0,f_1,\dots,f_{n-1}$ be non-negative integers and $f_0>0$. Let $C$ be the companion matrix of the polynomial $f(x)=-f_0-f_1x-\cdots-f_{n-1}x^{n-1}+x^n$. Let $N\in M_k(Z)$ and $\vec{H}\in Z[N]^n$ such that no component $H_i$ of $\vec{H}$ satisfies $H_j\in\langle H_i\rangle$ for all $j\neq i$. The public parameters are $f(x)$ and $\vec{H}$. The key generation center chooses at random integers $a_0,a_1,\dots,a_{n-1}\in[0,s-1]$ and computes
$$A=a_{n-1}C^{n-1}+\cdots+a_1C+a_0E=(a_{ij})_{n\times n},\qquad \vec{U}=A*\vec{H}. \tag{21}$$
The public key of Alice is $\vec{U}$. The private key of Alice is $A$ (or $a_0,a_1,\dots,a_{n-1}$).

Encryption. Bob wants to send a plaintext message $\vec{M}\in M_k(\mathbb{Z})^n$ to Alice. (1) Bob chooses at random integers $b_0,b_1,\dots,b_{n-1}$ in $[0,s-1]$ and computes
$$B=b_{n-1}C^{n-1}+\cdots+b_1C+b_0E=(b_{ij})_{n\times n}. \tag{22}$$
(2) Bob computes $\vec{V}=B*\vec{H}$ as part of the ciphertext. (3) Bob computes $\vec{Q}=\vec{M}+B*\vec{U}$ as the rest of the ciphertext, where “$+$” is ordinary integer matrix vector addition. (4) Bob sends the ciphertext $(\vec{V},\vec{Q})$ to Alice.

Decryption. Alice receives the ciphertext $(\vec{V},\vec{Q})$ and decrypts it. (1) Using her private key $A$, Alice computes $\vec{W}=A*\vec{V}$. (2) Alice computes $\vec{Q}-\vec{W}$, where “$-$” is ordinary integer matrix vector subtraction. Since
$$\vec{Q}-\vec{W}=\vec{M}+B*\vec{U}-A*\vec{V}=\vec{M}+B*(A*\vec{H})-A*(B*\vec{H})=\vec{M}+(BA)*\vec{H}-(AB)*\vec{H}=\vec{M}, \tag{23}$$
Alice gets the plaintext message $\vec{M}$.

Definition 5. Let $f_0,f_1,\dots,f_{n-1}$ be non-negative integers and $f_0>0$. Let $C$ be the companion matrix of the polynomial $f(x)=-f_0-f_1x-\cdots-f_{n-1}x^{n-1}+x^n$. Let $N\in M_k(Z)$ and $\vec{H}\in Z[N]^n$. Suppose $\vec{U}=A*\vec{H}$ and $\vec{V}=B*\vec{H}$, where $A,B\in\mathbb{Z}^{+}[C]$. Let $\vec{R}\in Z[N]^n$. The decisional ME problem is to decide whether $\vec{R}=(AB)*\vec{H}$, given $f(x)$, $\vec{H}$, $\vec{U}$, $\vec{V}$, and $\vec{R}$. For simplicity, we abbreviate it to “DME problem.”

Proposition 5. An algorithm that solves the CME problem can be used to solve the DME problem.

Theorem 1. An algorithm that solves the DME problem can be used to decide the validity of the ciphertexts of Scheme 4.2.1, and an algorithm that decides the validity of the ciphertexts of Scheme 4.2.1 can be used to solve the DME problem.

Proof. Suppose first that an algorithm $A_1$ can decide whether a decryption of Scheme 4.2.1 is correct. In other words, when given the inputs $f(x)$, $\vec{H}$, $\vec{U}$, $(\vec{V},\vec{Q})$, $\vec{M}$, the algorithm $A_1$ outputs “yes” if $\vec{M}$ is the decryption of $(\vec{V},\vec{Q})$ and outputs “no” otherwise. Let us use $A_1$ to solve the DME problem.
Suppose you are given $f(x)$, $\vec{H}$, $\vec{U}=A*\vec{H}$, $\vec{V}=B*\vec{H}$, and $\vec{R}$, and you want to decide whether or not $\vec{R}=(AB)*\vec{H}$. Let $\vec{Q}=\vec{R}$ and $\vec{M}=(0_k,0_k,\dots,0_k)$, where $0_k$ is the $k\times k$ zero matrix of $M_k(\mathbb{Z})$. Input all of these into $A_1$. Note that in the present setup, $A$ is the secret key. The correct decryption of $(\vec{V},\vec{Q})$ is
$$\vec{Q}-A*\vec{V}=\vec{R}-A*(B*\vec{H})=\vec{R}-(AB)*\vec{H}. \tag{24}$$
Therefore, $A_1$ outputs “yes” exactly when $\vec{M}=(0_k,0_k,\dots,0_k)$ equals $\vec{R}-(AB)*\vec{H}$, namely, when $\vec{R}=(AB)*\vec{H}$. This solves the DME problem.

Conversely, suppose an algorithm $A_2$ can solve the DME problem. This means that if you give $A_2$ the inputs $f(x)$, $\vec{H}$, $\vec{U}=A*\vec{H}$, $\vec{V}=B*\vec{H}$, and $\vec{R}$, then $A_2$ outputs “yes” if $\vec{R}=(AB)*\vec{H}$ and outputs “no” if not. Let $\vec{M}$ be the claimed decryption of the ciphertext $(\vec{V},\vec{Q})$. Input $\vec{Q}-\vec{M}$ as $\vec{R}$. Note that $\vec{M}$ is the correct plaintext for the ciphertext $(\vec{V},\vec{Q})$ if and only if $\vec{M}=\vec{Q}-A*\vec{V}=\vec{Q}-(AB)*\vec{H}$, which happens if and only if $\vec{Q}-\vec{M}=(AB)*\vec{H}$. Therefore, $\vec{M}$ is the correct plaintext if and only if $\vec{R}=(AB)*\vec{H}$; with these inputs, $A_2$ outputs “yes” exactly when $\vec{M}$ is the correct plaintext. □

Theorem 2. An algorithm that solves the CME problem can be used to decrypt the ciphertexts of Scheme 4.2.1, and an algorithm that decrypts the ciphertexts of Scheme 4.2.1 can be used to solve the CME problem.

Proof. If we have an algorithm $A_3$ that can decrypt all ciphertexts of Scheme 4.2.1, then input $\vec{U}=A*\vec{H}$ and $\vec{V}=B*\vec{H}$, and take any vector for $\vec{Q}$. Then $A_3$ outputs
$$\vec{M}=\vec{Q}-A*\vec{V}=\vec{Q}-(AB)*\vec{H}. \tag{25}$$
Therefore, $\vec{Q}-\vec{M}$ yields the solution $(AB)*\vec{H}$ to the CME problem.

Conversely, suppose we have an algorithm $A_4$ that can solve the CME problem. If we have a ciphertext $(\vec{V},\vec{Q})$, then we input $\vec{U}=A*\vec{H}$ and $\vec{V}=B*\vec{H}$. Then $A_4$ outputs $(AB)*\vec{H}$. Since $\vec{M}=\vec{Q}-A*\vec{V}=\vec{Q}-(AB)*\vec{H}$, we obtain the plaintext $\vec{M}$. □
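As an end-to-end illustration of Protocol 4.1.1 and Scheme 4.2.1, here is a minimal Python sketch with toy parameters. The concrete sizes, the random choice of $N$, and the degree-1 form of the $H_i$ are assumptions made for the demo and are far below the recommendations of Section 5.2; the sketch only checks that both parties derive the same key and that decryption recovers the plaintext.

```python
import math
import random
from functools import reduce

INF = math.inf

# --- tropical (min-plus) matrix helpers ---
def t_mul(A, B):
    k = len(A)
    return [[min(A[i][t] + B[t][j] for t in range(k)) for j in range(k)]
            for i in range(k)]

def t_eye(k):
    return [[0 if i == j else INF for j in range(k)] for i in range(k)]

def t_pow(M, e):
    R, base = t_eye(len(M)), M
    while e:
        if e & 1:
            R = t_mul(R, base)
        base = t_mul(base, base)
        e >>= 1
    return R

def act(A, H):
    """The action (9): component j of A*H is H_1^{A[j][1]} (x) ... (x) H_n^{A[j][n]}."""
    k = len(H[0])
    return [reduce(t_mul, (t_pow(H[i], row[i]) for i in range(len(H))), t_eye(k))
            for row in A]

# --- ordinary non-negative integer matrices, for elements of Z+[C] ---
def i_mul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def poly_of_C(coeffs, C):
    """a_{n-1} C^{n-1} + ... + a_1 C + a_0 E, by Horner's rule."""
    n = len(C)
    A = [[0] * n for _ in range(n)]
    for a in reversed(coeffs):
        A = i_mul(A, C)
        for i in range(n):
            A[i][i] += a
    return A

# --- toy public parameters: f(x) = x^n - 1, as recommended in Section 5.2 ---
n, k, s = 3, 4, 16
C = [[0] * n for _ in range(n)]
for i in range(1, n):
    C[i][i - 1] = 1
C[0][n - 1] = 1                                  # companion matrix of x^n - 1

random.seed(7)
N = [[random.randint(0, 50) for _ in range(k)] for _ in range(k)]  # kept secret

def h(d0, d1):                                   # H = (d1 (x) N) (+) (d0 (x) I)
    return [[min(d1 + N[i][j], d0 if i == j else INF) for j in range(k)]
            for i in range(k)]

H = [h(9, 2), h(4, 7), h(13, 1)]                 # public vector of commuting matrices

# --- Protocol 4.1.1: key exchange ---
A = poly_of_C([random.randint(1, s - 1) for _ in range(n)], C)  # Alice's secret
B = poly_of_C([random.randint(1, s - 1) for _ in range(n)], C)  # Bob's secret
U, V = act(A, H), act(B, H)                      # exchanged messages
KA, KB = act(A, V), act(B, U)
assert KA == KB                                  # shared secret key agrees

# --- Scheme 4.2.1: ElGamal-style encryption with the pad B*(A*H) ---
M = [[[random.randint(0, 9) for _ in range(k)] for _ in range(k)] for _ in range(n)]
Q = [[[M[t][i][j] + KB[t][i][j] for j in range(k)] for i in range(k)] for t in range(n)]
D = [[[Q[t][i][j] - KA[t][i][j] for j in range(k)] for i in range(k)] for t in range(n)]
assert D == M
print("key agreement and decryption round-trip OK")
```

The secret coefficients are drawn from $[1,s-1]$ in this sketch so that no row of $A$ or $B$ consists entirely of zero exponents, which keeps all computed components finite.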
Note that in the present setup, A is the secret key. The correct decryption of V⟶,Q⟶ is(24)Q⟶−A∗V⟶=R⟶−A∗B∗H⟶=R⟶−AB∗H⟶. Therefore,A1 outputs “yes” exactly when M⟶=0k,0k,…,0k is the same as R⟶−AB∗H⟶, namely, when R⟶=AB∗H⟶. This solves the decision DME problem. Conversely, suppose an algorithmA2 can solve the DME problem. This means that if you give A2 inputs fx, H⟶, U⟶=A∗H⟶, V⟶=B∗H⟶, and R⟶, then A2 outputs “yes” if R⟶=AB∗H⟶ and outputs “no” if not. Let M⟶ be the claimed decryption of the ciphertext V⟶,Q⟶. Input Q⟶−M⟶ as R⟶. Note that M⟶ is the correct plaintext for the ciphertext V⟶,Q⟶ if and only if M⟶=Q⟶−A∗V⟶=Q⟶−AB∗H⟶, which happens if and only if Q⟶−M⟶=AB∗H⟶. Therefore, M⟶ is the correct plaintext if and only if R⟶=AB∗H⟶. Therefore, with these inputs, A2 outputs “yes” exactly when M⟶ is the correct plaintext.Theorem 2. An algorithm that solves CME problem can be used to decrypt the ciphertexts of Scheme 4.2.1, and an algorithm that decrypts the ciphertexts of Scheme 4.2.1 can be used to solve CME problem.Proof. If we have an algorithmA3 that can decrypt all ciphertexts of Scheme 4.2.1, then input U⟶=A∗H⟶ and V⟶=B∗H⟶. Take any vector for Q⟶. Then, A3 outputs(25)M⟶=Q⟶−A∗V⟶=Q⟶−AB∗H⟶. Therefore,Q⟶−M⟶ yields the solution AB∗H⟶ to the CME problem. Conversely, suppose we have an algorithmA4 that can solve CME problem. If we have an ciphertext V⟶,Q⟶, then we input U⟶=A∗H⟶ and V⟶=B∗H⟶. Then, A4 outputs AB∗H⟶. Since M⟶=Q⟶−A∗V⟶=Q⟶−AB∗H⟶, we obtain the plaintext M⟶ □. ## 4.2.1. Scheme 4.2.1 Key generation.Letf0,f1,…,fn−1 be non-negative integers and f0>0. Let C be the companion matrix of the polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn. Let N∈MkZ and H⟶∈ZNn. There exists not a component Hi of H⟶ such that ∀j≠iHj∈Hii,j=0,1,…,n−1. The public parameters are fx, H⟶. The key generation center chooses at random integers a0,a1,…,an−1∈0,s−1 and computes(21)A=an−1Cn−1+⋯+a1C+a0E=aijn×n,U⟶=A∗H⟶.The public key of Alice isU⟶. The private key of Alice is A (or a0,a1,…,an−1).Encryption.Bob wants to send a plaintext messagesM⟶∈Mkℤn to Alice.(1) Bob chooses at random integersb0,b1,…,bn−1 in 0,s−1 and computes(22)B=bn−1Cn−1+⋯+b1C+b0E=bijn×n.(2) Bob computesV⟶=B∗H⟶ as a part of ciphertext.(3) Bob computesQ⟶=M⟶+B∗U⟶ as the rest of the ciphertext, where “+” is the ordinary integer matrix vector addition.(4) Bob sends the ciphertextV⟶,Q⟶ to Alice.Decryption.Alice receives the ciphertextV⟶,Q⟶ and tries to decrypt it.(1) Using her private keyA, Alice computes W⟶=A∗V⟶.(2) Alice computesQ⟶−W⟶, where “-” is the ordinary integer matrix vector subtraction.Since(23)Q⟶−W⟶=M⟶+B∗U⟶−A∗V⟶=M⟶+B∗A∗H⟶−A∗B∗H⟶=M⟶+BA∗H⟶−AB∗H⟶=M⟶,Alice gets the plaintext messagesM⟶.Definition 5. Letf0,f1,…,fn−1 be non-negative integers and f0>0. Let C be the companion matrix of the polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn. Let N∈MkZ and H⟶∈ZNn. Suppose U⟶=A∗H⟶ and V⟶=B∗H⟶, where A,B∈ℤ+C. Let R∈ZNn. The decisional ME problem is to decide whether R⟶=AB∗H⟶, given fx, H⟶, U⟶, V⟶, and R⟶. For simplicity, we abbreviate it to “DME problem.”Proposition 5. An algorithm that solves CME problem can be used to solve DME problem.Theorem 1. An algorithm that solves DME problem can be used to decide the validity of the ciphertexts of Scheme 4.2.1, and an algorithm that decides the validity of the ciphertexts of Scheme 4.2.1 can be used to solve DME problem.Proof. Suppose first that the algorithmA1 can decide whether a decryption of Scheme 4.2.1 is correct. 
## 5. Possible Attacks, Parameter Selection, and Efficiency

### 5.1. Possible Attacks

(1) Brute-force attack. Suppose that $A=a_{n-1}C^{n-1}+\cdots+a_1C+a_0E=(a_{ij})_{n\times n}$ and $a_0,a_1,\dots,a_{n-1}\in[0,s-1]$. Clearly the attacker has $s^n$ choices for $A$, so the parameters $s$ and $n$ must satisfy $s^n\geq 2^{80}$.

(2) Tropical matrix decomposition attack. Suppose that $\vec{U}=(U_1,U_2,\dots,U_n)$. If the attacker can find $A'=(a'_{ij})_{n\times n}$ such that
$$H_1^{a'_{i1}}H_2^{a'_{i2}}\cdots H_n^{a'_{in}}=U_i\ (i=1,2,\dots,n),\qquad A'C=CA', \tag{26}$$
then he can obtain the shared key $(A\cdot B)*\vec{H}$ from $A'$ and the public information. He can perform the following steps to find $A'$: (i) factor $U_i=G_1G_2\cdots G_n$, where $G_j\in\langle H_j\rangle$ ($i,j=1,2,\dots,n$); (ii) find $a'_{ij}$ such that $H_j^{a'_{ij}}=G_j$ by solving discrete logarithm problems in $\langle H_j\rangle$; (iii) verify whether or not $A'C=CA'$; if not, go to (i). However, it is hard to factor $U_i=G_1G_2\cdots G_n$ with $G_j\in\langle H_j\rangle$ for $n>2$; in general this is NP-hard by Shitov [24].

(3) KU attack. In our cryptosystems, the commutative subsemiring used is $Z[N]$, and $N$ is unknown. This is different from the situation in [16], where two public tropical matrices $M_1,M_2$ (with $M_1M_2\neq M_2M_1$) were used together with the commutative subsemirings $Z[M_1]$ and $Z[M_2]$. Let $p_1(M_1)\in Z[M_1]$, $p_2(M_2)\in Z[M_2]$, and $p_1(M_1)p_2(M_2)=U$. The security of that cryptosystem relies on the difficulty of finding $S_1\in Z[M_1]$ and $S_2\in Z[M_2]$ such that $S_1S_2=U$. Because the secret matrices can be expressed as polynomials of $M_1$ and $M_2$, Kotov and Ushakov designed an efficient algorithm to attack the tropical key exchange protocol [17]. In our cryptosystems, since the attacker does not know $N$, the KU attack will not work.
To find $N$ from the public information, the attacker is faced with the following problem: given $H_1,H_2,\dots,H_n\in M_k(Z)$, find $N\in M_k(Z)$ such that
$$d_{i9}\otimes N^9\oplus d_{i8}\otimes N^8\oplus\cdots\oplus d_{i1}\otimes N\oplus d_{i0}\otimes I_k=H_i,\qquad i=1,2,\dots,n, \tag{27}$$
where the $d_{i9},d_{i8},\dots,d_{i1},d_{i0}$ ($i=1,2,\dots,n$) and $N$ are all unknown. This is clearly a problem of solving systems of min-plus polynomial equations, which is NP-hard [16]. Even if $N$ were obtained by the attacker, it seems hard to find the private key matrix $A$ from the public key $\vec{U}=A*\vec{H}$. As we know, the KU attack can only decompose a tropical matrix into the product of two matrices, such as $U=S_1S_2$. If $n>2$, each component matrix of $\vec{U}$ is the product of more than two matrices. In this case, the KU attack will also not work.

(4) Generalized KU attack. In order to remedy Grigoriev–Shpilrain’s protocols, Muanalifah and Sergeev suggested modifications that use two types of matrices, namely Jones matrices and Linde–de la Puente matrices [22]. But the authors also pointed out that their modifications cannot resist the generalized KU attack, which can likewise decompose the public matrix into the product of two Jones matrices (or Linde–de la Puente matrices) expressed in the linear form of the tropical basic elementary matrix. In our cryptosystems, if $n>2$, then each component matrix of $\vec{U}$ is the product of more than two matrices. In this case, the generalized KU attack will also not work against our cryptosystems.

(5) RM attack. Grigoriev and Shpilrain [18] improved the original scheme and proposed public key cryptosystems based on the semidirect product of the tropical matrix semiring. But the first component of the semidirect product multiplication contains tropical matrix addition. Because the addition operation of tropical matrices is idempotent, the powers under semidirect product multiplication exhibit partial order preservation. Using this property, Rudy and Monico designed a simple binary search algorithm and cracked the cryptosystem in [18]. In our cryptosystems, $A*\vec{H}$ does not involve tropical matrix addition, so our cryptosystems can resist this attack.

(6) Quantum attack. The ME problem is a generalization of the discrete logarithm problem. As we know, the discrete logarithm problem can be reduced in polynomial time to the hidden subgroup problem, which can be solved in polynomial time by the generalized Shor quantum algorithm [25]. If a semigroup action is derived from a module over a ring, similar reduction algorithms exist for the corresponding semigroup action problem. When the semigroup action is induced by a semimodule over a semiring that cannot be embedded in a module, no effective reduction algorithm has been found for the corresponding semigroup action problem. It is easy to verify that “$*$” is a semigroup action derived from the semimodule $Z[N]^n$ over the semiring $\mathbb{Z}^{+}[C]$ and that the ME problem is exactly the corresponding semigroup action problem induced by this semimodule. Since the semimodule $Z[N]^n$ cannot be embedded in a module, the ME problem cannot, in general, be reduced in polynomial time to the hidden subgroup problem.

Table 1 provides a comparison among relevant tropical cryptographic schemes.

Table 1. Comparison among relevant tropical schemes (√ means that the scheme resists the corresponding attack; × means that it does not).

| Scheme | Mathematical problem | KU attack | RM attack | G-KU attack |
| --- | --- | --- | --- | --- |
| Grigoriev [16] | Two-sided matrix action problem | × | √ | × |
| Grigoriev [18] | Semidirect product problem | √ | × | √ |
| Muanalifah [22] | Two-sided matrix action problem | √ | √ | × |
| Our schemes | Multiple exponentiation problem | √ | √ | √ |

### 5.2. Parameter Selection and Efficiency
### 5.2. Parameter Selection and Efficiency

By Proposition 2, if there exists a component H_i of H⟶ such that H_j ∈ ⟨H_i⟩ for all j ≠ i, then the ME problem can be reduced to the discrete logarithm problem in polynomial time. To avoid this case, H⟶ must contain no such component. Note that N is unknown and H_1, H_2, …, H_n ∈ Z[N]. We can choose H_i such that

(28) H_i = d_{i9}⊗N^9 ⊕ d_{i8}⊗N^8 ⊕ ⋯ ⊕ d_{i1}⊗N ⊕ d_{i0}⊗I_k,

where d_{i0}, d_{i1}, …, d_{i9} are integers selected at random in [0, 1024]. The number of possible H_i is then about 2^100. Experiments show that it is easy to generate H_1, H_2, …, H_n satisfying the above condition.

Generally, for N ∈ M_k(Z) the monogenic subsemigroup ⟨N⟩ is infinite. But for many tropical matrices N ∈ M_k(Z) there are non-negative integers l, m and an integer e such that e⊗N^l = N^{l+m}. If l and m are the smallest such non-negative integers, then l is called the pseudo-index and m the pseudo-period of the matrix N. If the pseudo-indexes and pseudo-periods of N and the H_i (i = 1, 2, …, n) are all small, some potential heuristic attacks may apply. The pseudo-index of a tropical matrix grows with k, and our experiments show that it is feasible to generate N and H_i (i = 1, 2, …, n) with pseudo-indexes greater than k² for k < 30.

In A∗H⟶ and B∗H⟶, the entries of A and B are the exponents of the H_i. Since A, B ∈ ℤ+[C], the entries of C and the bound s should not be too large. We recommend f(x) = x^n − 1, so that

(29) C =
[ 0 0 ⋯ 0 1 ]
[ 1 0 ⋯ 0 0 ]
[ 0 1 ⋯ 0 0 ]
[ ⋮ ⋮ ⋱ ⋮ ⋮ ]
[ 0 0 ⋯ 1 0 ].

Then the entries of A and B lie in [0, s−1] and the entries of AB are less than n(s−1)². To resist some potential heuristic attacks, we recommend parameters s, n, k satisfying k² ≥ n(s−1)².

If we use the square-and-multiply algorithm to compute tropical matrix powers, then computing A∗H⟶ requires n² log s + n(n−1) tropical matrix multiplications. Multiplying two tropical matrices of order k takes O(k³) bit operations, so the total number of bit operations required for calculating A∗H⟶ is O(k³ n² log s).

The size of the secret key (a_0, a_1, …, a_{n−1}) is less than n log s bits. Suppose the entries of the matrix N are in the range [0, T] and H_i = ⊕_{j=0}^{9} d_{ij}⊗N^j with d_{ij} ∈ [0, d]. Then the size of the public key U⟶ is less than n²k² log₂((s−1)(d+9T)) bits.

Select T = 100 and d = 1024. Table 2 provides upper bounds on the sizes of the secret key and public key for different values of k, n, s such that s^n ≈ 2^80 and k² ≈ n(s−1)². Table 2 also compares the running time of the operation A∗H⟶ under these parameters (experimental platform: Intel(R) Core(TM) i7-4700MQ CPU @ 2.40 GHz).

Table 2: Performance comparison under some different parameters.

| k | n | s | Upper bound of sk (bit) | Upper bound of pk (kB) | Timing of A∗H⟶ (s) |
| --- | --- | --- | --- | --- | --- |
| 9 | 80 | 2 | 80 | 682 | 1.798 |
| 14 | 50 | 3 | 80 | 748 | 1.463 |
| 19 | 40 | 4 | 80 | 879 | 2.372 |
| 23 | 34 | 5 | 80 | 1031 | 3.210 |
| 28 | 31 | 6 | 80 | 1197 | 4.316 |
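The parameter constraints and key-size bounds behind Table 2 are easy to check by script. The short sketch below restates s^n ≥ 2^80, k² ≥ n(s−1)², the secret-key bound n log₂ s bits, and the public-key bound n²k² log₂((s−1)(d+9T)) bits; the grouping inside the logarithm is our reading of the bound, and it reproduces the order of magnitude of the table.

```python
import math

def check_params(k, n, s, T=100, d=1024):
    """Report the security checks and key-size bounds for (k, n, s)."""
    security_ok = n * math.log2(s) >= 80        # equivalent to s**n >= 2**80
    heuristic_ok = k * k >= n * (s - 1) ** 2    # k^2 >= n(s-1)^2
    sk_bits = n * math.log2(s)                  # secret key (a_0..a_(n-1))
    pk_kB = n * n * k * k * math.log2((s - 1) * (d + 9 * T)) / 8 / 1024
    return security_ok, heuristic_ok, sk_bits, pk_kB

# First row of Table 2: k=9, n=80, s=2.
print(check_params(9, 80, 2))  # -> (True, True, 80.0, ~690), close to 682 kB
```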
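The timing column of Table 2 reflects the square-and-multiply method assumed in the multiplication count above. A minimal sketch follows; the naming is ours, and `trop_mul` is the same min-plus product used in the earlier sketch.

```python
import numpy as np

def trop_mul(A, B):
    """Min-plus matrix product over Z."""
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

def trop_pow_sm(M, e):
    """Square-and-multiply: M^e (e >= 1) in about 2*log2(e) ⊗-products."""
    result = None
    base = M
    while e:
        if e & 1:                       # multiply step for each set bit of e
            result = base if result is None else trop_mul(result, base)
        base = trop_mul(base, base)     # squaring step
        e >>= 1
    return result
```

Each of the n² exponents a_ij is below s, so every component of A∗H⟶ needs about n log s squarings and multiplications plus n − 1 further products to combine the factors, which gives the n² log s + n(n−1) total stated above.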
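Since no closed-form way to compute the pseudo-index and pseudo-period is known (see Section 6), they are found by enumeration. The sketch below uses our naming and one natural reading of the minimality condition (smallest l first, then smallest m), assuming N has finite integer entries; it exploits the fact that e⊗N^l = N^{l+m} holds exactly when N^{l+m} − N^l is a constant matrix, the constant being e.

```python
import numpy as np

def trop_mul(A, B):
    """Min-plus matrix product over Z."""
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

def pseudo_index_period(N, max_l=100, max_m=100):
    """Enumerate the smallest l, then m, with e ⊗ N^l = N^(l+m).

    Returns (l, m, e), or None if the search bounds are exceeded.
    Practical only for small k and small bounds.
    """
    powers = [None, N]                       # powers[t] = N^t for t >= 1
    for _ in range(max_l + max_m):
        powers.append(trop_mul(powers[-1], N))
    for l in range(1, max_l + 1):
        for m in range(1, max_m + 1):
            diff = powers[l + m] - powers[l]
            if np.all(diff == diff[0, 0]):   # constant difference => found
                return l, m, int(diff[0, 0])
    return None
```

This enumeration is the step whose cost currently limits generation to pseudo-indexes below about 900, as discussed in the conclusion.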
## 6. Conclusion and Further Research

This paper proposes a new key exchange protocol and a new public key encryption scheme based on the multiple exponentiation (ME) problem of tropical matrices, which resist all known attacks. Since the generating matrix N of the commutative subsemiring Z[N] is hidden and the public key matrices are products of more than two unknown matrices, the cryptosystems resist the KU attack and the generalized KU attack. Because A∗H⟶ involves no tropical matrix addition, the attack method proposed by Rudy and Monico does not work for the ME problem. As a semigroup action problem derived from semimodules over semirings, the ME problem cannot be reduced to the hidden subgroup problem in polynomial time, so our cryptosystem can be considered a potential postquantum cryptosystem.

The algebraic properties of the pseudo-index and pseudo-period of tropical matrices have not yet been studied thoroughly. At present we can only find the pseudo-index and pseudo-period by enumeration, and we can only generate tropical matrices with pseudo-index below about 900. Therefore, to prevent potential heuristic attacks, the dimension n of H⟶ needs to be large (n > 30). However, this makes the operation A∗H⟶ slow. If tropical matrices with large pseudo-index or large pseudo-period could be generated efficiently, a smaller n could be chosen to improve efficiency.

Future work worth studying includes the following.

(1) Study the properties of the pseudo-index and pseudo-period of tropical matrices. If a fast algorithm for generating tropical matrices of small order with large pseudo-index (or pseudo-period) can be found, then smaller n and k can be chosen, improving the computational efficiency of our cryptosystem.

(2) Use other commutative tropical matrix semirings instead of Z[N]. For example, one could design a public key cryptosystem based on the ME problem of Jones matrices or Linde–de la Puente matrices. The ME problem of commuting matrices over other semirings can also be considered.

(3) The ME problem of tropical matrices is a generalization of the discrete logarithm problem. If we regard A∗H⟶ as H⟶^A, then the ME problem corresponds to the discrete logarithm problem, the CME problem to the CDH problem, and the DME problem to the DDH problem. We believe that other cryptographic applications based on the ME problem are also feasible, for example, digital signature schemes, identity authentication schemes, and other constructions based on the CME or DME assumption, such as [26–28]. However, as pointed out above, cryptosystems based on the ME problem over tropical matrices are not highly efficient, since the number of matrices n and the order k of the matrices must be large to ensure security. This may limit some possible application scenarios.

---

*Source: 1024161-2022-09-09.xml*
--- ## Abstract Because there is no multiplication of numbers in tropical algebra and the problem of solving the systems of polynomial equations in tropical algebra is NP-hard, in recent years some public key cryptography based on tropical semiring has been proposed. But most of them have some defects. This paper proposes new public key cryptosystems based on tropical matrices. The security of the cryptosystem relies on the difficulty of the problem of finding multiple exponentiations of tropical matrices given the product of the matrices powers when the subsemiring is hidden. This problem is a generalization of the discrete logarithm problem. But the problem generally cannot be reduced to discrete logarithm problem or hidden subgroup problem in polynomial time. Since the generating matrix of the used commutative subsemirings is hidden and the public key matrices are the product of more than two unknown matrices, the cryptosystems can resist KU attack and other known attacks. The cryptosystems based on multiple exponentiation problem can be considered as a potential postquantum cryptosystem. --- ## Body ## 1. Introduction Contemporary public key cryptography relies mainly on two computational problems, integer factorization problem, and discrete logarithm problem. For example, Diffie–Hellman key exchange protocol and ElGamal encryption scheme are based on discrete logarithm problem [1, 2]. Shor proposed a quantum algorithm which can solve integer factorization problem and discrete logarithm problem in polynomial time on a quantum computer [3]. So, it is a research focus of cryptography to develop other new cryptosystems. The traditional cryptosystems are based on various commutative rings, such as finite field, residue class ring, and integer ring [4–8]. Many cryptologists hope to find other algebraic structures to build new public key cryptosystems.In 2007, Maze, Monico, and Rosenthal proposed one of the first cryptosystems based on semigroups and semirings [9], using some ideas from [10], as well as from their previous article [11]. However, then it was broken by Steinwandt et al. [12]. Atani published a cryptosystem using semimodules over factor semirings [13]. Durcheva applied some idempotent semirings to construct cryptographic protocols [14]. A survey on semigroup action problem and its cryptographic applications was given by Goel, Gupta, and Dass [15].Grigoriev and Shpilrain proved that the problem of solving the systems of min-plus polynomial equations in tropical algebra is NP-hard and suggested using a min-plus (tropical) semiring to design public key cryptosystem [16]. An obvious advantage of using tropical algebras as platforms is unparalleled efficiency because in tropical schemes, one does not have to perform any multiplications of numbers since tropical multiplication is the usual addition. But “tropical powers” of an element exhibit some patterns, even if such an element is a matrix over a tropical algebra. This weakness was exploited by Kotov and Ushakov to arrange a fairly successful attack on one of Grigoriev and Shpilrain’s schemes [17]. In 2019, Grigoreiv and Shpilrain improved the original scheme and proposed the public key cryptosystems based on semi-direct product of tropical matrix semiring [18]. However, some attacks on the improved protocols are recently suggested by Rudy and Monico [19], Isaac and Kahrobei [20], and Muanalifah and Sergeev [21]. 
In order to remedy Grigoreiv–Shpilrain’s protocols, Muanalifah and Sergeev suggested some modifications that use two classes of commuting matrices in tropical algebra [22]. But the authors also pointed out that their modifications cannot resist the generalized KU attack since the user’s secret matrix can still be expressed in the linear form of the power of the basic elementary matrix.Our contribution: This paper provides a new public key cryptosystem based on tropical matrices. The security of the cryptosystem relies on the difficulty of the problem of finding multiple exponentiation of tropical matrices, which is a class of semigroup action problem proposed by Maze in [11]. The multiple exponentiation problem is also a generalization of the discrete logarithm problem. However, the problem generally cannot be reduced to discrete logarithm problem or hidden subgroup problem in polynomial time. Since the generating matrix of the used commutative subsemirings is hidden and the public key matrices are the product of more than two unknown matrices, the cryptosystems can resist KU attack and other known attacks. It is seemed that our cryptosystems based on multiple exponentiation problem can be considered as a potential postquantum cryptosystem.The remainder of this paper is organized as follows. In Section2, some preliminaries on tropical semiring are given. In Section 3, we define the multiple exponentiation problem of tropical matrices. In section 4, a key exchange protocol and a public key encryption scheme based on multiple exponentiation problem are presented. Finally, in Section 5 the possible attacks, parameter selection, and efficiency of the cryptosystems are discussed. ## 2. Preliminaries Notation. In this paper, matrices are generally denoted by the capital letters. In order to facilitate future references, frequently used notations are listed below with their meanings.ℤ+ is set of all non-negative integers; ℤ+x is polynomial semiring over ℤ+; Mnℤ+ is set of all n×n matrices over ℤ+; ℤ+C is set of all polynomials of matrix C∈Mnℤ+; Z is tropical semiring of integers ℤ∪∞; Zx is tropical polynomial semiring over Z; MkZ is set of all k×k tropical matrices over Z; ZN is set of all tropical polynomials of tropical matrix N∈MkZ; H⟶ is the vector H1,H2,…,Hn, where Hi∈ZN. ### 2.1. Tropical Semiring over Integer A semiring is an algebraic structure similar to a ring, but without the requirement that each element must have an additive inverse.Definition 1. (see [23]) A nonempty set ℛ with two binary operations + and ⋅ is called a semiring if(1) ℛ,+ is a commutative monoid with identity element 0;(2) ℛ,⋅ is a monoid with identity element 1≠0;(3) Both distributive laws hold inℛ;(4) a⋅0=0⋅a=0 for all a∈ℛ. If a semiring’s multiplication is commutative, then it is called acommutative semiring.Definition 2. (see [16]) The tropical commutative semiring of integer is the set Z=ℤ∪∞ with two operations as follows:(1)x⊕y=minx,y,x⊗y=x+y. The special “∞” satisfies the equations:(2)∞⊕x=x,∞⊗x=∞. It is straightforward to see thatZ,⊕,⊗ is a commutative semiring. In fact, ∞ is the identity element of Z,⊕ and 0 is the identity element of Z,⊕. Just as in the classical case, we define the set of all tropical polynomials overZ in the indeterminate x. Let(3)Zx=an⊗xn⊕an−1⊗xn−1⊕,…,⊕a1⊗x⊕a0|ai∈Zandn≥0. 
The tropical polynomial⊕ operation and ⊗ operation in Zxare similar to the classical polynomial addition and multiplication; however, every “+” calculation has to be substituted by a ⊕ operation of Z, and every “⋅” calculation by a ⊗ operation of Z. It is easy to verify that Zx is a commutative semiring with respect to the tropical polynomial ⊕ and ⊗ operations. ### 2.2. Tropical Matrix Semiring over Integer MkZ denotes the set of all k×k matrices over Z. We can also define the tropical matrix ⊕ and ⊗ operations. To perform the A⊕B operation, the elements mij of the resulting matrix M are set to be equal to aij⊕bij. The tropical matrix ⊗ operation is similar to the usual matrix multiplication; however, every “+” calculation has to be substituted by a ⊕ operation of Z, and every “⋅” calculation by a ⊗ operation of Z. MkZ is a noncommutative semiring with respect to the tropical matrix ⊕ and ⊗ operations.Example 1. (4)4−5270⊕10319=4−510,4−5270⊗10319=−4419,10319⊗4−5270=1435−4. The role of the identity matrixI is played by the matrix that has “0” s on the diagonal and ∞ elsewhere. Similarly, a scalar matrix would be a matrix with an element λ∈S on the diagonal and ∞ elsewhere. Such a matrix commutes with any other square matrix (of the same size). Multiplying a matrix by a scalar amounts to multiplying it by the corresponding scalar matrix.Example 2. (5)3⊗4−5270=3∞∞3⊗4−5270=7−2303. Then, tropical diagonal matrices have something on the diagonal and∞ elsewhere. Note that, in contrast to the “classical” situation, it is rather rare that a “tropical” matrix is invertible. More specifically, the only invertible tropical matrices are those that are obtained from a diagonal matrix by permuting rows and/or columns. For a matrixN∈MkZ, denote(6)ZN=an⊗Nn⊕an−1⊗Nn−1⊕,…,⊕a1⊗N⊕a0⊗I|ai∈Zandn≥0,where Nm means N⊗,…,⊗N (m times). It is clear that ZN is a commutative subsemiring of MkZ with respect to the tropical matrix addition and multiplication. ## 2.1. Tropical Semiring over Integer A semiring is an algebraic structure similar to a ring, but without the requirement that each element must have an additive inverse.Definition 1. (see [23]) A nonempty set ℛ with two binary operations + and ⋅ is called a semiring if(1) ℛ,+ is a commutative monoid with identity element 0;(2) ℛ,⋅ is a monoid with identity element 1≠0;(3) Both distributive laws hold inℛ;(4) a⋅0=0⋅a=0 for all a∈ℛ. If a semiring’s multiplication is commutative, then it is called acommutative semiring.Definition 2. (see [16]) The tropical commutative semiring of integer is the set Z=ℤ∪∞ with two operations as follows:(1)x⊕y=minx,y,x⊗y=x+y. The special “∞” satisfies the equations:(2)∞⊕x=x,∞⊗x=∞. It is straightforward to see thatZ,⊕,⊗ is a commutative semiring. In fact, ∞ is the identity element of Z,⊕ and 0 is the identity element of Z,⊕. Just as in the classical case, we define the set of all tropical polynomials overZ in the indeterminate x. Let(3)Zx=an⊗xn⊕an−1⊗xn−1⊕,…,⊕a1⊗x⊕a0|ai∈Zandn≥0. The tropical polynomial⊕ operation and ⊗ operation in Zxare similar to the classical polynomial addition and multiplication; however, every “+” calculation has to be substituted by a ⊕ operation of Z, and every “⋅” calculation by a ⊗ operation of Z. It is easy to verify that Zx is a commutative semiring with respect to the tropical polynomial ⊕ and ⊗ operations. ## 2.2. Tropical Matrix Semiring over Integer MkZ denotes the set of all k×k matrices over Z. We can also define the tropical matrix ⊕ and ⊗ operations. 
To perform the A⊕B operation, the elements mij of the resulting matrix M are set to be equal to aij⊕bij. The tropical matrix ⊗ operation is similar to the usual matrix multiplication; however, every “+” calculation has to be substituted by a ⊕ operation of Z, and every “⋅” calculation by a ⊗ operation of Z. MkZ is a noncommutative semiring with respect to the tropical matrix ⊕ and ⊗ operations.Example 1. (4)4−5270⊕10319=4−510,4−5270⊗10319=−4419,10319⊗4−5270=1435−4. The role of the identity matrixI is played by the matrix that has “0” s on the diagonal and ∞ elsewhere. Similarly, a scalar matrix would be a matrix with an element λ∈S on the diagonal and ∞ elsewhere. Such a matrix commutes with any other square matrix (of the same size). Multiplying a matrix by a scalar amounts to multiplying it by the corresponding scalar matrix.Example 2. (5)3⊗4−5270=3∞∞3⊗4−5270=7−2303. Then, tropical diagonal matrices have something on the diagonal and∞ elsewhere. Note that, in contrast to the “classical” situation, it is rather rare that a “tropical” matrix is invertible. More specifically, the only invertible tropical matrices are those that are obtained from a diagonal matrix by permuting rows and/or columns. For a matrixN∈MkZ, denote(6)ZN=an⊗Nn⊕an−1⊗Nn−1⊕,…,⊕a1⊗N⊕a0⊗I|ai∈Zandn≥0,where Nm means N⊗,…,⊗N (m times). It is clear that ZN is a commutative subsemiring of MkZ with respect to the tropical matrix addition and multiplication. ## 3. Multiple Exponentiation Problem of Tropical Matrices ### 3.1. Companion Matrix of Polynomial over Integer Ringℤ Letf0,f1,…,fn−1 be non-negative integers and f0>0. The companion matrix of a monic polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn is given by the n×n matrix(7)C=00…0f010…0f101…0f2⋮⋮⋱⋮⋮00…1fn−1.Note that the entries ofC are all non-negative. Denote(8)ℤ+C=pC|px∈ℤ+x.It is easy to verify thatℤ+C is a commutative subsemiring of Mnℤ. ### 3.2. Matrix Semigroupℤ+C Action on ZNn Letf0,f1,…,fn−1 be non-negative integers and f0>0. Let C be the companion matrix of the polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn. Let A=aijn×n∈ℤ+C, N∈MkZ, and H⟶=H1,H2,…,Hn∈ZNn. Consider an action ∗ of the multiplicative semigroup ℤ+C on the Cartesian product ZNn as below:(9)A∗H⟶=A∗H1,H2,…,Hn=∏i=1nHia1i,∏i=1nHia2i,…,∏i=1nHiani,where Hiaji means Hi⊗⋯⊗Hi (aji times). By the commutativity of ZN, it is easy to prove that “∗” is a semigroup action of ℤ+C on ZNn. In fact, a similar semigroup action was first defined by Maze in [11], where the action of ℤC on the group direct product Gn was considered.Example 3. Letfx=x2−2. The companion matrix of fx is C=0210. Let N∈M3Z as follows.(10)N=6230238775518647. LetH⟶=H1,H2∈ZN2 as follows.(11)H1=384336672068144238,H2=253838571563163225. LetA=3E+5C=31053. Then,(12)A∗H⟶=A∗H1,H2=H13⊗H210,H15⊗H23=275233281252210258269227275,202168211187145193189162202. ### 3.3. Multiple Exponentiation Problem of Tropical Matrices Definition 3. Letf0,f1,…,fn−1 be non-negative integers and f0>0. Let C be the companion matrix of the polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn. Let N∈MkZ and H⟶=H1,H2,…,Hn∈ZNn. Suppose that U⟶=A∗H⟶, where A∈ℤ+C. The multiple exponentiation problem of tropical matrices is to find A∈ℤ+C, given fx, H⟶, and U⟶. (Note that N is unknown.) For simplicity, we abbreviate the problem to “ME problem.”Example 4. Givenfx=x2−2, H⟶=H1,H2, and U⟶ as follows, we try to find A∈ℤ+C such that U⟶=A∗H⟶.(13)H⟶=384336672068144238,253838571563163225,U⟶=260218266237195243254212260,138123136142100148114117138. FindingA∈ℤ+C such that U⟶=A∗H⟶ is equivalent to finding a0,a1∈ℤ+ such that U⟶=a0E+a1C∗H⟶. 
By a0E+a1C=a02a1a1a0, we have U⟶=H1a0H22a1,H1a1H2a0. That is,(14)384336672068144238a0⊗2538385715631632252a1=275233281252210258269227275384336672068144238a1⊗253838571563163225a0=202168211187145193189162202. As we know, most results in ordinary algebra do not hold in tropical algebra. Therefore, the certain properties of ordinary matrices like determinant, eigenvalues, and Cayley–Hamilton theorem cannot be used. But ifH1∈H2 or H2∈H1, we can reduce the problem to discrete logarithm problem.Proposition 1. SupposeH1∈H2 or H2∈H1, then the ME problem in Example 4 can be reduced to discrete logarithm problem in polynomial time.Proof. LetU⟶=U1,U2. Then,(15)H1a0H22a1=U1H1a1H2a0=U2 SupposeH2∈〈〈H1〉. By solving a discrete logarithm problem in H1, we can get a positive integer m such that H2=H1m. So, the equations (15) are equivalent to the following equations.(16)H1a0+2ma1=U1H1a1+ma0=U2. In this case,U1,U2∈H1. By solving two discrete logarithm problems in H1, we can get two positive integers t1,t2 such that U1=H1t1 and U2=H1t2. Therefore,(17)a0+2ma1=t1a1+ma0=t2 It is clear that we can obtaina0,a1 by solving a system of linear equations.Proposition 2. If there exists a componentHi of H⟶ such that ∀j≠iHj∈Hii,j=0,1,…,n−1, then the ME problem can be reduced to discrete logarithm problem in polynomial time.IfH1∉H2 and H2∉H1, the problem of finding a0,a1 from equation (14) cannot be reduced to discrete logarithm problem. In fact, in Example 4, the conditions are indeed satisfied.In order to resist some other potential method of solving ME problem, we stress the condition thatN is unknown. Since the matrix N is unknown, we cannot express H1 and H2 as the polynomials of N. (Even if N is known, we have not found any effective method to find a0 and a1.).Remark 1. Assume thata0,a1∈0,s−1. Hence, in the example the total number of steps to solve ME problem by brute-force attack is s2. Generally, assume thatA=∑i=0naiCiai∈0,s−1. Then, the total number of steps to solve ME problem by force attack is sn. ## 3.1. Companion Matrix of Polynomial over Integer Ringℤ Letf0,f1,…,fn−1 be non-negative integers and f0>0. The companion matrix of a monic polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn is given by the n×n matrix(7)C=00…0f010…0f101…0f2⋮⋮⋱⋮⋮00…1fn−1.Note that the entries ofC are all non-negative. Denote(8)ℤ+C=pC|px∈ℤ+x.It is easy to verify thatℤ+C is a commutative subsemiring of Mnℤ. ## 3.2. Matrix Semigroupℤ+C Action on ZNn Letf0,f1,…,fn−1 be non-negative integers and f0>0. Let C be the companion matrix of the polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn. Let A=aijn×n∈ℤ+C, N∈MkZ, and H⟶=H1,H2,…,Hn∈ZNn. Consider an action ∗ of the multiplicative semigroup ℤ+C on the Cartesian product ZNn as below:(9)A∗H⟶=A∗H1,H2,…,Hn=∏i=1nHia1i,∏i=1nHia2i,…,∏i=1nHiani,where Hiaji means Hi⊗⋯⊗Hi (aji times). By the commutativity of ZN, it is easy to prove that “∗” is a semigroup action of ℤ+C on ZNn. In fact, a similar semigroup action was first defined by Maze in [11], where the action of ℤC on the group direct product Gn was considered.Example 3. Letfx=x2−2. The companion matrix of fx is C=0210. Let N∈M3Z as follows.(10)N=6230238775518647. LetH⟶=H1,H2∈ZN2 as follows.(11)H1=384336672068144238,H2=253838571563163225. LetA=3E+5C=31053. Then,(12)A∗H⟶=A∗H1,H2=H13⊗H210,H15⊗H23=275233281252210258269227275,202168211187145193189162202. ## 3.3. Multiple Exponentiation Problem of Tropical Matrices Definition 3. Letf0,f1,…,fn−1 be non-negative integers and f0>0. Let C be the companion matrix of the polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn. Let N∈MkZ and H⟶=H1,H2,…,Hn∈ZNn. 
Suppose that U⟶=A∗H⟶, where A∈ℤ+C. The multiple exponentiation problem of tropical matrices is to find A∈ℤ+C, given fx, H⟶, and U⟶. (Note that N is unknown.) For simplicity, we abbreviate the problem to “ME problem.”Example 4. Givenfx=x2−2, H⟶=H1,H2, and U⟶ as follows, we try to find A∈ℤ+C such that U⟶=A∗H⟶.(13)H⟶=384336672068144238,253838571563163225,U⟶=260218266237195243254212260,138123136142100148114117138. FindingA∈ℤ+C such that U⟶=A∗H⟶ is equivalent to finding a0,a1∈ℤ+ such that U⟶=a0E+a1C∗H⟶. By a0E+a1C=a02a1a1a0, we have U⟶=H1a0H22a1,H1a1H2a0. That is,(14)384336672068144238a0⊗2538385715631632252a1=275233281252210258269227275384336672068144238a1⊗253838571563163225a0=202168211187145193189162202. As we know, most results in ordinary algebra do not hold in tropical algebra. Therefore, the certain properties of ordinary matrices like determinant, eigenvalues, and Cayley–Hamilton theorem cannot be used. But ifH1∈H2 or H2∈H1, we can reduce the problem to discrete logarithm problem.Proposition 1. SupposeH1∈H2 or H2∈H1, then the ME problem in Example 4 can be reduced to discrete logarithm problem in polynomial time.Proof. LetU⟶=U1,U2. Then,(15)H1a0H22a1=U1H1a1H2a0=U2 SupposeH2∈〈〈H1〉. By solving a discrete logarithm problem in H1, we can get a positive integer m such that H2=H1m. So, the equations (15) are equivalent to the following equations.(16)H1a0+2ma1=U1H1a1+ma0=U2. In this case,U1,U2∈H1. By solving two discrete logarithm problems in H1, we can get two positive integers t1,t2 such that U1=H1t1 and U2=H1t2. Therefore,(17)a0+2ma1=t1a1+ma0=t2 It is clear that we can obtaina0,a1 by solving a system of linear equations.Proposition 2. If there exists a componentHi of H⟶ such that ∀j≠iHj∈Hii,j=0,1,…,n−1, then the ME problem can be reduced to discrete logarithm problem in polynomial time.IfH1∉H2 and H2∉H1, the problem of finding a0,a1 from equation (14) cannot be reduced to discrete logarithm problem. In fact, in Example 4, the conditions are indeed satisfied.In order to resist some other potential method of solving ME problem, we stress the condition thatN is unknown. Since the matrix N is unknown, we cannot express H1 and H2 as the polynomials of N. (Even if N is known, we have not found any effective method to find a0 and a1.).Remark 1. Assume thata0,a1∈0,s−1. Hence, in the example the total number of steps to solve ME problem by brute-force attack is s2. Generally, assume thatA=∑i=0naiCiai∈0,s−1. Then, the total number of steps to solve ME problem by force attack is sn. ## 4. Public Key Cryptosystems Based on Tropical Matrix In this section, we give a key exchange protocol similar to Diffie–Hellman protocol and a public key encryption scheme similar to ElGamal encryption scheme. ### 4.1. Key Exchange Protocol Based on Tropical Matrix Letf0,f1,…,fn−1 be non-negative integers and f0>0. Let C be the companion matrix of the polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn and H⟶∈ZNn such that there exists not a component Hi of H⟶ such that ∀j≠iHj∈Hii,j=0,1,…,n−1. The public parameters of the protocol are fx, H⟶. Key change protocol based on tropical matrix is the following. #### 4.1.1. Protocol 4.1.1 (1) Alice selects at randomn private integers a0,a1,…,an−1 in 0,s−1 and computesA=an−1Cn−1+⋯+a1C+a0E=aijn×n. Bob selects at random n private integers b0,b1,…,bn−1 in 0,s−1 and computes(18)B=bn−1Cn−1+⋯+b1C+b0E=bijn×n.(2) Alice computesU⟶=A∗H⟶ and sends to Bob the matrix U⟶. 
And Bob computes V⟶=B∗H⟶ and sends to Alice the matrix V⟶.(3) Alice computes(19)KAlice=A∗V⟶=A∗B∗H⟶=A⋅B∗H⟶.and Bob computes(20)KBob=B∗U⟶=B∗A∗H⟶=B⋅A∗H⟶,where “⋅” is the matrix multiplication in ℤ+C.Sinceℤ+C is commutative, we have A⋅B=B⋅A and KAlice=KBob. So, Alice and Bob share a common secret key.Definition 4. Letf0,f1,…,fn−1 be non-negative integers and f0>0. Let C be the companion matrix of the polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn. Let N∈MkZ and H⟶∈ZNn. There exists not a component Hi of H⟶ such that ∀j≠iHj∈Hii,j=0,1,…,n−1. Suppose that U⟶=A∗H⟶ and V⟶=B∗H⟶, where A,B∈ℤ+C. The computational ME problem is to find the matrix vector K⟶ such that K⟶=AB∗H⟶, given fx, H⟶, U⟶, and V⟶. For simplicity, we abbreviate the problem to “CME problem.”Proposition 3. An algorithm that solves ME problem can be used to solve CME problem.Proposition 4. Finding the common secret key from the public information of Protocol 4.1.1 is equivalent to solving the CME problem. ### 4.2. Public Key Encryption Scheme Based on Tropical Matrix #### 4.2.1. Scheme 4.2.1 Key generation.Letf0,f1,…,fn−1 be non-negative integers and f0>0. Let C be the companion matrix of the polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn. Let N∈MkZ and H⟶∈ZNn. There exists not a component Hi of H⟶ such that ∀j≠iHj∈Hii,j=0,1,…,n−1. The public parameters are fx, H⟶. The key generation center chooses at random integers a0,a1,…,an−1∈0,s−1 and computes(21)A=an−1Cn−1+⋯+a1C+a0E=aijn×n,U⟶=A∗H⟶.The public key of Alice isU⟶. The private key of Alice is A (or a0,a1,…,an−1).Encryption.Bob wants to send a plaintext messagesM⟶∈Mkℤn to Alice.(1) Bob chooses at random integersb0,b1,…,bn−1 in 0,s−1 and computes(22)B=bn−1Cn−1+⋯+b1C+b0E=bijn×n.(2) Bob computesV⟶=B∗H⟶ as a part of ciphertext.(3) Bob computesQ⟶=M⟶+B∗U⟶ as the rest of the ciphertext, where “+” is the ordinary integer matrix vector addition.(4) Bob sends the ciphertextV⟶,Q⟶ to Alice.Decryption.Alice receives the ciphertextV⟶,Q⟶ and tries to decrypt it.(1) Using her private keyA, Alice computes W⟶=A∗V⟶.(2) Alice computesQ⟶−W⟶, where “-” is the ordinary integer matrix vector subtraction.Since(23)Q⟶−W⟶=M⟶+B∗U⟶−A∗V⟶=M⟶+B∗A∗H⟶−A∗B∗H⟶=M⟶+BA∗H⟶−AB∗H⟶=M⟶,Alice gets the plaintext messagesM⟶.Definition 5. Letf0,f1,…,fn−1 be non-negative integers and f0>0. Let C be the companion matrix of the polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn. Let N∈MkZ and H⟶∈ZNn. Suppose U⟶=A∗H⟶ and V⟶=B∗H⟶, where A,B∈ℤ+C. Let R∈ZNn. The decisional ME problem is to decide whether R⟶=AB∗H⟶, given fx, H⟶, U⟶, V⟶, and R⟶. For simplicity, we abbreviate it to “DME problem.”Proposition 5. An algorithm that solves CME problem can be used to solve DME problem.Theorem 1. An algorithm that solves DME problem can be used to decide the validity of the ciphertexts of Scheme 4.2.1, and an algorithm that decides the validity of the ciphertexts of Scheme 4.2.1 can be used to solve DME problem.Proof. Suppose first that the algorithmA1 can decide whether a decryption of Scheme 4.2.1 is correct. In other words, when given the inputs fx, H⟶, U⟶, V⟶,Q⟶, M⟶, the algorithm A1 outputs “yes” if M⟶ is the decryption of V⟶,Q⟶ and outputs “no” otherwise. Let us use A1 to solve the DME problem. Suppose you are given p, fx, H⟶, U⟶=A∗H⟶, V⟶=B∗H⟶, and R⟶, and you want to decide whether or not R⟶=AB∗H⟶. Let Q⟶=R⟶ and M⟶=0k,0k,…,0k, where 0k is the k×k zero matrix of Mkℤ. Input all of these into A1. Note that in the present setup, A is the secret key. The correct decryption of V⟶,Q⟶ is(24)Q⟶−A∗V⟶=R⟶−A∗B∗H⟶=R⟶−AB∗H⟶. 
Therefore,A1 outputs “yes” exactly when M⟶=0k,0k,…,0k is the same as R⟶−AB∗H⟶, namely, when R⟶=AB∗H⟶. This solves the decision DME problem. Conversely, suppose an algorithmA2 can solve the DME problem. This means that if you give A2 inputs fx, H⟶, U⟶=A∗H⟶, V⟶=B∗H⟶, and R⟶, then A2 outputs “yes” if R⟶=AB∗H⟶ and outputs “no” if not. Let M⟶ be the claimed decryption of the ciphertext V⟶,Q⟶. Input Q⟶−M⟶ as R⟶. Note that M⟶ is the correct plaintext for the ciphertext V⟶,Q⟶ if and only if M⟶=Q⟶−A∗V⟶=Q⟶−AB∗H⟶, which happens if and only if Q⟶−M⟶=AB∗H⟶. Therefore, M⟶ is the correct plaintext if and only if R⟶=AB∗H⟶. Therefore, with these inputs, A2 outputs “yes” exactly when M⟶ is the correct plaintext.Theorem 2. An algorithm that solves CME problem can be used to decrypt the ciphertexts of Scheme 4.2.1, and an algorithm that decrypts the ciphertexts of Scheme 4.2.1 can be used to solve CME problem.Proof. If we have an algorithmA3 that can decrypt all ciphertexts of Scheme 4.2.1, then input U⟶=A∗H⟶ and V⟶=B∗H⟶. Take any vector for Q⟶. Then, A3 outputs(25)M⟶=Q⟶−A∗V⟶=Q⟶−AB∗H⟶. Therefore,Q⟶−M⟶ yields the solution AB∗H⟶ to the CME problem. Conversely, suppose we have an algorithmA4 that can solve CME problem. If we have an ciphertext V⟶,Q⟶, then we input U⟶=A∗H⟶ and V⟶=B∗H⟶. Then, A4 outputs AB∗H⟶. Since M⟶=Q⟶−A∗V⟶=Q⟶−AB∗H⟶, we obtain the plaintext M⟶ □. ## 4.1. Key Exchange Protocol Based on Tropical Matrix Letf0,f1,…,fn−1 be non-negative integers and f0>0. Let C be the companion matrix of the polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn and H⟶∈ZNn such that there exists not a component Hi of H⟶ such that ∀j≠iHj∈Hii,j=0,1,…,n−1. The public parameters of the protocol are fx, H⟶. Key change protocol based on tropical matrix is the following. ### 4.1.1. Protocol 4.1.1 (1) Alice selects at randomn private integers a0,a1,…,an−1 in 0,s−1 and computesA=an−1Cn−1+⋯+a1C+a0E=aijn×n. Bob selects at random n private integers b0,b1,…,bn−1 in 0,s−1 and computes(18)B=bn−1Cn−1+⋯+b1C+b0E=bijn×n.(2) Alice computesU⟶=A∗H⟶ and sends to Bob the matrix U⟶. And Bob computes V⟶=B∗H⟶ and sends to Alice the matrix V⟶.(3) Alice computes(19)KAlice=A∗V⟶=A∗B∗H⟶=A⋅B∗H⟶.and Bob computes(20)KBob=B∗U⟶=B∗A∗H⟶=B⋅A∗H⟶,where “⋅” is the matrix multiplication in ℤ+C.Sinceℤ+C is commutative, we have A⋅B=B⋅A and KAlice=KBob. So, Alice and Bob share a common secret key.Definition 4. Letf0,f1,…,fn−1 be non-negative integers and f0>0. Let C be the companion matrix of the polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn. Let N∈MkZ and H⟶∈ZNn. There exists not a component Hi of H⟶ such that ∀j≠iHj∈Hii,j=0,1,…,n−1. Suppose that U⟶=A∗H⟶ and V⟶=B∗H⟶, where A,B∈ℤ+C. The computational ME problem is to find the matrix vector K⟶ such that K⟶=AB∗H⟶, given fx, H⟶, U⟶, and V⟶. For simplicity, we abbreviate the problem to “CME problem.”Proposition 3. An algorithm that solves ME problem can be used to solve CME problem.Proposition 4. Finding the common secret key from the public information of Protocol 4.1.1 is equivalent to solving the CME problem. ## 4.1.1. Protocol 4.1.1 (1) Alice selects at randomn private integers a0,a1,…,an−1 in 0,s−1 and computesA=an−1Cn−1+⋯+a1C+a0E=aijn×n. Bob selects at random n private integers b0,b1,…,bn−1 in 0,s−1 and computes(18)B=bn−1Cn−1+⋯+b1C+b0E=bijn×n.(2) Alice computesU⟶=A∗H⟶ and sends to Bob the matrix U⟶. 
And Bob computes V⟶=B∗H⟶ and sends to Alice the matrix V⟶.(3) Alice computes(19)KAlice=A∗V⟶=A∗B∗H⟶=A⋅B∗H⟶.and Bob computes(20)KBob=B∗U⟶=B∗A∗H⟶=B⋅A∗H⟶,where “⋅” is the matrix multiplication in ℤ+C.Sinceℤ+C is commutative, we have A⋅B=B⋅A and KAlice=KBob. So, Alice and Bob share a common secret key.Definition 4. Letf0,f1,…,fn−1 be non-negative integers and f0>0. Let C be the companion matrix of the polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn. Let N∈MkZ and H⟶∈ZNn. There exists not a component Hi of H⟶ such that ∀j≠iHj∈Hii,j=0,1,…,n−1. Suppose that U⟶=A∗H⟶ and V⟶=B∗H⟶, where A,B∈ℤ+C. The computational ME problem is to find the matrix vector K⟶ such that K⟶=AB∗H⟶, given fx, H⟶, U⟶, and V⟶. For simplicity, we abbreviate the problem to “CME problem.”Proposition 3. An algorithm that solves ME problem can be used to solve CME problem.Proposition 4. Finding the common secret key from the public information of Protocol 4.1.1 is equivalent to solving the CME problem. ## 4.2. Public Key Encryption Scheme Based on Tropical Matrix ### 4.2.1. Scheme 4.2.1 Key generation.Letf0,f1,…,fn−1 be non-negative integers and f0>0. Let C be the companion matrix of the polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn. Let N∈MkZ and H⟶∈ZNn. There exists not a component Hi of H⟶ such that ∀j≠iHj∈Hii,j=0,1,…,n−1. The public parameters are fx, H⟶. The key generation center chooses at random integers a0,a1,…,an−1∈0,s−1 and computes(21)A=an−1Cn−1+⋯+a1C+a0E=aijn×n,U⟶=A∗H⟶.The public key of Alice isU⟶. The private key of Alice is A (or a0,a1,…,an−1).Encryption.Bob wants to send a plaintext messagesM⟶∈Mkℤn to Alice.(1) Bob chooses at random integersb0,b1,…,bn−1 in 0,s−1 and computes(22)B=bn−1Cn−1+⋯+b1C+b0E=bijn×n.(2) Bob computesV⟶=B∗H⟶ as a part of ciphertext.(3) Bob computesQ⟶=M⟶+B∗U⟶ as the rest of the ciphertext, where “+” is the ordinary integer matrix vector addition.(4) Bob sends the ciphertextV⟶,Q⟶ to Alice.Decryption.Alice receives the ciphertextV⟶,Q⟶ and tries to decrypt it.(1) Using her private keyA, Alice computes W⟶=A∗V⟶.(2) Alice computesQ⟶−W⟶, where “-” is the ordinary integer matrix vector subtraction.Since(23)Q⟶−W⟶=M⟶+B∗U⟶−A∗V⟶=M⟶+B∗A∗H⟶−A∗B∗H⟶=M⟶+BA∗H⟶−AB∗H⟶=M⟶,Alice gets the plaintext messagesM⟶.Definition 5. Letf0,f1,…,fn−1 be non-negative integers and f0>0. Let C be the companion matrix of the polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn. Let N∈MkZ and H⟶∈ZNn. Suppose U⟶=A∗H⟶ and V⟶=B∗H⟶, where A,B∈ℤ+C. Let R∈ZNn. The decisional ME problem is to decide whether R⟶=AB∗H⟶, given fx, H⟶, U⟶, V⟶, and R⟶. For simplicity, we abbreviate it to “DME problem.”Proposition 5. An algorithm that solves CME problem can be used to solve DME problem.Theorem 1. An algorithm that solves DME problem can be used to decide the validity of the ciphertexts of Scheme 4.2.1, and an algorithm that decides the validity of the ciphertexts of Scheme 4.2.1 can be used to solve DME problem.Proof. Suppose first that the algorithmA1 can decide whether a decryption of Scheme 4.2.1 is correct. In other words, when given the inputs fx, H⟶, U⟶, V⟶,Q⟶, M⟶, the algorithm A1 outputs “yes” if M⟶ is the decryption of V⟶,Q⟶ and outputs “no” otherwise. Let us use A1 to solve the DME problem. Suppose you are given p, fx, H⟶, U⟶=A∗H⟶, V⟶=B∗H⟶, and R⟶, and you want to decide whether or not R⟶=AB∗H⟶. Let Q⟶=R⟶ and M⟶=0k,0k,…,0k, where 0k is the k×k zero matrix of Mkℤ. Input all of these into A1. Note that in the present setup, A is the secret key. The correct decryption of V⟶,Q⟶ is(24)Q⟶−A∗V⟶=R⟶−A∗B∗H⟶=R⟶−AB∗H⟶. 
Therefore,A1 outputs “yes” exactly when M⟶=0k,0k,…,0k is the same as R⟶−AB∗H⟶, namely, when R⟶=AB∗H⟶. This solves the decision DME problem. Conversely, suppose an algorithmA2 can solve the DME problem. This means that if you give A2 inputs fx, H⟶, U⟶=A∗H⟶, V⟶=B∗H⟶, and R⟶, then A2 outputs “yes” if R⟶=AB∗H⟶ and outputs “no” if not. Let M⟶ be the claimed decryption of the ciphertext V⟶,Q⟶. Input Q⟶−M⟶ as R⟶. Note that M⟶ is the correct plaintext for the ciphertext V⟶,Q⟶ if and only if M⟶=Q⟶−A∗V⟶=Q⟶−AB∗H⟶, which happens if and only if Q⟶−M⟶=AB∗H⟶. Therefore, M⟶ is the correct plaintext if and only if R⟶=AB∗H⟶. Therefore, with these inputs, A2 outputs “yes” exactly when M⟶ is the correct plaintext.Theorem 2. An algorithm that solves CME problem can be used to decrypt the ciphertexts of Scheme 4.2.1, and an algorithm that decrypts the ciphertexts of Scheme 4.2.1 can be used to solve CME problem.Proof. If we have an algorithmA3 that can decrypt all ciphertexts of Scheme 4.2.1, then input U⟶=A∗H⟶ and V⟶=B∗H⟶. Take any vector for Q⟶. Then, A3 outputs(25)M⟶=Q⟶−A∗V⟶=Q⟶−AB∗H⟶. Therefore,Q⟶−M⟶ yields the solution AB∗H⟶ to the CME problem. Conversely, suppose we have an algorithmA4 that can solve CME problem. If we have an ciphertext V⟶,Q⟶, then we input U⟶=A∗H⟶ and V⟶=B∗H⟶. Then, A4 outputs AB∗H⟶. Since M⟶=Q⟶−A∗V⟶=Q⟶−AB∗H⟶, we obtain the plaintext M⟶ □. ## 4.2.1. Scheme 4.2.1 Key generation.Letf0,f1,…,fn−1 be non-negative integers and f0>0. Let C be the companion matrix of the polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn. Let N∈MkZ and H⟶∈ZNn. There exists not a component Hi of H⟶ such that ∀j≠iHj∈Hii,j=0,1,…,n−1. The public parameters are fx, H⟶. The key generation center chooses at random integers a0,a1,…,an−1∈0,s−1 and computes(21)A=an−1Cn−1+⋯+a1C+a0E=aijn×n,U⟶=A∗H⟶.The public key of Alice isU⟶. The private key of Alice is A (or a0,a1,…,an−1).Encryption.Bob wants to send a plaintext messagesM⟶∈Mkℤn to Alice.(1) Bob chooses at random integersb0,b1,…,bn−1 in 0,s−1 and computes(22)B=bn−1Cn−1+⋯+b1C+b0E=bijn×n.(2) Bob computesV⟶=B∗H⟶ as a part of ciphertext.(3) Bob computesQ⟶=M⟶+B∗U⟶ as the rest of the ciphertext, where “+” is the ordinary integer matrix vector addition.(4) Bob sends the ciphertextV⟶,Q⟶ to Alice.Decryption.Alice receives the ciphertextV⟶,Q⟶ and tries to decrypt it.(1) Using her private keyA, Alice computes W⟶=A∗V⟶.(2) Alice computesQ⟶−W⟶, where “-” is the ordinary integer matrix vector subtraction.Since(23)Q⟶−W⟶=M⟶+B∗U⟶−A∗V⟶=M⟶+B∗A∗H⟶−A∗B∗H⟶=M⟶+BA∗H⟶−AB∗H⟶=M⟶,Alice gets the plaintext messagesM⟶.Definition 5. Letf0,f1,…,fn−1 be non-negative integers and f0>0. Let C be the companion matrix of the polynomial fx=−f0−f1x−⋯−fn−1xn−1+xn. Let N∈MkZ and H⟶∈ZNn. Suppose U⟶=A∗H⟶ and V⟶=B∗H⟶, where A,B∈ℤ+C. Let R∈ZNn. The decisional ME problem is to decide whether R⟶=AB∗H⟶, given fx, H⟶, U⟶, V⟶, and R⟶. For simplicity, we abbreviate it to “DME problem.”Proposition 5. An algorithm that solves CME problem can be used to solve DME problem.Theorem 1. An algorithm that solves DME problem can be used to decide the validity of the ciphertexts of Scheme 4.2.1, and an algorithm that decides the validity of the ciphertexts of Scheme 4.2.1 can be used to solve DME problem.Proof. Suppose first that the algorithmA1 can decide whether a decryption of Scheme 4.2.1 is correct. In other words, when given the inputs fx, H⟶, U⟶, V⟶,Q⟶, M⟶, the algorithm A1 outputs “yes” if M⟶ is the decryption of V⟶,Q⟶ and outputs “no” otherwise. Let us use A1 to solve the DME problem. 
Suppose you are given p, fx, H⟶, U⟶=A∗H⟶, V⟶=B∗H⟶, and R⟶, and you want to decide whether or not R⟶=AB∗H⟶. Let Q⟶=R⟶ and M⟶=0k,0k,…,0k, where 0k is the k×k zero matrix of Mkℤ. Input all of these into A1. Note that in the present setup, A is the secret key. The correct decryption of V⟶,Q⟶ is(24)Q⟶−A∗V⟶=R⟶−A∗B∗H⟶=R⟶−AB∗H⟶. Therefore,A1 outputs “yes” exactly when M⟶=0k,0k,…,0k is the same as R⟶−AB∗H⟶, namely, when R⟶=AB∗H⟶. This solves the decision DME problem. Conversely, suppose an algorithmA2 can solve the DME problem. This means that if you give A2 inputs fx, H⟶, U⟶=A∗H⟶, V⟶=B∗H⟶, and R⟶, then A2 outputs “yes” if R⟶=AB∗H⟶ and outputs “no” if not. Let M⟶ be the claimed decryption of the ciphertext V⟶,Q⟶. Input Q⟶−M⟶ as R⟶. Note that M⟶ is the correct plaintext for the ciphertext V⟶,Q⟶ if and only if M⟶=Q⟶−A∗V⟶=Q⟶−AB∗H⟶, which happens if and only if Q⟶−M⟶=AB∗H⟶. Therefore, M⟶ is the correct plaintext if and only if R⟶=AB∗H⟶. Therefore, with these inputs, A2 outputs “yes” exactly when M⟶ is the correct plaintext.Theorem 2. An algorithm that solves CME problem can be used to decrypt the ciphertexts of Scheme 4.2.1, and an algorithm that decrypts the ciphertexts of Scheme 4.2.1 can be used to solve CME problem.Proof. If we have an algorithmA3 that can decrypt all ciphertexts of Scheme 4.2.1, then input U⟶=A∗H⟶ and V⟶=B∗H⟶. Take any vector for Q⟶. Then, A3 outputs(25)M⟶=Q⟶−A∗V⟶=Q⟶−AB∗H⟶. Therefore,Q⟶−M⟶ yields the solution AB∗H⟶ to the CME problem. Conversely, suppose we have an algorithmA4 that can solve CME problem. If we have an ciphertext V⟶,Q⟶, then we input U⟶=A∗H⟶ and V⟶=B∗H⟶. Then, A4 outputs AB∗H⟶. Since M⟶=Q⟶−A∗V⟶=Q⟶−AB∗H⟶, we obtain the plaintext M⟶ □. ## 5. Possible Attacks, Parameter Selection, and Efficiency ### 5.1. Possible Attacks (1) Brute-force attack. Suppose thatA=an−1Cn−1+⋯+a1C+a0E=aijn×n and a0,a1,…,an−1∈0,s−1. It is clear that attacker has sn choices to choose A. So, the parameters s and n must satisfy sn≥280.(2) Tropical matrix decomposition attack.Suppose thatU⟶=U1,U2,…,Un. If attacker can find A′=aij′n×n such that(26)H1a11′H2a12′…Hna1n′=U1H1a21′H2a22′…Hna2n′=U2………H1an1′H2an2′…Hnann′=Un,A′C=CA′.Then he can get the shared keyA⋅B∗H⟶ by A′ and the public information. He can perform the following steps to find A′.(i) FactorUi=G1G2,…,Gn, where Gj∈Hji,j=1,2,…,n.(ii) Findaij′ such that Hiaij′=Gii,j=1,2,…,n by solving discrete logarithm problem in Hi.(iii) Verify whether or notA′C=CA′. If not, go to (i).However, it is hard to factorUi=G1G2⋯Gn, where Gj∈Hj and n>2. Generally, it is NP-hard by Shitov [24].(3) KU attack. In our cryptosystems, the used commutative subsemiring in our cryptosystems is the subsemiringZN and N is unknown. This is different from that in literature [16]. They used two public tropical matrices M1,M2M1M2≠M2M1 and then adopted the commutative subsemiring ZM1,, ZM2. Let p1M1∈ZM1, p2M2∈ZM2, and p1M1p2M2=U. The security of the cryptosystem relies on the difficulty of the problem of finding S1∈ZM1 and S2∈ZM2 such that S1S2=U. Because the secrete matrix can be expressed as a polynomial of M. Kotov and Ushakov designed an efficient algorithm to attack the tropical key exchange protocol [17]. In our cryptosystems, since attacker does not know N, the KU attack will not work. To find N from public information, attacker is faced with the following problem.GivenH1,H2,…,Hn∈MkZ, find N∈MkZ such that(27)d19⊗N9⊕d18⊗N8⊕⋯⊕d11⊗N⊕d10⊗Ik=H1d29⊗N9⊕d28⊗N8⊕⋯⊕d21⊗N⊕d20⊗Ik=H2………dn9⊗N9⊕dn8⊗N8⊕⋯⊕dn1⊗N⊕dn0⊗Ik=Hn.wheredi9,di8,…,di1,di0i=1,2,…,n and N are all unknown. 
It is clear that this is a problem of solving systems of min-plus polynomial equations which is NP-hard [16].Even ifN is obtained by attacker. It seems also hard to find the private key matrix A from the public key U⟶=A∗H⟶. As we know, KU attack can only decompose a tropical matrix into the product of two matrices such as U=S1S2. If n>2, each component matrix of U⟶ is the product of matrices more than two. In this case, KU attack will also not work.(4) Generalized KU attack. In order to remedy Grigoreiv–Shpilrain’s protocols, Muanalifah and Sergeev suggested some modifications that use two types of matrices that are Jones matrices and Linde–de la Puente matrices [22]. But the authors also pointed out that their modifications cannot resist the generalized KU attack which can also decompose the public matrix into the product of two Jones matrices (Linde–de la Puente matrices) expressed as the linear form of the tropical basic elementary matrix. In our cryptosystems, if n>2, then each component matrix of U⟶ is the product of matrices more than two. In this case, the generalized KU attack will also not work for our cryptosystems.(5) RM attack. Grigoreiv and Shpilrain [18] improved the original scheme and proposed the public key cryptosystems based on semi-direct product of tropical matrix semiring. But the first component of semi-direct product multiplication contains the addition of tropical matrix. Because the addition operation of tropical matrix is idempotent, the powers of semi-direct product multiplication have partial order preservation. Using this property, Rudy and Monico designed a simple binary search algorithm and cracked the cryptosystem in [18]. In our cryptosystems, A∗H⟶ has not the addition operation of tropical matrix. So, our cryptosystems can resist this attack.(6) Quantum attack. ME problem is the generalization of the discrete logarithm problem. As we know, the discrete logarithm problem can be reduced in polynomial time to hidden subgroup problem which can be solved in polynomial time by the generalized Shor quantum algorithm [25]. If the semigroup action is derived from a module over ring, there exist the similar reduction algorithms for the corresponding semigroup action problem. When the semigroup action is induced by a semimodule over semiring that cannot be embedded in a module, no effective reduction algorithm has been found for the corresponding semigroup action problem. It is easy to verify that “∗” is a semigroup action derived from the semimodule ZNn over the semiring ℤ+C and ME problem is indeed the corresponding semigroup action problem induced by the semimodule. Since the semimodule ZNn cannot be embedded in a module, ME problem cannot be reduced in polynomial time to hidden subgroup problem generally.Table1 provides the comparison among relevant tropical cryptographic schemes.Table 1 Comparison among relevant tropical schemes. SchemesMathematical problemsKU attackRM attackG-KU attackGrigoriev [16]Two-side matrix action problem×√×Grigoriev [18]Semi-direct product problem√×√Muanalifah [22]Two-side matrix action problem√√×Our schemesMultiple exponentiation problem√√√Note. that √ means that the scheme can resist the corresponding attack, while × does not. ### 5.2. Parameter Selection and Efficiency By Proposition2, if there exists a component Hi of H⟶ such that ∀j≠iHj∈Hii,j=0,1,…,n−1, then the ME problem can be reduced to discrete logarithm problem in polynomial time. 
To avoid this case, H⟶ needs to satisfy the condition that there exists not a component Hi of H⟶ such that ∀j≠iHj∈Hii,j=0,1,…,n−1. Note that N is unknown and H1,H2,…,Hn∈ZN. We can choose Hi such that(28)Hi=di9⊗N9⊕di8⊗N8⊕⋯⊕di1⊗N⊕di0⊗Ik,where di0,di1,…,di9 are integers selected randomly in 0,1024. Then, the number of possible Hi is 2100. Experiments show that it is easy to generate H1,H2,…,Hn satisfying the above condition.Generally, forN∈MkZ, the monogenic subsemigroup N is infinite. But for many tropical matrices N∈MkZ, there are non-negative integers l,m and integer e such that e⊗Nl=Nl+m. If l,m are the smallest non-negative integers such that e⊗Nl=Nl+m, then l is called the pseudo-index of the matrix N and m is called the pseudo-period of the matrix N. If the pseudo-indexes and the pseudo-periods of N and Hii=1,2,…,n are all small, then there may be some potential heuristic attacks. The pseudo-index of tropical matrix increases with the increase of k. Our experiments show that it is feasible to generate N and Hii=1,2,…,n with pseudo-indexes more than k2 (k<30). In A∗H⟶ and B∗H⟶, the entries of A, B are the exponents of Hi. Since A,B∈ℤC, the entries of C and s should not be too large. We recommend fx=xn−1 and(29)C=00…0110…0001…00⋮⋮⋱⋮⋮00…10.Then, the entries ofA, B are in 0,s−1 and the entries of AB are less than ns−12. To resist some potential heuristic attacks, we recommend the parameters s,n,k satisfying k2≥ns−12.If we use the “square-multiply” algorithm to compute the power of tropical matrix, then computingA∗H⟶ requires n2logs+nn−1 tropical matrix multiplications. The numbers of bit operations required for multiplying two tropical matrices of order k are Ok3. So, the total number of bit operations required for calculating A∗H⟶ is Ok3n2logs.The size of secret keya0,a1,…,an−1 is less than nlogs bits. Suppose the entries of the matrices N are in the range 0,T and Hi=∑j=09dij⊗N9 where dij∈0,d. Then, the size of public key U⟶ is less than n2k2logs−1d+9T bits.SelectT=100 and d=1024. Table 2 provides the upper bounds of the size of secrete key and public key for different values of k,n,s such that sn≈280 and k2≈ns−12. And in Table 2, we also compare the running time of the operation A∗H⟶ under different parameters (experimental platform: Intel(R) Core(TM) i7-4700MQ CPU @ 2.40 GHz).Table 2 Performance comparison under some different parameters. knsUpper bound of sk (bit)Upper bound of pk (kB)Timing ofA∗H⟶ (s)9802806821.79814503807481.46319404808792.372233458010313.210283168011974.316 ## 5.1. Possible Attacks (1) Brute-force attack. Suppose thatA=an−1Cn−1+⋯+a1C+a0E=aijn×n and a0,a1,…,an−1∈0,s−1. It is clear that attacker has sn choices to choose A. So, the parameters s and n must satisfy sn≥280.(2) Tropical matrix decomposition attack.Suppose thatU⟶=U1,U2,…,Un. If attacker can find A′=aij′n×n such that(26)H1a11′H2a12′…Hna1n′=U1H1a21′H2a22′…Hna2n′=U2………H1an1′H2an2′…Hnann′=Un,A′C=CA′.Then he can get the shared keyA⋅B∗H⟶ by A′ and the public information. He can perform the following steps to find A′.(i) FactorUi=G1G2,…,Gn, where Gj∈Hji,j=1,2,…,n.(ii) Findaij′ such that Hiaij′=Gii,j=1,2,…,n by solving discrete logarithm problem in Hi.(iii) Verify whether or notA′C=CA′. If not, go to (i).However, it is hard to factorUi=G1G2⋯Gn, where Gj∈Hj and n>2. Generally, it is NP-hard by Shitov [24].(3) KU attack. In our cryptosystems, the used commutative subsemiring in our cryptosystems is the subsemiringZN and N is unknown. This is different from that in literature [16]. 
## 6. Conclusion and Further Research
This paper proposes a new key exchange protocol and a new public key encryption scheme based on the multiple exponentiation (ME) problem of tropical matrices, which resist all known attacks. Since the generating matrix N of the commutative subsemiring ZN is hidden and each public key matrix is the product of more than two unknown matrices, the cryptosystems resist the KU attack and the generalized KU attack. Because A∗H⃗ involves no tropical matrix addition, the attack method of Rudy and Monico does not work for the ME problem. As a semigroup action problem derived from a semimodule over a semiring, the ME problem cannot be reduced to the hidden subgroup problem in polynomial time, so our cryptosystems can be considered potential postquantum cryptosystems.

The algebraic properties of the pseudo-index and pseudo-period of tropical matrices have not yet been studied thoroughly. At present we can only find them by enumeration, and we can only generate tropical matrices with pseudo-index less than 900. Therefore, to prevent potential heuristic attacks, the dimension n of H⃗ must be large (n > 30), which makes the evaluation of A∗H⃗ slow. If tropical matrices with large pseudo-index or large pseudo-period could be generated efficiently, a smaller n could be chosen to improve efficiency.

Future work worth studying includes the following.

(1) Study the properties of the pseudo-index and pseudo-period of tropical matrices. A fast algorithm for generating tropical matrices of small order with large pseudo-index (or pseudo-period) would allow smaller n and k and thus improve the computational efficiency of our cryptosystems.

(2) Use other commutative tropical matrix semirings instead of ZN. For example, one could design public key cryptosystems based on the ME problem for Jones matrices or Linde–de la Puente matrices; ME problems for commutative matrices over other semirings can also be considered.

(3) The ME problem for tropical matrices is a generalization of the discrete logarithm problem: if we regard A∗H⃗ as H⃗^A, then the ME problem corresponds to the discrete logarithm problem, the CME problem to the CDH problem, and the DME problem to the DDH problem. We believe other cryptographic applications based on the ME problem are also feasible, for example, digital signature schemes and identity authentication schemes built on the CME or DME assumption, such as [26–28]. However, as pointed out above, cryptosystems based on the ME problem over tropical matrices are not highly efficient, since the number of matrices n and the matrix order k must be large to ensure security; this may limit some application scenarios.

---

*Source: 1024161-2022-09-09.xml*
# Facial Recognition for Drunk People Using Thermal Imaging

**Authors:** Agustin Sancen-Plaza; Luis M. Contreras-Medina; Alejandro Israel Barranco-Gutiérrez; Carlos Villaseñor-Mora; Juan J Martínez-Nolasco; José A. Padilla-Medina

**Journal:** Mathematical Problems in Engineering (2020)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2020/1024173

---

## Abstract

Face recognition using thermal imaging has the main advantage of being less affected by lighting conditions than recognition on images in the visible spectrum. However, factors such as the process of human thermoregulation cause variations in the surface temperature of the face, and these variations make recognition systems lose effectiveness. In particular, alcohol intake changes the surface temperature of the face, and it is highly relevant to identify not only whether a person is drunk but also who that person is. In this paper, we present a technique for face recognition based on thermal face images of drunk people. For the experiments, the Pontificia Universidad Católica de Valparaíso-Drunk Thermal Face database (PUCV-DTF) was used. The recognition system is based on local binary patterns (LBPs). The LBP features were obtained from a bioheat-model representation of the thermal image and from a fusion of that representation with the vascular network extracted from the same image. The feature vector for each image is formed by concatenating the LBP histograms of the anisotropically filtered thermogram and of the fused image. The proposed technique achieves an average of 99.63% in Rank-10 cumulative classification; this performance is superior to using LBP on thermal images that do not use the bioheat model.

---

## Body

## 1. Introduction

The recognition of individuals based on thermal images of the face (thermograms), captured by long-wave infrared (LWIR) thermal cameras in the range of 8 to 15 μm, has considerable advantages over facial recognition systems operating in the visible spectrum of 390–700 nm (VS). From thermal face images it is possible to extract unique marks that can be used to identify people [1]. These marks tolerate recognition better under pose changes [2] and variable illumination conditions [3, 4], and can even be obtained in complete darkness. These advantages have motivated researchers to propose models that capture the discriminative information needed to identify individuals.

Despite these advantages, some factors may prevent the marks from being generated with exactly the same morphology for the same person. The factors can be internal or external and cause alterations in the thermoregulatory system. For example, caffeine and alcohol ingestion may increase the surface temperature of the face [5, 6]. These temperature changes activate the heat dispersion zones of the skin, which in turn can be captured as an image by a thermal camera. The morphology of the heat zones has been used to design person recognition systems.

In this study, a thermal facial recognition system was designed considering the effects of temperature variations caused by alcohol consumption. For training, only the thermograms of the individuals taken when they had not ingested alcohol were used as a reference; this is proposed to simulate an application scenario in a real-life situation.
Blood perfusion models [7] were used to mitigate the effects of the temperature increase caused by alcohol intake. The perfusion model was applied as an image preprocessing step. This makes the approach well suited to scenarios in which conditions that change the surface temperature of the face can occur.

## 2. Materials and Methods

### 2.1. Thermal Face Database

The methodology was evaluated using the Pontificia Universidad Católica de Valparaíso-Drunk Thermal Face database (PUCV-DTF) [8]. It consists of 46 participants, 40 men and 6 women, with an average age of 24 years (range 18 to 29 years). Regarding their general health, participants had to be free of alcohol-related health problems and not be regular consumers. There are 250 face thermograms for each individual: 50 captured before any alcohol was ingested and 50 after each of 1, 2, 3, and 4 beers. Each drink was a 355 mL can of 5.5° beer. The authors established a procedure for image capture. First, participants rested for 30 minutes after arriving at the capture site so that their metabolism stabilized to the environmental conditions. Each can of beer was consumed within 5 minutes, the 50 images were captured in an average time of 1 minute, and after each consumption there was a stabilization period of 30 minutes.

### 2.2. Theory

#### 2.2.1. Blood Perfusion Models

There are conditions, endogenous and exogenous to the individual, that can affect the surface temperature of the face. Endogenous factors are those that stimulate the thermoregulatory system: alcohol consumption [6], pathologies [9], physical activity [10], and the passage of time [11, 12] are some of the causes that can stimulate the body and, in turn, disperse heat. Exogenous conditions are related to items worn on the face, such as glasses [13]. Because the thermal camera captures these variations, recognition systems that do not take them into account lose effectiveness. The first heat dispersion model using LWIR face images was proposed by Wu et al. [7]. Its main purpose was to mitigate the effects of heat redistribution on thermal images (Figure 1). To establish the model, the authors started from the thermal equilibrium equation

$$Q_r + Q_e + Q_f = Q_c + Q_m + Q_b, \tag{1}$$

where $Q$ represents the heat flux per unit area due to radiation ($Q_r$), evaporation ($Q_e$), convection ($Q_f$), body conduction ($Q_c$), metabolism ($Q_m$), and blood flow convection ($Q_b$).

Figure 1: Thermal images of the same individual: (a) room temperature = 25.4–26.2°C and (b) room temperature = 32.8–33.1°C [7].

In a later work, Wu et al. [14] proposed a simplified model based on the reduced equation from which equation (2) is derived:

$$\omega = \frac{\varepsilon \sigma \left( T_{i,j}^{4} - T_e^{4} \right)}{\alpha c_b \left( T_a - T_{i,j} \right)}, \tag{2}$$

where $\varepsilon$ stands for skin emissivity, $\sigma$ is the Stefan–Boltzmann constant, $T_{i,j}$ is the temperature of the pixel under analysis, $T_e$ is the environment temperature, $\alpha$ is the tissue-skin exchange ratio, $c_b$ is the specific heat of blood, and $T_a$ is the average artery-vein temperature.

Xie et al. [15] proposed a blood perfusion model based on the bioheat transfer model. Equation (3) is a discrete version of the bioheat transfer model obtained from the Pennes equation:

$$W_b(i,j) = \frac{-\frac{K}{\lambda}\left( T_{i-1,j} + T_{i+1,j} + T_{i,j-1} + T_{i,j+1} - 4T_{i,j} \right) - q_m}{C_b \left( T_a - T_{i,j} \right)}, \tag{3}$$

where $q_m$ stands for the blood dispersion rate and $\lambda$ is a constant related to the pixel distance.
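For illustration, equation (3) can be evaluated pixelwise with a 4-neighbour Laplacian, as in the following sketch; the constants are placeholders rather than values from the paper, and border handling is simplified.

```python
import numpy as np

def bioheat_transform(T, K=1.0, lam=1.0, qm=1.0, Cb=1.0, Ta=37.0):
    """Pixelwise evaluation of equation (3) on a 2-D temperature map T.
    All constants here are placeholder values, not values from the paper."""
    # 4-neighbour discrete Laplacian; np.roll wraps around at the borders,
    # so a faithful implementation would pad the image instead.
    lap = (np.roll(T, 1, axis=0) + np.roll(T, -1, axis=0)
           + np.roll(T, 1, axis=1) + np.roll(T, -1, axis=1) - 4.0 * T)
    # Assumes T != Ta everywhere (skin is cooler than arterial blood).
    return (-(K / lam) * lap - qm) / (Cb * (Ta - T))
```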
This model makes it possible to attenuate surface temperature variations caused by internal or external conditions. One of the main advantages of these models is that the variables needed to solve the equations are either predefined constants or values obtained empirically. Wu's model, represented by equation (2), requires knowing the ambient temperature at the time of thermogram capture; this variable is not needed in the model proposed by Xie, described in equation (3). The Xie model is therefore the better alternative in applications where it is not possible to know the ambient temperature. With thermal cameras it is possible to obtain the surface temperature values of the face within acceptable error ranges, on average around 2%.

#### 2.2.2. Vascular Networks

The heat patterns of each individual can be useful in recognition systems. Ghiass et al. [16] noted that no high-reliability studies have demonstrated a direct correspondence between vascular networks and heat zones. However, an individual's heat zones can serve as a biometric mark [17] regardless of whether they correspond directly to vascular networks. Buddharaju et al. [18] used heat zones to generate thinned vascular networks, which in turn can be used as unique identification marks.

Cho et al. [19] proposed preprocessing techniques to extract vascular networks: (1) application of a salt-and-pepper filter to remove noise from the image; (2) contrast enhancement; and (3) extraction of the vascular network using the top-hat algorithm represented by equations (4) and (5). Their recognition system is based on the location of bifurcation points obtained from the thinned vascular network:

$$f_{\text{open}} = (f \odot S) \oplus S, \tag{4}$$

$$f_{\text{vascular}} = f - f_{\text{open}}, \tag{5}$$

where $f_{\text{open}}$ is the morphological opening of $f$, obtained by applying erosion ($\odot$) followed by dilation ($\oplus$) with the structuring element $S$.

Hermosilla et al. [20] fused the thermogram with its vascular network to improve recognition rates. Several feature extraction algorithms were used: LBP histograms, Gabor descriptors, WLD histograms, and SIFT and SURF descriptors. The experiments were performed on two databases, EQUINOX and the UCH thermal face database [17]. On average, the fusion improved recognition percentages by 1.89%.

Nguyen et al. [21] recreated a three-dimensional model of the individuals' heads from the fusion of video images. The vascular network is isolated, generating a 3D model; this representation would be closer to an individual's real vascular network.

A limitation of using vascular networks for identification is that they are affected by changes in the surface temperature of the face. For example, in people who have consumed alcohol, the heat-dispersing areas of the face change their morphology compared with the sober state. In one experiment, Koukiou [22] showed that the morphology of the vascular networks extracted from the thermogram of a person changed after drinking alcohol. The effect of this morphological modification of the vascular network is shown in Figure 2. The phenomenon is caused by the individual's thermoregulation process triggered by alcohol intake; the networks of veins and arteries themselves do not undergo noticeable structural changes.

Figure 2: Vascular networks extracted from thermograms of the same individual (a) without consuming alcohol and (b) after having consumed a 355 mL can of 5.5° beer. After drinking the first beer, 148 minutes pass until the last beer is consumed.
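A minimal sketch of the top-hat extraction in equations (4) and (5), using SciPy's grayscale morphology; the filter and structuring-element sizes are illustrative choices, not values reported by the cited authors.

```python
import numpy as np
from scipy.ndimage import median_filter, grey_opening

def vascular_network(image, se_size=5):
    """Equations (4)-(5): the vascular network is the image minus its
    morphological opening (erosion followed by dilation)."""
    f = median_filter(image, size=3)                   # salt-and-pepper removal
    f_open = grey_opening(f, size=(se_size, se_size))  # opening with a square S
    return f - f_open                                  # f_vascular
```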
The fact that changes in surface temperature modify the morphology of the vascular networks extracted from thermograms poses challenges for identification systems aimed at real-life scenarios. Many situations alter the surface temperature of the face in uncontrolled environments, whereas vascular-network-based identification has achieved good recognition results mainly in controlled environments. The open question is how these systems behave when such situations appear in thermograms captured in uncontrolled environments, in this study specifically in people who have consumed alcohol.

### 2.3. Methodology

Recognition systems that use infrared face images are classified by Arya et al. [23] as (1) based on classical (holistic) face recognition techniques; (2) based on feature extraction; and (3) based on multimodal analysis. The system proposed in this study is based on feature extraction and was designed in three stages: (1) generation of the bioheat transfer model; (2) fusion of thermograms with vascular networks; and (3) feature extraction.

#### 2.3.1. Bioheat Transfer Model

The blood dispersion models increase the intensity of the image pixels corresponding to tissue with higher surface temperature and attenuate those with relatively lower temperature. Figures 3(a) and 3(b) show this augmentation and attenuation, respectively.

Figure 3: Thermal face images: (a) image taken in LWIR and (b) image after applying the bioheat transfer model to an image from the UCHThermalFace database [17].

Initially, the histogram of the thermogram $f$ is equalized, with the objective of obtaining an image with a uniform intensity distribution. The discrete Pennes equation, defined in equation (3), is then applied to the resulting image. This transformation yields the image $W_b$, the blood dispersion representation of the thermogram $f$. Applying the bioheat transfer model mitigates the effects of the alcohol-induced increase in facial surface temperature captured in the thermogram.

#### 2.3.2. Image Fusion

Image fusion is used to exploit both the vascular network and the bioheat model features for recognition. The fused image $W_{FUS}$ is obtained from $W_b$ and the vascular network $W_{VN}$ extracted from the same thermogram; $W_{VN}$ results from applying equations (4) and (5) to $W_b$. In the work of Hermosilla et al. [20], fusion is performed between a smoothed thermogram and the vascular network of the same image. In this work, the operation defined in equation (6) is used instead of the ordered weighted averaging (OWA) operator [24]:

$$W_{FUS}(i,j) = \begin{cases} W_{VN}(i,j), & \text{if } W_{VN}(i,j) \ge W_b(i,j), \\ W_b(i,j), & \text{otherwise,} \end{cases} \tag{6}$$

where $W_{VN}$ and $W_b$ must have the same size. Figure 4 shows the fusion process applied to the same individual with different levels of alcohol consumption.

Figure 4: Thermal face images after applying BIOHEATRV to the same individual when he (a) has not consumed alcohol and when he has consumed (b) one, (c) two, (d) three, and (e) four beers, respectively.
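The fusion rule of equation (6) amounts to a pixelwise dominant-value selection; a one-function sketch (name ours):

```python
import numpy as np

def fuse(W_vn, W_b):
    """Equation (6): keep the vascular-network pixel where it dominates,
    otherwise the bioheat-model pixel; both arrays share one shape."""
    return np.where(W_vn >= W_b, W_vn, W_b)
```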
#### 2.3.3. Image Features

Feature extraction is performed using local binary patterns (LBPs). LBP (equation (7)) has been used to process facial images for recognition purposes in the VS [25], near-infrared (NIR) [26], and LWIR [27] bands. LBP admits several configurations, and studies have identified the configurations best suited to face recognition systems [17]:

$$\mathrm{LBP}(i_c, j_c) = \sum_{p=0}^{P-1} 2^{p} \, s(f_p - f_c), \tag{7}$$

$$s(x) = \begin{cases} 1, & \text{if } x \ge 0, \\ 0, & \text{otherwise,} \end{cases} \tag{8}$$

where $P$ is the number of adjacent pixels to compare, and $f_p$ and $f_c$ stand for the intensity values at position $p$ and at the central position $c$, respectively.

In this research, $P = 8$ with a radius of 2 pixels. Only binary patterns with at most two transitions in the binary string generated by $s(x)$ were taken into account. These patterns are called uniform binary patterns [28], and a rotation-invariant version was used. Equation (9) is the LBP reformulated for the extraction of rotation-invariant uniform local binary patterns:

$$\mathrm{LBP}_{8}^{riu2}(x_c, y_c) = \begin{cases} \sum_{p=1}^{8} s(f_p - f_c), & \text{if } U(\mathrm{LBP}_{8}(x_c, y_c)) \le 2, \\ 9, & \text{otherwise,} \end{cases} \tag{9}$$

where the function $U$ counts the transitions in the binary string generated by $\mathrm{LBP}_{8}(x_c, y_c)$. The image to be analyzed is divided into blocks of 20 rows by 4 columns. For each block, the LBP histogram is calculated, which yields a vector of values per block. The concatenation of all block vectors forms a single vector that serves as a global descriptor of the image.
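A sketch of the resulting global descriptor, assuming scikit-image's LBP implementation (its method="uniform" mode corresponds to the rotation-invariant uniform codes of equation (9)); the grid and normalization details follow our reading of the text.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_descriptor(img, grid=(20, 4), P=8, R=2):
    """Block-wise rotation-invariant uniform LBP histograms (Section 2.3.3),
    concatenated into one global descriptor and normalized."""
    codes = local_binary_pattern(img, P, R, method="uniform")  # codes 0..P+1
    h, w = codes.shape
    rows, cols = grid
    feats = []
    for i in range(rows):
        for j in range(cols):
            block = codes[i * h // rows:(i + 1) * h // rows,
                          j * w // cols:(j + 1) * w // cols]
            hist, _ = np.histogram(block, bins=P + 2, range=(0, P + 2))
            feats.append(hist)
    v = np.concatenate(feats).astype(float)
    return v / max(v.sum(), 1.0)  # the paper normalizes the global vector
```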
## 3. Results and Discussion

The experiment design was based on supervised learning: for each image, the identity of the person and the number of beers ingested are known. "Class 0" corresponds to individuals who have not ingested alcohol; "Class 1" through "Class 4" indicate the number of beers ingested by each individual. The main objective of the experiment was to reproduce the conditions of a real-life system, so "Class 0" serves as the training dataset in all experiments: in a recognition system deployed in real life, thermograms of individuals are most likely available only when they are not intoxicated.

For the experiment, three representations of the thermograms were compared. The first (ANISOFF) applies a traditional pipeline in which thermograms are preprocessed only with an anisotropic filter. The second (BIOHEATRV) uses preprocessing with the bioheat transfer model and the vascular network extracted from the same thermogram. Finally, ABFUSED concatenates the global LBP histogram of ANISOFF with that of BIOHEATRV into a single global vector; this representation is depicted in Figure 5.

Figure 5: Feature vector of ABFUSED obtained from the ANISOFF and BIOHEATRV models.

For each thermogram, a vector formed by the concatenation of the block LBP histograms was obtained, and all values of each vector were normalized. Each vector was labeled with the subject's identifier and the amount of alcohol consumed. For the classification phase, the nearest neighbor algorithm was used, with histogram intersection as the similarity measure. In all cases, data from subjects who had not consumed alcohol were used as the training set. Table 1 shows the cumulative recognition results for the three methods: for each class, the Rank-1, Rank-5, and Rank-10 values indicate the percentage of cumulative recognition when taking subsets of 1, 5, and 10 subjects, respectively. Subsets were formed with the individuals most similar to the subject being analyzed.
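A minimal sketch of this classification step (function names ours): nearest-neighbour matching under histogram intersection, returning the Rank-k candidate subset.

```python
import numpy as np

def hist_intersection(h1, h2):
    """Histogram intersection similarity (higher means more similar)."""
    return float(np.minimum(h1, h2).sum())

def rank_k_candidates(query, gallery, labels, k=10):
    """Nearest-neighbour identification: the k distinct gallery identities
    most similar to the query descriptor, i.e. the Rank-k candidate subset."""
    sims = np.array([hist_intersection(query, g) for g in gallery])
    candidates = []
    for idx in np.argsort(sims)[::-1]:  # descending similarity
        if labels[idx] not in candidates:
            candidates.append(labels[idx])
        if len(candidates) == k:
            break
    return candidates
```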
Table 1: Recognition percentages (%) of ANISOFF, BIOHEATRV, and ABFUSED for each class, together with the average recognition of the three methods.

| Method | Rank | Class 1 | Class 2 | Class 3 | Class 4 | Average |
|---|---|---|---|---|---|---|
| ANISOFF | Rank-1 | 91.21 | 78.69 | 84.34 | 82.56 | 84.22 |
| | Rank-5 | 96.17 | 93.08 | 96.52 | 93.73 | 94.88 |
| | Rank-10 | 99.82 | 96.47 | 99.78 | 98.73 | 98.70 |
| BIOHEATRV | Rank-1 | 91.51 | 84.34 | 83.16 | 84.77 | 85.94 |
| | Rank-5 | 95.95 | 92.13 | 91.52 | 96.69 | 94.07 |
| | Rank-10 | 99.30 | 96.91 | 94.17 | 98.69 | 97.27 |
| ABFUSED | Rank-1 | 92.21 | 85.60 | 87.26 | 86.73 | 87.95 |
| | Rank-5 | 98.04 | 95.60 | 97.08 | 98.73 | 97.36 |
| | Rank-10 | 99.91 | 98.69 | 99.95 | 99.95 | 99.63 |

In particular, Class 2 showed the worst cumulative recognition performance of all classes for ANISOFF, both at first-attempt recognition (Rank-1, 78.69%) and at Rank-10 (96.47%). The best ANISOFF results were obtained for Class 1, with Rank-1 and Rank-10 values of 91.21% and 99.82%, respectively. The average ANISOFF recognition percentages for Rank-1 and Rank-10 were 84.22% and 98.70%, respectively. The performance of the ANISOFF method is shown in Figure 6.

Figure 6: CMC recognition curve of the ANISOFF model for classes 1, 2, 3, and 4.

In contrast, BIOHEATRV had its worst performance on Class 3, with Rank-1 and Rank-10 values of 83.16% and 94.17%, respectively. Like ANISOFF, BIOHEATRV performed best on Class 1, with 91.51% and 99.30% recognition at Rank-1 and Rank-10, respectively. The cumulative recognition results are shown in Figure 7.

Figure 7: CMC recognition curve of the BIOHEATRV model for classes 1, 2, 3, and 4.

The behavior of ANISOFF on Class 3 and of BIOHEATRV on Class 2 indicated that the two methods are complementary and that using both could improve the recognition percentages. ABFUSED was designed from this observation. Combining methods does not always improve recognition percentages, but the combination proved viable in this study. The cumulative ABFUSED recognition percentages for each class are shown in Figure 8.

Figure 8: CMC recognition curve of the ABFUSED model for classes 1, 2, 3, and 4.

CMC curves were also plotted for each class (Figures 9–12). In every class, ABFUSED obtained the best recognition percentages, except for a segment between Rank-3 and Rank-4 of Class 3, where ANISOFF performed better.

Figure 9: CMC recognition curve for Class 1 for ANISOFF, BIOHEATRV, and ABFUSED.

Figure 10: CMC recognition curve for Class 2 for ANISOFF, BIOHEATRV, and ABFUSED.

Figure 11: CMC recognition curve for Class 3 for ANISOFF, BIOHEATRV, and ABFUSED.

Figure 12: CMC recognition curve for Class 4 for ANISOFF, BIOHEATRV, and ABFUSED.

The average recognition percentage of each class was calculated, and the average cumulative recognition value was computed for each rank and each class. ABFUSED had the highest recognition percentages. BIOHEATRV had a better Rank-1 recognition percentage than ANISOFF, although ANISOFF performed better over the remaining subsets.
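For reference, the CMC curves in Figures 6–12 can be computed from such ranked candidate lists as follows (a sketch with our names, not the authors' code):

```python
import numpy as np

def cmc_curve(candidate_lists, true_ids, max_rank=10):
    """Cumulative Match Characteristic: entry r-1 is the fraction of queries
    whose true identity appears among the top-r ranked candidates."""
    hits = np.zeros(max_rank)
    for candidates, true_id in zip(candidate_lists, true_ids):
        if true_id in candidates[:max_rank]:
            hits[candidates.index(true_id):] += 1
    return hits / len(true_ids)
```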
## 4. Conclusions

The recognition results reported in this paper are evidence of the effectiveness of bioheat transfer methods: these models can contain enough information for the identification of individuals. In the case of the BIOHEATRV method, it was demonstrated that it captures features complementary to those of the classical anisotropic filter. These complementary characteristics are the reason ABFUSED increased the recognition percentages for all classes.

Alcohol consumption, in particular, tends to generate metabolic alterations that in turn produce temperature rises in different parts of the facial surface. These temperature changes reduce the effectiveness of recognition systems; in particular, this study showed how the recognition percentages declined for classes 2 to 4. Of course, the methods described in this work did not take into account the particular conditions of each individual; neither the metabolism nor the heat dispersion of the face is the same for everyone. However, the experimental conditions were designed to emulate an everyday recognition system: thermograms acquired in a controlled environment were used as the training base, so images of subjects altered by alcohol were not part of the training set. In this scenario, the Rank-1 recognition percentages obtained for each class suggest that implementation in real-life situations is feasible.

The cumulative recognition analysis was performed to assess the feasibility of applying these methods in two-phase identification systems. The average Rank-10 percentage of 99.63% for the ABFUSED method suggests that a second recognition method could then be applied, because the population is limited to a subset of 10 individuals that contains the individual to be identified with a certainty of ∼100%.

Theoretically, heat dispersion models such as the bioheat transfer model mitigate global face temperature changes in the image. Studies are needed in which this is measured quantitatively, since other factors also affect the recognition percentages, for example, increases in facial surface temperature caused by physical activity, substance use, or certain pathologies. The database used in this investigation excluded people who are active consumers of alcohol. The inclusion criteria used when building datasets need to be broadened to determine how such characteristics change heat dispersion and how they probably decrease the effectiveness of face recognition systems. Other conditions should also be added; for example, it is common for people who consume alcohol to also eat food.

The authors intend to carry out tests using the bioheat transfer model, for example, as input variables to a deep learning neural network. Additionally, this line of work seeks to incorporate NIR face images to establish a recognition system based on information fusion with LWIR images. Research in these areas is underway and will be reported in subsequent papers.

---

*Source: 1024173-2020-04-14.xml*
1024173-2020-04-14_1024173-2020-04-14.md
44,936
Facial Recognition for Drunk People Using Thermal Imaging
Agustin Sancen-Plaza; Luis M. Contreras-Medina; Alejandro Israel Barranco-Gutiérrez; Carlos Villaseñor-Mora; Juan J Martínez-Nolasco; José A. Padilla-Medina
Mathematical Problems in Engineering (2020)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2020/1024173
1024173-2020-04-14.xml
--- ## Abstract Face recognition using thermal imaging has the main advantage of being less affected by lighting conditions compared to images in the visible spectrum. However, there are factors such as the process of human thermoregulation that cause variations in the surface temperature of the face. These variations cause recognition systems to lose effectiveness. In particular, alcohol intake causes changes in the surface temperature of the face. It is of high relevance to identify not only if a person is drunk but also their identity. In this paper, we present a technique for face recognition based on thermal face images of drunk people. For the experiments, the Pontificia Universidad Católica de Valparaíso-Drunk Thermal Face database (PUCV-DTF) was used. The recognition system was carried out by using local binary patterns (LBPs). The LBP features were obtained from the bioheat model from thermal image representation and a fusion of thermal images and a vascular network extracted from the same image. The feature vector for each image is formed by the concatenation of the LBP histogram of the thermogram with an anisotropic filter and the fused image, respectively. The proposed technique has an average percentage of 99.63% in the Rank-10 cumulative classification; this performance is superior compared to using LBP in thermal images that do not use the bioheat model. --- ## Body ## 1. Introduction The recognition of individuals based on thermal images of the face (thermograms), captured by long-wave infrared (LWIR) thermal cameras in the range of 8 to 15μm, has considerable advantages over facial recognition systems within the visible spectrum of 390–700 nm (VS). In face thermal images, it is possible to extract unique marks that can be used to identify people [1]. These marks have better tolerance for recognition in persons with pose changes [2] in variable illumination conditions [3, 4] and can even be obtained in complete darkness. These advantages have motivated researchers to propose different models that represent determining information for the identification of individuals.Despite these advantages, some factors may cause marks not to be generated precisely with the same morphology for the same person. The factors can be internal or external and cause alterations in the thermoregulatory system. For example, caffeine and alcohol ingestion may cause an increase in face surface temperature [5, 6]. These temperature changes cause the heat dispersion zones of the skin to activate and these, in turn, can be obtained in the form of an image by a thermal camera. The morphology of the heat zones has been used to design people's recognition systems.In this study, a thermal facial recognition system was designed considering the effects of temperature variations caused from alcohol consumption. . For training in the system, only the thermograms of the individuals when they have not taken alcohol were used as a reference. It is proposed in this way to simulate an application scenario in a real-life situation. Blood perfusion models were used in order to mitigate the effects caused by the increase in temperature [7] caused by alcohol intake. The perfusion model was performed as an image preprocessing. This feature is ideal for scenarios where conditions that cause changes in the surface temperature of the face can occur. ## 2. Materials and Methods ### 2.1. 
The methodology was evaluated using the Pontificia Universidad Católica de Valparaíso-Drunk Thermal Face database (PUCV-DTF) [8]. It consists of 46 participants, 40 men and 6 women, with an average age of 24 years (range 18–29 years). Regarding their general health, participants were required to have no alcohol-related health problems and not to be regular consumers. There were 250 face thermograms for each individual: 50 captured before any alcohol intake and 50 after each of 1, 2, 3, and 4 beers. Each drink was a 355 mL can of 5.5° beer. The authors established a procedure for image capture. First, participants rested for 30 minutes after arriving at the capture site so that their metabolism could stabilise to the environmental conditions. Each can of beer was consumed within 5 minutes. The 50 images were captured in an average time of 1 minute, and after each consumption there was a stabilisation time of 30 minutes.

### 2.2. Theory

#### 2.2.1. Blood Perfusion Models

There are conditions both endogenous and exogenous to the individual that can affect the surface temperature of the face. Endogenous factors are those that stimulate the thermoregulatory system: alcohol consumption [6], pathologies [9], physical activity [10], and the passage of time [11, 12] are some of the causes that can stimulate the body and, in turn, disperse heat. Exogenous conditions are related to items worn on the face, such as glasses [13]. Because the thermal camera captures these variations, recognition systems that do not take them into account lose effectiveness. The first heat dispersion model using LWIR face images was proposed by Wu et al. [7]. The main purpose of this model was to mitigate the effects of heat redistribution on thermal images (Figure 1). To establish the model, the authors started from the thermal equilibrium equation

$$Q_r + Q_e + Q_f = Q_c + Q_m + Q_b, \tag{1}$$

where each Q term represents a heat flux per unit area: radiation (Q_r), evaporation (Q_e), convection (Q_f), body conduction (Q_c), metabolism (Q_m), and blood flow convection (Q_b).

Figure 1 Thermal images of the same individual: (a) room temperature = 25.4–26.2°C and (b) room temperature = 32.8–33.1°C [7].

In a later work, Wu et al. [14] proposed a simplified model based on the reduced equation from which equation (2) is derived:

$$\omega = \frac{\varepsilon \sigma \left( T_{i,j}^{4} - T_e^{4} \right)}{\alpha c_b \left( T_a - T_{i,j} \right)}, \tag{2}$$

where ε stands for skin emissivity, σ is the Stefan–Boltzmann constant, T_{i,j} is the temperature of the pixel to analyse, T_e is the environment temperature, α is the tissue-skin exchange ratio, c_b is the specific heat of blood, and T_a is the average artery-vein temperature.

Xie et al. [15] proposed a blood perfusion model based on the bioheat transfer model. Equation (3) is a discrete version of the bioheat transfer model obtained from the Pennes equation:

$$W_b(i,j) = \frac{-\dfrac{K}{\lambda}\left( T_{i-1,j} + T_{i+1,j} + T_{i,j-1} + T_{i,j+1} - 4T_{i,j} \right) - q_m}{C_b \left( T_a - T_{i,j} \right)}, \tag{3}$$

where q_m stands for the blood dispersion rate and λ is a constant related to pixel distance. This model makes it possible to attenuate surface temperature variations caused by internal or external conditions.

One of the main advantages of using these models is that the variables involved in solving the equations are predefined as constants or were obtained empirically. Wu's model, represented by equation (2), requires knowing the ambient temperature at the time of thermogram capture. This variable is not necessary for the model proposed by Xie, described in equation (3), which is therefore the better alternative in applications where the ambient temperature cannot be known. With thermal cameras it is possible to obtain the surface temperature values of the face with acceptable error, on average around 2%.
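To make the preprocessing concrete, the following is a minimal sketch, not the original implementation, of how equation (3) might be evaluated pixelwise with NumPy. The constants K, λ, q_m, C_b, and T_a are placeholder values rather than those of the study, and the wrap-around border handling via np.roll is a simplification:

```python
import numpy as np

def perfusion_image(T, K=0.5, lam=1.0, qm=420.0, Cb=3770.0, Ta=310.0):
    """Pixelwise evaluation of the discrete Pennes model, equation (3).

    T is a 2-D float array of per-pixel values (e.g., the equalized
    thermogram). K, lam, qm, Cb, and Ta are placeholder constants, standing
    in for the predefined/empirical values the text refers to.
    """
    # 4-neighbour stencil T(i-1,j) + T(i+1,j) + T(i,j-1) + T(i,j+1) - 4 T(i,j);
    # np.roll wraps at the borders, a simplification of real edge handling.
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0)
           + np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T)
    Wb = (-(K / lam) * lap - qm) / (Cb * (Ta - T) + 1e-9)
    # Rescale to the 8-bit range so W_b can be processed as an ordinary image.
    Wb = (Wb - Wb.min()) / (np.ptp(Wb) + 1e-12)
    return (255.0 * Wb).astype(np.uint8)
```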
#### 2.2.2. Vascular Networks

The heat patterns of each individual can be useful in recognition systems. Ghiass et al. [16] noted that no high-reliability studies have demonstrated a direct correspondence between vascular networks and heat zones. Nevertheless, an individual's heat zones can serve as a biometric mark [17], regardless of whether they correspond directly to vascular networks. Buddharaju et al. [18] used heat zones to generate thinned vascular networks, which in turn can be used as unique identification marks.

Cho et al. [19] proposed preprocessing techniques to extract vascular networks: (1) application of a filter to eliminate salt-and-pepper noise in the image; (2) contrast enhancement; and (3) extraction of the vascular network using the top-hat algorithm represented by equations (4) and (5). Their recognition system is based on the location of bifurcation points obtained from the thinned vascular network:

$$f_{\mathrm{open}} = \left( f \ominus S \right) \oplus S, \tag{4}$$

$$f_{\mathrm{vascular}} = f - f_{\mathrm{open}}, \tag{5}$$

where f_open is the morphological opening of f, obtained by applying erosion (⊖) and then dilation (⊕) with the structuring element S.

Hermosilla et al. [20] used an image fusing the thermogram with its vascular network to improve recognition rates. Several feature-extraction algorithms were used: LBP histograms, Gabor descriptors, WLD histograms, and SIFT and SURF descriptors. The experiments were performed on two databases, EQUINOX and the UCH thermal face database [17]. Overall, the fusion improved recognition percentages by 1.89%.

Nguyen et al. [21] recreated a three-dimensional model of the individuals' heads from the fusion of video images; the vascular network is isolated, generating a 3D model. This representation would be closer to an individual's real vascular network.

A limitation in the use of vascular networks for identification is that they are affected by changes in the surface temperature of the face. For example, in people who have consumed alcohol, the heat-dispersing areas of the face change their morphology compared to the sober state. In one experiment, Koukiou [22] showed that the morphology of the vascular networks extracted from the thermogram of a person differed after alcohol intake. The effect of this morphological modification is shown in Figure 2. The phenomenon is caused by the individual's thermoregulation in response to alcohol intake; the networks of veins and arteries themselves do not undergo noticeable structural changes.

Figure 2 Vascular networks extracted from thermograms of the same individual (a) without consuming alcohol and (b) after having consumed 355 mL cans of 5.5° beer; after drinking the first beer, 148 minutes pass until the last beer is consumed.

The morphological changes of vascular networks extracted from thermograms affected by surface temperature variations pose challenges for identification systems applied to real-life scenarios. Several situations alter the surface temperature of the face in uncontrolled environments, whereas the use of vascular networks for identification has so far given good results mainly in controlled environments. The open question is how such systems behave when these alterations appear in thermograms captured in uncontrolled environments; this study addresses the particular case of people who have consumed alcohol.
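As an illustration of equations (4) and (5), the sketch below extracts a vascular-network image with a white top-hat (opening followed by subtraction). The median filter stands in for the salt-and-pepper denoising step of Cho et al. [19], the structuring-element size is an assumed value, and the thinning and bifurcation-point steps of the cited works are omitted:

```python
import numpy as np
from scipy.ndimage import median_filter, grey_opening

def vascular_network(Wb, se_size=5):
    """Top-hat extraction of heat-dispersion marks, equations (4) and (5).

    Wb is a 2-D 8-bit image (e.g., the perfusion map); se_size is the side
    of the square structuring element S, an assumed value.
    """
    f = median_filter(Wb, size=3).astype(np.int16)  # salt-and-pepper removal
    f_open = grey_opening(f, size=se_size)          # eq. (4): erosion, then dilation
    tophat = f - f_open                             # eq. (5): keeps bright, thin detail
    return np.clip(tophat, 0, 255).astype(np.uint8)
```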
### 2.3. Methodology

Recognition systems that use infrared face images are classified by Arya et al. [23] as (1) based on classical (holistic) face recognition techniques, (2) based on feature extraction, and (3) based on multimodal analysis. The system proposed in this study is based on feature extraction and was designed in three stages: (1) generation of the bioheat transfer model, (2) fusion of thermograms with vascular networks, and (3) feature extraction.

#### 2.3.1. Bioheat Transfer Model

The blood dispersion models increase the intensity of the image pixels corresponding to tissue with higher surface temperature and attenuate those with relatively lower temperature. Figures 3(a) and 3(b) show this augmentation and attenuation, respectively.

Figure 3 Thermal images of a face: (a) image taken in LWIR and (b) image after applying the bioheat transfer model, on an image from the UCHThermalFace database [17].

Initially, the histogram of the thermogram f is equalized, with the objective of obtaining an image with a uniform intensity distribution. The discrete Pennes equation, defined in equation (3), is then applied to the resulting image. This transformation yields the image W_b, the dispersion-model representation of the thermogram f. Applying the bioheat transfer model mitigates the increases in facial surface temperature captured in the thermogram that are caused by alcohol consumption.

#### 2.3.2. Image Fusion

Image fusion is introduced so that both the vascular network and the bioheat-model features can be used for recognition. The fused image W_FUS is obtained from W_b and the vascular network extracted from the thermogram, W_VN. The image W_VN is obtained by applying equations (4) and (5) to W_b. In the work of Hermosilla et al. [20], image fusion is performed on a smoothed thermogram and the vascular network from the same image; in this work, the operations defined in equation (6) are used instead of the ordered weighted averaging (OWA) operator [24]:

$$W_{\mathrm{FUS}}(i,j) = \begin{cases} W_{\mathrm{VN}}(i,j), & \text{if } W_{\mathrm{VN}}(i,j) \geq W_b(i,j), \\ W_b(i,j), & \text{otherwise}, \end{cases} \tag{6}$$

where W_VN and W_b must have the same size. Figure 4 shows the fusion process applied to the same individual at different levels of alcohol consumption; a code sketch of the fusion rule follows below.

Figure 4 Thermal images of the face applying BIOHEATRV to the same individual when he (a) has not consumed alcohol and after consuming (b) one, (c) two, (d) three, and (e) four beers, respectively.
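Since equation (6) selects, pixel by pixel, whichever of W_VN and W_b is larger, it reduces to a pixelwise maximum; a minimal sketch:

```python
import numpy as np

def fuse(W_vn, W_b):
    """Equation (6): per-pixel selection of the larger of the two images.

    The case analysis W_FUS = W_VN if W_VN >= W_b, else W_b, is exactly a
    pixelwise maximum; both arrays must share the same shape.
    """
    if W_vn.shape != W_b.shape:
        raise ValueError("W_VN and W_b must have the same size")
    return np.maximum(W_vn, W_b)
```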
#### 2.3.3. Image Features

Feature extraction is performed using local binary patterns (LBPs). LBP (equation (7)) has been used for face recognition in the VS [25], in the near infrared (NIR) [26], and in LWIR [27]. There are several configurations in which LBP can be used, and studies have been carried out to determine which configurations are better suited for face recognition systems [17]:

$$\mathrm{LBP}(i_c, j_c) = \sum_{p=0}^{P-1} 2^{p}\, s\!\left( f_p - f_c \right), \tag{7}$$

$$s(x) = \begin{cases} 1, & \text{if } x \geq 0, \\ 0, & \text{otherwise}, \end{cases} \tag{8}$$

where P is the number of adjacent pixels compared, and f_p and f_c stand for the pixel intensity values at position p and at the central position c, respectively. In this research, P is 8 with a radius of 2 pixels. Only binary patterns with at most two transitions in the binary string generated by s(x) were taken into account. Such patterns are called uniform binary patterns [28], and a rotation-invariant version was used. Equation (9) reformulates LBP for the extraction of rotation-invariant uniform binary patterns:

$$\mathrm{LBP}_{8}^{riu2}(x_c, y_c) = \begin{cases} \displaystyle\sum_{p=1}^{8} s\!\left( f_p - f_c \right), & \text{if } U\!\left( \mathrm{LBP}_{8}(x_c, y_c) \right) \leq 2, \\ 9, & \text{otherwise}, \end{cases} \tag{9}$$

where the function U counts the bitwise transitions in the binary string generated by LBP_8(x_c, y_c). The image to be analysed is divided into a grid of 20 rows by 4 columns of blocks. For each block, the LBP histogram is calculated, generating a vector of values per block; the concatenation of all block vectors forms a single vector that serves as a global descriptor of the image.
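A sketch of the descriptor construction under the stated configuration (P = 8, radius 2, rotation-invariant uniform patterns, a 20 × 4 block grid). It relies on scikit-image's local_binary_pattern, whose "uniform" method implements a rotation-invariant uniform mapping with code values 0 to P + 1, matching equation (9) for P = 8; using this library is an implementation choice of this sketch, not something prescribed by the text:

```python
import numpy as np
from skimage.feature import local_binary_pattern

P, RADIUS = 8, 2      # configuration stated in the text
GRID = (20, 4)        # 20 rows by 4 columns of blocks

def lbp_descriptor(image):
    """Concatenated per-block LBP(riu2) histograms, equations (7)-(9)."""
    # "uniform" yields rotation-invariant uniform codes in 0..P+1
    # (0..9 for P = 8), i.e., the output range of equation (9).
    codes = local_binary_pattern(image, P, RADIUS, method="uniform")
    blocks = [b for row in np.array_split(codes, GRID[0], axis=0)
              for b in np.array_split(row, GRID[1], axis=1)]
    hists = [np.histogram(b, bins=P + 2, range=(0, P + 2))[0] for b in blocks]
    v = np.concatenate(hists).astype(float)
    return v / (v.sum() + 1e-12)   # normalised, as described for the experiments
```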
## 3. Results and Discussion

The experimental design was based on supervised learning: for each image, the identity of the person and the number of beers ingested are known. "Class 0" corresponds to individuals who have not ingested alcohol, while "Class 1", "Class 2", "Class 3", and "Class 4" indicate the number of beers ingested. The main objective of the experiment was to reproduce the conditions of a real-life system, so "Class 0" serves as the training dataset in all experiments; in a recognition system applied in real life, thermograms of individuals are most likely available only when they are not intoxicated.

For the experiments, three representations of the thermograms were compared. The first applied a traditional pipeline in which thermograms are preprocessed only with an anisotropic filter (ANISOFF). The second used preprocessing with the bioheat transfer model and the vascular network extracted from the same thermogram (BIOHEATRV). Finally, a global vector built by concatenating the ANISOFF global LBP histogram with the BIOHEATRV global LBP histogram (ABFUSED) was used; this representation is described in Figure 5.

Figure 5 Feature vector of ABFUSED obtained from the ANISOFF and BIOHEATRV models.

For each thermogram, a vector formed by the global histogram obtained from the concatenation of the LBP histograms was computed. All values of each vector were normalized, and each vector was labeled with an identifier of the subject and the amount of alcohol consumed. For the classification phase, the nearest-neighbor algorithm was used, with histogram intersection as the similarity measure. In all cases, data on subjects without alcohol consumption were used as the training set.
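A minimal sketch of this classification stage under the stated design: Class-0 (sober) descriptors form the gallery, histogram intersection scores similarity, and the most similar subjects give the Rank-k candidate list. Names and array shapes are illustrative; an ABFUSED descriptor would simply be the concatenation of the ANISOFF and BIOHEATRV vectors:

```python
import numpy as np

def histogram_intersection(h, g):
    """Similarity of two normalised histograms; larger means more alike."""
    return np.minimum(h, g).sum()

def rank_k(query, gallery, labels, k=10):
    """Identities of the k gallery subjects most similar to the query.

    gallery: (n, d) array of Class-0 descriptors; labels: subject id per row.
    Rank-1 classification is rank_k(...)[0]; Rank-k evaluation asks whether
    the true identity appears anywhere in the returned list.
    """
    scores = np.minimum(gallery, query).sum(axis=1)  # vectorised intersection
    ranked = []
    for i in np.argsort(scores)[::-1]:               # best match first
        if labels[i] not in ranked:                  # deduplicate subjects
            ranked.append(labels[i])
        if len(ranked) == k:
            break
    return ranked
```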
Table 1 shows the cumulative recognition results for the three methods. For each class, the Rank-1, Rank-5, and Rank-10 values are shown; these indicate the percentage of cumulative recognition when taking the subsets of the 1, 5, and 10 gallery subjects most similar to the subject being analyzed.

Table 1 Recognition percentages of ANISOFF, BIOHEATRV, and ABFUSED for each of the classes, together with the average recognition of the three methods.

| Method | Class 1 R-1 | Class 1 R-5 | Class 1 R-10 | Class 2 R-1 | Class 2 R-5 | Class 2 R-10 | Class 3 R-1 | Class 3 R-5 | Class 3 R-10 | Class 4 R-1 | Class 4 R-5 | Class 4 R-10 | Avg R-1 (%) | Avg R-5 (%) | Avg R-10 (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ANISOFF | 91.21 | 96.17 | 99.82 | 78.69 | 93.08 | 96.47 | 84.34 | 96.52 | 99.78 | 82.56 | 93.73 | 98.73 | 84.22 | 94.88 | 98.70 |
| BIOHEATRV | 91.51 | 95.95 | 99.30 | 84.34 | 92.13 | 96.91 | 83.16 | 91.52 | 94.17 | 84.77 | 96.69 | 98.69 | 85.94 | 94.07 | 97.27 |
| ABFUSED | 92.21 | 98.04 | 99.91 | 85.60 | 95.60 | 98.69 | 87.26 | 97.08 | 99.95 | 86.73 | 98.73 | 99.95 | 87.95 | 97.36 | 99.63 |

R-k: Rank-k cumulative recognition percentage.

For ANISOFF, Class 2 showed the worst cumulative recognition percentages of all classes, both at first-attempt recognition (Rank-1, 78.69%) and at Rank-10 (96.47%). The best ANISOFF results were obtained for Class 1, with Rank-1 and Rank-10 values of 91.21% and 99.82%, respectively, and the average ANISOFF recognition for Rank-1 and Rank-10 was 84.22% and 98.70%. The performance of the ANISOFF method is shown in Figure 6.

Figure 6 CMC recognition curve of the ANISOFF model for classes 1, 2, 3, and 4.

In contrast, BIOHEATRV had its worst performance on Class 3, with Rank-1 and Rank-10 values of 83.16% and 94.17%, respectively. Like ANISOFF, BIOHEATRV performed best on Class 1, with 91.51% and 99.30% recognition for Rank-1 and Rank-10. The cumulative recognition results are shown in Figure 7.

Figure 7 CMC recognition curve of the BIOHEATRV model for classes 1, 2, 3, and 4.

The complementary behavior of ANISOFF on Class 3 and BIOHEATRV on Class 2 suggested that combining both methods could improve the recognition percentages, and ABFUSED was designed from this observation. Combining methods does not always improve recognition percentages, but this combination proved viable for this study. The cumulative recognition percentages of ABFUSED for each of the classes are shown in Figure 8.

Figure 8 CMC recognition curve of the ABFUSED model for classes 1, 2, 3, and 4.

CMC curves were also plotted per class (Figures 9–12). In each class, ABFUSED obtained the best recognition percentages, except for a segment between Rank-3 and Rank-4 of Class 3 where ANISOFF performed better.

Figure 9 CMC recognition curve for Class 1 for ANISOFF, BIOHEATRV, and ABFUSED.

Figure 10 CMC recognition curve for Class 2 for ANISOFF, BIOHEATRV, and ABFUSED.

Figure 11 CMC recognition curve for Class 3 for ANISOFF, BIOHEATRV, and ABFUSED.

Figure 12 CMC recognition curve for Class 4 for ANISOFF, BIOHEATRV, and ABFUSED.

The average recognition percentage of each class was calculated, computing the average cumulative recognition value for each rank and each class. ABFUSED had the highest recognition percentages. BIOHEATRV had a better Rank-1 recognition percentage than ANISOFF, whereas ANISOFF was better in the remaining subsets.

## 4. Conclusions

The recognition values reported in this paper are evidence of the effectiveness of bioheat transfer methods: these models can retain enough information for identifying individuals. In the case of the BIOHEATRV method, it is shown to contribute features complementary to those of the classical anisotropic filter.
These complementary characteristics are the reason why ABFUSED increased the recognition percentages for all classes.

Alcohol consumption, in particular, tends to generate metabolic alterations that in turn produce a temperature rise in different parts of the facial surface. These temperature changes reduce the effectiveness of recognition systems; in particular, this study showed how the recognition percentages declined for classes 2 to 4. The methods described in this work did not take into account the particular conditions of each individual, which matters because neither the metabolism nor the heat dispersion of the face is the same for all individuals. However, the experimental conditions were designed to mimic an everyday recognition system: thermograms acquired in a controlled environment were taken as the training base, so images of subjects with alcohol-induced alterations were not available for training. Under this scenario, the Rank-1 recognition percentages obtained for each of the classes suggest that implementation in real-life situations is feasible.

The cumulative recognition analysis was performed to assess the feasibility of applying these methods in two-phase identification systems. The average Rank-10 percentage of 99.63% for the ABFUSED method suggests that a second recognition method could be applied after the first: the candidate population is narrowed to a subset of 10 individuals that contains the individual to identify with a certainty of ∼100%.

Theoretically, heat dispersion models such as the bioheat transfer model mitigate global face temperature changes in the image. Studies are needed in which this is measured quantitatively, since other factors also affect the recognition percentages, for example those originating from increases in the surface temperature of the face due to physical activity, substance use, or some pathologies. The database used for this investigation excluded people who are active consumers of alcohol. The inclusion criteria used to assemble datasets need to be broadened in order to determine how such characteristics change heat dispersion and how they probably decrease the effectiveness of face recognition systems. Still other conditions should be considered; for example, it is common for people who consume alcohol to also eat food.

The authors intend to carry out tests using the bioheat transfer model, for example as input variables to a deep learning neural network. Additionally, this study seeks to incorporate information from face images in NIR to establish a recognition system fusing this information with LWIR images. Research is underway in these areas and will be reported in subsequent papers.

--- *Source: 1024173-2020-04-14.xml*
# The Challenge of Delivering Therapeutic Aerosols to Asthma Patients

**Authors:** Federico Lavorini
**Journal:** ISRN Allergy (2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/102418

---

## Abstract

The number of people with asthma continues to grow around the world, and asthma remains a poorly controlled disease despite the availability of management guidelines and highly effective medication. Patient noncompliance with therapy is a major reason for poor asthma control. Patients fail to comply with their asthma regimen for a wide variety of reasons, but incorrect use of inhaler devices is amongst the most common. The pressurised metered-dose inhaler (pMDI) is still the most frequently used device worldwide, but many patients fail to use it correctly, even after repeated tuition. Breath-actuated inhalers are easier to use than pMDIs. The rationale behind inhaler choice should be evidence based rather than empirical. When choosing an inhaler device, it is essential that it is easy to use correctly, that dosing is consistent, that adequate drug is deposited in both central and peripheral airways, and that drug deposition is independent of airflow. Regular checking of inhalation technique is crucial, as correct inhalation is one of the cornerstones of successful asthma management.

---

## Body

## 1. Introduction

The incidence of asthma continues to rise worldwide, doubling over the last 10 years [1–4], and, consequently, asthma places a huge economic burden on healthcare resources [5]. Asthma management guidelines [1, 2] are now available in virtually every country; their aim is to achieve control of the disease with the lowest possible dose of medication prescribed [1, 2]. To this end, asthma guidelines advocate a stepwise pharmacological approach that consists of increasing ("step up") the number of medications as asthma worsens and decreasing ("step down") them when asthma is under control [1, 2]. Once control of asthma has been achieved and maintained for at least three months, a gradual reduction of the maintenance therapy is recommended to identify the minimum therapy required to maintain control [1, 2]. Unfortunately, the current level of asthma control falls far short of the goals for long-term asthma management [2, 6, 7], with many patients reporting day- and night-time symptoms at least once a week and continuing to require unscheduled visits and hospitalisations [2, 6, 7]. One of the reasons why asthma remains poorly controlled is that patients derive incomplete benefit from their inhaled medication, primarily because they are unable to use their inhalers correctly [8–11].

The benefits of inhaled therapy for the treatment of obstructive airway diseases, such as asthma, have been recognised for many years. In comparison with oral or parenteral formulations, minute but therapeutic doses of drug are delivered topically into the airways, causing local effects within the lungs [12–14]. Unwanted systemic effects are minimised as the medication acts with maximum pulmonary specificity together with a rapid onset and duration of action [12–14]. Consequently, aerosol formulations of bronchodilators and corticosteroids are the mainstay of modern treatment for asthma at all ages [1, 2]. Aerosols are either solutions containing medications or suspensions of solid particles in a gas, generated from devices such as pressurised metered-dose inhalers (pMDIs), dry powder inhalers (DPIs), or nebulisers [12–16].
In the past decade some novel delivery systems with high delivery efficiencies have been developed; notable among these are the soft mist inhalers (SMIs). Each type of inhaler device has pros as well as cons (Table 1). Inhalers differ in their efficiency of drug delivery to the lower respiratory tract, depending on the form of the device, its internal resistance, the formulation of medication, particle size, the velocity of the produced aerosol plume, and the ease with which patients can use the device [12–16]. Efficiency of drug delivery may also be influenced by patients' preference, which in turn affects patients' adherence to treatment and indeed long-term control of the disease [17]. There seems little point in prescribing an effective medication in an inhaler device which patients cannot use correctly. Thus, the choice of the right inhaler for the patient is just as important as choosing the most effective medication.

Table 1 Major components, advantages, and disadvantages of inhaler devices.

| Inhaler | Formulation | Metering system | Advantages | Disadvantages |
|---|---|---|---|---|
| pMDI | Drug suspended or dissolved in propellant (with surfactant and cosolvent) | Metering valve and reservoir | Portable and compact; multidose device; relatively cheap; cannot contaminate contents; available for most inhaled medications | Contains propellants; not breath-actuated; many patients cannot use it correctly; high oropharyngeal deposition |
| pMDI + spacer | | | Easier to coordinate; large drug doses delivered more conveniently; less oropharyngeal deposition; higher lung deposition than a pMDI | Less portable than pMDI; plastic spacers may acquire static charge; additional cost to pMDI |
| BA-MDI | Drug suspended in propellant | Metering valve and reservoir | Portable and compact; multidose device; breath-actuated (no coordination needed); cannot contaminate contents | Contains propellants; "cold Freon" effect; requires moderate inspiratory flow to be triggered |
| DPI | Drug blend in lactose, drug alone, drug/excipient particles | Capsules, blisters, multidose blister packs, reservoirs | Portable and compact; breath-actuated (no coordination needed); does not contain propellants | Requires a minimum inspiratory flow; may not be appropriate for emergency situations; many patients cannot use it correctly; most types are moisture sensitive |
| SMI (Respimat) | Aqueous solution or suspension | Unit dose blisters or reservoirs | Portable and compact; multidose device; high lung deposition; does not contain propellants | Not breath-actuated; not currently available in most countries; relatively expensive |
| Nebulisers | Aqueous solution or suspension | Nebule dispensed into reservoir chamber of nebuliser | May be used at any age; no specific inhalation technique required; vibrating mesh is portable and does not require an outside energy source; may dispense drugs not available with pMDIs or DPIs | Jet and ultrasonic nebulisers require an outside energy source; treatment times can be long; performance varies between nebulisers; jet nebulisers cannot aerosolise a certain volume of solution; risk of bacterial contamination; newer nebulisers are expensive |

pMDI: pressurised metered-dose inhaler; BA-MDI: breath-actuated metered-dose inhaler; DPI: dry-powder inhaler; SMI: soft mist inhaler.

In this paper, the hand-held inhalers are reviewed together with a current understanding of correct inhalation technique for each device. A description of nebulisers, which are frequently used to deliver asthma medications [18], is also given.
However, since most current nebuliser designs are bulky and inconvenient and drug administration is prolonged, they are better categorised as second-line devices for most asthma patients. Finally, we present recommendations from the Aerosol Drug Management Improvement Team (ADMIT) for inhaler selection, as well as an algorithm for asthma therapy adjustment [8].

## 2. Aerosol Device Options

### 2.1. Pressurised Metered-Dose Inhalers

The first commercial pMDIs were developed by Riker Laboratories in 1955 and marketed in 1956 as the first portable, multidose delivery system for bronchodilators. Since that time, the pMDI has become the most widely prescribed inhalation device for drug delivery to the respiratory tract to treat obstructive airway diseases such as asthma and chronic obstructive pulmonary disease [18]; the total worldwide sales by all companies of pMDI products run in excess of $2 billion per year. The pMDI (Figure 1) is a portable multidose device consisting of an aluminium canister, lodged in a plastic support, containing a pressurised suspension or solution of micronised drug particles dispersed in propellants. A surfactant (usually sorbitan trioleate or lecithin) is also added to the formulation to reduce particle agglomeration and is responsible for the characteristic taste of specific inhaler brands. The key component of the pMDI is a metering valve, which delivers an accurately known volume of propellant, containing the micronised drug, at each valve actuation. Pressing the bottom of the canister into the actuator seating decompresses the formulation within the metering valve, resulting in the explosive generation of a heterodisperse aerosol of droplets that consist of tiny drug particles contained within a shell of propellant. The propellant evaporates with time and distance, which reduces the size of the particles; the pMDI thus uses a propellant under pressure to generate a metered dose of aerosol through an atomisation nozzle (Figure 1). The technology of the pMDI evolved steadily from the mid-1950s to the mid-1980s. Until recently, pMDIs used chlorofluorocarbons (CFCs) as propellants to deliver drugs; however, in accordance with the Montreal Protocol of 1987, CFC propellants are being replaced by hydrofluoroalkane (HFA) propellants that do not have ozone-depleting properties [19]. Hydrofluoroalkanes 134a and 227ea contain no chlorine and have a shorter stratospheric residence time than CFCs, so they have substantially less global-warming potential. HFA-134a albuterol was the first HFA-driven pMDI to receive approval in both Europe and the United States. This preparation consists of albuterol suspended in HFA-134a, oleic acid, and ethanol; clinical trials have shown it to be bioequivalent to CFC albuterol in both bronchodilator efficacy and side effects [20]. At present, in most European countries CFC-driven pMDIs have been totally replaced by HFA inhalers. The components of CFC-driven pMDIs (i.e., canister, metering valve, actuator, and propellant) are retained in HFA-driven pMDIs, but they have been redesigned. Two approaches were used in the reformulation of HFA-driven pMDIs. The first approach was to show equivalence with the CFC-driven pMDI, which helped regulatory approval, for inhalers delivering salbutamol and some corticosteroids.
Some HFA formulations were matched to their CFC counterparts on a microgram-for-microgram basis; therefore, no dosage modification was needed when switching from a CFC to an HFA formulation. The second approach involved extensive changes, particularly for corticosteroid inhalers containing beclomethasone dipropionate, and resulted in solution aerosols with extra-fine particle size distributions and high lung deposition [21, 22]. The exact dose equivalence of extra-fine HFA beclomethasone dipropionate and CFC beclomethasone dipropionate has not been established, but data from most trials have indicated a 2 : 1 dose ratio in favour of the HFA-driven pMDI [21, 22]. Patients on regular long-term treatment with a CFC pMDI could safely switch to an HFA pMDI without any deterioration in pulmonary function, loss of disease control, increased frequency of hospital admissions, or other adverse effects [19]. However, when physicians prescribe HFA formulations in place of CFC versions for the first time, they should inform their patients about the differences between these products. Compared with CFC-driven pMDIs, many HFA-driven pMDIs have a lower impact force (25.5 mN versus 95.4 mN) and a higher plume temperature (8°C versus −29°C) [12, 14]. These properties partially overcome the "cold Freon effect" [12, 14] that has caused some patients to stop inhaling from their CFC pMDIs. In addition, most HFA pMDIs have a smaller delivery orifice that may result in a more slowly delivered aerosol plume, thus facilitating inhalation and producing less mouth irritation [23]. Another difference is that many HFA-driven pMDIs contain a small amount of ethanol. This affects the taste, as well as further increasing the temperature and decreasing the velocity of the aerosol. Pressurised MDIs containing a fixed combination of beclomethasone dipropionate and the long-acting bronchodilator formoterol, in a solution formulation with HFA-134a and ethanol as cosolvent [21, 24, 25], have been developed (Modulite technology, Chiesi, Italy). Interestingly, this formulation dispenses an aerosol with a particularly small particle size (mass median aerodynamic diameter ~1 μm), a lower plume velocity, and a smaller temperature drop than when CFCs are used as carriers. These three factors may decrease upper-airway impaction and increase airway deposition of particles, particularly in the smaller airways, compared with the same dose of drug administered from a CFC pMDI [24, 25].

Figure 1 Components of a pressurised metered-dose inhaler. Lower panels illustrate the process of aerosol generation.

Pressurised MDIs have a number of advantages (Table 1): they are compact, portable, relatively inexpensive, and contain at least 200 metered doses per canister that are immediately ready for use. Furthermore, a large fraction (approximately 40%) of the aerosol particles is in the respirable range (mass median aerodynamic diameter less than 5 μm), and dosing is generally highly reproducible from puff to puff [12–16]. Despite these advantages, most patients cannot use pMDIs correctly, even after repeated tuition [8–11]. This is because pMDIs require good coordination of patient inspiration and inhaler actuation to ensure correct inhalation and deposition of drug in the lungs.
The correct inhalation technique when using pMDIs involves firing the pMDI while breathing in deeply and slowly, continuing to inhale after firing, and then following inhalation with a breath-holding pause to allow particles to sediment on the airways [12, 26]. Patients should also be instructed that, on first use and after several days of disuse, the pMDI should be primed. However, patients frequently fail to inhale slowly and continuously after activating the inhaler, or to exhale fully before inhalation [8]. In addition, patients often actuate the inhaler before inhalation or at the end of inhalation, initiating actuation while breath holding [8]. Crompton and colleagues [8, 27, 28] showed that the proportion of patients capable of using their pMDIs correctly after reading the package insert fell from 46% in 1982 to 21% in 2000, while only just over half of patients (52%) used a pMDI correctly even after receiving instruction. In a large (n = 4078) study, 71% of patients were found to have difficulty using pMDIs, and almost half of them had poor coordination [29]. Incorrect inhalation technique was associated with poor asthma control, with poor pMDI users having less stable asthma control than good pMDI users [29].

Even with correct inhalation technique, pMDIs are inefficient, since no more than 20% (CFC pMDIs) to 40%–50% (HFA pMDIs producing extra-fine particles) of the emitted dose reaches the lungs [12, 14–16], with a high proportion of drug being deposited in the mouth and oropharynx, where it can cause local as well as systemic side effects due to rapid absorption [12, 14–16]. Another disadvantage of some pMDIs is the absence of built-in counters that would alert the patient that the inhaler is approaching "empty" and needs to be replaced. Although many pMDIs contain more than the labelled number of doses, drug delivery per actuation may be very inconsistent and unpredictable after the labelled number of actuations: beyond it, the propellants can release an aerosol plume that contains little or no drug, a phenomenon called tail-off [30].

### 2.2. pMDI Accessory Devices: The Spacers and Valved Holding Chambers

Although the term "spacers" is often used for all types of extension add-on devices, these devices are properly categorised as either "spacers" or "valved holding chambers." A spacer (Figure 2) is a simple tube or extension attached to the pMDI mouthpiece, with no valves, that contains the aerosol plume after pMDI actuation [31]. A valved holding chamber (Figure 2) is an extension device, added onto the pMDI mouthpiece or canister, that contains a one-way valve which holds the aerosol until inhalation [31]. The direction of the spray can be forward, that is, toward the mouth, or reverse, that is, away from the mouth (Figure 2). Both spacers and holding chambers constitute a volume into which the patient actuates the pMDI and from which the patient inhales, reducing the need to coordinate the two manoeuvres [31]. By acting as an aerosol reservoir, these devices slow the aerosol velocity and increase the transit time and distance between the pMDI actuator and the patient's mouth, allowing particle size to decrease and, consequently, increasing deposition of the aerosol particles in the lungs [31].
Moreover, because spacers trap large particles comprising up to 80% of the aerosol dose, only a small fraction of the dose is deposited in the oropharynx, thereby reducing side effects, such as throat irritation, dysphonia, and oral candidiasis, associated with medications delivered by the pMDI alone [31]. Large-volume holding chambers increase lung deposition to a greater degree than do tube spacers or small holding chambers [32–34]. Devices larger than 1 L, however, are impractical, and patients would have difficulty inhaling the complete contents [35]. A valved holding chamber fitted with an appropriate facemask is used to give pMDI drugs to neonates, young children, and elderly patients. The two key factors for optimum aerosol delivery are a tight but comfortable facemask fit and reduced facemask dead space [31, 36]. Because children have low tidal volumes and inspiratory flow rates, comfortable breathing through a facemask requires low-resistance inspiratory and expiratory valves. Of note, some holding chambers incorporate a whistle that sounds if inspiration is too fast [36]. Training patients to ensure that the whistle does not sound assists with developing an optimal inhalation technique. Plastic bottles and cups can also be used as rudimentary, home-made spacers for the administration of aerosol drugs [37–39]. In a randomised controlled trial in children with asthma, the clinical effects of salbutamol inhaled through a pMDI with a home-made nonvalved spacer (a 500 mL mineral-water plastic bottle) were compared with those of the same drug administered via an oxygen-driven nebuliser [39]. The number of children hospitalised after treatment, the changes in clinical score, and oxygen saturation were similar in the conventional and bottle-spacer groups [39]. Valved holding chambers may improve the clinical effect of inhaled medications, especially in patients unable to use a pMDI properly [31]. Indeed, compared to both pMDIs alone and DPIs, these devices may increase the response to short-acting β-adrenergic bronchodilators, even in patients with correct inhalation technique [40–43]. While spacers and valved holding chambers are good drug-delivery devices, they suffer from the obvious disadvantage of making the entire delivery system less portable and compact than a pMDI alone. The size and appearance of some spacers may detract from the appeal of the pMDI to patients, especially among the paediatric population, and negatively affect patients’ compliance [31]. Furthermore, spacers are not immune from inconsistent medication delivery caused by electrostatic charge of the aerosol [44–47]. Drug deposits can build up on the walls of plastic spacers and holding chambers, mostly because of electrostatic charge. Aerosols remain suspended for longer periods within holding chambers manufactured from nonelectrostatic materials than within those made from other materials. Thus, an inhalation might be delayed for 2–5 s without a substantial loss of drug to the walls of metal or nonstatic spacers [45–47]. The electrostatic charge in plastic spacers can be substantially reduced by washing the spacer with a diluted (1 : 5000) household detergent and allowing it to drip dry [14, 48]. There is no consensus on how often a spacer should be cleaned, but recommendations generally range from once a week to once a month [12]. Multiple actuations of a pMDI into a spacer before inhalation also reduce the proportion of drug inhaled [46–50].
Five actuations of a corticosteroid inhaler into a large-volume spacer before inhalation deliver a similar dose to a single actuation into the same spacer inhaled immediately [49].

Figure 2 (a) The Jet open-tube spacer; (b) the AeroChamber Plus holding chamber; (c) the reverse-flow EZSpacer.

### 2.3. Breath-Actuated Metered-Dose Inhaler

Breath-actuated (BA) pMDIs are alternatives to conventional press-and-breathe pMDIs developed to overcome the problem of poor coordination between pMDI actuation and inhalation [12, 51]. Examples of this type of device include the Autohaler (3M, St. Paul, MN) and the Easi-Breathe (Teva Pharmaceutical Industries Ltd). Breath-actuated pMDIs contain a conventional pressurised canister and have a flow-triggered system driven by a spring, which releases the dose during inhalation so that firing and inhaling are automatically coordinated [12, 51]. These inhalation devices (Table 1) can achieve good lung deposition and clinical efficacy in patients unable to use a pMDI correctly because of coordination difficulties [52]. Errors when using a BApMDI are less frequent than when using a standard pMDI [17]. Increased use of BApMDIs might improve asthma control and reduce the overall cost of asthma therapy compared with conventional pMDIs [53]. On the negative side (Table 1), BApMDIs do not solve the cold Freon effect and would be unsuitable for a patient who has this kind of difficulty with a pMDI. In addition, these devices require a relatively higher inspiratory flow than a conventional pMDI for triggering. Furthermore, oropharyngeal deposition with breath-actuated pMDIs is as high as that with CFC pMDIs [54].

The Autohaler is a BApMDI that is available with albuterol and beclomethasone in HFA propellant. It has a manually operated lever that, when lifted, primes the inhaler through a spring-loaded mechanism, allowing the aerosol to be dispensed at an inspiratory flow of about 30 L/min. Clinical studies have demonstrated that the lung deposition of β-adrenergic bronchodilators administered via the Autohaler is similar to that obtained when the drug is correctly inhaled via a pMDI and greater than that resulting from conventional pMDIs in patients with poor inhalation technique [54]. Moreover, it can be used effectively by patients with poor lung function, patients with limited manual dexterity, and elderly patients [54]. The Easi-Breathe is a patient-triggered inhaler that dispenses albuterol and beclomethasone. This inhaler is primed when the mouthpiece is opened. When the patient breathes in, the mechanism is triggered and a dose is automatically released into the airstream. The inhaler can be actuated at a very low airflow rate of approximately 20 L/min, which is readily achievable by most patients [55]. Not surprisingly, practice nurses found it easier to teach, and patients found it easier to use, than a conventional pMDI [55]. In vitro studies have shown that the particle size distribution and percentage of respirable fine particles obtained using the Easi-Breathe device were similar to those obtained using a conventional pMDI [56], although comparative clinical efficacy data are not yet available.

### 2.4. Dry Powder Inhalers

Modern dry powder inhalers were first introduced in 1970, and the earliest models were single-dose devices containing the powder formulation in a gelatin capsule, which the patient loaded into the device prior to use. Since the late 1980s multidose DPIs have been available, giving the same degree of convenience as a pMDI [58].
Dry powder inhalers (Figure 3) are delivery devices containing drugs in a powdered formulation that has been milled to produce micronised particles in the respirable range. These delivery devices allow the particles to be deagglomerated by the energy created by the patient’s own inspiratory flow [58–60]. The powdered drug can be either pure or blended with a large-particle-size excipient (usually lactose) as a carrier powder [58–60]. The empty condition is generally apparent, alerting the patient to the need for replacement. Some DPIs, such as the HandiHaler (Boehringer Ingelheim, Germany) and the Aerolizer (Novartis Pharma, Switzerland), are single-dose devices in which a capsule of powder is perforated in the device by needles fixed to pressure buttons. Other types of DPIs, such as the Diskus (GlaxoSmithKline, UK) or the Turbuhaler (AstraZeneca, Sweden), have a multidose capacity. These multidose DPIs fall into two main categories (Figure 3): they either meter the dose themselves from a powder reservoir or dispense individual doses that are premetered into blisters by the manufacturer [58–60]. The Turbuhaler and the Diskus are representatives of the former and latter categories, respectively, although many other designs are presently in development. Innovative new DPIs are now available for the treatment of asthma and for the delivery of a range of drugs usually given by injection, such as peptides, proteins, and vaccines. The use of DPIs is expected to increase with the phasing out of CFC production, along with the increased availability of drug powders and the development of novel powder devices [59].

Figure 3 Examples of dry powder inhalers. From [57].

Generally, DPIs have many advantages (Table 1). Dry powder inhalers are actuated and driven by the patient’s inspiratory flow; consequently, DPIs require neither propellants to generate the aerosol nor coordination of inhaler actuation with inhalation [60]. However, a forceful and deep inhalation through the DPI is needed to deaggregate the powder formulation into small respirable particles as efficiently as possible and, consequently, to ensure that the drug is delivered to the lungs [60–62]. Although most patients are capable of generating enough flow to operate a DPI efficiently [60], the need to inhale forcefully and, consequently, to generate a sufficient inspiratory flow could be a problem for very young children or patients with severe airflow limitation [63]. For this reason, DPIs are not recommended for children under the age of 5 years [60]. Newer active or power-assisted DPIs incorporate battery-driven impellers and vibrating piezoelectric crystals that reduce the need for the patient to generate a high inspiratory flow rate, an advantage for many patients [59, 62]. Drug delivery to the lung ranges between 10% and 40% of the emitted dose for several marketed DPIs [60]. The physical design of the DPI establishes its specific resistance to airflow, measured as the square root of the pressure drop across the device divided by the flow rate through the device; current designs have specific resistance values ranging from about 0.02 to 0.2 (cm H2O)^(1/2)/(L/min) [61]. To produce a fine powder aerosol with increased delivery to the lung, a DPI characterised as having low resistance requires an inspiratory flow of >90 L/min, a medium-resistance DPI requires 50–60 L/min, and a high-resistance DPI requires <50 L/min [61].
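To make the resistance–flow relation above concrete, the minimal sketch below rearranges R = √ΔP/Q into Q = √ΔP/R and evaluates it for three hypothetical devices spanning the quoted resistance range. The assumed patient effort of 40 cm H2O (~4 kPa pressure drop) and the specific R values are illustrative assumptions, not measurements of any named inhaler.

```python
from math import sqrt

def inspiratory_flow_l_min(delta_p_cm_h2o: float, resistance: float) -> float:
    """Flow through a DPI from the relation R = sqrt(dP)/Q, rearranged to
    Q = sqrt(dP)/R, with R in (cm H2O)^0.5 per L/min."""
    return sqrt(delta_p_cm_h2o) / resistance

effort_cm_h2o = 40.0  # assumed pressure drop a patient can generate (~4 kPa)
for label, r in [("low", 0.05), ("medium", 0.10), ("high", 0.20)]:
    q = inspiratory_flow_l_min(effort_cm_h2o, r)
    print(f"{label}-resistance device (R = {r}): {q:.0f} L/min")
# Prints roughly 126, 63, and 32 L/min, consistent with the >90, 50-60,
# and <50 L/min flows quoted above for low-, medium-, and high-resistance DPIs.
```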
Of note, DPIs with high resistance tend to produce greater lung deposition than those with lower resistance [61], but the clinical significance of this is not known. Based on these considerations, patients should be instructed to inhale forcefully and as deeply as possible from the very beginning of inspiration and to continue inhaling for as long as possible [12]. The rationale for these recommendations is that, when using a DPI, inhalation should be forceful enough to disperse the micronised drug from the lactose-based carrier into a fine-particle dose. However, it is not the absolute inspiratory flow that determines the fine-particle dose from an inhaler but the resulting energy, which also depends on inhaler resistance. High air velocities within the inhaler, rather than high airflow through the inhaler, are required for effective dispersion. High airflow through the inhaler will lead to increased impaction in the upper airways; thus, fast inhalation should be avoided unless a larger fine-particle fraction compensates for the increased impaction. Furthermore, when using a single-dose DPI, it is also advisable to instruct patients to perform two separate inhalations for each dose [12].

Although DPIs offer advantages over pMDIs, they do have some limitations (Table 1) in design, cost-effectiveness, and user-friendliness [60]. For instance, capsule-based DPIs, such as the HandiHaler and the Aerolizer, require that single doses be individually loaded into the inhaler immediately before use. This is inconvenient for some patients and does not allow direct dose counting. In addition, the inhalation manoeuvre has to be repeated until the capsule is empty, which may give rise to underdosing and to high dose variability. Other DPIs are multiple unit-dose devices, such as the Diskhaler, or multidose devices, such as the Diskus and the Turbuhaler. These devices do not have any triggering mechanism, which makes optimal drug delivery entirely dependent on an individual patient’s uncontrolled inspiratory manoeuvre. Because of variations in the design and performance of DPIs, patients might not use all DPIs equally well. Therefore, DPIs that dispense the same drug might not be readily interchangeable [61]. Studies [59, 60] have also shown that dose emission is reduced when a DPI is exposed to extremely low or high temperature and humidity; therefore, DPIs should be stored in a cool, dry place.

A recent systematic literature review revealed that up to 90% of patients did not use their DPI correctly [64]. Common errors made by patients were lack of exhalation before inhalation, incorrect positioning and loading of the inhaler, failure to inhale forcefully and deeply through the device, and failure to breath-hold after inhalation [64]. All these errors may lead to insufficient drug delivery, which adversely influences drug efficacy and may contribute to inadequate disease control [64]. It is unsurprising that such a high proportion of patients were unable to use DPIs correctly, as the devices have many inherent design limitations. The Diskhaler, for example, is a multiple unit-dose device, as it contains a series of foil blisters on a disk. It is complicated to use, requiring eight steps to effect one correct inhalation; approximately 70% of patients have been shown to be unable to use it correctly [64]. The disks have to be changed frequently and the device cleaned before refilling.
In addition, it provides no feedback to the patient of a successful inhalation, except a sweet taste in the mouth, which may simply be indicative of oral drug deposition. The Turbuhaler, a multidose reservoir device, is the most frequently prescribed DPI, as it produces good deposition of the drug in the lungs provided that a sufficient inspiratory flow (about 60 L/min) is achieved by the patient. However, approximately 80% of patients are unable to use it correctly [64]; common mistakes made by patients using this inhaler are failure to turn the base fully in both directions and failure to keep the device upright until loaded. In addition, owing to its high intrinsic resistance, patients who have a reduced inspiratory flow may encounter problems using this device. The Diskus is another example of a multidose device; it uses a foil strip containing drug blisters. As many as 50% of patients use this DPI incorrectly, and common errors include failure or difficulty in loading the device before inhalation and exhaling into the device [64]. The Diskus has a low intrinsic resistance but, like the Turbuhaler, does not have any triggering mechanism, which makes optimal drug delivery entirely dependent on an individual patient’s uncontrolled inspiratory manoeuvre [64]. Additionally, as with other DPI devices employing drug blisters, incomplete emptying of the metered dose may occur, which could reduce the amount of drug delivered to the lung and hence reduce clinical efficacy [64].

### 2.5. Nebulisers

Various types of nebulisers are available on the market, and several studies have indicated that performance varies between manufacturers and also between nebulisers from the same manufacturer [65–67]. There are two basic types of nebulisers (Figure 4): the pneumatic (jet) nebuliser and the ultrasonic nebuliser [65–67]. Jet nebulisers generate aerosol particles as a result of the impact between a liquid and a jet of high-velocity gas (usually air or oxygen) in the nebuliser chamber. In a jet nebuliser, the driving gas passes through a very narrow hole from a high-pressure system. At the narrow hole, the pressure falls and the gas velocity increases greatly, producing a cone-shaped front. This passes at high velocity over the end of a narrow liquid feed tube or concentric feeding system, creating a negative pressure at this point. As a result of this fall in pressure, liquid is sucked up by the Bernoulli effect and is drawn out into fine ligaments. The ligaments then collapse into droplets under the influence of surface tension. The majority of the liquid mass produced during this process is in the form of large (15–500 μm) nonrespirable droplets. These coarse droplets impact on baffles, which break them down into small respirable particles, while smaller droplets may be inhaled or may land on internal walls and return to the reservoir for renebulisation [65–67]. Thus, baffle design has a critical effect on droplet size. Concentric liquid feeds minimise blockage by residual drug build-up with repeated nebulisation. A flat pick-up plate may allow some nebulisers to be tilted during treatment whilst maintaining liquid flow from the reservoir. A driving gas flow of 6–8 L/min and a fill volume of 4–5 mL are generally recommended, although some nebulisers are specifically designed for a different flow and a smaller or larger fill volume [68].
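To illustrate why the recommended 4–5 mL fill volume matters, the minimal sketch below computes the fraction of the charged solution that can actually be aerosolised, assuming a residual ("dead") volume that never leaves the device (quantified in the next paragraph; a value of 1 mL, at the top of the typical range, is assumed here purely for illustration).

```python
def usable_fraction(fill_ml: float, dead_ml: float = 1.0) -> float:
    """Fraction of the nebuliser charge that can be aerosolised; the
    residual ('dead') volume stays trapped in the device."""
    return (fill_ml - dead_ml) / fill_ml

for fill in (2.5, 4.0, 5.0):  # hypothetical fill volumes in mL
    print(f"fill {fill} mL -> {usable_fraction(fill):.0%} of the charge can be nebulised")
# Prints 60%, 75%, and 80%: diluting a small unit dose with saline up to the
# recommended 4-5 mL raises the proportion of drug that can leave the device.
```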
The volume of some unit-dose medications is suboptimal; ideally, saline should be added to bring the fill volume to 4–5 mL, but this might not be practical. The longer nebulisation time associated with a greater fill volume can be reduced by increasing the flow used to power the nebuliser; however, increasing the flow decreases the droplet size produced by the nebuliser. The dead volume is the volume trapped inside the nebuliser, typically 0.5–1 mL. To reduce the dead volume, clinicians and patients commonly tap the nebuliser periodically during therapy in an effort to increase nebuliser output [69]. Therapy may also be continued past the point of sputtering in an attempt to decrease the dead volume, but this is unproductive and not recommended [70]. Because of evaporative loss within the nebuliser, the solution becomes increasingly concentrated and cools during nebulisation.

Figure 4 Components of a jet (a) and an ultrasonic (b) nebuliser. Modified from O'Callaghan and Barry [65].

There are four different designs of jet nebuliser: the jet nebuliser with reservoir tube, the jet nebuliser with collection bag, and the breath-enhanced and breath-actuated jet nebulisers [65–67]. Both the breath-enhanced and breath-actuated jet nebulisers are modifications of the “conventional” jet nebuliser specifically designed to improve efficiency by increasing the amount of aerosol delivered to the patient, with less wastage of aerosol during exhalation. The different types of jet nebulisers have different output characteristics determined by the design of the air jet and capillary tube orifices, their geometric relationship with each other, and the internal baffles; for a given design, the major determinant of output is the driving pressure [65–67]. The jet nebuliser with reservoir tube provides continuous aerosol during the entire breathing cycle, releasing aerosol to the ambient air during exhalation and whenever the patient is not breathing. Consequently, no more than 20% of the emitted aerosol is inhaled [65–67]. The jet nebuliser with collection bag generates aerosol by continuously filling a collection bag that acts as a reservoir. The patient inhales aerosol from the reservoir through a one-way inspiratory valve and exhales to the environment through an exhalation port between the one-way inspiratory valve and the mouthpiece. The breath-enhanced jet nebuliser (e.g., the PARI LC Plus, PARI GmbH) uses two one-way valves to prevent the loss of aerosol to the environment. When the patient inhales, the inspiratory valve opens and aerosol vents through the nebuliser; exhaled aerosol passes through an expiratory valve in the mouthpiece. Breath-actuated jet nebulisers are designed to increase aerosol delivery to the patient by means of a breath-actuated valve that triggers aerosol generation only during inspiration. Both the breath-enhanced and breath-actuated nebulisers increase the amount of inspired aerosol, with a shorter nebulisation time than “conventional” jet nebulisers [65]. Recently, adaptive aerosol delivery nebulisers (the HaloLite and the Prodose) have been developed to reduce the variability of the delivered dose and the waste of aerosol to the environment and to facilitate monitoring of patient compliance with therapy [71–73]. By monitoring pressure changes relative to flow over the first three breaths, these delivery systems establish the shape of the breathing pattern and then use this to provide a timed pulse of aerosol during the first 50% of each tidal inspiration.
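A deliberately simplified sketch of this timing logic is shown below. Real adaptive aerosol delivery systems analyse pressure and flow waveforms, whereas here the breathing pattern is reduced to breath durations; the assumed inspiratory share of the breathing cycle (40%) and the example breath times are our own illustrative values.

```python
def aad_pulse_seconds(first_three_breaths_s, insp_share_of_cycle=0.4):
    """Adaptive-aerosol-delivery-style timing, much simplified: learn the
    breathing pattern from the first three breaths, then pulse aerosol
    only during the first 50% of each predicted inspiration."""
    cycle_s = sum(first_three_breaths_s) / 3.0      # learned average breath duration
    inspiration_s = cycle_s * insp_share_of_cycle   # assumed inspiratory portion of the cycle
    return 0.5 * inspiration_s                      # dose only during its first half

print(f"aerosol pulse: {aad_pulse_seconds([3.8, 4.1, 4.0]):.2f} s per breath")  # ~0.79 s
```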
Monitoring of the breathing pattern continues throughout the delivery period, and any changes in breathing pattern are taken into account during the remainder of the delivery period. Furthermore, if no inhalation is registered, the system will cease delivery until the patient recommences breathing on the system [71–73]. Since the pulsed dose is provided only in the first 50% of each breath and the software can calculate the amount of drug given per pulse, a precise dose of drug can be delivered before the system stops [71–73].

Ultrasonic nebulisers use a rapidly (>1 MHz) vibrating piezoelectric crystal to produce aerosol particles [65–67]. Ultrasonic vibrations from the crystal are transmitted to the surface of the drug solution, where standing waves are formed. Droplets break free from the crests of these waves and are released as aerosol. The size of the droplets produced by an ultrasonic nebuliser is related to the frequency of oscillation, with higher frequencies producing smaller droplets [65–67]. Although ultrasonic nebulisers can nebulise solutions more quickly than jet nebulisers, they are not suitable for suspensions, and the piezoelectric crystal can heat the drug being aerosolised. A relatively new nebuliser technology, related to the ultrasonic design, is the vibrating mesh nebuliser [12, 74, 75]. These new-generation nebulisers are either active or passive systems. In active devices (e.g., the eFlow, PARI GmbH), the aperture plate vibrates at a high frequency and draws the solution through the apertures in the plate. In passively vibrating mesh devices (e.g., the MicroAir, Omron Healthcare), the mesh is attached to a transducer horn, and vibrations of the piezoelectric crystal, transmitted via the transducer horn, force the solution through the mesh to create an aerosol. The eFlow is designed to be used either with a very low residual volume, to reduce drug waste, or with a relatively large residual volume, so that it can be used instead of conventional jet nebulisers with the same fill volume [76]. Vibrating mesh devices have a number of advantages over other nebuliser systems: they have greater efficiency, precision, and consistency of drug delivery, and they are quiet and generally portable [74, 75]. However, they are also significantly more expensive than other types of nebulisers and require a significant amount of maintenance and cleaning after each use, to prevent build-up of deposits and blockage of the apertures, especially when suspensions are aerosolised, and to prevent colonisation by pathogens [75]. They are currently most widely used for the treatment of patients with cystic fibrosis [77].

Generally, mouthpieces are employed during nebuliser delivery. However, facemasks may be necessary for the treatment of acutely dyspnoeic patients or uncooperative patients, such as infants and toddlers [78]. The facemask is not just a connector between the device and the patient: the principles of mask design differ depending on the device [78]. For example, a valved holding chamber with a facemask must have a tight seal to achieve optimal lung deposition [78]. In contrast, the facemask for a nebuliser should not incorporate a tight seal but should have vent holes to reduce deposition on the face and in the eyes [79, 80]. Improvements in facemask design provide a greater inhaled mass while reducing facial and ocular deposition [78]. Often, when a patient does not tolerate the facemask, practitioners employ the “blow-by” technique, which simply directs the aerosol towards the nose and mouth with the mouthpiece.
However, there are no data to indicate that this is an effective method for delivering aerosol to the lungs, and therefore the use of this technique is not recommended [12].

Unlike pMDIs and DPIs, no special inhalation techniques are needed for optimum delivery with conventional nebulisers; tidal breathing with only occasional deep breaths is sufficient (Table 1). Thus, for patients who are unable to master the proper pMDI technique despite repeated instruction, the proper use of a nebuliser probably improves drug delivery. However, nebulisers have some distinct disadvantages. Patients must load the device with medication solution for each treatment, and bacterial contamination of the reservoir can cause respiratory infection [65–67], making regular cleaning important. Also, nebuliser treatments take longer than pMDIs and DPIs for drug administration (10–15 min for a jet nebuliser, 5 min for an ultrasonic or mesh nebuliser). Although they are relatively portable, a typical jet nebuliser must be plugged into a wall outlet or power adaptor and thus cannot be used easily in transit.

### 2.6. Soft Mist Inhalers

The development of soft mist inhalers (SMIs) has opened up new opportunities for inhaled drug delivery. Technically, these inhalation devices fall within the definition of a nebuliser, as they transform an aqueous liquid solution into liquid aerosol droplets suitable for inhalation. However, at variance with traditional nebuliser designs, SMIs are hand-held multidose devices that have the potential to compete with both pMDIs and DPIs in the portable inhaler market. At present, the only SMI marketed in some European countries is the Respimat inhaler (Boehringer Ingelheim; Figure 5). This device does not require propellants, since it is powered by the energy of a compressed spring inside the inhaler. Individual doses are delivered via a precisely engineered nozzle system as a slow-moving aerosol cloud (hence the term “soft mist”) [81]. Scintigraphic studies have shown that, compared with a CFC-based pMDI, lung deposition is higher (up to 50%) and oropharyngeal deposition is lower [81]. Respimat is a “press and breathe” device, and the correct inhalation technique closely resembles that used with a pMDI. However, although coordination between firing and inhaling is required, the aerosol emitted from Respimat is released very slowly, at a velocity approximately four times lower than that observed with a CFC-driven pMDI [81]. This greatly reduces the potential for drug impaction in the oropharynx. In addition, the relatively long duration over which the dose is expelled from Respimat (about 1.2 s, compared with 0.1 s from pMDIs) would be expected to greatly reduce the need to coordinate actuation and inspiration, thus improving the potential for greater lung deposition. Although Respimat has been used relatively little in clinical practice to date, clinical trials seem to confirm that drugs delivered by the Respimat are effective in correspondingly smaller doses in patients with obstructive airway disease [82].

Figure 5 The Respimat soft mist inhaler. From [81].
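The practical benefit of the slow, long-lasting plume can be illustrated with a toy model (ours, not from the cited studies): if a patient starts inhaling a fraction of a second after firing, only the portion of the plume emitted after that instant can be captured.

```python
def plume_fraction_captured(emission_s: float, delay_s: float) -> float:
    """Toy model: fraction of a constant-rate plume still being emitted
    after the patient starts inhaling `delay_s` seconds late."""
    return max(0.0, emission_s - delay_s) / emission_s

for device, emission_s in [("conventional pMDI", 0.1), ("Respimat SMI", 1.2)]:
    captured = plume_fraction_captured(emission_s, delay_s=0.2)
    print(f"{device}: {captured:.0%} of the plume available after a 0.2 s delay")
# 0% for the 0.1 s pMDI plume versus ~83% for the 1.2 s soft mist,
# illustrating why slower emission relaxes the coordination requirement.
```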
Since that time, the pMDI has become the most widely prescribed inhalation device for drug delivery to the respiratory tract to treat obstructive airway diseases such as asthma and chronic obstructive pulmonary disease [18]; the total worldwide sales by all companies of pMDI products run in excess of $2 billion per year. The pMDI (Figure 1) is a portable multidose device that consists of an aluminium canister, lodged in a plastic support, containing a pressurised suspension or solution of micronized drug particles dispersed in propellants. A surfactant (usually sorbitan trioleate or lecithin) is also added to the formulation to reduce the particle agglomeration and is responsible for the characteristic taste of specific inhaler brands. The key component of the pMDI is a metering valve, which delivers an accurately known volume of propellant, containing the micronised drug at each valve actuation. Pressing the bottom of the canister into the actuator seating causes decompression of the formulation within the metering valve, resulting in an explosive generation of a heterodisperse aerosol of droplets that consist of tiny drug particles contained within a shell of propellant. The latter evaporates with time and distance, which reduces the size of the particles that use a propellant under pressure to generate a metered dose of an aerosol through an atomisation nozzle (Figure 1). The technology of pMDI has evolved steadily over the period of mid-1950s to the mid-1980s. Until recently, the pMDI used chlorofluorocarbons (CFCs) as propellants to deliver drugs; however, in accordance with the Montreal Protocol of 1987, CFC propellants are being replaced by hydrofluoroalkane (HFA) propellants that do not have ozone depleting properties [19]. Hydrofluoroalkane 134a and 227ca are propellants that contain no chlorine and have a residence in the stratosphere lower than CFCs, so they have substantially less global warming potential than do CFCs. HFA-134a albuterol has been the first HFA-driven pMDI that has received approval in both Europe and the United States. This preparation consists of albuterol suspended in HFA-134a, oleic acid, and ethanol; clinical trials have shown this preparation to be bioequivalent to CFCs albuterol in both bronchodilator efficacy and side effects [20]. At the present, in most European countries CFC-driven pMDIs have totally been replaced by HFA inhalers. The components of CFC-driven pMDIs (i.e., canister, metering valve, actuator, and propellant) are retained in HFA-driven pMDIs, but they have had a redesign. Two approaches were used in the reformulation of HFA-driven pMDIs. The first approach was to show equivalence with the CFC-driven pMDI, which helped regulatory approval, delivering salbutamol and some corticosteroid. Some HFA formulations were matched to their CFC counterparts on a microgram for microgram basis; therefore, no dosage modification was needed when switching from a CFC to an HFA formulation. The second approach involved extensive changes, particularly for corticosteroid inhalers containing beclomethasone dipropionate, and resulted in solution aerosols with extra-fine particle size distributions and high lung deposition [21, 22]. The exact dose equivalence of extra-fine HFA beclomethasone dipropionate and CFC beclomethasone dipropionate has not been established, but data from most trials have indicated a 2 : 1 dose ratio in favour of the HFA-driven pMDI [21, 22]. 
Patients on regular long-term treatment with a CFC pMDI could safely switch to an HFA pMDI without any deterioration in pulmonary function, loss of disease control, increased frequency of hospital admissions, or other adverse effects [19]. However, when physicians prescribe HFA formulations in place of CFC versions for the first time, they should inform their patients about differences between these products. Compared with CFC-driven pMDIs, many HFA-driven pMDIs have a lower (25.5 mN versus 95.4 mN) impact force and a higher (8°C versus −29°C) temperature [12, 14]. These properties partially overcome the “cold Freon effect” [12, 14] that has caused some patients to stop inhaling their CFC pMDIs. In addition, most HFA pMDIs have a smaller delivery orifice that may result in a more slowly delivered aerosol plume, thus facilitating inhalation and producing less mouth irritation [23]. Another difference is that many HFA-driven pMDIs contain small amount of ethanol. This affects the taste, as well as further increasing the temperature and decreasing the velocity of the aerosol. Pressurized MDIs containing fixed combination of beclomethasone dipropionate and the long-acting bronchodilator formoterol in a solution formulation with HFA-134a and ethanol with cosolvent [21, 24, 25] have been developed (Modulite technology, Chiesi, Italy). Interestingly, this formulation dispenses an aerosol that has a particularly small particle size (mass median aerodynamic diameter ~1 μ), a lower plume velocity of the aerosol, and not dropping temperature as much as when CFCs are used as carriers. These three factors, that is, smaller particle size, lower plume velocity, and less temperature drop, may decrease upper airway impaction and increase airway deposition of particles, particularly to the smaller airways, compared with the same dose of drug administered from a CFC pMDI [24, 25].Figure 1 Components of a pressurised metered-dose inhaler. Lower panels illustrate the process of aerosol generation.Pressurised MDIs have a number of advantages (Table1): they are compact, portable, relatively inexpensive, and contain at least 200 metered doses per canister that are immediately ready for the use. Furthermore, a large fraction (approximately 40%) of the aerosol particles is in the respirable range (mass median aerodynamic diameter less than 5 μ), and dosing is generally highly reproducible from puff to puff [12–16]. Despite these advantages, most patients cannot use pMDIs correctly, even after repeated tuition [8–11]. This is because pMDIs require good coordination of patient inspiration and inhaler actuation to ensure correct inhalation and deposition of drug in the lungs. The correct inhalation technique when using pMDIs involves firing the pMDI while breathing in deeply and slowly, continuing to inhale after firing, and then following inhalation with a breath-holding pause to allow particles to sediment on the airways [12, 26]. The patients should also be instructed that, on the first use and after several days of disuse, the pMDI should be primed. However, patients frequently fail to continuously inhale slowly after activation of the inhaler and exhale fully before inhalation [8]. In addition, patients often activate the inhaler before inhalation or at the end of inhalation by initiating inhaler actuation while breath holding [8]. 
Crompton and colleagues [8, 27, 28] showed that the proportion of patients capable of using their pMDIs correctly after reading the package insert fell from 46% in 1982 to 21% in 2000, while only just over half of patients (52%) used a pMDI correctly even after receiving instruction. In a large (n=4078) study, 71% of patients were found to have difficulty using pMDIs, and almost half of them had poor coordination [29]. Incorrect inhalation technique was associated with poor asthma control and with poor pMDI users having less stable asthma control than good pMDI users [29].Even with correct inhalation technique, pMDIs are inefficient since no more than 20% for CFC pMDIs or 40%–50% for HFA pMDIs producing extra-fine particles [12, 14–16] of the emitted dose reaches the lungs with a high proportion of drug being deposited in the mouth and oropharynx which can cause local as well as systemic side effects due to rapid absorption [12, 14–16]. Another disadvantage of some pMDIs is the absence of built-in counters that would alert the patient to the fact that the inhaler was approaching “empty” and needed to be refilled. Although many pMDIs contain more than the labelled number of doses, drug delivery per actuation may be very inconsistent and unpredictable after the labelled number of actuations. Beyond the labelled number of actuations, propellants can release an aerosol plume that contains little or no drug, a phenomenon called tail-off [30]. ## 2.2. pMDI Accessory Devices: The Spacers and Valved Holding Chambers Although the term “spacers” is often used for all types of extension add-on devices, these devices are properly categorised as either “spacers” or “valved holding chambers.” A spacer (Figure2) is a simple tube or extension attached to the pMDI mouthpiece with no valves to contain the aerosol plume after pMDI actuation [31]. A valved holding chamber (Figure 2) is an extension device, added onto the pMDI mouthpiece or canister, that contains a one-way valve to prevent holding the aerosol until inhalation [31]. The direction of the spray can be forward, that is, toward the mouth, or reverse, that is, away from the mouth (Figure 2). Both spacers and holding chambers constitute a volume into which the patient actuates the pMDI and from which the patient inhales reducing the need to coordinate the two manoeuvres [31]. By acting as an aerosol reservoir, these devices slow the aerosol velocity and increase transit time and distance between the pMDI actuator and the patient’s mouth, allowing particle size to decrease and, consequently, increasing deposition of the aerosol particles in the lungs [31]. Moreover, because spacers trap large particles comprising up to 80% of the aerosol dose, only a small fraction of the dose is deposited in the oropharynx, thereby reducing side effects, such as throat irritation, dysphonia, and oral candidiasis, associated with medications delivered by the pMDI alone [31]. Large-volume holding chambers increase lung deposition to a greater degree than does tube spacer or small holding chamber [32–34]. Devices larger than 1 L, however, are impractical, and patients would have difficulty inhaling the compete contents [35]. A valved holding chamber fitted with an appropriate facemask is used to give pMDI drugs to neonates, young children, and elderly patients. The two key factors for optimum aerosol delivery are a tight but comfortable facemask fit and reduced facemask dead space [31, 36]. 
Because children have low tidal volumes and inspiratory flow rates, comfortable breathing through a facemask requires low resistance inspiratory or expiratory valves. Of note, some holding chambers incorporate a whistle that makes a sound if inspiration is too fast [36]. Training patients to ensure that the whistle does not sound assists with developing an optimal inhalation technique. Plastic bottles and cups can also be used as rudimental, home-made spacers for the administration of aerosol drugs [37–39]. In a randomized controlled trial clinical effects of salbutamol inhaled through pMDI with a home-made nonvalved spacer (500 mL mineral water plastic bottle) were compared with those when the same drug was administered by using an oxygen-driven nebuliser in children with asthma [39]. The number of children hospitalised after treatment changes in clinical score and oxygen saturation were similar in conventional and bottle spacer groups [39]. Valved holding chambers may improve the clinical effect of inhaled medications especially in patients unable to use a pMDI properly [31]. Indeed, compared to both pMDIs alone and DPIs, these devices may increase the response to short-acting β-adrenergic bronchodilators, even in patients with correct inhalation technique [40–43]. While spacers and valved holding chambers are good drug-delivery devices, they suffer from the obvious disadvantage of making the entire delivery system less portable and compact than a pMDI alone. The size and appearance of some spacers may detract from the appeal of the pMDI to patients, especially among the paediatric population, and negatively affect patients’ compliance [31]. Furthermore, spacers are not immune from inconsistent medication delivery caused by electrostatic charge of the aerosol [44–47]. Drug deposits can build up on walls of plastic spacers and holding chambers mostly because of electrostatic charge. Aerosols remain suspended for longer periods within holding chambers that are manufactured from nonelectrostatic materials than other materials. Thus, an inhalation might be delayed for 2–5 s without a substantial loss of drug to the walls of metal or nonstatic spacers [45–47]. The electrostatic charge in plastic spacers can be substantially reduced by washing the spacer with a diluted (1 : 5000) household detergent and allowing it to drip dry [14, 48]. There is no consensus on how often a spacer should be cleaned, but recommendations range in general from once a week to once a month [12]. Multiple actuations of a pMDI into a spacer before inhalation also reduces the proportion of drug inhaled [46–50]. Five actuations of a corticosteroid inhaler into a large-volume spacer before inhalation deliver a similar dose to a single actuation into the same spacer inhaled immediately [49].(a) The Jet open tube spacer; (b) the AeroChamber plus holding chamber; (c) the reverse-flow EZSpacer. (a) (b) (c) ## 2.3. Breath-Actuated Metered-Dose Inhaler Breath-actuated (BA) pMDIs are alternatives to conventional press-and-breath pMDIs developed to overcome the problem of poor coordination between pMDI actuation and inhalation [12, 51]. Examples of this type of device include the Autohaler (3M, St. Paul, MI) and the Easi-Breathe (Teva Pharmaceutical Industries Ltd). Breath-actuated pMDIs contain a conventional pressurised canister and have a flow-triggered system driven by a spring which releases the dose during inhalation, so that firing and inhaling are automatically coordinated [12, 51]. 
These inhalation devices (Table 1) can achieve good lung deposition and clinical efficacy in patients unable to use a pMDI correctly because of coordination difficulties [52]. Errors when using BApMDI are less frequent than when using a standard pMDI [17]. Increased use of BApMDIs might improve asthma control and reduce overall cost of asthma therapy compared with conventional pMDIs [53]. On the negative side (Table 1), BApMDIs do not solve cold Freon effect and would be unsuitable for a patient who has this kind of difficulty using pMDI. In addition, these devices require a relatively higher inspiratory flow than pMDI for triggering. Furthermore, oropharyngeal deposition with breath-actuated pMDIs is as high as that with CFC-pMDIs [54].The Autohaler is a BApMDI that is available with albuterol and behlomethasone in HFA propellant. It has a manually operated lever that, when lifted, primes the inhaler through a spring-loaded mechanism, allowing the aerosol to be dispensed with an inspiratory flow of about 30 L/min. Clinical studies have demonstrated that the lung deposition ofβ-adrenergic bronchodilator administered via the Autohaler is similar to that obtained when the drug is correctly inhaled via a pMDI and greater than that resulting from conventional pMDIs in patients with poor inhalation technique [54]. Moreover, it can be used effectively by patients with poor lung function, patients with limited manual dexterity, and elderly patients [54]. The Easi-Breathe is a patient-triggered inhaler that dispenses albuterol and beclomethasone. This inhaler is primed when the mouthpiece is opened. When the patient breathes in, the mechanism is triggered and a dose is automatically released into the airstream. The inhaler can be actuated at a very low airflow rate of approximately 20 L/min, which is readily achievable by most patients [55]. Not surprisingly, practice nurses found it easier to teach and patients to use than a conventional pMDI [55]. In vitro studies have shown that particle size distribution and percentage of respirable fine particle obtained by using the Easi-Breathe device were similar to those obtained by using the conventional pMDI [56], although comparative clinical efficacy data are not yet available. ## 2.4. Dry Powder Inhalers Modern dry powder inhalers were first introduced in 1970, and the earliest models were single-dose devices containing the powder formulation in a gelatin capsule, which the patient loaded into the device prior to use. Since the late 1980s multidose DPIs have been available, giving the same degree of convenience as a pMDI [58]. Drypowder inhalers (Figure 3) are delivery devices containing drugs in powdered formulation that have been milled to produce micronized particles in the respirable range. These delivery devices allow the particles to be deagglomerated by the energy created by the patient’s own inspiratory flow [58–60]. The powdered drug can be either pure or blended with large particle size excipient (usually lactose) as a carrier powder [58–60]. The empty condition is generally apparent, alerting the patient to the need for replacement. Some DPIs, such as for the HandiHaler (Boehringer Ingelheim, D) and the Aerolizer (Novartis Pharma, CH), are singledose devices in which a capsule of powder is perforated in the device with needles fixed to pressure buttons. Other types of DPIs, such as the Diskus (GlaxoSmithKline, UK) or the Turbuhaler (AstraZeneca, Sweden), have a multidose capacity. 
These multidose DPIs fall into two main categories (Figure 3): these either measure the dose themselves (from a powder reservoir) or they dispense individual doses which are premetered into blisters by the manufacturer [58–60]. Turbuhaler and Diskus, respectively, are representatives of the former and latter categories, although many other different designs are presently in development. To date, new innovative DPIs are available for treatment of asthma and for delivery of a range of drugs usually given by injection, such as peptides, proteins, and vaccines. The use of DPIs is expected to increase with the phasing out of CFC production along with increased availability of drug powders and development of novel powder devices [59].Figure 3 Examples of dry powder inhalers. From [57].Generally, DPIs do have many advantages (Table1). Dry powder inhalers are actuated and driven by patient’s inspiratory flow; consequently, DPIs do not require propellants to generate the aerosol, as well as coordination of inhaler actuation with inhalation [60]. However, a forceful and deep inhalation through the DPI is needed to deaggregate the powder formulation into small respirable particles as efficiently as possible and, consequently, to ensure that the drug is delivered to the lungs [60–62]. Although most patients are capable of generating enough flow to operate a DPI efficiently [60], the need to inhale forcefully and, consequently, generate a sufficient inspiratory flow could be a problem for very young children or patients with severe airflow limitation [63]. For this reason, DPIs are not recommended for children under the age of 5 years [60]. The newer active or power-assisted DPIs incorporate battery-driven impellers and vibrating piezoelectric crystals that reduce the need for the patient to generate a high inspiratory flow rate, an advantage for many patients [59, 62]. Drug delivery to the lung ranges between 10% and 40% of the emitted dose for several marketed DPIs [60]. The physical design of the DPI establishes its specific resistance to airflow (measured as the square root of the pressure drop across the device divided by the flow rate through the device), with current designs having specific resistance values ranging from about 0.02 to 0.2 cm H2O/L/min) [61]. To produce a fine powder aerosol with increased delivery to the lung, a DPI that is characterised as having a low resistance requires an inspiratory flow of >90 L/min, a medium-resistance DPI requires 50–60 L/min, and a high-resistance DPI requires <50 L/min [61]. Of note, DPIs with high resistance tend to produce greater lung deposition than those with a lower resistance [61], but the clinical significance of this is not known. Based on the previous considerations, it is recommended to instruct patients to inhale forcefully from the beginning of the inspiration deeply as much as possible and to continue to inhale for as long as possible [12]. The rationale for these recommendations is that when using a DPI, inhalation should be forceful enough to disburse the micronised drug from the lactose-based carrier into a fine particle dose. However, it is not the absolute inspiratory flow that determines the fine particle dose from an inhaler but the resulting energy, which also depends on inhaler resistance. High air velocities within the inhaler are required for effective dispersion rather than high airflow through the inhaler. 
High airflow through the inhaler will lead to increased impaction in upper airways; thus, fast inhalation should be avoided unless a larger fine particle fraction compensates for the increased impaction. Furthermore, when using a singledose DPI, it is also recommendable to instruct patients to perform two separate inhalations for each dose [12].Although DPIs offer advantages over pMDIs, they do have some limitations (Table1) of design, cost-effectiveness and user-friendliness [60]. For instance, capsule-based DPIs, such as the HandiHaler and the Aerolizer, require that single doses are individually loaded into the inhaler immediately before use. This is inconvenient for some patients and does not allow direct dose counting. In addition, the inhalation manoeuvre has to be repeated until the capsule is empty, which may give rise to underdosing and to high dose variability. Other DPIs are multiple unit dose devices, such as the Diskhaler, or multidose devices, such as the Diskus and the Turbuhaler. These devices do not have any triggering mechanism which makes optimal drug delivery entirely dependent on an individual patient’s uncontrolled inspiratory manoeuvre. Because of variations in the design and performance of DPIs, patients might not use all DPIs equally well. Therefore, DPIs that dispense the same drug might not be readily interchangeable [61]. Studies [59, 60] have also been shown that dose emission is reduced when a DPI is exposed to extremely low and high temperature and humidity; therefore, DPIs should be stored in a cool dry place.A recent systematic literature review revealed that up to 90% of patients did not use their DPI correctly [64]. Common errors made by patients were lack of exhalation before inhalation, incorrect positioning and loading of the inhaler, failure to inhale forcefully and deeply through the device, and patients’ failure to breath-hold after inhalation [64]. All these errors may lead to insufficient drug delivery, which adversely influences drug efficacy and may contribute to inadequate disease control [64]. It is unsurprising that such a high proportion of patients were unable to use DPIs correctly as the devices have many inherent design limitations. The Diskhaler, for example, is a multiple unit dose device as it contains a series of foil blisters on a disk. It is complicated to use, requiring eight steps to effect one correct inhalation; it has been shown that approximately 70% of patients are unable to use it correctly [64]. The disks have to be changed frequently and the device cleaned before refilling. In addition, it provides no feedback to the patient of a successful inhalation, except a sweet taste in the mouth which may simply be indicative of oral drug deposition. The Turbuhaler, a multidose reservoir device, is the most frequently prescribed DPI as it produces good deposition of the drug in the lungs provided that a sufficient (about 60 L/min) inspiratory flow has been achieved by the patients. However, approximately 80% of patients are unable to use it correctly [64]; common mistakes made by patients using this inhaler are failure to turn the base fully in both directions and failure to keep the device upright until loaded. In addition, due to its high intrinsic resistance, patients who have a reduced inspiratory flow may encounter problems using this device. The Diskus is another example of multidose device that uses a strip foil drug containing blisters. 
As many as 50% of patients use this DPI incorrectly, and common errors include failure or difficulty in loading the device before inhalation and exhaling into the device [64]. The Diskus has a low intrinsic resistance but, like the Turbuhaler, does not have any triggering mechanism which makes optimal drug delivery entirely dependent on an individual patient’s uncontrolled inspiratory manoeuvre [64]. Additionally, as with other DPI devices employing drug blisters, incomplete emptying of the metered dose may occur, which could reduce the amount of drug delivered to the lung and hence reduce clinical efficacy [64]. ## 2.5. Nebulisers Various types of nebulisers are available on the market, and several studies have indicated that performance varies between manufacturers and also between nebulisers from the same manufacturers [65–67]. There are two basic types (Figure 4) of nebulisers: the pneumatic or jet nebuliser and the ultrasonic nebulisers [65–67]. The jet nebulisers generate aerosol particles as a result of the impact between a liquid and a jet of high velocity gas (usually air or oxygen) in the nebuliser chamber. In a jet nebulizer, the driving gas passes through a very narrow hole from a high pressure system. At the narrow hole, the pressure falls and the gas velocity increases greatly producing a cone shaped front. This passes at high velocity over the end of a narrow liquid feed tube or concentric feeding system creating a negative pressure at this point. As a result of this fall in pressure, liquid is sucked up by the Bernoulli effect and is drawn out into fine ligaments. The ligaments then collapse into droplets under the influence of the surface tension. The majority of the liquid mass produced during this process is in the form of large (15–500 micron) nonrespirable droplets. Coarse droplets impact on baffles while smaller droplets may be inhaled or may land on internal walls returning to the reservoir for renebulisation [65–67]. The resultant large particles then impact upon baffles to generate small, respirable particles. Thus, baffle design has a critical effect on droplet size. Concentric liquid feeds minimise blockage by residual drug build-up with repeated nebulisation. A flat pick up plate may allow some nebulisers to be tilted during treatment whilst maintaining liquid flow from the reservoir. A 6–8 L/min flow and a fill volume of 4-5 mL are generally recommended, unless some nebulisers are specifically designed for different flow and a smaller or larger fill volume [68]. The volume of some unit-dose medications is suboptimal; ideally, saline should be added to bring the fill volume to 4-5 mL, but this might not be practical. The longer nebulisation time with a greater fill volume can be reduced by increasing the flow used to power the nebuliser; however, increasing the flow decreases the droplet size produced by the nebuliser. Dead volume is the volume that is trapped inside the nebulizer, and typically it is 0.5–1 mL. To reduce dead volume, clinicians and patients commonly tap the nebuliser periodically during therapy in an effort to increase nebuliser output [69]. Therapy may also be continued past the point of sputtering in an attempt to decrease the dead volume, but this is unproductive and not recommended [70]. Because of the evaporative loss within the nebuliser, the solution becomes increasingly concentrated and cools during nebulisation.Components of a jet (a) and an ultrasonic (b) nebulisers. Modified from O'Callaghan and Barry [65]. 
(a) (b)There are four different designs of the jet nebulisers: jet nebuliser with reservoir tube, jet nebuliser with collection bag, and breath-enhanced and breath-actuated jet nebulizers [65–67]. Both the breath-enhanced and breath-actuated jet nebulisers are modifications of the “conventional” jet nebulisers specifically designed to improve their efficiency by increasing the amount of aerosol delivered to the patient with less wastage of aerosol during exhalation. The different types of jet nebulisers have different output characteristics determined by the design of the air jet and capillary tube orifices, their geometric relationship with each other and the internal baffles; for a given design the major determinant of output is the driving pressure [65–67]. The jet nebulizer with reservoir tube provides continuous aerosol during the entire breathing cycle, causing the release of aerosol to ambient air during exhalation and anytime when the patient is not breathing. Consequently, no more than 20% of the emitted aerosol is inhaled [65–67]. The jet nebulizer with collection bag generates aerosol by continuously filling a collection bag that acts as a reservoir. The patient inhales aerosol from the reservoir through a one-way inspiratory valve and exhales to the environment through an exhalation port between the one-way inspiratory valve and the mouthpiece. The breath-enhanced jet nebulizer (e.g., the PARI LC Plus, PARI gmbH) uses two one-way valves to prevent the loss of aerosol to environment. When the patient inhales, the inspiratory valve opens and aerosol vents through the nebuliser; exhaled aerosol passes through an expiratory valve in the mouthpiece. Breath-actuated jet nebulisers are designed to increase aerosol delivery to patient by means of a breath-actuated valve that triggers aerosol generation only during inspiration. Both the breath-enhanced and breath-actuated nebulisers increase the amount of inspired aerosol with shorter nebulisation time than “conventional” jet nebulisers [65]. Recently, adaptive aerosol delivery nebulisers (the HaloLite and the Prodose) have been developed to reduce the variability of the delivered dose and the waste of aerosol to the environment and to facilitate monitoring of compliance with patient therapy [71–73]. By monitoring pressure changes relative to flow over the first three breaths, these delivery systems establish the shape of the breathing pattern and then use this to provide a timed pulse of aerosol during the first 50% of each tidal inspiration. Monitoring of the breathing pattern continues throughout the delivery period, and any changes in breathing pattern are taken into account during the remainder of the delivery period. Furthermore, if no inhalation is registered, the system will cease delivery until the patient recommences breathing on the system [71–73]. Since the pulsed dose is only provided in the first 50% of each breath and the software can calculate the amount of drug given per pulse, the precise dose of drug can be delivered before the system stops [71–73].Ultrasonic nebulisers use a rapidly (>1 MHz) vibrating piezoelectric crystal to produce aerosol particles [65–67]. Ultrasonic vibrations from the crystal are transmitted to the surface of the drug solution where standing waves are formed. Droplets break free from the crest of these waves and are released as aerosol. The size of droplets produced by ultrasonic nebuliser is related to the frequency of oscillation [65–67]. 
Although ultrasonic nebulisers can nebulise solutions more quickly than jet nebulisers, they are not suitable for suspensions, and the piezoelectric crystal can heat the drug to be aerosolised. A relatively new ultrasonic nebuliser technology is represented by the vibrating mesh nebulisers [12, 74, 75]. These new-generation nebulisers are either active or passive systems. In active devices (e.g., the eFlow, PARI GmbH), the aperture plate vibrates at a high frequency and draws the solution through the apertures in the plate. In passively vibrating mesh devices (e.g., the MicroAir, Omron Healthcare), the mesh is attached to a transducer horn, and vibrations of the piezoelectric crystal, transmitted via the transducer horn, force the solution through the mesh to create an aerosol. The eFlow is designed to be used either with a very low residual volume to reduce drug waste or with a relatively large residual volume, so that it can be used instead of conventional jet nebulisers with the same fill volume [76]. Vibrating mesh devices have a number of advantages over other nebuliser systems: they have greater efficiency, precision, and consistency of drug delivery and are quiet and generally portable [74, 75]. However, they are also significantly more expensive than other types of nebulisers and require a significant amount of maintenance and cleaning after each use, both to prevent build-up of deposits and blockage of the apertures, especially when suspensions are aerosolised, and to prevent colonisation by pathogens [75]. They are currently most widely used for the treatment of patients with cystic fibrosis [77].

Generally, mouthpieces are employed during nebuliser delivery. However, facemasks may be necessary for the treatment of acutely dyspnoeic patients or uncooperative patients, such as infants and toddlers [78]. The facemask is not just a connector between the device and the patient; principles of mask design differ depending on the device [78]. For example, a valved holding chamber with facemask must have a tight seal to achieve optimal lung deposition [78]. In contrast, the facemask for a nebuliser should not incorporate a tight seal but should have vent holes to reduce deposition on the face and in the eyes [79, 80]. Improvements in facemask design provide greater inhaled mass while reducing facial and ocular deposition [78]. Often, when a patient does not tolerate the facemask, practitioners employ the “blow-by” technique, which simply directs the aerosol towards the nose and mouth with the mouthpiece. However, there are no data to indicate that this is an effective method for delivering aerosol to the lungs, and therefore the use of this technique is not recommended [12].

Unlike pMDIs and DPIs, no special inhalation techniques are needed for optimum delivery with conventional nebulisers; tidal breathing with only occasional deep breaths is sufficient (Table 1). Thus, for patients who are unable to master the proper pMDI technique despite repeated instruction, the proper use of a nebuliser probably improves drug delivery. However, nebulisers have some distinct disadvantages. Patients must load the device with medication solution for each treatment, and bacterial contamination of the reservoir can cause respiratory infection [65–67], making regular cleaning important. Also, nebuliser treatments take longer than drug administration with pMDIs and DPIs (10–15 min for a jet nebuliser, about 5 min for an ultrasonic or mesh nebuliser).
Although they are relatively portable, a typical jet nebuliser must be plugged into a wall outlet or power adaptor and thus cannot be used easily in transit.

## 2.6. Soft Mist Inhalers

The development of soft mist inhalers (SMIs) has opened up new opportunities for inhaled drug delivery. Technically, these inhalation devices fall within the definition of a nebuliser, as they transform an aqueous solution into liquid aerosol droplets suitable for inhalation. However, at variance with traditional nebuliser designs, SMIs are hand-held multidose devices that have the potential to compete with both pMDIs and DPIs in the portable inhaler market. At present, the only marketed SMI, available in some European countries, is the Respimat inhaler (Boehringer Ingelheim; Figure 5). This device does not require propellants, since it is powered by the energy of a compressed spring inside the inhaler. Individual doses are delivered via a precisely engineered nozzle system as a slow-moving aerosol cloud (hence the term “soft mist”) [81]. Scintigraphic studies have shown that, compared to a CFC-based pMDI, lung deposition is higher (up to 50%) and oropharyngeal deposition is lower [81]. Respimat is a “press and breathe” device, and the correct inhalation technique closely resembles that used with a pMDI. However, although coordination between firing and inhaling is required, the aerosol emitted from Respimat is released very slowly, at a velocity approximately four times lower than that observed with a CFC-driven pMDI [81]. This greatly reduces the potential for drug impaction in the oropharynx. In addition, the relatively long duration over which the dose is expelled from Respimat (about 1.2 s compared with 0.1 s from pMDIs) would be expected to greatly reduce the need to coordinate actuation and inspiration, thus improving the potential for greater lung deposition. Although Respimat has been used relatively little in clinical practice to date, clinical trials seem to confirm that drugs delivered by the Respimat are effective in correspondingly smaller doses in patients with obstructive airway disease [82].

Figure 5 The Respimat soft mist inhaler. From [81].

## 3. Choice of an Inhaler Device for Asthma Therapy

Drug choice is usually the first step in prescribing inhaled therapy for asthma and, together with availability and reimbursement criteria, dictates the inhaler delivery options. The next two steps, choice of inhaler device type and patient training in the use of the inhaler, are hampered by the lack of robust evidence or effective tools to aid healthcare professionals [9, 10, 83]. Meta-analyses regarding the selection of aerosol delivery systems for acute asthma have concluded that short-acting beta-agonists delivered via either a nebuliser or a pMDI with a valved holding chamber are essentially equivalent [84–88]. More than 100 inhaled device-drug combinations are currently available for the treatment of asthmatic patients [57]. The number is likely to increase with the development of analogue inhaled drugs delivered by relatively low-cost pMDIs and DPIs. Consequently, the level of confusion experienced by clinicians, nurses, and pharmacists when trying to choose the most appropriate device for each patient is increasing. Thus, physicians’ experience is amongst the most important factors influencing decision making for inhaler choice in asthma therapy. In fact, inhalers are often prescribed on an empirical basis rather than on an evidence-based approach.
Guided by their own experience, doctors are much more likely to prescribe the familiar inhaler they have always prescribed than new, improved inhalers entering the market.

Current asthma management guidelines give some guidance on the class of inhaler to prescribe to children, but they offer nonspecific advice regarding inhaler choice for adult patients. The GINA guidelines [1] recommend pMDIs with a spacer and facemask for children younger than 4 years, pMDIs with a spacer and mouthpiece for those aged 4–6 years, and, for children older than 6 years, pMDIs alone, DPIs, or BAMDIs. However, for adults, the same guidelines state that inhalers should be portable and simple to use, should not require an external power source, should require minimal cooperation and coordination, and should have minimal maintenance requirements [1]. The British Thoracic Society guidelines [2] also include the patient’s preference and ability to use the device correctly. However, this advice relating to patient preference is not supported by any evidence that patients will correctly use an inhaler that they like.

Criteria to be considered when choosing an inhaler device differ depending on the audience addressed [89]. From the viewpoint of the inhalation technologist, consistent and safe dosing, sufficient drug deposition, and clinical effect guide the inhaler choice. The patient’s ability to inhale through the device, the intrinsic airflow resistance of the device, and the degree to which drug release depends on inspiratory airflow variability are all important determinants when considering constancy of dosing [89]. From the point of view of the clinician, clinical efficacy and safety should be the most important determinants to consider when choosing an inhaler [89]. However, in the real world clinical efficacy must be balanced against cost-effectiveness, and inhalers with insufficient performance may be prescribed simply because they are cheap. Patients’ preferences and acceptance of the inhaler should also be considered when deciding on a specific inhaler, since these will have major implications for compliance.

Several general principles of inhaler selection and use have recently been addressed in an evidence-based systematic review by a joint committee of the American College of Chest Physicians and the American College of Asthma, Allergy, and Immunology [13]. The bottom line of this document was that each of the aerosol devices can work equally well in various clinical settings with patients who can use these devices properly [13]. In addition, pMDIs are convenient for delivering a wide variety of drugs to a broad spectrum of patients. For patients who have trouble coordinating inhalation with inhaler actuation, the use of a spacer may obviate this difficulty, though most of these devices are cumbersome to store and transport [13]. The use of a spacer, however, is mandatory for infants and young children. Dry powder inhalers are usually easier for patients to handle than pMDIs, and a growing number of drug types are available in several DPI formats [13]. The key issue for dry powder inhalation is the minimum inspiratory flow rate below which deagglomeration is inefficient, resulting in a reduced delivered dose of drug. The most severely ill patients and the very young may not be candidates for a DPI. A nebuliser could be used as an adequate alternative to a pMDI with a spacer by almost any patient in a variety of clinical settings, from the home to the intensive care unit [13].
However, nebulisers are more expensive, cumbersome, and relatively time-consuming to use compared to hand-held inhalers. These attributes should limit the use of nebulisers whose effect can be matched by hand-held devices in almost all clinical settings. The findings of this document should not be interpreted to mean that the device choice for a specific patient does not matter. Rather, the study simply says that each of the devices studied can work equally well in patients who can use them correctly. However, this evidence-based systematic review does not provide much information about who is likely to use one device or another properly, nor does it address many other considerations that are important for choosing a delivery device for a specific patient in a specific clinical situation. These include the patient’s ability to use the device, patient preference, the availability of equipment, and cost.

More recently, Chapman and coworkers [90] proposed an algorithmic approach to inhaler selection that considers the patient’s ability to generate an inspiratory flow rate >30 L/min, to coordinate inhaler actuation and inspiration, and to prepare and actuate the device (Table 2); a short code sketch of this selection logic is given at the end of this section.

Table 2 Choice of inhaler devices according to the patient’s inspiratory flow and ability to coordinate inhaler actuation and inhalation. Modified from Chapman et al. [90].

| Hand-lung coordination | Inspiratory flow | Suitable devices |
|---|---|---|
| Good | > 30 L/min | pMDI, BAMDI, DPI, nebuliser, SMI |
| Good | < 30 L/min | pMDI, nebuliser, SMI |
| Poor | > 30 L/min | pMDI + spacer, BAMDI, DPI, nebuliser, SMI |
| Poor | < 30 L/min | pMDI + spacer, nebuliser, SMI |

pMDI: pressurised metered-dose inhaler; BAMDI: breath-actuated metered-dose inhaler; DPI: dry powder inhaler; SMI: soft mist inhaler.

When choosing an inhaler for children, it is essential that the individual child receives the appropriate instructions and training necessary for the management of the disease [91]. Furthermore, the child should be prescribed the correct medication tailored to the severity of the disease and, most importantly, the prescribed inhaler should suit the individual needs and preferences of the child [91]. Contrary to general opinion, using an inhaler may be difficult for children [91]; many children with asthma use their inhaler incorrectly, which may result in unreliable drug delivery, even after instruction and training in correct inhalation. In addition, previous inhalation instruction may be forgotten; therefore, training should be repeated regularly to maintain correct inhalation technique in children with asthma [91].
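As promised above, the device options of Table 2 reduce to a small lookup in executable form — a hedged sketch, with the function and parameter names invented for illustration rather than taken from Chapman et al.:

```python
def candidate_inhalers(good_coordination: bool, flow_over_30_lpm: bool) -> list:
    """Device options per Table 2 (after Chapman et al.).

    A simplified transcription for illustration; the published approach
    also weighs the patient's ability to prepare and actuate the device.
    """
    if good_coordination and flow_over_30_lpm:
        return ["pMDI", "BAMDI", "DPI", "nebuliser", "SMI"]
    if good_coordination:                      # flow < 30 L/min
        return ["pMDI", "nebuliser", "SMI"]
    if flow_over_30_lpm:                       # poor coordination
        return ["pMDI + spacer", "BAMDI", "DPI", "nebuliser", "SMI"]
    return ["pMDI + spacer", "nebuliser", "SMI"]

# A poorly coordinating patient with adequate inspiratory flow:
print(candidate_inhalers(good_coordination=False, flow_over_30_lpm=True))
```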
## 4. Education and Instruction

Successful asthma management is 10% medication and 90% education [92]. Asthma education empowers patients to manage their disease and increases their awareness of danger signs [93]. Patients with a positive attitude towards controlling their asthma are more likely to adhere to therapy [94]. Regular medical review provides an opportunity to raise patients’ expectations, helps them understand how to monitor their asthma, and increases awareness of possible factors, such as poor inhaler technique, that may prevent them from attaining control [93]. A key challenge in many practice situations is the allocation of personnel and time for patient training in inhaler technique, although the upfront investment of time in proper training could later save time and resources and prevent adverse patient impact by averting asthma that is uncontrolled because of poor inhaler technique.

The conventional wisdom is that training patients to use inhalers is time-consuming. However, in one study, training sessions provided by pharmacists took an average of only 2.5 min and were shown to improve asthma outcomes [95]. The “trainer” must know the proper technique, including refinements to optimise inhaler therapy for each device type prescribed. However, the healthcare professionals involved often have not mastered inhalation technique themselves [96, 97] and are not sufficiently aware of handling difficulties with devices other than pMDIs [98]. Furthermore, only 2 of 40 medical textbooks examined include a simple list of steps for the proper use of a pMDI [11]. For this reason, studies have examined educational interventions designed to “train the trainer” and improve healthcare professionals’ inhaler competence. It has been demonstrated that a single education session improves medical residents’ inhaler knowledge and skills [99]. Another study demonstrated that pharmacists who participated in a single-session education workshop showed significantly better knowledge and skills than a control group and that this knowledge was retained at a high level [100]. The best person to provide inhaler training (physician, nurse, or pharmacist) will vary by practice situation. Another option is to enlist the aid of lay educators (e.g., other patients) to provide support and training. In all cases, adequate time and resources must be allotted for the training sessions. Successful training in inhaler technique depends upon effective communication of the proper technique and its purpose, together with monitoring to ensure that the skills have been learned and retained [101]. Of all the training approaches possible, personal or small-group demonstration has so far proven most effective [102, 103]. Other training methods for inhaler use include written instructions, illustrations, audio-visual demonstrations, and internet-based, interactive, multimedia tutorials, the latter representing a promising new, low-cost, and time-saving mechanism for educating both patients and healthcare professionals [104]. However, their value must not be overestimated, as a substantial proportion of patients still have incorrect inhalation technique despite several training sessions [105]. Periodic retraining is needed, as inhaler technique deteriorates with time [104, 106]. Special provision should be made for the elderly, who may have more trouble learning good inhaler technique and a greater tendency to forget it, while small children may require a particular teaching environment to hold their attention [107, 108]. Intuitively, therapeutic success will be more likely if patients are prescribed a device that they have chosen, are happy with, and can use well. Although the use of a single type of device to deliver all medications is not always practicable, it is preferable, since coping with a variety of devices increases the likelihood of error [109].

A visual evaluation by healthcare professionals is subjective but important in assessing inhaler preparation and the mechanics of inhaler handling by the patient. Indeed, in real life, patients make many errors with their usual inhalation device that may negate the benefits observed in clinical trials. A checklist to identify critical errors, that is, those compromising treatment efficacy, could be applied here, as outlined by Molimard and Le Gros [110].
Examples of currently available tools to objectively check and maintain the correct inhalation pattern include the Aerosol Inhalation Monitor (Vitalograph Ltd., Buckingham, UK) and the 2Tone Trainer (Canday Medical Ltd., Newmarket, UK) for MDIs and the In-Check Dial (Clement Clarke International, Harlow, UK) for DPIs [111, 112]. These tools can provide an objective evaluation of the inhalation profile but cannot assess the patient’s preparation and handling of their device.

## 5. ADMIT Recommendations

Many physicians in Europe are fully aware of the difficulties that patients have using prescribed inhaler devices correctly and of the negative impact that this may have on asthma control. The Aerosol Drug Management Improvement Team (ADMIT), a consortium of European respiratory physicians (respiratory specialists, general practitioners, and paediatricians) with a common interest in promoting excellent delivery of inhaled drugs, was formed with the remit of examining ways to improve the treatment of obstructive airway disease in Europe [57]. ADMIT recommends that instructions for correct inhalation technique for each inhaler device currently on the market should be compiled by an Official Board, with the instructions made readily accessible on the web. Local asthma associations and patient groups could also be involved in promoting the importance of correct inhalation technique and in teaching and reinforcing it. Information could be disseminated through dedicated literature, school visits by healthcare professionals and pharmacists, and patient advocacy groups. Other ADMIT recommendations are summarised as follows.

Recommendations from the Aerosol Drug Management Improvement Team (ADMIT) for the choice and correct usage of inhalers (DPI: dry powder inhaler; PIF: peak inspiratory flow; pMDI: pressurised metered-dose inhaler; modified from Crompton et al. [8]):

(i) Inhalers should be matched to the patient as much as possible.
(ii) In young children, pMDIs should always be used with a spacer device.
(iii) An alternative to a pMDI should be considered in elderly patients with a mini-mental test score <23/30 or an ideomotor dyspraxia score <14/20, as they are unlikely to have correct inhalation technique through a pMDI.
(iv) The patient’s PIF values should be considered before DPI prescription.
Those patients with severe airflow obstruction, children, and the elderly would benefit from an inhaler device with a low airflow resistance.
(v) Before prescribing a DPI, check that the patient can inhale deeply and forcefully from the start of the inspiration, as the airflow profile affects the particle size produced and hence drug deposition and efficacy.
(vi) Where possible, one patient should have one type of inhaler.
(vii) Establish an Official Board to compile instructions for correct inhalation technique for each inhaler device currently on the market.
(viii) Instructions for correct inhaler use should be made readily accessible on a dedicated web site.
(ix) Training in correct inhalation technique is essential for patients and healthcare professionals.
(x) Inhalation technique should be checked and reinforced at regular intervals.
(xi) Teaching of correct inhalation techniques should be tailored to the patient’s needs and preferences: group instruction in correct inhalation technique appears to be more effective than personal one-to-one instruction and as effective as video instruction; younger patients may benefit more from multimedia teaching methods; elderly patients respond well to one-to-one tuition.

The ADMIT has also proposed a practical algorithm (Figure 6) to improve the instruction given to the patient regarding optimal use of their inhalers. At each consultation, the physician should establish the patient’s level of symptoms and control, ideally using a composite measure such as the GINA control assessment [1]; if asthma has been well controlled for at least 3 months, therapy should be stepped down gradually according to treatment guidelines. Conversely, if the patient answers “no” to any of the checklist questions in the algorithm, then compliance and aggravating (trigger) factors should be assessed. Most importantly, inhalation technique should be assessed. If the patient is unable to use a particular inhaler correctly despite repeated attempts, a change in inhaler device should be considered. In cases where uncontrolled asthma persists in the face of correct inhaler technique, asthma therapy should be stepped up according to the treatment guidelines and another appointment scheduled in order to check symptoms.

Figure 6 Asthma therapy adjustment flow chart. From [8].
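The decision flow just described, and drawn in Figure 6, can be condensed into a short function — a minimal sketch assuming the branch order given in the text, with all parameter names invented for illustration:

```python
def admit_review(controlled: bool, months_controlled: int, compliant: bool,
                 triggers_addressed: bool, technique_correct: bool) -> str:
    """One pass through the ADMIT therapy-adjustment flow chart (Figure 6).

    Returns the suggested action for the current consultation. This is an
    illustrative reading of the flow chart, not a clinical decision tool.
    """
    if controlled and months_controlled >= 3:
        return "step therapy down gradually per treatment guidelines"
    if not compliant:
        return "address compliance before changing therapy"
    if not triggers_addressed:
        return "assess and manage aggravating (trigger) factors"
    if not technique_correct:
        return "retrain inhalation technique; if still incorrect, change device"
    return "step therapy up per guidelines and schedule a follow-up visit"

# Uncontrolled asthma with persistent technique errors despite retraining:
print(admit_review(controlled=False, months_controlled=0, compliant=True,
                   triggers_addressed=True, technique_correct=False))
```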
## 6. Conclusions

The prevalence of asthma is continuing to rise throughout the world, particularly amongst children. Despite the implementation of both national and international guidelines and the widespread availability of effective pharmacological therapy, asthma is frequently uncontrolled and still may cause death. The reasons for this anomaly are numerous. First, the guidelines themselves are complex and of excessive length, making them difficult for most physicians to absorb and utilise. Secondly, patients frequently do not adhere to their treatment regimen for a variety of reasons, including incorrect use of inhalers and underestimation of disease severity. Indeed, asthma severity is often misclassified in the first instance and inappropriate or insufficient therapy is prescribed. Finally, although guidelines agree on the most appropriate therapy to control asthma, their guidance on the method by which this therapy is delivered to the lungs is often lacking in detail.

To date, advancement in asthma management has been pharmacologically driven rather than device driven. Since it is likely that in the future inhaled bronchodilators and corticosteroids will remain the cornerstone of asthma therapy, the development of inhaler devices may become more important than the development of new drugs. In the past 10–15 years, several innovative developments have advanced the field of inhaler design. Although many inhalers incorporate features providing efficient aerosol delivery for asthma treatment, there is no perfect inhaler, and each has advantages and disadvantages. There is, however, increasing recognition that a successful clinical outcome is determined as much by the choice of an appropriate inhaler device as by the drugs that go in it. Drug delivery from all inhaler devices depends on how the patient prepares the device and then inhales from it. Problems with drug delivery have been identified owing to inappropriate use of inhaler devices, particularly pMDIs, where patients need to coordinate inhaler actuation with inspiration. However, as inhalation is likely to remain the delivery route of choice for the foreseeable future, there is a need to develop inhaler devices which are easy to use and deliver a consistent dose of drug to the lungs, which may improve patient compliance with treatment and lead to better control of asthma. There is evidence that a patient is most likely to use correctly an inhaler that he or she prefers, and each patient’s choice of device will be determined by individual perceptions of how its advantages and disadvantages balance out. This decision could be quite different from the judgment of a prescriber or a formulator, who may give more weight to technical points. The choice of an inhaler device should therefore take into account the likelihood that patients will be able to use a particular device correctly, cost-effectiveness, preference, and likely compliance.

Continued and repeated education of both healthcare professionals and patients in correct inhalation technique is essential, and the results should be checked at regular intervals by a member of the medical staff. Substantial changes in educational efforts are clearly required and should be particularly addressed towards the general practitioner and the asthma nurse, who in turn teach patients how to use their inhaler correctly. Finally, it is important to remember that continually changing inhaler devices which deliver the same drug is not the answer, as patients lose confidence in both the device and the drug, and compliance with therapy drops. An inhaler should only be prescribed with the absolute certainty that the patient can use it correctly. It should be stressed that once a patient is familiar with and stabilised on one type of inhaler, they should not be switched to new devices without their involvement and without follow-up education on how to use the device properly. A recent study has shown that asthma control deteriorates if an inhaler is substituted for a different device at the prescribing or dispensing stage without involving the patient [113]. Prescribers should be especially vigilant on this point in order to avoid changes to the type of device their patients receive through the pharmacy.
Droplets break free from the crest of these waves and are released as aerosol. The size of droplets produced by ultrasonic nebuliser is related to the frequency of oscillation [65–67]. Although ultrasonic nebulisers can nebulise solutions more quickly than jet nebulisers, they are not suitable for suspensions and the piezoelectric crystal can heat the drug to be aerosolised. A relatively new ultrasonic nebuliser technology is represented by the vibrating mesh nebulisers [12, 74, 75]. These new-generation nebulisers are either active or passive systems. In active devices (e.g., the eFlow, PARI gmbH), the aperture plate vibrates at a high frequency and draws the solution through the apertures in the plate. In passively vibrating mesh devices (e.g., MicroAir, Omron Healthcare), the mesh is attached to a transducer horn and vibrations of the piezoelectric crystal that are transmitted via the transducer horn force the solution through the mesh to create an aerosol. The eFlow is designed to be used with either a very low residual volume to reduce drug waste or with a relatively large residual volume, so that it can be used instead of conventional jet nebulisers with the same fill volume [76]. Vibrating mesh devices have a number of advantages over other nebuliser systems: they have greater efficiency, precision and consistency of drug delivery, and are quiet and generally portable [74, 75]. However, they are also significantly more expensive than other types of nebulisers, and require a significant amount of maintenance and cleaning after each use to prevent build-up of deposit and blockage of the apertures especially when suspensions are aerosolised and to prevent colonisation by pathogens [75]. They are currently most widely used for the treatment of patients with cystic fibrosis [77].Generally, mouthpieces are employed during nebuliser delivery. However, facemasks may be necessary for treatment of acutely dyspnoeic patients or uncooperative patients, such as infants and toddlers [78]. The facemask is not just a connector between the device and the patient. Principles of mask design are different depending on the device [78]. For example, a valved holding chamber with facemask must have a tight seal to achieve optimal lung deposition [78]. In contrast, the facemask for a nebuliser should not incorporate a tight seal but should have vent holes to reduce deposition on the face and in the eyes [79, 80]. Improvements in facemask design provide greater inhaled mass while reducing facial and ocular deposition [78]. Often, when a patient does not tolerate the facemask, practitioners employ the “blow-by” technique, which simply directs the aerosol towards the nose and mouth with the mouthpiece. However, there is no data to indicate that this is an effective method for delivering aerosol to the lungs, and therefore the use of this technique is not recommended [12].Unlike pMDIs and DPIs, no special inhalation techniques are needed for optimum delivery with conventional nebulisers; tidal breathing with only occasional deep breaths is sufficient (Table1). Thus, for patients who are unable to master the proper pMDI technique despite repeated instruction, the proper use of a nebuliser probably improves drug delivery. However, nebulisers have some distinct dis-advantages. Patients must load the device with medication solution for each treatment, and bacterial contamination of the reservoir can cause respiratory infection [65–67], making regular cleaning important. 
## 2.1. Pressurised Metered-Dose Inhalers

The development of the first commercial pMDIs was carried out by Riker Laboratories in 1955, and the devices were marketed in 1956 as the first portable, multidose delivery system for bronchodilators. Since that time, the pMDI has become the most widely prescribed inhalation device for drug delivery to the respiratory tract to treat obstructive airway diseases such as asthma and chronic obstructive pulmonary disease [18]; total worldwide sales of pMDI products by all companies exceed $2 billion per year. The pMDI (Figure 1) is a portable multidose device that consists of an aluminium canister, lodged in a plastic support, containing a pressurised suspension or solution of micronised drug particles dispersed in propellants. A surfactant (usually sorbitan trioleate or lecithin) is also added to the formulation to reduce particle agglomeration; it is responsible for the characteristic taste of specific inhaler brands. The key component of the pMDI is the metering valve, which delivers an accurately known volume of propellant, containing the micronised drug, at each actuation.
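As a back-of-envelope illustration of the metering principle, the nominal dose per actuation is simply the metered volume multiplied by the drug concentration in the formulation. The values in the sketch below are assumed for illustration only and are not taken from any marketed product.

```python
# Nominal pMDI dose arithmetic (illustrative values only): the metering valve
# releases a fixed volume of formulation, so the emitted drug mass follows
# directly from that volume and the drug concentration in the canister.

METERING_VOLUME_UL = 50.0     # assumed metering-valve volume, microlitres
DRUG_CONC_MG_PER_ML = 2.0     # assumed drug concentration in the formulation

def dose_per_actuation_ug(volume_ul: float, conc_mg_per_ml: float) -> float:
    """Nominal drug mass released per valve actuation, in micrograms."""
    volume_ml = volume_ul / 1000.0               # microlitres -> millilitres
    return volume_ml * conc_mg_per_ml * 1000.0   # milligrams -> micrograms

print(f"{dose_per_actuation_ug(METERING_VOLUME_UL, DRUG_CONC_MG_PER_ML):.0f} ug")
# -> 100 ug per actuation under these assumptions
```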
Pressing the bottom of the canister into the actuator seating causes decompression of the formulation within the metering valve, resulting in the explosive generation of a heterodisperse aerosol of droplets that consist of tiny drug particles contained within a shell of propellant. The propellant evaporates with time and distance, reducing the size of the particles emitted through the atomisation nozzle (Figure 1). The technology of the pMDI evolved steadily from the mid-1950s to the mid-1980s. Until recently, pMDIs used chlorofluorocarbons (CFCs) as propellants; however, in accordance with the Montreal Protocol of 1987, CFC propellants are being replaced by hydrofluoroalkane (HFA) propellants that do not have ozone-depleting properties [19]. Hydrofluoroalkanes 134a and 227ea are propellants that contain no chlorine and have a shorter stratospheric residence time than CFCs, so they also have substantially less global warming potential. HFA-134a albuterol was the first HFA-driven pMDI to receive approval in both Europe and the United States. This preparation consists of albuterol suspended in HFA-134a, oleic acid, and ethanol; clinical trials have shown it to be bioequivalent to CFC albuterol in both bronchodilator efficacy and side effects [20]. At present, CFC-driven pMDIs have been totally replaced by HFA inhalers in most European countries. The components of CFC-driven pMDIs (i.e., canister, metering valve, actuator, and propellant) are retained in HFA-driven pMDIs, but they have been redesigned. Two approaches were used in the reformulation of HFA-driven pMDIs. The first approach was to show equivalence with the CFC-driven pMDI, which facilitated regulatory approval, for inhalers delivering salbutamol and some corticosteroids. Some HFA formulations were matched to their CFC counterparts on a microgram-for-microgram basis; therefore, no dosage modification was needed when switching from a CFC to an HFA formulation. The second approach involved extensive changes, particularly for corticosteroid inhalers containing beclomethasone dipropionate, and resulted in solution aerosols with extra-fine particle size distributions and high lung deposition [21, 22]. The exact dose equivalence of extra-fine HFA beclomethasone dipropionate and CFC beclomethasone dipropionate has not been established, but data from most trials have indicated a 2 : 1 dose ratio in favour of the HFA-driven pMDI [21, 22]. Patients on regular long-term treatment with a CFC pMDI could safely switch to an HFA pMDI without any deterioration in pulmonary function, loss of disease control, increased frequency of hospital admissions, or other adverse effects [19]. However, when physicians prescribe HFA formulations in place of CFC versions for the first time, they should inform their patients about the differences between these products. Compared with CFC-driven pMDIs, many HFA-driven pMDIs have a lower impact force (25.5 mN versus 95.4 mN) and a higher plume temperature (8°C versus −29°C) [12, 14]. These properties partially overcome the “cold Freon effect” [12, 14] that has caused some patients to stop inhaling their CFC pMDIs. In addition, most HFA pMDIs have a smaller delivery orifice, which may result in a more slowly delivered aerosol plume, thus facilitating inhalation and producing less mouth irritation [23]. Another difference is that many HFA-driven pMDIs contain a small amount of ethanol.
This affects the taste, as well as further increasing the temperature and decreasing the velocity of the aerosol. Pressurised MDIs containing a fixed combination of beclomethasone dipropionate and the long-acting bronchodilator formoterol in a solution formulation with HFA-134a and ethanol as cosolvent [21, 24, 25] have been developed (Modulite technology; Chiesi, Italy). Interestingly, this formulation dispenses an aerosol that has a particularly small particle size (mass median aerodynamic diameter ~1 μm), a lower plume velocity, and a smaller temperature drop than when CFCs are used as carriers. These three factors, that is, smaller particle size, lower plume velocity, and less temperature drop, may decrease upper-airway impaction and increase airway deposition of particles, particularly in the smaller airways, compared with the same dose of drug administered from a CFC pMDI [24, 25].

Figure 1 Components of a pressurised metered-dose inhaler. Lower panels illustrate the process of aerosol generation.

Pressurised MDIs have a number of advantages (Table 1): they are compact, portable, relatively inexpensive, and contain at least 200 metered doses per canister that are immediately ready for use. Furthermore, a large fraction (approximately 40%) of the aerosol particles is in the respirable range (mass median aerodynamic diameter less than 5 μm), and dosing is generally highly reproducible from puff to puff [12–16]. Despite these advantages, most patients cannot use pMDIs correctly, even after repeated tuition [8–11]. This is because pMDIs require good coordination of patient inspiration and inhaler actuation to ensure correct inhalation and deposition of drug in the lungs. The correct inhalation technique when using a pMDI involves firing the device while breathing in deeply and slowly, continuing to inhale after firing, and then following inhalation with a breath-holding pause to allow particles to sediment on the airways [12, 26]. Patients should also be instructed to prime the pMDI before first use and after several days of disuse. However, patients frequently fail to exhale fully before inhalation and to continue inhaling slowly after activation of the inhaler [8]. In addition, patients often actuate the inhaler before inhalation, or at the end of inhalation while breath holding [8]. Crompton and colleagues [8, 27, 28] showed that the proportion of patients capable of using their pMDIs correctly after reading the package insert fell from 46% in 1982 to 21% in 2000, while only just over half of patients (52%) used a pMDI correctly even after receiving instruction. In a large (n=4078) study, 71% of patients were found to have difficulty using pMDIs, and almost half of them had poor coordination [29]. Incorrect inhalation technique was associated with poor asthma control, with poor pMDI users having less stable asthma control than good pMDI users [29]. Even with correct inhalation technique, pMDIs are inefficient: no more than 20% of the emitted dose (for CFC pMDIs), or 40%–50% (for HFA pMDIs producing extra-fine particles), reaches the lungs [12, 14–16], and a high proportion of the drug is deposited in the mouth and oropharynx, where it can cause local as well as systemic side effects owing to rapid absorption [12, 14–16]. Another disadvantage of some pMDIs is the absence of a built-in counter that would alert the patient that the inhaler is approaching “empty” and needs to be refilled.
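In the absence of a built-in counter, the bookkeeping that such a counter would perform must be done by hand. The sketch below is a minimal illustration of that logic; the class name, the 200-dose canister, and the warning threshold are illustrative assumptions, not features of any particular product.

```python
# Minimal sketch of dose-counting logic for a pMDI without a built-in counter.
# Labelled capacity and warning threshold are illustrative assumptions.

class DoseCounter:
    def __init__(self, labelled_doses: int = 200, warn_at: int = 20):
        self.remaining = labelled_doses
        self.warn_at = warn_at

    def actuate(self) -> str:
        """Record one actuation and report the canister status."""
        self.remaining -= 1
        if self.remaining <= 0:
            # Past the labelled number of actuations the plume may contain
            # little or no drug ("tail-off"), even though propellant remains.
            return "REPLACE: labelled doses exhausted; output now unreliable"
        if self.remaining <= self.warn_at:
            return f"LOW: {self.remaining} labelled dose(s) left; obtain a refill"
        return f"OK: {self.remaining} labelled dose(s) left"

counter = DoseCounter()
for _ in range(185):
    status = counter.actuate()
print(status)  # -> "LOW: 15 labelled dose(s) left; obtain a refill"
```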
Although many pMDIs contain more than the labelled number of doses, drug delivery per actuation may be very inconsistent and unpredictable after the labelled number of actuations. Beyond the labelled number of actuations, the propellant can release an aerosol plume that contains little or no drug, a phenomenon called tail-off [30].

## 2.2. pMDI Accessory Devices: Spacers and Valved Holding Chambers

Although the term “spacer” is often used for all types of extension add-on devices, these devices are properly categorised as either “spacers” or “valved holding chambers.” A spacer (Figure 2) is a simple tube or extension attached to the pMDI mouthpiece, with no valves to contain the aerosol plume after pMDI actuation [31]. A valved holding chamber (Figure 2) is an extension device, added onto the pMDI mouthpiece or canister, that contains a one-way valve which holds the aerosol within the chamber until the patient inhales [31]. The direction of the spray can be forward, that is, toward the mouth, or reverse, that is, away from the mouth (Figure 2). Both spacers and holding chambers constitute a volume into which the patient actuates the pMDI and from which the patient inhales, reducing the need to coordinate the two manoeuvres [31]. By acting as an aerosol reservoir, these devices slow the aerosol velocity and increase the transit time and distance between the pMDI actuator and the patient’s mouth, allowing particle size to decrease and, consequently, increasing deposition of the aerosol particles in the lungs [31]. Moreover, because spacers trap large particles comprising up to 80% of the aerosol dose, only a small fraction of the dose is deposited in the oropharynx, thereby reducing side effects, such as throat irritation, dysphonia, and oral candidiasis, associated with medications delivered by the pMDI alone [31]. Large-volume holding chambers increase lung deposition to a greater degree than do tube spacers or small holding chambers [32–34]. Devices larger than 1 L, however, are impractical, and patients would have difficulty inhaling the complete contents [35]. A valved holding chamber fitted with an appropriate facemask is used to give pMDI drugs to neonates, young children, and elderly patients. The two key factors for optimum aerosol delivery are a tight but comfortable facemask fit and reduced facemask dead space [31, 36]. Because children have low tidal volumes and inspiratory flow rates, comfortable breathing through a facemask requires low-resistance inspiratory and expiratory valves. Of note, some holding chambers incorporate a whistle that sounds if inspiration is too fast [36]. Training patients to ensure that the whistle does not sound assists in developing an optimal inhalation technique. Plastic bottles and cups can also be used as rudimentary, home-made spacers for the administration of aerosol drugs [37–39]. In a randomised controlled trial, the clinical effects of salbutamol inhaled through a pMDI with a home-made nonvalved spacer (a 500 mL plastic mineral-water bottle) were compared with those of the same drug administered via an oxygen-driven nebuliser in children with asthma [39]. The number of children hospitalised after treatment, the changes in clinical score, and oxygen saturation were similar in the conventional and bottle-spacer groups [39]. Valved holding chambers may improve the clinical effect of inhaled medications, especially in patients unable to use a pMDI properly [31].
Indeed, compared to both pMDIs alone and DPIs, these devices may increase the response to short-acting β-adrenergic bronchodilators, even in patients with correct inhalation technique [40–43]. While spacers and valved holding chambers are good drug-delivery devices, they suffer from the obvious disadvantage of making the entire delivery system less portable and compact than a pMDI alone. The size and appearance of some spacers may detract from the appeal of the pMDI to patients, especially in the paediatric population, and negatively affect compliance [31]. Furthermore, spacers are not immune from inconsistent medication delivery, which can be caused by electrostatic charge on the aerosol [44–47]. Drug deposits can build up on the walls of plastic spacers and holding chambers, mostly because of electrostatic charge. Aerosols remain suspended for longer within holding chambers manufactured from nonelectrostatic materials than within those made from other materials. Thus, an inhalation might be delayed for 2–5 s without a substantial loss of drug to the walls of metal or nonstatic spacers [45–47]. The electrostatic charge in plastic spacers can be substantially reduced by washing the spacer with a diluted (1 : 5000) household detergent and allowing it to drip dry [14, 48]. There is no consensus on how often a spacer should be cleaned, but recommendations generally range from once a week to once a month [12]. Multiple actuations of a pMDI into a spacer before inhalation also reduce the proportion of drug inhaled [46–50]. Five actuations of a corticosteroid inhaler into a large-volume spacer before inhalation deliver a dose similar to a single actuation into the same spacer inhaled immediately [49].

Figure 2 (a) The Jet open-tube spacer; (b) the AeroChamber Plus holding chamber; (c) the reverse-flow EZSpacer.

## 2.3. Breath-Actuated Metered-Dose Inhaler

Breath-actuated (BA) pMDIs are alternatives to conventional press-and-breathe pMDIs, developed to overcome the problem of poor coordination between pMDI actuation and inhalation [12, 51]. Examples of this type of device include the Autohaler (3M, St. Paul, MN) and the Easi-Breathe (Teva Pharmaceutical Industries Ltd). Breath-actuated pMDIs contain a conventional pressurised canister and have a flow-triggered system driven by a spring, which releases the dose during inhalation so that firing and inhaling are automatically coordinated [12, 51]. These inhalation devices (Table 1) can achieve good lung deposition and clinical efficacy in patients unable to use a pMDI correctly because of coordination difficulties [52]. Errors are less frequent with a BApMDI than with a standard pMDI [17]. Increased use of BApMDIs might improve asthma control and reduce the overall cost of asthma therapy compared with conventional pMDIs [53]. On the negative side (Table 1), BApMDIs do not solve the cold Freon effect and would be unsuitable for a patient who has this kind of difficulty using a pMDI. In addition, these devices require a relatively higher inspiratory flow than pMDIs for triggering. Furthermore, oropharyngeal deposition with breath-actuated pMDIs is as high as that with CFC pMDIs [54]. The Autohaler is a BApMDI that is available with albuterol and beclomethasone in HFA propellant. It has a manually operated lever that, when lifted, primes the inhaler through a spring-loaded mechanism, allowing the aerosol to be dispensed at an inspiratory flow of about 30 L/min.
Clinical studies have demonstrated that the lung deposition of a β-adrenergic bronchodilator administered via the Autohaler is similar to that obtained when the drug is correctly inhaled via a pMDI and greater than that achieved with conventional pMDIs in patients with poor inhalation technique [54]. Moreover, it can be used effectively by patients with poor lung function, patients with limited manual dexterity, and elderly patients [54]. The Easi-Breathe is a patient-triggered inhaler that dispenses albuterol and beclomethasone. The inhaler is primed when the mouthpiece is opened. When the patient breathes in, the mechanism is triggered and a dose is automatically released into the airstream. The inhaler can be actuated at a very low airflow rate of approximately 20 L/min, which is readily achievable by most patients [55]. Not surprisingly, practice nurses have found it easier to teach, and patients easier to use, than a conventional pMDI [55]. In vitro studies have shown that the particle size distribution and percentage of respirable fine particles obtained with the Easi-Breathe device are similar to those obtained with a conventional pMDI [56], although comparative clinical efficacy data are not yet available.

## 2.4. Dry Powder Inhalers

Modern dry powder inhalers were first introduced in 1970, and the earliest models were single-dose devices containing the powder formulation in a gelatin capsule, which the patient loaded into the device prior to use. Since the late 1980s, multidose DPIs have been available, giving the same degree of convenience as a pMDI [58]. Dry powder inhalers (Figure 3) are delivery devices containing drugs in powdered formulations that have been milled to produce micronised particles in the respirable range. These devices allow the particles to be deagglomerated by the energy created by the patient’s own inspiratory flow [58–60]. The powdered drug can be either pure or blended with a large-particle-size excipient (usually lactose) as a carrier powder [58–60]. The empty condition is generally apparent, alerting the patient to the need for replacement. Some DPIs, such as the HandiHaler (Boehringer Ingelheim, Germany) and the Aerolizer (Novartis Pharma, Switzerland), are single-dose devices in which a capsule of powder is perforated in the device by needles fixed to pressure buttons. Other DPIs, such as the Diskus (GlaxoSmithKline, UK) and the Turbuhaler (AstraZeneca, Sweden), have a multidose capacity. These multidose DPIs fall into two main categories (Figure 3): they either meter the dose themselves from a powder reservoir or dispense individual doses premetered into blisters by the manufacturer [58–60]. The Turbuhaler and the Diskus are representatives of the former and latter categories, respectively, although many other designs are presently in development. Innovative DPIs are now available for the treatment of asthma and for the delivery of a range of drugs usually given by injection, such as peptides, proteins, and vaccines. The use of DPIs is expected to increase with the phasing out of CFC production, along with the increased availability of drug powders and the development of novel powder devices [59].

Figure 3 Examples of dry powder inhalers. From [57].

Generally, DPIs have many advantages (Table 1). They are actuated and driven by the patient’s inspiratory flow; consequently, they require neither propellants to generate the aerosol nor coordination of inhaler actuation with inhalation [60].
However, a forceful and deep inhalation through the DPI is needed to deaggregate the powder formulation into small respirable particles as efficiently as possible and, consequently, to ensure that the drug is delivered to the lungs [60–62]. Although most patients are capable of generating enough flow to operate a DPI efficiently [60], the need to inhale forcefully and generate a sufficient inspiratory flow can be a problem for very young children or patients with severe airflow limitation [63]. For this reason, DPIs are not recommended for children under the age of 5 years [60]. Newer active or power-assisted DPIs incorporate battery-driven impellers and vibrating piezoelectric crystals that reduce the need for the patient to generate a high inspiratory flow rate, an advantage for many patients [59, 62]. Drug delivery to the lung ranges between 10% and 40% of the emitted dose for several marketed DPIs [60]. The physical design of the DPI establishes its specific resistance to airflow, measured as the square root of the pressure drop across the device divided by the flow rate through the device; current designs have specific resistance values ranging from about 0.02 to 0.2 √(cm H2O)/(L/min) [61]. To produce a fine powder aerosol with increased delivery to the lung, a DPI characterised as having a low resistance requires an inspiratory flow of >90 L/min, a medium-resistance DPI requires 50–60 L/min, and a high-resistance DPI requires <50 L/min [61]. Of note, DPIs with high resistance tend to produce greater lung deposition than those with lower resistance [61], but the clinical significance of this is not known. Based on these considerations, it is recommended to instruct patients to inhale forcefully and as deeply as possible from the very beginning of inspiration and to continue to inhale for as long as possible [12]. The rationale for these recommendations is that, when using a DPI, inhalation should be forceful enough to disperse the micronised drug from the lactose-based carrier into a fine-particle dose. However, it is not the absolute inspiratory flow that determines the fine-particle dose from an inhaler but the resulting energy, which also depends on inhaler resistance. High air velocities within the inhaler, rather than high airflow through it, are required for effective dispersion. High airflow through the inhaler will lead to increased impaction in the upper airways; thus, fast inhalation should be avoided unless a larger fine-particle fraction compensates for the increased impaction. Furthermore, when using a single-dose DPI, it is also advisable to instruct patients to perform two separate inhalations for each dose [12].
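To make the resistance-flow relationship quoted above concrete: since specific resistance R is defined as the square root of the pressure drop divided by the flow, the pressure drop across the device is (R × Q)^2, and the flow needed to reach a given inspiratory effort is Q = √ΔP / R. In the sketch below, the resistance values and the 4 kPa target pressure drop are illustrative assumptions, chosen so that the resulting flows fall in the ranges cited in the text.

```python
import math

# Specific resistance R of a DPI is defined as sqrt(pressure drop) / flow,
# so the flow needed to reach a given pressure drop is Q = sqrt(dP) / R.
# The R values and the 4 kPa target are illustrative assumptions only.

def flow_for_pressure_drop(resistance: float, dp_kpa: float) -> float:
    """Inspiratory flow (L/min) needed to generate dp_kpa across the device,
    with resistance expressed in sqrt(kPa)/(L/min)."""
    return math.sqrt(dp_kpa) / resistance

TARGET_DP_KPA = 4.0  # an often-assumed comfortable inspiratory effort
for label, r in [("low", 0.017), ("medium", 0.032), ("high", 0.045)]:
    q = flow_for_pressure_drop(r, TARGET_DP_KPA)
    print(f"{label}-resistance DPI (R = {r}): ~{q:.0f} L/min")
# low    -> ~118 L/min (cf. the >90 L/min quoted above)
# medium -> ~63 L/min  (cf. 50-60 L/min)
# high   -> ~44 L/min  (cf. <50 L/min)
```

The inversion makes the clinical point of the passage explicit: a high-resistance device reaches the same dispersing pressure drop at a much lower inspiratory flow.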
Although DPIs offer advantages over pMDIs, they do have some limitations (Table 1) in terms of design, cost-effectiveness, and user-friendliness [60]. For instance, capsule-based DPIs, such as the HandiHaler and the Aerolizer, require that single doses be individually loaded into the inhaler immediately before use. This is inconvenient for some patients and does not allow direct dose counting. In addition, the inhalation manoeuvre has to be repeated until the capsule is empty, which may give rise to underdosing and to high dose variability. Other DPIs are multiple unit dose devices, such as the Diskhaler, or multidose devices, such as the Diskus and the Turbuhaler. Devices of both types lack any triggering mechanism, which makes optimal drug delivery entirely dependent on the individual patient’s uncontrolled inspiratory manoeuvre. Because of variations in the design and performance of DPIs, patients might not use all DPIs equally well. Therefore, DPIs that dispense the same drug might not be readily interchangeable [61]. Studies [59, 60] have also shown that dose emission is reduced when a DPI is exposed to extremes of temperature and humidity; therefore, DPIs should be stored in a cool, dry place.

A recent systematic literature review revealed that up to 90% of patients did not use their DPI correctly [64]. Common errors made by patients were lack of exhalation before inhalation, incorrect positioning and loading of the inhaler, failure to inhale forcefully and deeply through the device, and failure to breath-hold after inhalation [64]. All these errors may lead to insufficient drug delivery, which adversely influences drug efficacy and may contribute to inadequate disease control [64]. It is unsurprising that such a high proportion of patients were unable to use DPIs correctly, as the devices have many inherent design limitations. The Diskhaler, for example, is a multiple unit dose device containing a series of foil blisters on a disk. It is complicated to use, requiring eight steps to effect one correct inhalation; it has been shown that approximately 70% of patients are unable to use it correctly [64]. The disks have to be changed frequently and the device cleaned before refilling. In addition, it provides no feedback to the patient of a successful inhalation, except a sweet taste in the mouth, which may simply be indicative of oral drug deposition. The Turbuhaler, a multidose reservoir device, is the most frequently prescribed DPI, as it produces good deposition of the drug in the lungs provided that a sufficient inspiratory flow (about 60 L/min) is achieved by the patient. However, approximately 80% of patients are unable to use it correctly [64]; common mistakes made with this inhaler are failure to turn the base fully in both directions and failure to keep the device upright until loaded. In addition, owing to its high intrinsic resistance, patients who have a reduced inspiratory flow may encounter problems using this device. The Diskus is another example of a multidose device; it uses a foil strip containing drug blisters. As many as 50% of patients use this DPI incorrectly, and common errors include failure or difficulty in loading the device before inhalation and exhaling into the device [64]. The Diskus has a low intrinsic resistance but, like the Turbuhaler, lacks any triggering mechanism, making optimal drug delivery entirely dependent on the individual patient’s uncontrolled inspiratory manoeuvre [64]. Additionally, as with other DPI devices employing drug blisters, incomplete emptying of the metered dose may occur, which could reduce the amount of drug delivered to the lung and hence reduce clinical efficacy [64].

## 2.5. Nebulisers

Various types of nebulisers are available on the market, and several studies have indicated that performance varies between manufacturers and also between nebulisers from the same manufacturer [65–67]. There are two basic types of nebulisers (Figure 4): the pneumatic (jet) nebuliser and the ultrasonic nebuliser [65–67].
Jet nebulisers generate aerosol particles as a result of the impact between a liquid and a jet of high-velocity gas (usually air or oxygen) in the nebuliser chamber. In a jet nebuliser, the driving gas passes through a very narrow hole from a high-pressure system. At the narrow hole, the pressure falls and the gas velocity increases greatly, producing a cone-shaped front. This front passes at high velocity over the end of a narrow liquid feed tube or concentric feeding system, creating a negative pressure at this point. As a result of this fall in pressure, liquid is sucked up by the Bernoulli effect and is drawn out into fine ligaments. The ligaments then collapse into droplets under the influence of surface tension. The majority of the liquid mass produced during this process is in the form of large (15–500 μm) nonrespirable droplets. Coarse droplets impact on the baffles, leaving an aerosol of small respirable particles, while smaller droplets may be inhaled or may land on internal walls and return to the reservoir for renebulisation [65–67]. Baffle design therefore has a critical effect on droplet size. Concentric liquid feeds minimise blockage by residual drug build-up with repeated nebulisation. A flat pick-up plate may allow some nebulisers to be tilted during treatment whilst maintaining liquid flow from the reservoir. A flow of 6–8 L/min and a fill volume of 4-5 mL are generally recommended, although some nebulisers are specifically designed for a different flow and a smaller or larger fill volume [68]. The volume of some unit-dose medications is suboptimal; ideally, saline should be added to bring the fill volume to 4-5 mL, but this might not be practical. The longer nebulisation time associated with a greater fill volume can be reduced by increasing the flow used to power the nebuliser; however, increasing the flow decreases the droplet size produced. The dead volume is the volume trapped inside the nebuliser, typically 0.5–1 mL. To reduce the dead volume, clinicians and patients commonly tap the nebuliser periodically during therapy in an effort to increase output [69]. Therapy may also be continued past the point of sputtering in an attempt to decrease the dead volume, but this is unproductive and not recommended [70]. Because of evaporative loss within the nebuliser, the solution becomes increasingly concentrated and cools during nebulisation.

Figure 4 Components of a jet (a) and an ultrasonic (b) nebuliser. Modified from O'Callaghan and Barry [65].

There are four designs of jet nebuliser: the jet nebuliser with reservoir tube, the jet nebuliser with collection bag, and the breath-enhanced and breath-actuated jet nebulisers [65–67]. Both the breath-enhanced and breath-actuated jet nebulisers are modifications of the “conventional” jet nebuliser specifically designed to improve efficiency by increasing the amount of aerosol delivered to the patient, with less wastage of aerosol during exhalation. The different types of jet nebulisers have different output characteristics, determined by the design of the air jet and capillary tube orifices, their geometric relationship with each other, and the internal baffles; for a given design, the major determinant of output is the driving pressure [65–67]. The jet nebuliser with reservoir tube provides continuous aerosol throughout the breathing cycle, releasing aerosol to ambient air during exhalation and whenever the patient is not breathing.
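As a rough mass balance, the fill-volume and dead-volume figures quoted above can be combined with the fraction of the breathing cycle spent inhaling to show why so little of the charged dose reaches a patient using a reservoir-tube design. All numbers in the sketch are illustrative assumptions.

```python
# Rough mass balance for a conventional jet nebuliser with reservoir tube.
# Fill and dead volumes follow the figures quoted in the text; the duty
# cycle (fraction of the breathing cycle spent inhaling) is an assumption.

FILL_ML = 4.0      # charged volume of drug solution
DEAD_ML = 1.0      # volume trapped in the nebuliser at the end of therapy
DUTY_CYCLE = 0.33  # assumed inspiratory fraction of the breathing cycle

nebulised_fraction = (FILL_ML - DEAD_ML) / FILL_ML  # fraction aerosolised
inhaled_fraction = nebulised_fraction * DUTY_CYCLE  # continuous output is
                                                    # wasted during exhalation
print(f"Aerosolised: {nebulised_fraction:.0%} of the charged dose")  # 75%
print(f"Inhaled (before airway losses): {inhaled_fraction:.0%}")     # ~25%
```

This back-of-envelope figure is of the same order as the inhaled fraction of no more than 20% reported for such designs.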
Consequently, no more than 20% of the emitted aerosol is inhaled [65–67]. The jet nebuliser with collection bag generates aerosol by continuously filling a collection bag that acts as a reservoir. The patient inhales aerosol from the reservoir through a one-way inspiratory valve and exhales to the environment through an exhalation port between the one-way inspiratory valve and the mouthpiece. The breath-enhanced jet nebuliser (e.g., the PARI LC Plus; PARI GmbH) uses two one-way valves to prevent the loss of aerosol to the environment. When the patient inhales, the inspiratory valve opens and aerosol vents through the nebuliser; exhaled aerosol passes through an expiratory valve in the mouthpiece. Breath-actuated jet nebulisers are designed to increase aerosol delivery to the patient by means of a breath-actuated valve that triggers aerosol generation only during inspiration. Both the breath-enhanced and breath-actuated nebulisers increase the amount of inspired aerosol and shorten nebulisation time compared with “conventional” jet nebulisers [65]. Recently, adaptive aerosol delivery nebulisers (the HaloLite and the Prodose) have been developed to reduce the variability of the delivered dose and the waste of aerosol to the environment and to facilitate monitoring of patient compliance with therapy [71–73]. By monitoring pressure changes relative to flow over the first three breaths, these delivery systems establish the shape of the breathing pattern and then use this to provide a timed pulse of aerosol during the first 50% of each tidal inspiration. Monitoring of the breathing pattern continues throughout the delivery period, and any changes in breathing pattern are taken into account during the remainder of the delivery period. Furthermore, if no inhalation is registered, the system ceases delivery until the patient recommences breathing on the system [71–73]. Since the pulsed dose is provided only in the first 50% of each breath and the software can calculate the amount of drug given per pulse, the precise dose of drug can be delivered before the system stops [71–73].
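A schematic sketch of this pulse-dosing logic is given below. The function name, aerosol output rate, and timings are illustrative assumptions; the point is only that pulsing aerosol during the first half of each inspiration, and knowing the dose per pulse, lets the device stop exactly when the prescribed dose has been delivered.

```python
# Schematic sketch of adaptive-aerosol-delivery pulse dosing: aerosol is
# pulsed only during the first 50% of each inspiration, and delivery stops
# once the prescribed dose has been reached. All values are illustrative.

def breaths_to_deliver(target_dose_ug: float,
                       inspiratory_time_s: float,
                       output_ug_per_s: float) -> int:
    """Number of breaths needed to deliver target_dose_ug."""
    pulse_time_s = 0.5 * inspiratory_time_s      # first half of inspiration
    dose_per_pulse = pulse_time_s * output_ug_per_s
    delivered, breaths = 0.0, 0
    while delivered < target_dose_ug:
        # One pulse per breath; a real device pauses here if no inhalation
        # is detected and resumes when the patient breathes again.
        delivered += min(dose_per_pulse, target_dose_ug - delivered)
        breaths += 1
    return breaths

# e.g. 1000 ug prescribed, 1.5 s inspirations, 25 ug/s aerosol output:
print(breaths_to_deliver(1000, 1.5, 25))  # -> 54 breaths (18.75 ug per pulse)
```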
Ultrasonic nebulisers use a rapidly (>1 MHz) vibrating piezoelectric crystal to produce aerosol particles [65–67]. Ultrasonic vibrations from the crystal are transmitted to the surface of the drug solution, where standing waves are formed. Droplets break free from the crests of these waves and are released as aerosol. The size of the droplets produced by an ultrasonic nebuliser is related to the frequency of oscillation [65–67]. Although ultrasonic nebulisers can nebulise solutions more quickly than jet nebulisers, they are not suitable for suspensions, and the piezoelectric crystal can heat the drug being aerosolised. A relatively new ultrasonic nebuliser technology is represented by the vibrating mesh nebulisers [12, 74, 75]. These new-generation nebulisers are either active or passive systems. In active devices (e.g., the eFlow; PARI GmbH), the aperture plate vibrates at a high frequency and draws the solution through the apertures in the plate. In passively vibrating mesh devices (e.g., the MicroAir; Omron Healthcare), the mesh is attached to a transducer horn, and vibrations of the piezoelectric crystal, transmitted via the transducer horn, force the solution through the mesh to create an aerosol. The eFlow is designed to be used either with a very low residual volume, to reduce drug waste, or with a relatively large residual volume, so that it can replace conventional jet nebulisers at the same fill volume [76]. Vibrating mesh devices have a number of advantages over other nebuliser systems: they have greater efficiency, precision, and consistency of drug delivery, and they are quiet and generally portable [74, 75]. However, they are also significantly more expensive than other types of nebulisers and require a significant amount of maintenance and cleaning after each use, to prevent build-up of deposits and blockage of the apertures (especially when suspensions are aerosolised) and to prevent colonisation by pathogens [75]. They are currently most widely used for the treatment of patients with cystic fibrosis [77].

Generally, mouthpieces are employed during nebuliser delivery. However, facemasks may be necessary for the treatment of acutely dyspnoeic patients or uncooperative patients, such as infants and toddlers [78]. The facemask is not just a connector between the device and the patient: principles of mask design differ depending on the device [78]. For example, a valved holding chamber with facemask must have a tight seal to achieve optimal lung deposition [78]. In contrast, the facemask for a nebuliser should not incorporate a tight seal but should have vent holes to reduce deposition on the face and in the eyes [79, 80]. Improvements in facemask design provide greater inhaled mass while reducing facial and ocular deposition [78]. Often, when a patient does not tolerate the facemask, practitioners employ the “blow-by” technique, which simply directs the aerosol from the mouthpiece towards the patient’s nose and mouth. However, there are no data to indicate that this is an effective method of delivering aerosol to the lungs, and its use is therefore not recommended [12].

Unlike pMDIs and DPIs, no special inhalation technique is needed for optimum delivery with conventional nebulisers; tidal breathing with only occasional deep breaths is sufficient (Table 1). Thus, for patients who are unable to master the proper pMDI technique despite repeated instruction, the proper use of a nebuliser probably improves drug delivery. However, nebulisers have some distinct disadvantages. Patients must load the device with medication solution for each treatment, and bacterial contamination of the reservoir can cause respiratory infection [65–67], making regular cleaning important. Also, nebuliser treatments take longer than treatments with pMDIs and DPIs (10–15 min for a jet nebuliser, about 5 min for an ultrasonic or mesh nebuliser). Although they are relatively portable, a typical jet nebuliser must be plugged into a wall outlet or power adaptor and thus cannot easily be used in transit.

## 2.6. Soft Mist Inhalers

The development of soft mist inhalers (SMIs) has opened up new opportunities for inhaled drug delivery. Technically, these inhalation devices fall within the definition of a nebuliser, as they transform an aqueous solution into liquid aerosol droplets suitable for inhalation. However, in contrast to traditional nebuliser designs, SMIs are hand-held multidose devices with the potential to compete with both pMDIs and DPIs in the portable inhaler market. At present, the only SMI marketed in some European countries is the Respimat inhaler (Boehringer Ingelheim; Figure 5). This device does not require propellants, since it is powered by the energy of a compressed spring inside the inhaler. Individual doses are delivered via a precisely engineered nozzle system as a slow-moving aerosol cloud (hence the term “soft mist”) [81].
Scintigraphic studies have shown that, compared with a CFC-based pMDI, lung deposition is higher (up to 50%) and oropharyngeal deposition is lower [81]. The Respimat is a “press and breathe” device, and the correct inhalation technique closely resembles that used with a pMDI. Although coordination between firing and inhaling is still required, the aerosol emitted from the Respimat is released very slowly, with a velocity approximately four times lower than that observed with a CFC-driven pMDI [81]. This greatly reduces the potential for drug impaction in the oropharynx. In addition, the relatively long duration over which the dose is expelled from the Respimat (about 1.2 s, compared with 0.1 s from pMDIs) would be expected to greatly reduce the need to coordinate actuation and inspiration, thus improving the potential for greater lung deposition. Although the Respimat has been used relatively little in clinical practice to date, clinical trials seem to confirm that drugs delivered by this inhaler are effective in correspondingly smaller doses in patients with obstructive airway disease [82].

Figure 5 The Respimat soft mist inhaler. From [81].

## 3. Choice of an Inhaler Device for Asthma Therapy

Drug choice is usually the first step in prescribing inhaled therapy for asthma and, together with availability and reimbursement criteria, dictates the inhaler delivery options. The next two steps, choice of inhaler device type and patient training in the use of the inhaler, are hampered by the lack of robust evidence or effective tools to aid healthcare professionals [9, 10, 83]. Meta-analyses regarding the selection of aerosol delivery systems for acute asthma have concluded that short-acting beta-agonists delivered via either a nebuliser or a pMDI with holding chamber are essentially equivalent [84–88]. More than 100 inhaled device-drug combinations are currently available for the treatment of asthmatic patients [57]. The number is likely to increase with the development of analogue inhaled drugs delivered by relatively low-cost pMDIs and DPIs. Consequently, the level of confusion experienced by clinicians, nurses, and pharmacists when trying to choose the most appropriate device for each patient has increased. Thus, physicians’ experience is amongst the most important factors influencing inhaler choice in asthma therapy. In fact, inhalers are often prescribed on an empirical rather than an evidence-based basis. Following their own experience, doctors are much more likely to prescribe the familiar inhaler they have always prescribed rather than newer, improved inhalers entering the market.

Current asthma management guidelines give some guidance on the class of inhaler to prescribe to children, but they offer nonspecific advice regarding inhaler choice for adult patients. The GINA guidelines [1] recommend pMDIs with spacer and facemask for children younger than 4 years (or pMDIs with spacer and mouthpiece for those aged 4–6 years) and, in addition to pMDIs alone, DPIs or BAMDIs for children older than 6 years. For adults, however, the same guidelines state only that inhalers should be portable and simple to use, should not require an external power source, should require minimal cooperation and coordination, and should have minimal maintenance requirements [1]. The British Thoracic Society guidelines [2] also include the patient’s preference and ability to use the device correctly.
However, this advice relating to patient preference is not supported by any evidence that patients will correctly use an inhaler that they like. Criteria to be considered when choosing an inhaler device differ depending on the audience addressed [89]. From the viewpoint of the inhalation technologist, consistent and safe dosing, sufficient drug deposition, and clinical effect guide the inhaler choice. The patient’s ability to inhale through the device, the intrinsic airflow resistance of the device, and the degree of dependence of drug release on inspiratory airflow variability are all important determinants when considering consistency of dosing [89]. From the point of view of the clinician, clinical efficacy and safety should be the most important determinants when choosing an inhaler [89]. However, in the real world, clinical efficacy must be balanced against cost-effectiveness, and inhalers with insufficient performance may be prescribed simply because they are cheap. Patients’ preferences and acceptance of the inhaler should also be considered when deciding on a specific device, since these have major implications for compliance.

Several general principles of inhaler selection and use have recently been addressed in an evidence-based systematic review by a joint committee of the American College of Chest Physicians and the American College of Asthma, Allergy and Immunology [13]. The bottom line of this document was that each of the aerosol devices can work equally well in various clinical settings with patients who can use them properly [13]. In addition, pMDIs are convenient for delivering a wide variety of drugs to a broad spectrum of patients. For patients who have trouble coordinating inhalation with inhaler actuation, the use of a spacer may obviate this difficulty, though most of these devices are cumbersome to store and transport [13]. The use of a spacer, however, is mandatory for infants and young children. Dry powder inhalers are usually easier for patients to handle than pMDIs, and a growing number of drug types are available in several DPI formats [13]. The key issue for dry powder inhalation is the minimum inspiratory flow rate below which deagglomeration is inefficient, resulting in a reduced delivered dose. The most severely ill patients and the very young may not be candidates for a DPI. A nebuliser could be used as an adequate alternative to a pMDI with a spacer by almost any patient in a variety of clinical settings, from the home to the intensive care unit [13]. However, nebulisers are more expensive, more cumbersome, and relatively time-consuming to use compared with hand-held inhalers. These attributes should limit the use of nebulisers where their effect can be matched by hand-held devices, which is the case in almost all clinical settings. The findings of this document should not be interpreted to mean that the device choice for a specific patient does not matter. Rather, it simply says that each of the devices studied can work equally well in patients who can use them correctly. However, the review does not provide much information about who is likely to use one device or another properly, nor does it address many other considerations that are important for choosing a delivery device for a specific patient in a specific clinical situation.
These include the patient’s ability to use the device, patient preference, and the availability and cost of equipment. More recently, Chapman and coworkers [90] proposed an algorithmic approach to inhaler selection that considers the patient’s ability to generate an inspiratory flow >30 L/min, to coordinate inhaler actuation with inspiration, and to prepare and actuate the device (Table 2).

Table 2 Choice of inhaler devices according to the patient's inspiratory flow and ability to coordinate inhaler actuation and inhalation. Modified from Chapman et al. [90].

| Good hand-lung coordination, flow > 30 L/min | Good hand-lung coordination, flow < 30 L/min | Poor hand-lung coordination, flow > 30 L/min | Poor hand-lung coordination, flow < 30 L/min |
| --- | --- | --- | --- |
| pMDI | pMDI | pMDI + spacer | pMDI + spacer |
| BAMDI | Nebuliser | BAMDI | Nebuliser |
| DPI | SMI | DPI | SMI |
| Nebuliser |  | Nebuliser |  |
| SMI |  | SMI |  |

pMDI: pressurised metered-dose inhaler; BAMDI: breath-actuated metered-dose inhaler; DPI: dry powder inhaler; SMI: soft mist inhaler.

When choosing an inhaler for children, it is essential that the individual child receives the instructions and training necessary for the management of the disease [91]. Furthermore, the child should be prescribed the correct medication tailored to the severity of the disease, and, most importantly, the prescribed inhaler should suit the individual needs and preferences of the child [91]. Contrary to general opinion, using an inhaler may be difficult for children [91]; many children with asthma use their inhaler incorrectly, which may result in unreliable drug delivery, even after instruction and training in correct inhalation. In addition, previous inhalation instruction may be forgotten; therefore, training should be repeated regularly to maintain correct inhalation technique in children with asthma [91].

## 4. Education and Instruction

Successful asthma management is 10% medication and 90% education [92]. Asthma education empowers patients to manage their disease and increases their awareness of danger signs [93]. Patients with a positive attitude towards controlling their asthma are more likely to adhere to therapy [94]. Regular medical review provides an opportunity to raise patients’ expectations, helps them understand how to monitor their asthma, and increases awareness of possible factors, such as poor inhaler technique, that may prevent them from attaining control [93]. A key challenge in many practice situations is the allocation of personnel and time for patient training in inhaler technique, although the upfront investment of time in proper training could later save time and resources and spare patients the harm of asthma left uncontrolled by poor inhaler technique. The conventional wisdom is that training patients to use inhalers is time-consuming. However, in one study, training sessions provided by pharmacists took an average of only 2.5 min and were shown to improve asthma outcomes [95]. The “trainer” must know the proper technique, including refinements to optimise inhaler therapy for each device type prescribed. However, the healthcare professionals involved often have not mastered inhalation technique themselves [96, 97] and are not sufficiently aware of handling difficulties with devices other than pMDIs [98]. Furthermore, only 2 of 40 medical textbooks examined include a simple list of steps for proper pMDI use [11]. For this reason, studies have examined educational interventions designed to “train the trainer” and improve healthcare professionals’ inhaler competence.
It has been demonstrated that a single education session improves medical residents’ inhaler knowledge and skills [99]. Another study demonstrated that pharmacists who participated in a single-session education workshop showed significantly better knowledge and skills than a control group and that this knowledge was retained at a high level [100]. The best person to provide inhaler training (physician, nurse, or pharmacist) will vary by practice situation. Another option is to enlist the aid of lay educators (e.g., other patients) to provide support and training. In all cases, adequate time and resources must be allotted for the training sessions. Successful training in inhaler technique depends upon effective communication of proper technique and its purpose, and monitoring to ensure that the skills have been learned and retained [101]. Of all the training approaches possible, personal or small-group demonstration has so far proven most effective [102, 103]. Other training methods for inhaler use include written instructions, illustrations, audio-visual demonstrations, and internet-based, interactive, multimedia tutorials, the latter representing a promising new low-cost and time-saving mechanism for educating both patients and healthcare professionals [104]. However, their value must not be overestimated, as a substantial proportion of patients still have incorrect inhalation technique despite several training sessions [105]. Periodic retraining is needed as inhaler technique deteriorates with time [104, 106]. Special provision should be made for the elderly, who may have more trouble learning good inhaler technique and a greater tendency to forget it, while small children may require a particular teaching environment to hold their attention [107, 108]. Intuitively, therapeutic success will be more likely if patients are prescribed a device that they have chosen, are happy with, and can use well. Although use of a single type of device to deliver all medications is not always practicable, it is preferable, since coping with a variety of devices increases the likelihood of error [109].

A visual evaluation by healthcare professionals is subjective but important in assessing inhaler preparation and the mechanics of inhaler handling by the patient. Indeed, in real life, patients make many errors with their usual inhalation device that may negate the benefits observed in clinical trials. A checklist to identify critical errors, which are those compromising treatment efficacy, could be applied here, as outlined by Molimard and Le Gros [110]. Examples of currently available tools to objectively check and maintain the correct inhalation pattern include the Aerosol Inhalation Monitor (Vitalograph Ltd., Buckingham, UK) and 2Tone Trainer (Canday Medical Ltd., Newmarket, UK) for MDIs and the In-Check Dial (Clement Clarke International, Harlow, UK) for DPIs [111, 112]. These tools can provide an objective evaluation of the inhalation profile but cannot assess the patient’s preparation and handling of their device.

## 5. ADMIT Recommendations

Many physicians in Europe are fully aware of the difficulties that patients have using prescribed inhaler devices correctly and the negative impact that this may have on asthma control.
The Aerosol Drug Management Improvement Team (ADMIT), a consortium of European respiratory physicians (respiratory specialists, general practitioners, and paediatricians) with a common interest in promoting excellent delivery of inhaled drugs, was formed with the remit of examining ways to improve treatment of obstructive airway disease in Europe [57]. ADMIT recommends that instructions for correct inhalation technique for each inhaler device currently on the market should be compiled by an Official Board, with the instructions made readily accessible on the web. Local asthma associations and patient groups could also be involved in promoting the importance of correct inhalation technique and in teaching and reinforcing it. Information could be disseminated through dedicated literature, school visits by healthcare professionals and pharmacists, and patient advocacy groups. Other ADMIT recommendations are summarised as follows.

Recommendations from the Aerosol Drug Management Improvement Team (ADMIT) for the choice and correct usage of inhalers (DPI, dry powder inhaler; PIF, peak inspiratory flow; pMDI, pressurised metered-dose inhaler; modified from Crompton et al. [8]):

(i) Inhalers should be matched to the patient as much as possible.
(ii) In young children, pMDIs should always be used with a spacer device.
(iii) An alternative to a pMDI should be considered in elderly patients with a mini-mental test score <23/30 or an ideomotor dyspraxia score <14/20, as they are unlikely to have correct inhalation technique through a pMDI.
(iv) The patient’s PIF values should be considered before DPI prescription. Patients with severe airflow obstruction, children, and the elderly would benefit from an inhaler device with a low airflow resistance.
(v) Before prescribing a DPI, check that the patient can inhale deeply and forcibly at the start of the inspiration, as the airflow profile affects the particle size produced and hence drug deposition and efficacy.
(vi) Where possible, one patient should have one type of inhaler.
(vii) Establish an Official Board to compile instructions for correct inhalation technique for each inhaler device currently on the market.
(viii) Instructions for correct inhaler use should be made readily accessible on a dedicated web site.
(ix) Training in correct inhalation technique is essential for patients and healthcare professionals.
(x) Inhalation technique should be checked and reinforced at regular intervals.
(xi) Teaching of correct inhalation technique should be tailored to the patient’s needs and preferences: group instruction in correct inhalation technique appears to be more effective than personal one-to-one instruction and equally effective as video instruction; younger patients may benefit more from multimedia teaching methods; elderly patients respond well to one-to-one tuition.

ADMIT has also proposed a practical algorithm (Figure 6) to improve the instruction given to patients regarding optimal use of their inhalers. At each consultation, the physician should establish the patient’s level of symptoms and control, ideally using a composite measure such as the GINA control assessment [1]; if the asthma has been well controlled for at least 3 months, therapy should be stepped down gradually according to treatment guidelines. Conversely, if the patient answers “no” to any of the checklist questions, then compliance and aggravating (trigger) factors should be assessed. Most importantly, inhalation technique should be assessed.
If the patient is unable to use a particular inhaler correctly despite repeated attempts, a change in inhaler device should be considered. In cases where uncontrolled asthma persists in the face of correct inhaler technique, asthma therapy should be stepped up according to the treatment guidelines and another appointment scheduled in order to check symptoms.

Figure 6: Asthma therapy adjustment flow chart. From [8].

## 6. Conclusions

The prevalence of asthma is continuing to rise throughout the world, particularly amongst children. Despite the implementation of both national and international guidelines and the widespread availability of effective pharmacological therapy, asthma is frequently uncontrolled and still may cause death. The reasons for this anomaly are numerous. First, the guidelines themselves are complex, and their excessive length makes them difficult for most physicians to absorb and utilise. Secondly, patients frequently do not adhere to their treatment regimen for a variety of reasons, including incorrect use of the inhaler and underestimation of disease severity. Indeed, asthma severity is often misclassified in the first instance and inappropriate or insufficient therapy prescribed. Finally, although guidelines agree on the most appropriate therapy to control asthma, detail on the method by which this therapy is delivered to the lungs is often lacking.

To date, advancement in asthma management has been pharmacologically driven rather than device driven. Since it is likely that in the future inhaled bronchodilators and corticosteroids will remain the cornerstone of asthma therapy, development of inhaler devices may become more important than development of new drugs. In the past 10–15 years, several innovative developments have advanced the field of inhaler design. Although many inhalers incorporate features providing efficient aerosol delivery for asthma treatment, there is no perfect inhaler and each has advantages and disadvantages, but there is increasing recognition that a successful clinical outcome is determined as much by the choice of an appropriate inhaler device as by the drugs that go into it. Drug delivery from all inhaler devices depends on how the patient prepares the device and then inhales from it. Problems with drug delivery have been identified owing to inappropriate use of inhaler devices, particularly pMDIs, where patients need to coordinate inhaler actuation with inspiration. However, as inhalation is likely to remain the delivery route of choice for the foreseeable future, there is a need to develop inhaler devices which are easy to use and deliver a consistent dose of drug to the lungs, which may improve patient compliance with treatment, leading to better control of asthma. There is evidence that a patient is most likely to correctly use an inhaler that he or she prefers, and each patient’s choice of device will be determined by individual perceptions of how its advantages and disadvantages balance out. This decision could be quite different to the judgment of a prescriber or a formulator, who may give more weight to technical points. The choice of an inhaler device should therefore take into account the likelihood that the patient will be able to use a particular device correctly, cost-effectiveness, preference, and likely compliance.

Continued and repeated education of both healthcare professionals and patients in correct inhalation technique is essential, and the results should be checked at regular intervals by a member of the medical staff.
Substantial changes in educational efforts are clearly required and should be particularly addressed towards the general practitioner and asthma nurse, who in turn teach patients how to use their inhaler correctly. Finally, it is important to remember that continually changing inhaler devices which deliver the same drug is not the answer, as patients lose confidence in both the device and the drug and compliance with therapy drops. An inhaler should only be prescribed with the absolute certainty that the patient can use it correctly. It should be stressed that once a patient is familiar with and stabilised on one type of inhaler, they should not be switched to new devices without their involvement and without follow-up education on how to use the device properly. A recent study has shown that asthma control deteriorates if an inhaler is substituted for a different device at the prescribing or dispensing stage without involving the patient [113]. Prescribers should be especially vigilant on this point in order to avoid changes to the type of device their patients receive through the pharmacy.

---
*Source: 102418-2013-08-05.xml*
2013
# Astroglia-Microglia Cross Talk during Neurodegeneration in the Rat Hippocampus

**Authors:** Montserrat Batlle; Lorenzo Ferri; Carmen Andrade; Francisco-Javier Ortega; Jose M. Vidal-Taboada; Marco Pugliese; Nicole Mahy; Manuel J. Rodríguez
**Journal:** BioMed Research International (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/102419

---

## Abstract

Brain injury triggers a progressive inflammatory response supported by a dynamic astroglia-microglia interplay. We investigated the progressive chronic features of the astroglia-microglia cross talk, from the perspective of neuronal effects, in a rat model of hippocampal excitotoxic injury. N-Methyl-D-aspartate (NMDA) injection triggered a process characterized within 38 days by atrophy, neuronal loss, and a fast astroglia-mediated S100B increase. The microglial reaction varied with lesion progression. It presented a peak of tumor necrosis factor-α (TNF-α) secretion at one day after the lesion and a transient YM1 secretion within the first three days. Microglial glucocorticoid receptor expression increased up to day 5, before returning progressively to sham values. To further investigate the astroglial role in the microglial reaction, we performed transient astroglial ablation with L-α-aminoadipate concomitantly with the NMDA-induced lesion. We observed a striking maintenance of neuronal death associated with an enhanced microglial reaction and proliferation, increased YM1 concentration, and decreased TNF-α secretion and glucocorticoid receptor expression. S100B reactivity only increased after astroglial recovery. Our results argue for an initial neuroprotective microglial reaction, with a direct astroglial control of the microglial cytotoxic response. We propose the recovery of the astroglia-microglia cross talk as a tissue priority conducted to ensure a proper cellular coordination that curtails brain damage.

---

## Body

## 1. Introduction

Injury to the central nervous system, including stroke and traumatic brain injury, induces excitotoxic neuronal death that triggers a potent inflammatory response with a dynamic astroglial and microglial reaction that determines neuronal fate [1, 2]. Following the injury, all cells present a metabolic reprogramming to cover the bioenergetic and substrate demand for the trophic/inflammatory processes to take place [3], with various factors coexisting.

An excessive production of proinflammatory factors from activated microglia, such as tumor necrosis factor-α (TNF-α), interleukin-1β, and reactive oxygen species, may trigger or exacerbate neuronal death [4]. Microglia do not constitute a uniform cell population and show a range of phenotypes that are closely related to the evolution of the damaging process [5, 6]. Thus, their control will directly influence the tissue outcome [7]. These phenotypes range from the well-known proinflammatory activation state to a trophic one involved in cell repair and extracellular matrix remodelling [8]. In neurodegeneration, for example, microglia show inflammatory and neuroprotective properties [6] associated with the expression of YM1, a secretory protein related to neuroregeneration [9, 10].
In addition, as some microglial cells become increasingly dysfunctional, they may directly participate in the development of neurodegeneration [11].

Whether microglia adopt a phenotype that mostly exacerbates tissue injury, through TNF-α, interleukin-1β, and reactive oxygen species formation [4], or one that promotes brain repair, through the expression of YM1 and other neuroprotective factors [9, 10], likely depends on the diversity of signals from the lesion environment, especially at the quadripartite synapse level [12]. For instance, astroglial chemokines have an influence upon microglia/macrophage activation in multiple sclerosis, with CCL2 (MCP-1) and CXCL10 (IP-10) directing reactive gliosis [13]. Although a clear account of this dynamic relationship has yet to be proposed, the astrocyte-microglia interplay might determine the phenotype that microglia adopt during neurodegeneration.

Steroid hormones may modulate the microglial response to injury, although the results are still controversial. Studies have shown that glucocorticoids regulate peripheral immune responses and have CNS anti-inflammatory properties, but they also appear to exert proinflammatory effects that exacerbate excitotoxicity and cerebral damage (see [14] for a review). This observed variability may be related to an acute or chronic CNS effect, with the initial anti-inflammatory modulation being eventually followed by an exacerbation of cerebral injury [14]. If this hypothesis is correct, it would be possible to modulate the cytotoxic and neuroprotective activity of microglia through acute or chronic activation of the glucocorticoid receptor (GR), the most abundant steroid hormone receptor found in microglia [15].

Astroglial S100B is one of the factors that control microglial activity. Astrocytes release S100B constitutively [16] and increase this release upon stimulation by several factors, including TNF-α [17]. Under normal conditions, released S100B acts as a neurotrophic factor, countering the stimulatory effect of neurotoxins on microglia [18] and stimulating glutamate uptake [19]. By contrast, at high concentrations S100B binds the Receptor for Advanced Glycation End products (RAGE), which might mediate microglial activation in the course of brain damage [20]. Thus, secreted S100B participates in astrocyte-microglia cross talk, with an important role in the initial phase of brain insults.

The purpose of the present study was to investigate the relationship between microgliosis and astrogliosis in an in vivo model of hippocampal injury that triggers chronic neuroinflammation [12]. To this end, we undertook a time-course study between day 1 and day 38 of a hippocampal N-methyl-D-aspartate (NMDA) stereotaxic-induced lesion, and we characterized the neuronal loss and the astroglial and microglial reactions. We quantified TNF-α, YM1, the 18 kDa translocator protein (TSPO, or peripheral benzodiazepine receptor, PBR [21]), and GR at different postlesion times to estimate the state of microglial activity. Then, to determine the relationship between astrogliosis and microglial activation, we coinjected NMDA with L-α-aminoadipate (α-AA), a specific astrotoxin used to investigate astroglial participation in several paradigms (e.g., [22–25]). α-AA induces a transient astroglial ablation, which is associated with a microglial reaction that persists over several days [24, 26, 27].
The sequence of events described in vivo after α-AA stereotaxic microinjection includes astroglial degeneration for 1–3 days, microglial invasion, and, finally, astroglial recovery [24, 26].

## 2. Materials and Methods

### 2.1. Animals

Adult male Wistar rats (body weight 200–225 g at the beginning of the study) were obtained from the animal housing facilities at the School of Medicine (Universitat de Barcelona). They were kept on a 12 h/12 h day/night cycle and housed with free access to food and water. Animals were manipulated according to European legislation (86/609/EEC) and all efforts were made to minimize the number of animals used and their suffering. The number of animals to be included in each group was statistically estimated to be 6 rats/group (tolerance interval ±0.9, confidence level 95%). Procedures were approved by the Ethics Committee of the Universitat de Barcelona, in accordance with the regulations established by the Catalan government (Generalitat de Catalunya).

### 2.2. Chemicals

NMDA, α-AA, the mouse monoclonal anti-glial fibrillary acidic protein (GFAP) antibody, and the biotin-conjugated isolectin B4 (IB4) from Bandeiraea simplicifolia were all purchased from Sigma (St. Louis, MO). The mouse monoclonal anti-NeuN antibody was purchased from Chemicon (Temecula, CA), the mouse anti-rat CD11b antibody (clone MRC OX-42) was from Serotec Ltd. (Oxford, UK), and the rabbit polyclonal anti-S100B antibody was from DAKO (Dako Diagnostics, Barcelona, Spain). The goat polyclonal AMCase (M-19) antibody, which specifically binds the secretory protein YM1, was from Santa Cruz Biotechnology (Santa Cruz, CA), as was the rabbit polyclonal anti-GR antibody. Secondary antibodies and immunohistochemical reagents were from Sigma. [3H]PK-11195 was purchased from Perkin-Elmer (Boston, MA). [3H]Corticosterone was from Amersham Bioscience (Bucks, UK) and RU-28362 was purchased from Sigma. Dodecyl sulphate-polyacrylamide gel electrophoresis standards were purchased from Bio-Rad (Hercules, CA), Immobilon-P membranes were from Millipore (Bedford, MA), and ECL Plus Western blotting reagent was from Amersham Bioscience (Bucks, UK). The murine TNF-α ELISA development kit was purchased from PeproTech (Paris, France).

### 2.3. Stereotaxic Procedure and Labeling of Proliferating Cells

Under equithesin anesthesia (a mixture of chloral hydrate and sodium pentobarbitone; 0.3 mL/100 g body wt, i.p.), rats were placed in a stereotaxic instrument (David Kopf, Carnegie Medicin, Sweden) with the incisor bar set at −3.3 mm. According to the atlas of Paxinos and Watson [28], the stereotaxic coordinates for the hippocampal microinjection were 3.3 mm caudal to bregma, 2.2 mm lateral to bregma, and 2.9 mm ventral from dura. A 5.0 μL Hamilton syringe driven by an infusion pump (CMA/100; Carnegie Medicin, Sweden) was used for the intracerebral injection into the hippocampal parenchyma. In all injections 0.5 μL was infused over 5 min as previously described [29].

Animals received a single injection of either 50 mM phosphate-buffered saline (PBS; pH 7.4) (sham group), 20 nmol NMDA in 50 mM PBS (NMDA group), 6.4 nmol α-AA (α-AA group), or 20 nmol NMDA plus 6.4 nmol α-AA (NMDA + α-AA group). At five different postlesion times (1, 3, 5, 15, and 38 days), a total of 60 rats, 12 for each group (sham, α-AA, NMDA, and α-AA + NMDA) and 12 control rats (with no treatment, assigned as 0 days), were anaesthetized and decapitated. Half of the animals (6 rats/group) were used for biochemical studies and the other half were used for histological approaches.
The NMDA dose was chosen according to previous studies [29]. As it was used at low concentrations, sodium pentobarbitone did not interfere with the function of NMDA receptors [29]. α-AA was at the limit of its solubility in water (2.2 mg/mL) and the selected dose ensured a specific gliotoxic effect [24].

### 2.4. Histology, Immunohistochemistry, and Image Analysis

At the indicated postlesion time, six rats from each group were anaesthetized and decapitated. Their brains were then quickly removed, frozen with powdered dry ice, and stored at −80°C until use. Adjacent 14 μm coronal serial sections at the level of the dorsal hippocampus (−3.3 mm to bregma) were obtained from all brains, mounted on slides, and processed for histology, immunohistochemistry, and in vitro autoradiography studies.

Standard Nissl staining was performed to evaluate neuronal loss and the morphology of the hippocampal region. Microglial cell shapes were identified by histochemistry with biotin-conjugated IB4 [30]. Briefly, endogenous peroxidase activity was inhibited by a 10-minute preincubation in H2O2-methanol-PBS (0.3/9.7/90) followed by a 10-minute wash in PBS [31] and postfixation for 10 min with ice-cold paraformaldehyde (4% in PBS, pH 7.4). Sections were then incubated overnight at 4°C with IB4 diluted 1 : 25 in normal goat serum (1 : 100 v/v in 0.01 M PBS; pH 7.4). After incubation with ExtrAvidin (1 : 250), sections were developed in a 0.05 M Tris solution containing 0.03% (w/v) diaminobenzidine and 0.006% (v/v) H2O2.

Immunohistochemistry was carried out by the biotin-avidin-peroxidase method. The astroglial reaction was assessed by immunodetection with mouse monoclonal anti-GFAP and rabbit polyclonal anti-S100B antibodies diluted 1 : 400 and 1 : 800, respectively. Neuronal staining and GR expression were evaluated, respectively, with the mouse monoclonal anti-NeuN antibody (1 : 150) and the rabbit polyclonal anti-GR antibody (1 : 750). In brief, endogenous peroxidase activity was inhibited by a 10-minute preincubation in H2O2-methanol-PBS (0.3/9.7/90) followed by a 10-minute wash in PBS. Then, after postfixation for 10 min with ice-cold paraformaldehyde (4% in PBS, pH 7.4), sections were incubated overnight at 4°C with the primary antibody at the appropriate dilution in 0.05 M PBS containing 0.5% Triton X-100, 1% normal goat serum, and 1% bovine serum albumin. After washing and incubating with the appropriate secondary antibody, sections were incubated with ExtrAvidin (1 : 250) and developed in diaminobenzidine and H2O2.

Double immunohistochemistry with specific cellular markers was performed to determine the cells expressing glucocorticoid receptors. Sections were coincubated overnight at 4°C with the rabbit polyclonal anti-GR antibody (1 : 750) and either the mouse monoclonal anti-NeuN antibody (1 : 250) for neurons, the mouse monoclonal anti-GFAP antibody (1 : 750) for astroglial cells, or the mouse monoclonal anti-CD11b antibody (1 : 100) for microglia. After washing, sections were sequentially incubated in the dark with Alexa Fluor 555-conjugated goat anti-mouse IgG (1 : 300) to detect the cellular phenotype and then with biotinylated goat anti-rabbit IgG (1 : 200) followed by FITC-conjugated ExtrAvidin (1 : 250) to detect GR.

All double-immunostained sections were mounted in ProLong antifade and kept in the dark. Incubations with either mouse or goat IgG as primary antibodies were used as negative controls.
Confocal images were acquired using a Leica TCS SL laser scanning confocal spectral microscope (Leica Microsystems Heidelberg GmbH, Mannheim, Germany).

Morphological and histological parameters were measured in 14 μm thick serial coronal sections with the optical microscope software AxioVision 4 AC (Zeiss) and analyzed with the Image Pro Plus v.5.1 image analysis system (Media Cybernetics Inc., Bethesda, MD, USA). The hippocampal formation (HF) size, the area occupied by neuronal loss, and the different hippocampal subfield areas were measured on Nissl-stained sections at the injection level, which allows the structures to be differentiated and the damage to be accurately quantified. The area of hippocampus occupied by reactive astroglia or microglia was measured by delimiting the region of increased immunoreactivity on adjacent GFAP-immunostained, S100B-immunostained, and IB4-stained sections, respectively, using the same system [24]. Area size determinations were performed in three different stained sections at the injection level of all rat brains. To minimize biased errors due to fronto-occipital hippocampal size variations, only those sections located near the injection site and at a similar bregma level were selected for quantification. The area of interest was then expressed relative to the whole HF area measured in each of the GFAP-immunostained, S100B-immunostained, and IB4-stained sections analyzed. Measurement of the whole HF area was also used to calculate the ratio between the total HF and the area of gliosis. In all cases the contralateral hippocampus was measured in the same sections in order to estimate the effects of histological procedures on tissue size and thus to correct for variability in individual brain size and tissue shrinkage [32]. The S100B/GFAP expression ratio was estimated as the quotient of the areas of increased immunoreactivity in the S100B- and GFAP-immunostained sections of each rat.

Neuronal cell counting was performed in NeuN-immunostained sections. Under the optical microscope, we randomly selected four areas of interest (1.0 mm2 each) from the dorsal hippocampus. Within these areas, we counted the number of stained cells with a ×40 objective. Cell counting was performed in duplicate in three different stained sections of all rat brains. Immunopositive cells were counted in all lesioned hippocampal subfields, and quantification was made with the Image Pro Plus v.5.1 image analysis system. To allow for quantification of double-immunostained cells, multiple epifluorescent partial images of each hippocampus were taken and mounted in order to reconstruct an image of the whole structure using a Leica DMI 6000B inverted microscope equipped with the Tile Scan function of the LAS AF Leica software (Leica Microsystems Heidelberg GmbH, Mannheim, Germany). Counting of double-immunostained cells was performed in duplicate in three different stained sections of all rat brains, as explained above. GR density was analysed densitometrically in bright-field microscopy images taken from three adjacent GR-immunostained sections [33] from the lesioned hippocampus. To do so, we used the same procedure and image analysis system as described above for NeuN-immunopositive cells. The size and shape of the cell soma were used to discriminate between neurons and glial cells [33] (Figure 2(a)).
For validation of this counting, the number of double-immunopositive cells was also calculated in NeuN-GR and CD11b-GR double-stained sections, as explained above for the estimation of the number of proliferating cells.

### 2.5. In Vitro Autoradiography

Adjacent sections were processed for in vitro autoradiography to assess the hippocampal distribution of TSPO and GR. TSPO was labeled with [3H]PK-11195 as a microglial marker. Tissue sections were incubated for 2 h at room temperature in 50 mM Tris-HCl (pH 7.7) containing 1 nM [3H]PK-11195 (85 Ci/mmol). Nonspecific binding was determined in the presence of 1 μM PK-11195; it was homogeneous and lower than 10% of the total binding. Glucocorticoid receptors were labelled with 10 nM [3H]corticosterone (79 Ci/mmol) added to a buffer solution containing 20 mM Tris-HCl (pH 7.4), 1.5 mM EDTA, 140 mM NaCl, and 5 mM glucose. 5 μM RU-28362 was used to discriminate between GR- and mineralocorticoid receptor-specific [3H]corticosterone binding [34]. Nonspecific binding was determined by incubation with 20 μM corticosterone and was lower than 20% of total binding.

After washing in the appropriate buffer, slides were dried overnight under a stream of air at 4°C and opposed to Hyperfilm-3H (Amersham) for a period of between two weeks and two months. Films were developed and analysed densitometrically after calibration with plastic standards (3H-Microscales, Amersham) using the Image Pro Plus v.5.1 image analysis system. The average brain protein content was taken as 8%. For each brain, four sections were processed for total binding and two other sections for nonspecific binding.

### 2.6. Western Blot and Immunoassay

At the indicated postlesion time, six rats from each group were anaesthetized and decapitated. The brain of each animal was removed and the HF dissected, before being quickly frozen in liquid N2 and stored at −80°C prior to use. Each HF was manually homogenized in ice-cold 50 mM Tris-HCl, pH 7.7, containing 2 mM EDTA and a protease inhibitor cocktail. Part of the homogenate was immediately ultrasonicated at 4°C with a Sonifier 250 (Branson Ultrasonic Corp., Danbury, CT) and centrifuged; the supernatant was used directly for the TNF-α immunoassay. The other part was centrifuged at 15,000 rpm for 15 min at 4°C and the supernatant (cytosolic fraction) was used for Western blot analysis of the YM1 protein. In both cases, protein determination was carried out using the Bradford method.

Western blot analysis was performed as described elsewhere, with specific antibodies against YM1 [10]. Low-range molecular weight biotinylated SDS-PAGE standards were run in each gel to ascertain the position of each band. Immobilon-P was used for electroblotting and the immunocomplexes were detected by enhanced chemiluminescence using the ECL Plus Western Blotting Kit. Films were then developed, scanned, and analysed densitometrically with the Image Pro Plus v.5.1 image analysis system.

TNF-α was quantified with the murine TNF-α ELISA development kit following the supplier's protocol. In all samples, the TNF-α determination was performed in duplicate.

### 2.7. Statistical Analysis

For each parameter, kurtosis and skewness moments were calculated to test the normal distribution of the data. A two-way ANOVA was performed with two factors: time, with six levels (0, 1, 3, 5, 15, and 38 days), and treatment, with four levels (sham, α-AA, NMDA, and NMDA + α-AA).
When significant two-way interactions were observed, individual comparisons were performed using one-way ANOVAs followed by the LSD post hoc test. Single comparisons between the sham and NMDA groups at one time point were made using Student's t-test (t). When normality was not achieved, the values of all groups were compared using the nonparametric Kruskal-Wallis test (KW) followed by the Mann-Whitney U test. In all cases, P<0.05 was considered statistically significant. Results are expressed as mean ± SEM. All analyses were performed with the STATGRAPHICS software (STSC Inc., Rockville, MD, USA).
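To make the analysis plan of Section 2.7 concrete, the following Python sketch runs the same family of tests on simulated data; the DataFrame layout, column names, and values are hypothetical stand-ins for the study's measurements, not its actual data.

```python
# Minimal sketch of the statistics described in Section 2.7 (assumed data layout).
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
times = ["0d", "1d", "3d", "5d", "15d", "38d"]
treatments = ["sham", "aAA", "NMDA", "NMDA+aAA"]
# Hypothetical outcome: one value per rat (6 rats/group) per time x treatment cell.
df = pd.DataFrame({
    "value": rng.normal(100, 10, size=len(times) * len(treatments) * 6),
    "time": np.repeat(times, len(treatments) * 6),
    "treatment": np.tile(np.repeat(treatments, 6), len(times)),
})

# Two-way ANOVA: time (6 levels) x treatment (4 levels), with interaction.
model = ols("value ~ C(time) * C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# With a significant interaction, the paper follows up with one-way ANOVAs
# (then LSD post hoc tests); Kruskal-Wallis is the nonparametric fallback.
for t, sub in df.groupby("time"):
    groups = [g["value"].to_numpy() for _, g in sub.groupby("treatment")]
    print(t, "one-way ANOVA p=%.3f" % stats.f_oneway(*groups).pvalue,
          "Kruskal-Wallis p=%.3f" % stats.kruskal(*groups).pvalue)

# Single sham-vs-NMDA comparison at one time point: Student's t-test,
# or the Mann-Whitney U test when normality fails.
sub = df[df["time"] == "5d"]
sham = sub.loc[sub["treatment"] == "sham", "value"]
nmda = sub.loc[sub["treatment"] == "NMDA", "value"]
print(stats.ttest_ind(sham, nmda))
print(stats.mannwhitneyu(sham, nmda))
```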
## 3. Results

### 3.1. NMDA Induced Fast, Enduring Microgliosis with Phenotype Changes within Fifteen Days

Observation of Nissl-stained sections revealed that 20 nmol NMDA produced major layer disorganization, neuronal loss, and gliosis in all layers of the hippocampal formation (HF). Animals from the sham group showed no cellular alterations except for the needle scar (Figure 1(a)). The hippocampal lesion extended within 2 mm around the injection site in the rostrocaudal axis, with a slight tendency to grow towards the caudal direction. In the NMDA group (Figure 1(b)), we observed an increase in the area of the lesion as a consequence of an initial massive neuronal loss within the first 3 days, followed by a more discrete neuronal death that still progressed at 38 days (Figure 1(c)).

Figure 1: Timing of the microglial reaction to NMDA in the hippocampus. Photomicrographs illustrate the hippocampal injury and microglial reaction to the NMDA-induced lesion at the injection site. Photomicrographs of cresyl violet-stained brain sections of (a) sham rats and (b) NMDA rats 15 days after the lesion. (c) Graph shows the quantification of the area of lesion relative to the whole HF in cresyl violet-stained sections. IB4 histochemistry of sham (PBS) (d) and NMDA rats (e) 15 days after the lesion. (f) Distribution of specific binding sites for [3H]PK-11195 in a coronal rat brain section 15 days after 20 nmol NMDA injection. Note the NMDA-induced increase of specific binding (arrowhead) seen as a white area in the left hippocampus. Graphs show the quantification of the area of microglial reaction in IB4-stained sections (g), the area of enhanced [3H]PK-11195 specific binding (h), and the hippocampal concentration of TNF-α (i) of PBS and NMDA rats during the 38 days of the study. Asterisks in (a), (b), (d), and (e) indicate the injection site. CA1, Cornu Ammonis area 1; DG, dentate gyrus. ∗P<0.05, different from PBS; #P<0.05, different from day 0 (LSD post hoc test in (d), (f); KW test in (e)) (n=6 PBS; n=6 NMDA rats).
Bar: 1 mm in (a), (b), (d), and (e) and 2 mm in (f).

Figure 2: Activation of glucocorticoid receptors (GR) induced by NMDA injection in the hippocampus. (a) Photomicrographs illustrate GR immunohistochemistry in the hippocampus of sham animals and NMDA rats 5 days after the injection. Asterisks indicate the injection site and insets illustrate the GR immunostaining in neurons (arrow) and glial cells (arrowheads). Note that pyramidal cells (arrows) are much bigger than glia (arrowheads); they also present a pyramidal shape with a dendritic tree, while reactive microglia have a smaller, round shape. (b) Graphs show the quantification of the nucleus/cytoplasm ratio of GR immunolabeling in hippocampal neurons and glial cells of sham (PBS) and NMDA-lesioned rats. Confocal photomicrographs show the double immunohistochemistry of GR (in green) with specific cellular markers (in red) in the hippocampal CA1 layer of NMDA-lesioned rats 5 days after the lesion. NeuN-GR double immunostaining of PBS (c–e) and NMDA (f–h) rats evidences an increased GR immunolabeling in the nucleus of neurons of NMDA rats (arrowheads). GFAP-GR double immunostaining in PBS (i–k, arrowheads) and NMDA (l–n) rats shows an increased GFAP immunolabeling in astrocytes of NMDA rats not associated with changes in GR immunoreactivity, which indicates that the NMDA-induced GR activation quantified in panel (b) is not astroglial. CD11b-GR double immunohistochemistry in PBS (o–q) and NMDA (r–t) rats evidences an increased GR immunolabeling in the nucleus of reactive microglia of NMDA rats. ∗P<0.05; ∗∗P<0.01, different from PBS (Student's t-test) (n=6 rats/group). Bar: 500 μm in (a) and 10 μm in (c)–(t) and insets in (a).

Histochemical analysis with IB4 stained hyperplasic and hypertrophic microglia in all NMDA-lesioned animals (Figure 1(e)). The reactive microglia area increased with time and reached a maximal value at 15 days, with this value being maintained at day 38 (Figure 1(g)). Quantification of microglial TSPO expression by [3H]PK-11195 using in vitro autoradiography (Figure 1(f)) also showed an area of microglial reaction to the NMDA injection into the hippocampus (Table 1). In the NMDA-lesioned versus the sham groups, both the intensity (Table 1) and the area (Figure 1(f)) of [3H]PK-11195 binding showed an increase between day 1 and day 38 (Figure 1(h)). The sham group showed [3H]PK-11195 specific binding that was increased at the injection site only in a small area on day 1.

Table 1: [3H]PK-11195 and [3H]corticosterone specific binding to brain sections after 20 nmol NMDA injection in the hippocampus.

| | 0 days | 1 day | 5 days | 15 days | 38 days |
|---|---|---|---|---|---|
| **Specific TSPO binding** | | | | | |
| Sham hippocampus | 481 ± 98 | 1532 ± 107# | 618 ± 187 | 483 ± 341 | 483 ± 102 |
| NMDA hippocampus | 480 ± 73 | 1495 ± 189* | 1644 ± 255*# | 1571 ± 346*# | 1440 ± 235*# |
| Parietal cortex | 490 ± 11 | 474 ± 64 | 472 ± 40 | 473 ± 18 | 488 ± 9 |
| Piriform cortex | 468 ± 5 | 439 ± 35 | 425 ± 33 | 453 ± 37 | 465 ± 2 |
| **Specific GR binding** | | | | | |
| Sham hippocampus | 68 ± 72 | 101 ± 94 | 74 ± 37 | 81 ± 13 | 75 ± 21 |
| NMDA hippocampus | 76 ± 62 | 198 ± 104 | 448 ± 247*# | 91 ± 23 | 105 ± 32 |
| Parietal cortex | 101 ± 19 | 102 ± 14 | 85 ± 19 | 77 ± 32 | 102 ± 20 |
| Piriform cortex | 103 ± 24 | 74 ± 19 | 89 ± 6 | 73 ± 3 | 102 ± 42 |

18 kDa translocator protein (TSPO) concentration was estimated by [3H]PK-11195 specific binding. 5 μM RU-28362 allowed the [3H]corticosterone specific binding to glucocorticoid receptors (GR) to be discriminated. Data are expressed in fmol/mg of protein (mean ± S.E.M).
*P<0.05, different from 0 days; #P<0.05, different from sham values (KW test).

The TNF-α concentration increased in both the sham and NMDA-lesioned groups only within the first five days of the lesion (Figure 1(i)). Between days 1 and 3, the concentration of TNF-α in the NMDA group increased with respect to the sham group and then progressively returned to initial values by day 38. In the sham group, we only found a small transient increase in TNF-α concentration at day 3.

We quantified the time-related changes of glucocorticoid receptor (GR) concentrations in the hippocampus by immunohistochemistry (Figure 2) and by in vitro autoradiography of [3H]corticosterone binding (Table 1). In the sham group, single immunohistochemistry of GR showed widespread staining in the hippocampus and adjacent areas. We observed an increased GR immunoreactivity (GR-IR) in the lesioned HF, mostly in the nucleus of glial cells and surviving neurons, between days 1 and 5 after NMDA injection (Figures 2(a) and 2(b)). Double immunohistochemistry and confocal analysis of the anti-GR antibody with either anti-NeuN, anti-GFAP, or anti-CD11b antibodies evidenced short-term GR distribution changes in surviving neurons and reactive microglia but not in reactive astrocytes (Figures 2(c)–2(t)). The most evident GR-IR increase was found in the nucleus of round-shaped CD11b-immunostained cells at day 5 after the lesion (Figures 2(o)–2(t)), whereas the weak GR labeling of most GFAP-immunopositive cells remained unchanged and only a few scattered cells showed increased nuclear staining (Figures 2(i)–2(n)).

We calculated the nucleus/cytoplasm GR-IR ratio as an estimation of GR activation, as unactivated GR localizes predominantly within the cytoplasm and migrates to the nucleus upon hormone binding. At day 1 the nucleus/cytoplasm GR-IR ratio was strongly increased in neurons of the lesioned CA1 strata and remained increased at day 5. We found marked differences in microglial cells (Figure 2(b)). In these cells, the nucleus/cytoplasm GR-IR ratio increased at days 1 and 5 after NMDA microinjection and then gradually decreased to sham group values at day 38.

Experiments with [3H]corticosterone specific binding to GR showed low levels in the HF and the parietal and piriform cortex of sham animals (Table 1). In the hippocampus of NMDA-lesioned rats, [3H]corticosterone specific binding progressively increased, reaching a maximal value of 448 ± 247 fmol/mg protein at day 5, and then decreased to sham levels at day 15 (Table 1).

### 3.2. Within Fifteen Days α-AA Did Not Modify NMDA-Mediated Neuronal Loss

Stereotaxic injection of 20 nmol NMDA produced major neuronal loss mainly in the CA1 layers (Figures 3(a)–3(c)). The area of neuronal loss extended to 34 ± 5% of the pyramidal CA1 within the first three days (Figure 3(g)) and then progressively increased to reach a maximal value at day 38. The same pattern was observed in the neuronal density measured in NeuN-immunostained sections (Figure 3(j)).

Figure 3: α-AA effect on the NMDA-induced hippocampal lesion. Photomicrographs of cresyl violet-stained brain sections of sham rats (a), NMDA rats 15 days (b) and 38 days (c) after the lesion, α-AA rats (d), and α-AA + NMDA rats 15 days (e) and 38 days (f) after the lesion. Asterisks indicate the injection site. Graphs (g) to (i) show the quantification of the area of neuronal loss in cresyl violet-stained sections in the pyramidal CA1 (g), pyramidal CA2-CA3 (h), and dorsal dentate gyrus (i) of sham (PBS), α-AA, NMDA, and α-AA + NMDA rats at postlesion days 1, 3, 15, and 38.
Graphs (j) to (l) show the estimation of the density of NeuN-immunopositive neurons in the pyramidal CA1 (j), pyramidal CA2-CA3 (k), and dorsal dentate gyrus (l) of the same rats. ∗P<0.05, different from PBS; #P<0.05, different from NMDA (LSD post hoc test) (n=6 rats/group). Bar: 500 μm.

When we coinjected 6.4 nmol α-AA with 20 nmol NMDA (α-AA + NMDA rats), all the lesion parameters were similar in all hippocampal strata of the NMDA and α-AA + NMDA groups during the first five days (Figures 3(d)–3(f)). In the α-AA + NMDA group, we observed increased tissue disorganization at day 15, and this area of lesion reached a maximal value at day 38 (Figures 3(d)–3(f)). At all time points, neuronal density in the HF of the α-AA + NMDA and NMDA groups was similar (Figures 3(j)–3(l)).

The injection of 6.4 nmol of α-AA alone caused no change in the hippocampal size, lesion area, or neuronal density when compared with sham group values at any of the studied time points (Figures 3(g)–3(i)).

### 3.3. α-AA Enhanced the S100B/GFAP Ratio Three Days after the NMDA-Induced Lesion

Treatment with α-AA alone induced a loss of GFAP immunoreactivity (IR) in a small area of the hippocampus around the injection site that had already recovered by day 3 after injection (Figures 4(a)–4(d)). In the NMDA group, the GFAP-IR increase was maximal at days 1 and 3 (Figures 4(c) and 4(d)) and then progressively decreased by day 38. In the α-AA + NMDA group only a few astrocytes showed a weak reactive morphology at day 1 (Figures 4(e)–4(h)). At day 3, the area of enhanced GFAP-IR covered 29 ± 7% of the HF (Figure 4(i)), which increased slightly by day 38 to reach values similar to those obtained in the NMDA group at this time point. At day 38, reactive hippocampal astrocytes of α-AA + NMDA rats presented enhanced hypertrophy and hyperplasia and stronger GFAP-IR than did those of the NMDA group (Figures 4(l) and 4(m)).

Figure 4: α-AA effect on the NMDA-induced astroglial reaction in the hippocampus. GFAP immunostaining 1 day (a) and 3 days (b) after injection in sham animals, 1 day (c) and 3 days (d) after α-AA, 1 day (e) and 3 days (f) after the NMDA lesion, and 1 day (g) and 3 days (h) after α-AA + NMDA injection (arrowheads show the injection site). Note in (c) and (g) the lack of GFAP-immunopositive cells in the surroundings of the injection site. Graphs show the quantification of the area of astrogliosis in the whole hippocampus in GFAP-immunostained (i) and S100B-immunostained (j) sections of sham (PBS), α-AA, NMDA, and α-AA + NMDA rats at postlesion days 1, 3, 15, and 38. (k) Graph shows the estimation of the S100B/GFAP ratio, calculated as the quotient between the area of increased S100B and the area of increased GFAP in these rats. Photomicrographs illustrate GFAP-immunoreactive cells of NMDA (l) and α-AA + NMDA (m) rats and S100B-immunoreactive cells of NMDA (n) and α-AA + NMDA (o) rats in the hippocampal parenchyma 3 days after the lesion. ∗P<0.05, different from PBS; #P<0.05, different from NMDA; $P<0.05, different from day 15 (LSD post hoc test) (n=6 rats/group). Bar: 1 mm in (a)–(h) and 10 μm in (l)–(o).

S100B immunohistochemistry showed, at day 1 in the α-AA group, a very small area of increased IR that corresponded to the area of GFAP-IR loss. In the NMDA rats, S100B-IR was maximal at day 1 and then progressively decreased by day 38, following the same dynamics as GFAP-IR (Figures 4(j), 4(n), and 4(o)).
### 3.4. α-AA Increased NMDA-Induced Microgliosis and YM1 Production Three Days after the Lesion

In the α-AA group, IB4 histochemistry stained morphologically reactive microglia already at day 1, in an area that reached a maximal value at day 3, before decreasing to sham group values at day 38 (Figure 5(a)). In the NMDA-lesioned group, the area of microglial reactivity reached significance at day 1, increased up to day 15, and then decreased at day 38 (Figure 5(a)). In the α-AA + NMDA group, microglial reactivity was already significant at day 1 and covered a maximal area at day 3, before progressively decreasing by day 38 (Figure 5(a)). At days 1 and 3, reactive microglia of the α-AA + NMDA group presented a ramified morphology clearly different from the shape shown by microglial cells of the NMDA group, which presented a clear reduction of processes (Figures 5(b) and 5(c)).

Figure 5: α-AA effect on the NMDA-induced microglial reaction in the hippocampus. (a) Quantification of the hippocampal area of microglial reaction in IB4-stained sections of sham (PBS), α-AA, NMDA, and α-AA + NMDA rats at postlesion days 1, 3, 15, and 38. Photomicrographs show hippocampal IB4-stained cells illustrating the different microglial morphology between NMDA (b) and α-AA + NMDA (c) rats 3 days after the lesion. (d) TNF-α hippocampal concentration in PBS, α-AA, NMDA, and α-AA + NMDA rats during the first 5 days of the study. (e) Immunoblots and densitometric analysis (graphs) of YM1 in the hippocampus of PBS, α-AA, NMDA, and α-AA + NMDA rats during the first 5 days of the study. Values were normalized to β-actin bands. (f) Quantification of the nucleus/cytoplasm ratio of GR immunolabeling in hippocampal glial cells 3 days after the lesion in the four groups of the study. P, PBS; A, α-AA; N, NMDA; NA, α-AA + NMDA; ∗P<0.05 different from PBS; #P<0.05 different from NMDA; $P<0.05 different from α-AA (LSD post hoc test in (a), (e); Student's t-test in (d), (f)) (n=6 rats/group). Bar: 10 μm.

We assessed the TNF-α concentration in the hippocampus by ELISA. In the HF of the α-AA group, the TNF-α concentration was increased at days 1 and 5 when compared with sham group values (Figure 5(d)). In the NMDA group, the TNF-α concentration in the HF increased at day 1, was maximal at day 3, and then decreased at day 5. In the α-AA + NMDA group, the TNF-α concentration was significantly lower than in the NMDA group (Figure 5(d)).

We assessed the YM1 concentration in the hippocampus by Western blot. The YM1 concentration was similarly enhanced in the α-AA and NMDA groups when compared with sham group values (Figure 5(e)). In the α-AA + NMDA group, the YM1 concentration was significantly higher than in all other groups at days 1 and 3, reaching a maximal increase over the sham group at day 3 (Figure 5(e)).

With respect to the nucleus/cytoplasm ratio of GR-IR in microglia, it was similarly increased in the α-AA and α-AA + NMDA groups when compared with sham group values at day 3, whereas this increase was higher in the NMDA group (Figure 5(f)).
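The YM1 densitometry in Figure 5(e) rests on per-lane normalization to β-actin, followed by expression relative to the sham mean. A hedged sketch of that bookkeeping, with invented band densities:

```python
# Illustrative YM1 quantification: divide each YM1 band density by the
# beta-actin density of the same lane, then express fold change vs. sham.
import numpy as np

def normalize_to_actin(ym1_density, actin_density):
    """Per-lane YM1/beta-actin ratios."""
    return np.asarray(ym1_density, float) / np.asarray(actin_density, float)

ym1   = {"PBS": [0.21, 0.19], "NMDA": [0.55, 0.61], "aAA+NMDA": [0.98, 1.05]}
actin = {"PBS": [1.00, 0.97], "NMDA": [1.02, 0.99], "aAA+NMDA": [1.01, 0.98]}

sham_mean = normalize_to_actin(ym1["PBS"], actin["PBS"]).mean()
for group in ym1:
    rel = normalize_to_actin(ym1[group], actin[group]) / sham_mean
    print(group, rel.round(2))   # fold change versus the sham group
```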
## 4. Discussion

Brain NMDA microinjection triggers an excitotoxic process that expands through several dynamic processes, resulting in astroglial and microglial reactions and massive neuronal loss [29] that may involve the recruitment of peripheral neutrophils and macrophages [35].
The acute excitotoxic process is thought to be brought under control by retaliatory mechanisms within 15 days, but an underlying long-term process lasting at least 38 days continues, associated with tissue atrophy, neuronal loss, and chronic microgliosis in the absence of an astroglial reaction [32]. The microglial response may initially proceed with neuroprotective effects and afterwards cause neuronal injury or death [2]. Thus, interventions that promote the regeneration of damaged tissue long after injury should take this dynamic phenomenon into consideration, in particular those intended to interfere with neuroinflammatory pathways [7, 36, 37].

Although astrocytes present neuroprotective activity against excitotoxicity [24], we found herein that early astroglial ablation with α-AA does not enhance the NMDA-induced hippocampal lesion within the first 15 days. The injured hippocampal area, in particular the CA2 and CA3 subfields, increased only at day 38. In parallel, α-AA increased the NMDA-induced microglial reaction from day 1 on. These results argue for an early potentiation of microglial neuroprotective activity while astrocytes are compromised [38].

Our results from IB4 histochemistry and TSPO labelling by in vitro autoradiography evidenced an early microglial activation that persisted for at least 38 days, during which TNF-α secretion and YM1 and GR gene expression followed different patterns of activation. Specifically, the decrease of the initial massive production of TNF-α did not parallel the morphological long-term activation, YM1 secretion took place transiently during the first three days of the lesion, and GR translocation to the nucleus increased progressively up to day 5 before decreasing to control levels. Since the microglial phenotype results from the compromise between cytotoxic and trophic activities [1], these patterns reflect the evolution of the microglial role during excitotoxicity.

The production of YM1 in the first five days of the lesion is related to a neuroprotective activity [9], as evidenced after olfactory bulb axotomy, where YM1 secretion reduces the inflammatory process [39]. By contrast, TNF-α production and GR translocation to the nucleus reflect more complex roles of microglial activity. TNF-α secretion is crucial for autocrine fast microglial activation with cytotoxic effects, a process also fed by the concomitant release of TNF-α by reactive astrocytes [40, 41]. However, the absence of TNF-α in knockout mice delays NO-mediated microglial activation, resulting in further exacerbated microgliosis [20] that leads to amplification of secondary excitotoxicity. TNF-α can bind two specific receptors: TNFR1, with an intracellular death domain, and TNFR2, with a higher affinity and mostly involved in neuroprotection [42]. Consequently, at low concentrations TNF-α activates TNFR2-mediated neuroprotection, whereas at high concentrations TNFR1 contributes to cell injury [43]. Similarly, at baseline concentrations and as the immediate response to acute stress, glucocorticoids exert an anti-inflammatory activity on microglia [44]. However, chronically high concentrations of glucocorticoids exacerbate cytotoxic microgliosis and increase hippocampal injury due to excitotoxicity or ischemia [14]. Such a sustained elevated glucocorticoid concentration downregulates microglial GR expression as a prerequisite to specifically inhibit the anti-inflammatory actions [15].
Therefore, the early activation and subsequent downregulation of microglial GR observed herein may enhance microglial cytotoxic and inflammatory activity.

S100B modifies astrocytic, neuronal, and microglial activities, and its effects depend on both its own extracellular concentration and the expression of the receptor RAGE. In activated microglia, S100B at micromolar concentrations upregulates interleukin-1β and TNF-α expression via RAGE [45], while at nanomolar concentrations S100B blocks that expression [18]. In turn, microglial reactive oxygen species, or TNF-α, modify the RAGE response to S100B [17], in a context of microglia-astroglia cross talk that integrates different signaling systems. In this regard, our results suggest that factors associated with hippocampal excitotoxic injury may trigger an early trophic, neuroprotective reaction in microglia. Afterwards, variations in the concentration of these same factors may turn the chronic microgliosis into a proinflammatory, cytotoxic activity.

The ablation of astroglia using α-AA results in a similar pattern of NMDA-induced hippocampal damage. α-AA did not modify the NMDA-induced hippocampal lesion area and neuronal loss during the first fifteen days, and we detected a discrete increase of these lesion parameters only in the CA2 and CA3 subfields at day 38, when astroglia had fully recovered. The α-AA-induced potentiation of microgliosis that we found under these conditions supports an early neuroprotective microglial activity [46]. As previously shown [24, 46], α-AA inhibits the main astroglial pathways of glutamate removal, resulting in increased synaptic glutamate and oxidative stress [23, 24, 26, 47], which constitute activation signals for microglia. This initial microglial phenotype may represent an attempt to preserve neurons during astroglial dysfunction. Activated microglia express glutamine synthetase and glutamate transporter 1 in the early stages of excitotoxicity [47]. Microglial glutamate uptake starts at 4 h and lasts up to 72 h after the lesion [48], which indicates that reactive microglia may account for the control of synaptic glutamate homeostasis after early astrocyte injury. Furthermore, the early increase of YM1 concentration in the HF of α-AA + NMDA rats also suggests a neuroprotective microglial activity that may potentiate the recovery of astrocytes, as indicated by the slight enhancement of GFAP concentration found herein at day 3 in α-AA rats.

Initially, the α-AA + NMDA lesion presented a decreased TNF-α concentration and a delayed increase of S100B. Also, within the first three days, reactive microglia presented a morphological transition from a phagocytic to a ramified shape. Thus, when astrocytes are depleted, the retaliatory mechanisms against excitotoxicity would include an enhanced neurotrophic activity of microglia to counteract the limited functional capacity of the regenerating astrocytes. Taken together, these data indicate that recovery of the astrocyte-microglia cross talk takes priority over astroglial cytoskeletal rearrangement and proliferation.

From day 3 on, our results in α-AA + NMDA rats showed a sustained microglial reaction and a heightened S100B/GFAP ratio that resulted in an increased area of lesion, which reached the CA2 and CA3 hippocampal subfields at day 38, but no changes in the neuronal density of the lesioned CA1 subfields.
These results also suggest that the chronic cytotoxic activity of microglia, rather than the intensity of the injury, contributes to the growth of the affected areas during neurodegeneration. As the increased area of S100B-IR may not translate into increased S100B release and could simply reflect astrocytosis, further experiments are needed to quantify in vivo the S100B release long after an excitotoxic insult and to characterize its effects on microglial inflammatory activity.

## 5. Conclusions

In conclusion, we show that NMDA-induced excitotoxicity in the hippocampus induces a chronic microgliosis. In this model, an initial release of YM1 followed by TNF-α production and the transient activation of GR may determine a putative functional switch of microglia. Although a role for infiltrated neutrophils and macrophages cannot be ruled out, astroglial removal in the early stages of the lesion may potentiate an initial neuroprotective role of microgliosis and reveals S100B as a key modulator of the microglial phenotype. These results also contribute to understanding the interplay between the trophic, neuroprotective, inflammatory, and cytotoxic functions of astrocytes and microglia during the course of brain injury. The fine control of these processes requires a dynamic understanding of their interactions to allow the effective development of approaches to neuroprotection.

---

*Source: 102419-2015-04-21.xml*
# Astroglia-Microglia Cross Talk during Neurodegeneration in the Rat Hippocampus

**Authors:** Montserrat Batlle; Lorenzo Ferri; Carmen Andrade; Francisco-Javier Ortega; Jose M. Vidal-Taboada; Marco Pugliese; Nicole Mahy; Manuel J. Rodríguez
**Journal:** BioMed Research International (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/102419
---

## Abstract

Brain injury triggers a progressive inflammatory response supported by a dynamic astroglia-microglia interplay. We investigated the progressive chronic features of the astroglia-microglia cross talk, in the perspective of neuronal effects, in a rat model of hippocampal excitotoxic injury. N-Methyl-D-aspartate (NMDA) injection triggered a process characterized within 38 days by atrophy, neuronal loss, and a fast astroglia-mediated S100B increase. The microglial reaction varied with lesion progression. It presented a peak of tumor necrosis factor-α (TNF-α) secretion at one day after the lesion and a transient YM1 secretion within the first three days. Microglial glucocorticoid receptor expression increased up to day 5, before returning progressively to sham values. To further investigate the role of astroglia in the microglial reaction, we performed a transient astroglial ablation with L-α-aminoadipate concomitant with the NMDA-induced lesion. We observed a striking maintenance of neuronal death associated with an enhanced microglial reaction and proliferation, increased YM1 concentration, and decreased TNF-α secretion and glucocorticoid receptor expression. S100B reactivity only increased after astroglial recovery. Our results argue for an initial neuroprotective microglial reaction, with a direct astroglial control of the microglial cytotoxic response. We propose the recovery of the astroglia-microglia cross talk as a tissue priority, conducted to ensure a proper cellular coordination that curtails brain damage.

---

## Body

## 1. Introduction

Injury to the central nervous system, including stroke and traumatic brain injury, induces excitotoxic neuronal death that triggers a potent inflammatory response, with a dynamic astroglial and microglial reaction that determines neuronal fate [1, 2]. Following the injury, all cells present a metabolic reprogramming to cover the bioenergetic and substrate demand for the trophic/inflammatory processes to take place [3], with the coexistence of various factors.

An excessive production of proinflammatory factors from activated microglia, such as tumor necrosis factor-α (TNF-α), interleukin-1β, and reactive oxygen species, may trigger or exacerbate neuronal death [4]. Microglia do not constitute a uniform cell population and show a range of phenotypes that are closely related to the evolution of the damaging process [5, 6]. Thus, their control will directly influence the tissue outcome [7]. These phenotypes range from the well-known proinflammatory activation state to a trophic one involved in cell repair and extracellular matrix remodelling [8]. In neurodegeneration, for example, microglia show inflammatory and neuroprotective properties [6] associated with the expression of YM1, a secretory protein related to neuroregeneration [9, 10]. In addition, as some microglial cells become increasingly dysfunctional, they may directly participate in the development of neurodegeneration [11].

Whether microglia, through TNF-α, interleukin-1β, and reactive oxygen species formation [4], adopt a phenotype that mostly exacerbates tissue injury or instead promote brain repair through the expression of YM1 and other neuroprotective factors [9, 10] likely depends on the diversity of signals from the lesion environment, especially at the quadripartite synapse level [12]. For instance, astroglial chemokines influence microglia/macrophage activation in multiple sclerosis, with CCL2 (MCP-1) and CXCL10 (IP-10) directing reactive gliosis [13].
Although a clear account of this dynamic relationship has yet to be proposed, the astrocyte-microglia interplay might determine the phenotype that microglia adopt during neurodegeneration.

Steroid hormones may modulate the microglial response to injury, although the results are still controversial. Studies have shown that glucocorticoids regulate peripheral immune responses and have CNS anti-inflammatory properties, but they also appear to exert proinflammatory effects that exacerbate excitotoxicity and cerebral damage (see [14] for a review). This observed variability may be related to an acute or chronic CNS effect, with the initial anti-inflammatory modulation eventually being followed by an exacerbation of cerebral injury [14]. If this hypothesis is correct, it would be possible to modulate the cytotoxic and neuroprotective activity of microglia through acute or chronic activation of the glucocorticoid receptor (GR), the most abundant steroid hormone receptor found in microglia [15].

Astroglial S100B is one of the factors that control microglial activity. Astrocytes release S100B constitutively [16] and increase this release upon stimulation by several factors, including TNF-α [17]. Under normal conditions, released S100B acts as a neurotrophic factor, countering the stimulatory effect of neurotoxins on microglia [18] and stimulating glutamate uptake [19]. By contrast, at high concentrations S100B binds the Receptor for Advanced Glycation End products (RAGE), which might mediate microglial activation in the course of brain damage [20]. Thus, secreted S100B participates in the astrocyte-microglia cross talk, with an important role in the initial phase of brain insults.

The purpose of the present study was to investigate the relationship between microgliosis and astrogliosis in an in vivo model of hippocampal injury that triggers chronic neuroinflammation [12]. To this end, we undertook a time-course study, between day 1 and day 38, of a hippocampal N-methyl-D-aspartate (NMDA) stereotaxically induced lesion, and we characterized the neuronal loss and the astroglial and microglial reactions. We quantified TNF-α, YM1, the 18 kDa translocator protein (TSPO, or peripheral benzodiazepine receptor, PBR [21]), and GR at different postlesion times to estimate the state of microglial activity. Then, to determine the relationship between astrogliosis and microglial activation, we coinjected NMDA with L-α-aminoadipate (α-AA), a specific astrotoxin used to investigate astroglial participation in several paradigms (e.g., [22–25]). α-AA induces a transient astroglial ablation, which is associated with a microglial reaction that persists over several days [24, 26, 27]. The sequence of events described in vivo after α-AA stereotaxic microinjection includes astroglial degeneration for 1–3 days, microglial invasion, and, finally, astroglial recovery [24, 26].

## 2. Materials and Methods

### 2.1. Animals

Adult male Wistar rats (body weight 200–225 g at the beginning of the study) were obtained from the animal housing facilities at the School of Medicine (Universitat de Barcelona). They were kept on a 12 h/12 h day/night cycle and housed with free access to food and water. Animals were handled according to European legislation (86/609/EEC), and all efforts were made to minimize the number of animals used and their suffering. The number of animals to be included in each group was statistically estimated to be 6 rats/group (tolerance interval ±0.9, confidence level 95%). Procedures were approved by the Ethics Committee of the Universitat de Barcelona, in accordance with the regulations established by the Catalan government (Generalitat de Catalunya).
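The tolerance-interval calculation behind the 6 rats/group figure is not fully specified here; as a rough cross-check only, a conventional two-sample power analysis (an assumption, not the authors' method) yields a group size of the same order for the very large effect sizes typical of this lesion model:

```python
# Cross-check of the group size with a standard two-sample power analysis.
# The effect size is an assumption chosen for illustration, not a value
# reported in the paper.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=1.8,          # assumed very large lesion effect (Cohen's d)
    alpha=0.05, power=0.8, alternative="two-sided")
print(round(n_per_group))     # roughly 6 animals per group under these assumptions
```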
### 2.2. Chemicals

NMDA, α-AA, the mouse monoclonal anti-glial fibrillary acidic protein (GFAP) antibody, and the biotin-conjugated isolectin B4 (IB4) from Bandeiraea simplicifolia were all purchased from Sigma (St. Louis, MO). The mouse monoclonal anti-NeuN antibody was purchased from Chemicon (Temecula, CA), the mouse anti-rat CD11b antibody (clone MRC OX-42) was from Serotec Ltd. (Oxford, UK), and the rabbit polyclonal anti-S100B antibody was from DAKO (Dako Diagnostics, Barcelona, Spain). The goat polyclonal AMCase (M-19) antibody, which specifically binds the secretory protein YM1, was from Santa Cruz Biotechnology (Santa Cruz, CA), as was the rabbit polyclonal anti-GR antibody. Secondary antibodies and immunohistochemical reagents were from Sigma. [3H]PK-11195 was purchased from Perkin-Elmer (Boston, MA). [3H]Corticosterone was from Amersham Bioscience (Bucks, UK), and RU-28362 was purchased from Sigma. Sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) standards were purchased from Bio-Rad (Hercules, CA), Immobilon-P membranes were from Millipore (Bedford, MA), and ECL Plus Western blotting reagent was from Amersham Bioscience (Bucks, UK). The murine TNF-α ELISA development kit was purchased from PeproTech (Paris, France).

### 2.3. Stereotaxic Procedure and Labeling of Proliferating Cells

Under equithesin anesthesia (a mixture of chloral hydrate and sodium pentobarbitone; 0.3 mL/100 g body wt, i.p.), rats were placed in a stereotaxic instrument (David Kopf, Carnegie Medicin, Sweden) with the incisor bar set at −3.3 mm. According to the atlas of Paxinos and Watson [28], the stereotaxic coordinates for the hippocampal microinjection were 3.3 mm caudal to bregma, 2.2 mm lateral to bregma, and 2.9 mm ventral from the dura. A 5.0 μL Hamilton syringe driven by an infusion pump (CMA/100; Carnegie Medicin, Sweden) was used for the intracerebral injection into the hippocampal parenchyma. In all injections, 0.5 μL was infused over 5 min, as previously described [29]. Animals received a single injection of either 50 mM phosphate-buffered saline (PBS; pH 7.4) (sham group), 20 nmol NMDA in 50 mM PBS (NMDA group), 6.4 nmol α-AA (α-AA group), or 20 nmol NMDA plus 6.4 nmol α-AA (α-AA + NMDA group). At five different postlesion times (1, 3, 5, 15, and 38 days), a total of 60 rats, 12 for each group (sham, α-AA, NMDA, and α-AA + NMDA) and 12 control rats (with no treatment, assigned as 0 days), were anaesthetized and decapitated. Half of the animals (6 rats/group) were used for biochemical studies and the other half for histological approaches. The NMDA dose was chosen according to previous studies [29]. Because it was used at a low concentration, sodium pentobarbitone did not interfere with the function of NMDA receptors [29]. α-AA was at the limit of its solubility in water (2.2 mg/mL), and the selected dose ensured a specific gliotoxic effect [24].
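As a worked check of these doses: with 0.5 μL delivered per injection, the dose in nmol divided by the volume in μL gives the required solution concentration directly in mM, and the α-AA concentration can be compared against the quoted solubility limit. The molar mass below is the only value added here; everything else comes from the text.

```python
# Dose-to-concentration arithmetic for the 0.5 uL microinjections.
ALPHA_AA_MW = 161.16              # g/mol, L-alpha-aminoadipate

def conc_mM(dose_nmol, volume_uL=0.5):
    return dose_nmol / volume_uL  # nmol/uL is numerically equal to mM

print(conc_mM(20.0))              # NMDA: 40 mM
print(conc_mM(6.4))               # alpha-AA: 12.8 mM
# 12.8 mM corresponds to 12.8e-3 mol/L * 161.16 g/mol ~ 2.06 mg/mL,
# i.e., just under the 2.2 mg/mL solubility limit quoted above.
print(12.8e-3 * ALPHA_AA_MW)
```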
### 2.4. Histology, Immunohistochemistry, and Image Analysis

At the indicated postlesion times, six rats from each group were anaesthetized and decapitated. Their brains were then quickly removed, frozen with powdered dry ice, and stored at −80°C until use. Adjacent 14 μm coronal serial sections at the level of the dorsal hippocampus (−3.3 mm to bregma) were obtained from all brains, mounted on slides, and processed for histology, immunohistochemistry, and in vitro autoradiography studies.

Standard Nissl staining was performed to evaluate neuronal loss and the morphology of the hippocampal region. Microglial cell shapes were identified by histochemistry with biotin-conjugated IB4 [30]. Briefly, endogenous peroxidase activity was inhibited by a 10-minute preincubation in H2O2-methanol-PBS (0.3/9.7/90) followed by a 10-minute wash in PBS [31] and postfixation for 10 min with ice-cold paraformaldehyde (4% in PBS, pH 7.4). Sections were then incubated overnight at 4°C with IB4 diluted 1 : 25 in normal goat serum (1 : 100 v/v in 0.01 M PBS; pH 7.4). After incubation with ExtrAvidin (1 : 250), sections were developed in a 0.05 M Tris solution containing 0.03% (w/v) diaminobenzidine and 0.006% (v/v) H2O2.

Immunohistochemistry was carried out by the biotin-avidin-peroxidase method. The astroglial reaction was assessed by immunodetection with mouse monoclonal anti-GFAP and rabbit polyclonal anti-S100B antibodies diluted 1 : 400 and 1 : 800, respectively. Neuronal staining and GR expression were evaluated, respectively, with a mouse monoclonal anti-NeuN antibody (1 : 150) and a rabbit polyclonal anti-GR antibody (1 : 750). In brief, endogenous peroxidase activity was inhibited by a 10-minute preincubation in H2O2-methanol-PBS (0.3/9.7/90) followed by a 10-minute wash in PBS. Then, after postfixation for 10 min with ice-cold paraformaldehyde (4% in PBS, pH 7.4), sections were incubated overnight at 4°C with the primary antibody at the appropriate dilution in 0.05 M PBS containing 0.5% Triton X-100, 1% normal goat serum, and 1% bovine serum albumin. After washing and incubation with the appropriate secondary antibody, sections were incubated with ExtrAvidin (1 : 250) and developed in diaminobenzidine and H2O2.

Double immunohistochemistry with specific cellular markers was performed to determine the cells expressing glucocorticoid receptors. Sections were coincubated overnight at 4°C with the rabbit polyclonal anti-GR antibody (1 : 750) and either a mouse monoclonal anti-NeuN antibody (1 : 250) for neurons, a mouse monoclonal anti-GFAP antibody (1 : 750) for astroglial cells, or a mouse monoclonal anti-CD11b antibody (1 : 100) for microglia. After washing, sections were sequentially incubated in the dark with Alexa Fluor 555-conjugated goat anti-mouse IgG (1 : 300) to detect the cellular phenotype and then with biotinylated goat anti-rabbit IgG (1 : 200) followed by FITC-conjugated ExtrAvidin (1 : 250) to detect GR. All double-immunostained sections were mounted in ProLong antifade medium and kept in the dark. Incubations with either mouse or goat IgG as primary antibodies were used as negative controls. Confocal images were acquired using a Leica TCS SL laser scanning confocal spectral microscope (Leica Microsystems Heidelberg GmbH, Mannheim, Germany).

Morphological and histological parameters were measured in 14 μm thick serial coronal sections with the optical microscope software AxioVision 4 AC (Zeiss) and analyzed with the Image Pro Plus v.5.1 image analysis system (Media Cybernetics Inc., Bethesda, MD, USA).
The area of hippocampus occupied by reactive astroglia or microglia was measured by delimitation of the increased immunoreactivity region on adjacent GFAP-immunostained, S100B-immunostained, and IB4-stained sections, respectively, using the same system [24]. Area size determinations were performed in three different stained sections at the injection level of all rat brains. To minimize biased errors due to fronto-occipital hippocampal size variations, only those sections located near the injection site and at similar bregma level were selected for quantification. Then, the area of interest was referred to as the whole HP area measured in each of the GFAP-immunostained, S100B-immunostained, and IB4-stained sections analyzed. Measurement of the whole HP area was also used to calculate the ratio between total HF and the area of gliosis. In all cases the contralateral hippocampus was measured in the same sections in order to estimate the effects of histological procedures on tissue size and thus to correct for variability in individual brain size and tissue shrinkage [32]. GFAP/S100B expression areas were estimated by the quotient of the increased immunoreactivity region in GFAP- and S100B-immunostained sections of each rat.Neuronal cell counting was performed in NeuN immunostained sections. Under the optical microscope, we randomly selected four areas of interest (1.0 mm2 each) from the dorsal hippocampus. Inside of these areas, we counted the number of stained cells at a ×40 objective magnification. Cell counting was performed in duplicate in three different stained sections of all rat brains. Immunopositive cells were counted in all lesioned hippocampal subfields, and quantification was made in the Image Pro Plus v.5.1 image and analysis system. To allow for quantification of double immunostained cells, multiple epifluorescent partial images of each hippocampus were taken and mounted in order to reconstruct an image of the whole structure using a Leica DMI 6000B inverted microscope equipped with the Tile Scan function of the LAS AF Leica software (Leica Microsystems Heidelberg GmbH, Manheim, Germany). Counting of double-immunostained cells was performed in duplicate in three different stained sections of all rat brains as explained above. GR density was analysed densitometrically in bright field microscopy images taken from three adjacent GR immunostained sections [33] from the lesioned hippocampus. To do that, we used the same procedure and image analysis system as described above for NeuN immunopositive cells. The size and shape of the cell soma were used to discriminate between neurons and glial cells [33] (Figure 2(a)). For validation of this counting, the number of double immunopositive cells was also calculated in NeuN-GR and CD11b-GR double stained sections as explained above for the estimation of the number of proliferating cells. ### 2.5.In Vitro Autoradiography Adjacent sections were processed forin vitro autoradiography to assess the hippocampal distribution of TSPO and GR. TSPO was labeled with [3H]PK-11195 as a microglial marker. Tissue sections were incubated for 2 h at room temperature in 50 mM Tris-HCl (pH 7.7) containing 1 nM [3H]PK-11195 (85 Ci/mmol). Nonspecific binding was determined in the presence of 1 μM PK-11195. It was homogeneous and lower than 10% of the total binding. Glucocorticoid receptors were labelled with 10 nM [3H]corticosterone (79 Ci/mmol) added to a buffer solution containing 20 mM Tris-HCl (pH 7.4), 1.5 mM EDTA, 140 mM NaCl, and 5 mM glucose. 
### 2.5. In Vitro Autoradiography

Adjacent sections were processed for in vitro autoradiography to assess the hippocampal distribution of TSPO and GR. TSPO was labeled with [3H]PK-11195 as a microglial marker. Tissue sections were incubated for 2 h at room temperature in 50 mM Tris-HCl (pH 7.7) containing 1 nM [3H]PK-11195 (85 Ci/mmol). Nonspecific binding was determined in the presence of 1 μM PK-11195; it was homogeneous and lower than 10% of the total binding. Glucocorticoid receptors were labelled with 10 nM [3H]corticosterone (79 Ci/mmol) added to a buffer solution containing 20 mM Tris-HCl (pH 7.4), 1.5 mM EDTA, 140 mM NaCl, and 5 mM glucose. 5 μM RU-28362 was used to discriminate between GR- and mineralocorticoid receptor-specific [3H]corticosterone binding [34]. Nonspecific binding was determined by incubation with 20 μM corticosterone and was lower than 20% of the total binding.

After washing in the appropriate buffer, slides were dried overnight under a stream of air at 4°C and apposed to Hyperfilm-3H (Amersham) for a period of between two weeks and two months. Films were developed and analysed densitometrically after calibration with plastic standards (3H-Microscales, Amersham) using the Image Pro Plus v.5.1 image analysis system. The average brain protein content was taken as 8%. For each brain, four sections were processed for total binding and two other sections for nonspecific binding.

### 2.6. Western Blot and Immunoassay

At the indicated postlesion times, six rats from each group were anaesthetized and decapitated. The brain of each animal was removed and the HF dissected, before being quickly frozen in liquid N2 and stored at −80°C prior to use. Each HF was manually homogenized in ice-cold 50 mM Tris-HCl, pH 7.7, containing 2 mM EDTA and a protease inhibitor cocktail. Part of the homogenate was immediately ultrasonicated at 4°C with a Sonifier 250 (Branson Ultrasonic Corp., Danbury, CT) and centrifuged; the supernatant was used directly for the TNF-α immunoassay. The other part was centrifuged at 15,000 rpm for 15 min at 4°C, and the supernatant (cytosolic fraction) was used for Western blot analysis of the YM1 protein. In both cases, protein determination was carried out using the Bradford method. Western blot analysis was performed as described elsewhere, with specific antibodies against YM1 [10]. Low-range molecular weight biotinylated SDS-PAGE standards were run in each gel to ascertain the position of each band. Immobilon-P was used for electroblotting, and the immunocomplexes were detected by enhanced chemiluminescence using the ECL Plus Western Blotting Kit. Films were then developed, scanned, and analysed densitometrically with the Image Pro Plus v.5.1 image analysis system. TNF-α was quantified with the murine TNF-α ELISA development kit following the supplier's protocol. In all samples, the TNF-α determination was performed in duplicate.

### 2.7. Statistical Analysis

For each parameter, kurtosis and skewness moments were calculated to test the normal distribution of the data. A two-way ANOVA was performed with two factors: time, with six levels (0, 1, 3, 5, 15, and 38 days), and treatment, with four levels (sham, α-AA, NMDA, and α-AA + NMDA). When significant two-way interactions were observed, individual comparisons were performed using one-way ANOVAs followed by the LSD post hoc test. Single comparisons between the sham and NMDA groups at one time point were made using Student's t-test (t). When normality was not achieved, the values of all groups were compared using the nonparametric Kruskal-Wallis test (KW) followed by the Mann-Whitney U test. In all cases, P<0.05 was considered statistically significant. Results are expressed as mean ± SEM. All analyses were performed with the STATGRAPHICS software (STSC Inc., Rockville, MD, USA).
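A sketch of this pipeline in Python (simulated data; the LSD post hoc step is omitted): screen the distribution via skewness and kurtosis, run the two-way time × treatment ANOVA when normality looks plausible, and fall back to the Kruskal-Wallis test otherwise. The screening thresholds below are illustrative, since the normality criterion is not specified numerically in the text.

```python
# Hypothetical reimplementation of the Section 2.7 decision logic.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "time": np.repeat([0, 1, 3, 5, 15, 38], 24),
    "treatment": np.tile(np.repeat(["sham", "aAA", "NMDA", "aAA+NMDA"], 6), 6),
    "value": rng.normal(100, 15, 144),   # simulated measurements
})

# Crude normality screen based on skewness and kurtosis moments
if abs(stats.skew(df["value"])) < 2 and abs(stats.kurtosis(df["value"])) < 7:
    model = ols("value ~ C(time) * C(treatment)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))   # two-way ANOVA table
else:
    groups = [g["value"].to_numpy() for _, g in df.groupby("treatment")]
    print(stats.kruskal(*groups))            # nonparametric fallback
```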
## 2.5. In Vitro Autoradiography

Adjacent sections were processed for in vitro autoradiography to assess the hippocampal distribution of TSPO and GR. TSPO was labeled with [3H]PK-11195 as a microglial marker. Tissue sections were incubated for 2 h at room temperature in 50 mM Tris-HCl (pH 7.7) containing 1 nM [3H]PK-11195 (85 Ci/mmol). Nonspecific binding was determined in the presence of 1 μM PK-11195; it was homogeneous and lower than 10% of the total binding. Glucocorticoid receptors were labelled with 10 nM [3H]corticosterone (79 Ci/mmol) added to a buffer solution containing 20 mM Tris-HCl (pH 7.4), 1.5 mM EDTA, 140 mM NaCl, and 5 mM glucose. 5 μM RU-28362 was used to discriminate between GR- and mineralocorticoid receptor-specific [3H]corticosterone binding [34]. Nonspecific binding was determined by incubation with 20 μM corticosterone and was lower than 20% of total binding.

After washing in the appropriate buffer, slides were dried overnight under a stream of air at 4°C and apposed to Hyperfilm-3H (Amersham) for between two weeks and two months. Films were developed and analysed densitometrically after calibration with plastic standards (3H-Microscales, Amersham) using the Image Pro Plus v.5.1 image analysis system. The average brain protein content was taken as 8%. For each brain, four sections were processed for total binding and two other sections for nonspecific binding.

## 2.6. Western Blot and Immunoassay

At the indicated postlesion time, six rats from each group were anaesthetized and decapitated. The brain of each animal was removed and the HF dissected before being quickly frozen in liquid N2 and stored at −80°C prior to use. Each HF was manually homogenized in ice-cold 50 mM Tris-HCl, pH 7.7, containing 2 mM EDTA and a protease inhibitor cocktail. Part of the homogenate was immediately ultrasonicated at 4°C with a Sonifier 250 (Branson Ultrasonic Corp., Danbury, CT) and centrifuged; the supernatant was used directly for the TNF-α immunoassay. The other part was centrifuged at 15,000 rpm for 15 min at 4°C and the supernatant (cytosolic fraction) was used for Western blot analysis of YM1 protein. In both cases, protein determination was carried out using the Bradford method.

Western blot analysis was performed as described elsewhere, with specific antibodies against YM1 [10]. Low-range molecular weight biotinylated SDS-PAGE standards were run in each gel to ascertain the position of each band. Immobilon-P was used for electroblotting and the immunocomplexes were detected by enhanced chemiluminescence using the ECL Plus Western Blotting Kit. Films were then developed, scanned, and analysed densitometrically with the Image Pro Plus v.5.1 image analysis system.

TNF-α was quantified with the murine TNF-α ELISA development kit according to the supplier's protocol. In all samples, TNF-α determination was performed in duplicate.

## 2.7. Statistical Analysis

For each parameter, kurtosis and skewness moments were calculated to test whether the data were normally distributed. A two-way ANOVA was performed with two factors: time with six levels (0 days, 1 day, 3 days, 5 days, 15 days, and 38 days) and treatment with four levels (sham, α-AA, NMDA, and NMDA + α-AA). When significant two-way interactions were observed, individual comparisons were performed using one-way ANOVAs followed by the LSD post hoc test. Single comparisons between the sham and NMDA groups at one time point were made using Student's t-test (t). When normality was not achieved, the values of all groups were compared using the nonparametric Kruskal-Wallis test (KW) followed by the Mann-Whitney U test. In all cases, P<0.05 was considered statistically significant. Results are expressed as mean ± SEM. All analyses were performed with the STATGRAPHICS software (STSC Inc., Rockville, MD, USA).
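The following Python sketch illustrates the decision flow described above (normality screen, two-way ANOVA with interaction, nonparametric fallback). It is a hedged reconstruction, not the authors' analysis: the original work used STATGRAPHICS, and the data layout, column names, and synthetic values here are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Synthetic long-format data standing in for one measured parameter;
# column names and values are hypothetical.
rng = np.random.default_rng(1)
days = [0, 1, 3, 5, 15, 38]
groups = ["sham", "a-AA", "NMDA", "NMDA+a-AA"]
df = pd.DataFrame([
    {"value": rng.normal(10, 2), "day": d, "treatment": g}
    for d in days for g in groups for _ in range(6)  # n = 6 rats/group
])

alpha = 0.05
# Normality screen via skewness/kurtosis, as described above.
normal = (stats.skewtest(df["value"]).pvalue > alpha
          and stats.kurtosistest(df["value"]).pvalue > alpha)

if normal:
    # Two-way ANOVA: time (6 levels) x treatment (4 levels), with interaction.
    model = smf.ols("value ~ C(day) * C(treatment)", data=df).fit()
    print(anova_lm(model, typ=2))
else:
    # Nonparametric fallback: Kruskal-Wallis across treatment groups,
    # followed by a pairwise Mann-Whitney U comparison (e.g., sham vs. NMDA).
    h, p = stats.kruskal(*[g["value"] for _, g in df.groupby("treatment")])
    print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
    u, p_u = stats.mannwhitneyu(df.loc[df["treatment"] == "sham", "value"],
                                df.loc[df["treatment"] == "NMDA", "value"])
    print(f"Mann-Whitney U = {u:.1f}, p = {p_u:.4f}")
```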
## 3. Results

### 3.1. NMDA Induced Fast, Enduring Microgliosis with Phenotype Changes within Fifteen Days

Observation of Nissl-stained sections revealed that 20 nmol NMDA produced major layer disorganization, neuronal loss, and gliosis in all layers of the hippocampal formation (HF). Animals from the sham group showed no cellular alterations except for the needle scar (Figure 1(a)). The hippocampal lesion extended within 2 mm around the injection site in the rostrocaudal axis, with a slight tendency to grow in the caudal direction. In the NMDA group (Figure 1(b)), we observed an increase in the area of the lesion as a consequence of an initial massive neuronal loss within the first 3 days, followed by more discrete neuronal death that was still progressing at 38 days (Figure 1(c)).

Figure 1: Timing of the microglial reaction to NMDA in the hippocampus. Photomicrographs illustrate the hippocampal injury and microglial reaction to the NMDA-induced lesion at the injection site. Photomicrographs of cresyl violet-stained brain sections of (a) sham rats and (b) NMDA rats 15 days after the lesion. (c) Quantification of the area of lesion relative to the whole HF in cresyl violet-stained sections. IB4 histochemistry of sham (PBS) (d) and NMDA rats (e) 15 days after the lesion. (f) Distribution of specific binding sites for [3H]PK-11195 in a coronal rat brain section 15 days after 20 nmol NMDA injection; note the NMDA-induced increase of specific binding (arrowhead) seen as a white area in the left hippocampus. Graphs show the quantification of the area of microglial reaction in IB4-stained sections (g), the area of enhanced [3H]PK-11195 specific binding (h), and the hippocampal concentration of TNF-α (i) of PBS and NMDA rats during the 38 days of the study. Asterisks in (a), (b), (d), and (e) indicate the injection site. CA1, Cornu Ammonis area 1; DG, dentate gyrus. ∗P<0.05 different from PBS; #P<0.05 different from day 0 (LSD post hoc test in (d), (f); KW test in (e)) (n=6 PBS; n=6 NMDA rats). Bar: 1 mm in (a), (b), (d), and (e) and 2 mm in (f).

Figure 2: Activation of glucocorticoid receptors (GR) induced by NMDA injection in the hippocampus. (a) Photomicrographs illustrate GR immunohistochemistry in the hippocampus of sham animals and NMDA rats 5 days after the injection. Asterisks indicate the injection site and insets illustrate the GR immunostaining in neurons (arrow) and glial cells (arrowheads). Note that pyramidal cells (arrows) are much bigger than glia (arrowheads); they also present a pyramidal shape with a dendritic tree, while reactive microglia have a smaller, round shape. (b) Quantification of the nucleus/cytoplasm ratio of GR immunolabeling in hippocampal neurons and glial cells of sham (PBS) and NMDA-lesioned rats. Confocal photomicrographs show the double immunohistochemistry of GR (green) with specific cellular markers (red) in the hippocampal CA1 layer of NMDA-lesioned rats 5 days after the lesion. NeuN-GR double immunostaining of PBS (c–e) and NMDA (f–h) rats evidences increased GR immunolabeling in the nucleus of neurons of NMDA rats (arrowheads). GFAP-GR double immunostaining in PBS (i–k, arrowheads) and NMDA (l–n) rats shows increased GFAP immunolabeling in astrocytes of NMDA rats not associated with changes in GR immunoreactivity, which indicates that the NMDA-induced GR activation quantified in panel (b) is not astroglial.
CD11b-GR double immunohistochemistry in PBS (o–q) and NMDA (r–t) rats evidences increased GR immunolabeling in the nucleus of reactive microglia of NMDA rats. ∗P<0.05; ∗∗P<0.01 different from PBS (Student's t-test) (n=6 rats/group). Bar: 500 μm in (a) and 10 μm in (c)–(t) and insets in (a).

IB4 histochemistry stained hyperplastic and hypertrophic microglia in all NMDA-lesioned animals (Figure 1(e)). The reactive microglia area increased with time and reached a maximal value at 15 days, which was maintained at day 38 (Figure 1(g)). Quantification of microglial TSPO expression by [3H]PK-11195 in vitro autoradiography (Figure 1(f)) also showed an area of microglial reaction to the NMDA injection into the hippocampus (Table 1). In the NMDA-lesioned versus the sham group, both the intensity (Table 1) and the area (Figure 1(f)) of [3H]PK-11195 specific binding increased between day 1 and day 38 (Figure 1(h)). The sham group showed [3H]PK-11195 specific binding that was increased at the injection site only in a small area on day 1.

Table 1: [3H]PK-11195 and [3H]corticosterone specific binding to brain sections after 20 nmol NMDA injection in the hippocampus.

| | 0 days | 1 day | 5 days | 15 days | 38 days |
|---|---|---|---|---|---|
| *Specific TSPO binding* | | | | | |
| Sham hippocampus | 481 ± 98 | 1532 ± 107# | 618 ± 187 | 483 ± 341 | 483 ± 102 |
| NMDA hippocampus | 480 ± 73 | 1495 ± 189* | 1644 ± 255*# | 1571 ± 346*# | 1440 ± 235*# |
| Parietal cortex | 490 ± 11 | 474 ± 64 | 472 ± 40 | 473 ± 18 | 488 ± 9 |
| Piriform cortex | 468 ± 5 | 439 ± 35 | 425 ± 33 | 453 ± 37 | 465 ± 2 |
| *Specific GR binding* | | | | | |
| Sham hippocampus | 68 ± 72 | 101 ± 94 | 74 ± 37 | 81 ± 13 | 75 ± 21 |
| NMDA hippocampus | 76 ± 62 | 198 ± 104 | 448 ± 247*# | 91 ± 23 | 105 ± 32 |
| Parietal cortex | 101 ± 19 | 102 ± 14 | 85 ± 19 | 77 ± 32 | 102 ± 20 |
| Piriform cortex | 103 ± 24 | 74 ± 19 | 89 ± 6 | 73 ± 3 | 102 ± 42 |

The 18 kDa translocator protein (TSPO) concentration was estimated by [3H]PK-11195 specific binding. 5 μM RU-28362 allowed discrimination of the [3H]corticosterone specific binding to glucocorticoid receptors (GR). Data are expressed in fmol/mg of protein (mean ± SEM). *p<0.05 different from 0 days; #p<0.05 different from sham values (KW test).

The TNF-α concentration increased in both the sham and NMDA-lesioned groups only within the first five days of the lesion (Figure 1(i)). Between days 1 and 3, the concentration of TNF-α in the NMDA group increased with respect to the sham group and progressively returned to initial values by day 38. In the sham group, we only found a small transient increase in TNF-α concentration at day 3.

We quantified the time-related changes of glucocorticoid receptor (GR) concentrations in the hippocampus by immunohistochemistry (Figure 2) and by in vitro autoradiography of [3H]corticosterone binding (Table 1). In the sham group, single immunohistochemistry of GR showed widespread staining in the hippocampus and adjacent areas. We observed increased GR immunoreactivity (GR-IR) in the lesioned HF, mostly in the nucleus of glial cells and surviving neurons, between days 1 and 5 after NMDA injection (Figures 2(a) and 2(b)). Double immunohistochemistry and confocal analysis of the anti-GR antibody with either anti-NeuN, anti-GFAP, or anti-CD11b antibodies evidenced short-term GR distribution changes in surviving neurons and reactive microglia but not in reactive astrocytes (Figures 2(c)–2(t)).
We found the most evident GR-IR increase in the nucleus of round-shaped CD11b-immunostained cells at day 5 after the lesion (Figures 2(o)–2(t)), whereas the weak GR labeling of most GFAP-immunopositive cells remained and only a few scattered cells showed increased nuclear staining (Figures 2(i)–2(n)).

We calculated the nucleus/cytoplasm GR-IR ratio as an estimate of GR activation, since unactivated GR localizes predominantly within the cytoplasm and migrates to the nucleus upon hormone binding. At day 1 the nucleus/cytoplasm GR-IR ratio was strongly increased in neurons of the lesioned CA1 strata and remained increased at day 5. We found marked differences in microglial cells (Figure 2(b)): in these cells, the nucleus/cytoplasm GR-IR ratio increased at days 1 and 5 after NMDA microinjection and then gradually decreased to sham group values by day 38.

Experiments with [3H]corticosterone specific binding to GR showed low levels in the HF and the parietal and piriform cortex of sham animals (Table 1). In the hippocampus of NMDA-lesioned rats, [3H]corticosterone specific binding progressively increased, reaching a maximal value of 448 ± 247 fmol/mg protein at day 5, and then decreased to sham levels at day 15 (Table 1).

### 3.2. Within Fifteen Days α-AA Did Not Modify NMDA-Mediated Neuronal Loss

Stereotaxic injection of 20 nmol NMDA produced major neuronal loss mainly in the CA1 layers (Figures 3(a)–3(c)). The area of neuronal loss extended to 34 ± 5% of the pyramidal CA1 within the first three days (Figure 3(g)) and then progressively increased to reach a maximal value at day 38. The same pattern was observed in the neuronal density measured in NeuN-immunostained sections (Figure 3(j)).

Figure 3: α-AA effect on the NMDA-induced hippocampal lesion. Photomicrographs of cresyl violet-stained brain sections of sham rats (a), NMDA rats 15 days (b) and 38 days (c) after the lesion, α-AA rats (d), and α-AA + NMDA rats 15 days (e) and 38 days (f) after the lesion. Asterisks indicate the injection site. Graphs (g) to (i) show the quantification of the area of neuronal loss in cresyl violet-stained sections in the pyramidal CA1 (g), pyramidal CA2-CA3 (h), and dorsal dentate gyrus (i) of sham (PBS), α-AA, NMDA, and α-AA + NMDA rats at postlesion days 1, 3, 15, and 38. Graphs (j) to (l) show the estimated density of NeuN-immunopositive neurons in the pyramidal CA1 (j), pyramidal CA2-CA3 (k), and dorsal dentate gyrus (l) of the same rats. ∗P<0.05 different from PBS; #P<0.05 different from NMDA (LSD post hoc test) (n=6 rats/group). Bar: 500 μm.

When we coinjected 6.4 nmol α-AA with 20 nmol NMDA (α-AA + NMDA rats), all lesion parameters were similar in all hippocampal strata of the NMDA and α-AA + NMDA groups during the first five days (Figures 3(d)–3(f)). In the α-AA + NMDA group, we observed increased tissue disorganization at day 15, and this area of lesion reached a maximal value at day 38 (Figures 3(d)–3(f)). At all time points, neuronal density in the HF of the α-AA + NMDA and NMDA groups was similar (Figures 3(j)–3(l)).

The injection of 6.4 nmol of α-AA alone caused no change in hippocampal size, lesion area, or neuronal density when compared with sham group values at any of the studied time points (Figures 3(g)–3(i)).
### 3.3. α-AA Enhanced the S100B/GFAP Ratio Three Days after the NMDA-Induced Lesion

Treatment with α-AA alone induced a loss of GFAP immunoreactivity (IR) in a small area of the hippocampus around the injection site that had already recovered by day 3 after injection (Figures 4(a)–4(d)). In the NMDA group, the GFAP-IR increase was maximal at days 1–3 (Figures 4(c) and 4(d)) and then progressively decreased by day 38. In the α-AA + NMDA group only a few astrocytes showed a weakly reactive morphology at day 1 (Figures 4(e)–4(h)). At day 3, the area of enhanced GFAP-IR covered 29 ± 7% of the HF (Figure 4(i)) and increased slightly by day 38 to reach values similar to those obtained in the NMDA group at this time point. At day 38, reactive hippocampal astrocytes of α-AA + NMDA rats presented enhanced hypertrophy and hyperplasia and stronger GFAP-IR than those of the NMDA group (Figures 4(l) and 4(m)).

Figure 4: α-AA effect on the NMDA-induced astroglial reaction in the hippocampus. GFAP immunostaining 1 (a) and 3 days (b) after injection in sham animals, 1 (c) and 3 days (d) after α-AA, 1 (e) and 3 days (f) after NMDA lesion, and 1 (g) and 3 days (h) after α-AA + NMDA injection (arrowheads show the injection site). Note in (c) and (g) the lack of GFAP-immunopositive cells in the surroundings of the injection site. Graphs show the quantification of the area of astrogliosis in the whole hippocampus in GFAP-immunostained (i) and S100B-immunostained (j) sections of sham (PBS), α-AA, NMDA, and α-AA + NMDA rats at postlesion days 1, 3, 15, and 38. (k) Estimation of the S100B/GFAP ratio, calculated as the quotient between the area of increased S100B and the area of increased GFAP in these rats. Photomicrographs illustrate GFAP-immunoreactive cells of NMDA (l) and α-AA + NMDA (m) rats and S100B-immunoreactive cells of NMDA (n) and α-AA + NMDA (o) rats in the hippocampal parenchyma 3 days after the lesion. ∗P<0.05 different from PBS; #P<0.05 different from NMDA; $P<0.05 different from day 15 (LSD post hoc test) (n=6 rats/group). Bar: 1 mm in (a)–(h) and 10 μm in (l)–(o).

S100B immunohistochemistry showed a very small area of increased IR, corresponding to the area of GFAP-IR loss, at day 1 in the α-AA group. In the NMDA rats, S100B-IR was maximal at day 1 and then progressively decreased by day 38, following the same dynamics as GFAP-IR (Figures 4(j), 4(n), and 4(o)). In the α-AA + NMDA group the increased S100B-IR area was not different from the sham group at day 1 and reached a maximal value at day 3, before progressively decreasing to values similar to the NMDA group at day 38. When we calculated the ratio between S100B and GFAP (S100B/GFAP, Figure 4(k)) we found similar values in the sham and NMDA groups, whereas we found a significant increase in the α-AA + NMDA group at days 3, 5, and 15 (Figure 4(k)). We found no expression of S100B in microglia using double immunohistochemistry and confocal analysis of anti-S100B with anti-CD11b antibodies (data not shown).

### 3.4. α-AA Increased NMDA-Induced Microgliosis and YM1 Production Three Days after the Lesion

In the α-AA group, IB4 histochemistry stained morphologically reactive microglia already at day 1, in an area that reached a maximal value at day 3 before decreasing to sham group values at day 38 (Figure 5(a)).
In the NMDA-lesioned group, the area of microglial reactivity reached significance at day 1, increased by day 15, and then decreased at day 38 (Figure 5(a)). In the α-AA + NMDA group, microglial reactivity was already significant at day 1 and covered a maximal area at day 3, before progressively decreasing by day 38 (Figure 5(a)). At days 1 and 3, reactive microglia of the α-AA + NMDA group presented a ramified morphology clearly different from the shape shown by microglial cells of the NMDA group, which presented a clear reduction of processes (Figures 5(b) and 5(c)).

Figure 5: α-AA effect on the NMDA-induced microglial reaction in the hippocampus. (a) Quantification of the hippocampal area of microglial reaction in IB4-stained sections of sham (PBS), α-AA, NMDA, and α-AA + NMDA rats at postlesion days 1, 3, 15, and 38. Photomicrographs show hippocampal IB4-stained cells illustrating the different microglial morphology of NMDA (b) and α-AA + NMDA (c) rats 3 days after the lesion. (d) TNF-α hippocampal concentration in PBS, α-AA, NMDA, and α-AA + NMDA rats during the first 5 days of the study. (e) Immunoblots and densitometric analysis (graphs) of YM1 in the hippocampus of PBS, α-AA, NMDA, and α-AA + NMDA rats during the first 5 days of the study; values were normalized to β-actin bands. (f) Quantification of the nucleus/cytoplasm ratio of GR immunolabeling in hippocampal glial cells 3 days after the lesion in the four groups of the study. P, PBS; A, α-AA; N, NMDA; NA, α-AA + NMDA. ∗P<0.05 different from PBS; #P<0.05 different from NMDA; $P<0.05 different from α-AA (LSD post hoc test in (a), (e); Student's t-test in (d), (f)) (n=6 rats/group). Bar: 10 μm.

We assessed the TNF-α concentration in the hippocampus by ELISA. In the HF of the α-AA group, the TNF-α concentration was increased at days 1 and 5 when compared with sham group values (Figure 5(d)). In the NMDA group the TNF-α concentration in the HF increased at day 1, was maximal at day 3, and then decreased at day 5. In the α-AA + NMDA group, the TNF-α concentration was significantly lower than in the NMDA group (Figure 5(d)).

We assessed the YM1 concentration in the hippocampus by Western blot. The YM1 concentration was similarly enhanced in the α-AA and NMDA groups when compared with sham group values (Figure 5(e)). In the α-AA + NMDA group, the YM1 concentration was significantly higher than in all other groups at days 1 and 3, reaching a maximal increase at day 3 compared with the sham group (Figure 5(e)).

The nucleus/cytoplasm ratio of GR-IR in microglia was similarly increased in the α-AA and α-AA + NMDA groups compared with sham group values at day 3, whereas this increase was higher in the NMDA group (Figure 5(f)).
## 4. Discussion

Brain NMDA microinjection triggers an excitotoxic process that expands through several dynamic processes, resulting in astro- and microglial reactions and massive neuronal loss [29] that may involve recruitment of peripheral neutrophils and macrophages [35]. The acute excitotoxic process is thought to be brought under control by retaliatory mechanisms within 15 days, but an underlying long-term process lasting at least 38 days continues, associated with tissue atrophy, neuronal loss, and chronic microgliosis in the absence of an astroglial reaction [32]. The microglial response may initially exert neuroprotective effects and afterwards cause neuronal injury or death [2]. Thus, interventions that promote the regeneration of damaged tissue long after injury should take this dynamic phenomenon into consideration, in particular those intended to interfere with neuroinflammatory pathways [7, 36, 37].

Although astrocytes exert neuroprotective activity against excitotoxicity [24], we found here that early astroglial ablation with α-AA does not enhance the NMDA-induced hippocampal lesion within the first 15 days. The injured hippocampal area, in particular the CA2 and CA3 subfields, increased only at day 38. In parallel, α-AA increased the NMDA-induced microglial reaction from day 1 on.
These results argue for an early potentiation of microglial neuroprotective activity while astrocytes are compromised [38].

Our results of IB4 histochemistry and TSPO labelling by in vitro autoradiography evidenced an early microglial activation that persisted for at least 38 days, during which TNF-α secretion and YM1 and GR gene expression followed different patterns of activation. Specifically, the decrease of the initial massive production of TNF-α did not parallel the morphological long-term activation, YM1 secretion took place transiently during the first three days of the lesion, and GR translocation to the nucleus increased progressively up to day 5 before decreasing to control levels. Since the microglial phenotype results from the balance between cytotoxic and trophic activities [1], these patterns likely reflect the evolution of the microglial role during excitotoxicity.

The production of YM1 in the first five days of the lesion is related to a neuroprotective activity [9], as evidenced after olfactory bulb axotomy, where YM1 secretion reduces the inflammatory process [39]. By contrast, TNF-α production and GR translocation to the nucleus reflect more complex roles of microglial activity. TNF-α secretion is crucial for autocrine fast microglial activation with cytotoxic effects, a process also fed by the concomitant release of TNF-α from reactive astrocytes [40, 41]. However, the absence of TNF-α in knockout mice delays NO-mediated microglial activation, resulting in further exacerbated microgliosis [20] that leads to amplification of secondary excitotoxicity. TNF-α can bind to two specific receptors: TNFR1, which has an intracellular death domain, and TNFR2, which has a higher affinity and is mostly involved in neuroprotection [42]. Consequently, at low concentrations TNF-α activates TNFR2-mediated neuroprotection, whereas at high concentrations TNFR1 contributes to cell injury [43]. Similarly, at baseline concentrations and as the immediate response to acute stress, glucocorticoids exert an anti-inflammatory activity on microglia [44]. However, chronically high concentrations of glucocorticoids exacerbate cytotoxic microgliosis and increase hippocampal injury due to excitotoxicity or ischemia [14]. Such a sustained elevated glucocorticoid concentration downregulates microglial GR expression as a prerequisite to specifically inhibiting the anti-inflammatory actions [15]. Therefore, the early activation and later downregulation of microglial GR observed here may enhance microglial cytotoxic and inflammatory activity.

S100B modifies astrocytic, neuronal, and microglial activities, and its effects depend on both its extracellular concentration and the expression of its receptor RAGE. In activated microglia, S100B at micromolar concentrations upregulates interleukin-1β and TNF-α expression via RAGE [45], while at nanomolar concentrations S100B blocks that expression [18]. In turn, microglial reactive oxygen species or TNF-α modify the RAGE response to S100B [17], in a context of microglia-astroglia cross talk that integrates different signaling systems. In this regard, our results suggest that factors associated with hippocampal excitotoxic injury may trigger an early trophic, neuroprotective reaction in microglia. Afterwards, variations in the concentrations of these same factors may turn the chronic microgliosis into a proinflammatory, cytotoxic activity.

The ablation of astroglia using α-AA results in a similar pattern of NMDA-induced hippocampal damage.
α-AA did not modify the NMDA-induced hippocampal lesion area or neuronal loss during the first fifteen days, and we detected a discrete increase of these lesion parameters only in the CA2 and CA3 subfields at day 38, when astroglia had fully recovered. The α-AA-induced potentiation of microgliosis that we found under these conditions supports an early neuroprotective microglial activity [46]. As previously shown [24, 46], α-AA inhibits the main astroglial pathways of glutamate removal, resulting in increased synaptic glutamate and oxidative stress [23, 24, 26, 47], which constitute activation signals for microglia. This initial microglial phenotype may represent an attempt to preserve neurons during astroglial dysfunction. Activated microglia express glutamine synthetase and glutamate transporter 1 in the early stages of excitotoxicity [47]. Microglial glutamate uptake starts at 4 h and lasts up to 72 h after the lesion [48], which indicates that reactive microglia may account for the control of synaptic glutamate homeostasis after early astrocyte injury. Furthermore, the early increase of the YM1 concentration in the HF of α-AA + NMDA rats also suggests a neuroprotective microglial activity that may potentiate the recovery of astrocytes, as indicated by the slight enhancement of GFAP concentration found here at day 3 in α-AA rats.

Initially, the α-AA + NMDA lesion presented a decreased TNF-α concentration and a delay in the increase of S100B. Also, within the first three days, reactive microglia presented a morphological transition from a phagocytic to a ramified shape. Thus, when astrocytes are depleted, the retaliatory mechanisms against excitotoxicity would include enhanced neurotrophic activity of the microglia to counteract the limited functional capacity of the regenerating astrocytes. Taken together, these data indicate that recovery of astrocyte-microglia cross talk takes priority over astroglial cytoskeletal rearrangement and proliferation.

From day 3 on, our results in α-AA + NMDA rats showed a sustained microglial reaction and a heightened S100B/GFAP ratio that resulted in an increased area of lesion, which reached the CA2 and CA3 hippocampal subfields at day 38, but no changes in the neuronal density of the lesioned CA1 subfields. These results also suggest that the chronic cytotoxic activity of microglia, rather than the intensity of the injury, contributes to the growth of the affected areas during neurodegeneration. As the increased area of S100B-IR may not translate into increased S100B release and could simply reflect astrocytosis, further experiments are needed to quantify in vivo the S100B release long after an excitotoxic insult and to characterize its effects on microglial inflammatory activity.

## 5. Conclusions

In conclusion, in this paper we show that NMDA-induced excitotoxicity in the hippocampus induces chronic microgliosis. In this model, an initial release of YM1 followed by TNF-α production and the transient activation of GR may determine a putative functional switch of microglia. Although a role for infiltrating neutrophils and macrophages cannot be ruled out, astroglial removal in the early stages of the lesion may potentiate an initial neuroprotective role of microgliosis and reveals S100B as a key modulator of the microglial phenotype. These results also contribute to understanding the interplay between the trophic, neuroprotective, inflammatory, and cytotoxic functions of astrocytes and microglia during the course of brain injury.
The fine control of these processes requires a dynamic understanding of their interactions to allow effective development of approaches to neuroprotection. --- *Source: 102419-2015-04-21.xml*
# Impact of β-Turn Sequence on β-Hairpin Dynamics Studied with Infrared-Detected Temperature Jump

**Authors:** Alexander Popp; Ling Wu; Timothy A. Keiderling; Karin Hauser
**Journal:** Spectroscopy: An International Journal (2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/102423
**Keywords:** Hairpin peptide; tryptophan; tryptophan zipper; temperature jump kinetics; infrared spectroscopy

---

## Abstract

Folding dynamics for β-structure loss and disordered-structure gain were studied in a model β-hairpin peptide based on Cochran's tryptophan zipper peptide Trpzip2, but with an altered Thr-Gly (TG) turn sequence, that is, SWTWETGKWTWK, using laser-induced temperature-jump (T-jump) kinetics with IR detection. As has been shown previously, the TG turn sequence reduces the thermodynamic β-hairpin stability as compared to the Asn-Gly sequence used in Trpzip2 (TZ2-NG). In this study, we found that the TG turn slows down the overall relaxation dynamics as compared to TZ2-NG, which were studied at higher temperatures where the time constants show little difference between relaxation of the β-strand and the disordered conformation. These time constants become equivalent at lower temperatures for TZ2-TG than for TZ2-NG. The correlation of thermodynamic stability and relaxation rates suggests that the change from the NG to the TG turn results in a slowing of folding (lower kf) with less change of the unfolding rate (ku), assuming two-state behavior at higher temperatures.

---

## Body

## 1. Introduction

The simplest model of a β-sheet is the β-hairpin, consisting of a single sequence folded back onto itself to generate two antiparallel strands coupled by a turn. Hairpins have been proposed to act as nucleation sites for protein folding [1–4], and, in particular, β-sheet formation is biomedically relevant due to its role in many diseases, especially amyloid-related ones [5–7]. As compared to sheet structures, β-hairpins are more easily studied and modeled since only two strands interact in a restricted manner, rather than multiple strands as is the case for aggregate or fibril structures.

β-hairpins can be stabilized either by selecting turn residues that tend to bring the strands close enough to enable cross-strand H-bonding, or by interaction, normally hydrophobic, of residues on opposite strands to stabilize a compact conformation and facilitate turn and H-bond formation [1, 4, 8–10]. The role of the turn in stabilizing the hairpin has been the topic of several equilibrium studies that attempt to propose mechanisms for the folding [1, 11, 12]. Some of the most studied hairpins are the tryptophan zipper (Trpzip) peptides of Cochran and coworkers [8], which have pairwise Trp-Trp interactions that can be monitored by a strong exciton-based circular dichroism (CD) arising from the cross-strand contact [13–15]. By contrast, amide I′ IR spectra (C=O stretch vibration) sense the peptide backbone conformation and are particularly useful for β-sheet studies [13, 14, 16–23]. Trpzip molecules at low temperatures have characteristic β-structure IR spectra, which revert to a disordered-structure spectrum at higher temperatures [16, 17, 22–24]. We have separately reported equilibrium studies of various modifications of the Trpzip2 (TZ2-NG) hairpin sequence Ser-Trp-Thr-Trp-Glu-Asn-Gly-Lys-Trp-Thr-Trp-Lys (SWTWENGKWTWK) [8], varying both the Trps and the turn sequence residues [13, 14, 16, 17, 25, 26].
We and others have also used laser-initiated T-jump spectroscopy to study the dynamics of the TZ2-NG sequence and have incorporated isotopic labels to specify the sequence dependence of the folding mechanism [17, 27, 28]. In this paper, we compare dynamics for this type of hairpin by modifying the turn sequence (Figure 1), changing from Asn-Gly (TZ2-NG) to Thr-Gly (TZ2-TG), which lowers the hairpin stability, as we have shown separately [25], and slows its relaxation dynamics, as will be shown here.

Figure 1: Sequences of Trpzip2 (TZ2) β-hairpin peptides with different turns, Thr-Gly (TZ2-TG) and Asn-Gly (TZ2-NG). Dynamics have been studied for the TG variant and compared to earlier studies of the NG variant.

## 2. Experimental Section

### 2.1. Peptide Synthesis and IR Sample Preparation

Peptides TZ2-TG with the Thr-Gly turn were synthesized and characterized at UIC using a manual solid-state method as described previously [13]. For this study, purified peptides were dissolved directly in D2O for equilibrium IR measurements without adjusting the pH or removing the TFA counterions remaining from the peptide synthesis. Previous studies had varied pH and concentration as well as removed TFA, but for the experiments reported here a simpler approach was taken, bypassing lyophilization, which additionally enhanced solubility. For IR studies, TFA-containing samples were prepared at low pH and ~20 mg/mL (~12 mM), which evidenced good solubility and reversible folding.

### 2.2. Infrared Equilibrium Spectra

Infrared spectra in thermal equilibrium were measured in the range of 5°C–85°C in 5°C steps at 4 cm−1 resolution using a Bruker Equinox 55 FTIR spectrometer with a HgCdTe detector. The temperature was controlled by a thermostatted water bath (Lauda Ecoline E300) coupled with a home-built cell holder. The peptide solution was sealed between two CaF2 windows separated by a 100 μm Teflon spacer. Both the reference cell and the sample cell were mounted on a sample shuttle and measured sequentially after equilibration to obtain one spectrum for each temperature, so that interference due to long-term baseline drift is avoided. The thermal variations of the FTIR spectra were fit to a standard two-state equilibrium expression to determine the transition temperature, Tm, using a linear low-temperature baseline and a flat high-temperature baseline connection as previously described [16, 25].

### 2.3. T-Jump Measurements

T-jump experiments were performed on a home-built instrument which has been described in detail previously [29]. The temperature jump in the sample was obtained by a Raman-shifted 5 ns Nd:YAG laser pulse at 1909 nm. In order to reduce inhomogeneous heating of the peptide sample, the pump pulse was split into two equivalent parts aligned to impinge on the front and back sides of the sample cell. One part of the excitation pulse was delayed by 5 ns, which can reduce the occurrence of cavitation [30]. The rapid heating of the sample by the 1.9 μm laser pulse was adjusted to yield ΔT ~ 10°C for various initial temperatures in the range from 5°C to 40°C.

A lead salt laser diode (Mütek TLS 310) provided CW modes for probing transient transmission changes during the T-jump. Two modes, at 1641 cm−1 and 1663 cm−1, were used to detect the decay of the β-sheet structure and the rise of the random coil structure, respectively. The IR signal was detected using a photovoltaic HgCdTe detector (20 MHz, Kolmar KMPV11-1-J2) and digitized by a transient recorder board for further data analysis.
For each measured temperature and wavelength, one dataset of 2000 corrected transients was acquired as the difference between 2000 transients collected with the probe laser on and 2000 with the probe laser blocked, in order to eliminate background radiation and potential baseline drift.

The observed transients were fitted with a biexponential function,

ΔA(t) = A0 + A1·exp(−t/τ1) + A2·exp(−t/τ2),   (2.1)

in order to describe both the dynamics of the TZ2-TG relaxation and the solvent cooling. Fitting details have been reported previously [17, 27].
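As an illustration of the fit in (2.1), here is a minimal Python sketch using a nonlinear least-squares fit. It is not the authors' fitting code: the time base, amplitudes, time constants, and noise level of the synthetic transient below are assumptions chosen only to mimic a microsecond peptide relaxation on top of slower solvent cooling.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A0, A1, tau1, A2, tau2):
    # Eq. (2.1): fast peptide relaxation (tau1) plus slower solvent cooling (tau2).
    return A0 + A1 * np.exp(-t / tau1) + A2 * np.exp(-t / tau2)

# Synthetic transient standing in for a corrected T-jump trace; all values
# here are illustrative assumptions.
t = np.linspace(0, 500e-6, 2000)                      # 0-500 us time base
true = biexp(t, 0.0, 1.0e-3, 10e-6, 0.5e-3, 300e-6)   # tau_obs = 10 us
rng = np.random.default_rng(0)
dA = true + rng.normal(0, 2e-5, t.size)               # add detector noise

p0 = [0.0, 1e-3, 5e-6, 0.5e-3, 200e-6]                # initial guesses
popt, _ = curve_fit(biexp, t, dA, p0=p0)
print(f"tau_obs = {popt[2] * 1e6:.1f} us")            # fitted relaxation time
```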
## 3. Results and Discussion

Like TZ2-NG, TZ2-TG adopts a β-sheet secondary structure mainly stabilized by Trp-Trp interaction [13]. However, TZ2-TG is less stable, having an apparent Tm of 328 K for the sample conditions used for the T-jump study, as compared to 345 K [8] or 352 K [25] for TZ2-NG, depending on conditions, as we reported previously. The role of modifying the turn sequence from Asn-Gly to Thr-Gly in terms of dynamic behavior becomes apparent in the T-jump experiments.

The relaxation of the TZ2-TG peptide sample was probed at two different wavelengths, and the relaxation times τobs were determined after the temperature jump for a series of peptide temperatures. The 1641 cm−1 position primarily senses the cross-strand interaction and 1663 cm−1 the disordered structure. Four representative transients for T = 312–323 K are shown in Figure 2. These data depict the dynamics of the peptide alone, since they have been corrected for contributions from the D2O relaxation. Table 1 summarizes the temperature-dependent relaxation times τobs measured for TZ2-TG.

Table 1: Observed relaxation times τobs for TZ2-TG initiated by a T-jump of ΔT = 10°C and probed at two different wavelengths, 1641 cm−1 (correlated with β-structure decay) and 1663 cm−1 (monitoring the disordered fraction). The temperatures refer to the final peptide temperature after the T-jump.

| Probe | T (K) | τobs (μs) |
|---|---|---|
| 1641 cm−1 | 308 | 17.1 ± 1.1 |
| | 316 | 6.8 ± 0.2 |
| | 323 | 3.3 ± 0.1 |
| 1663 cm−1 | 290 | 47.3 ± 14.9 |
| | 297 | 21.6 ± 0.9 |
| | 306 | 16.1 ± 0.5 |
| | 312 | 11.5 ± 0.4 |
| | 320 | 2.8 ± 0.7 |

Figure 2: Representative transients of TZ2-TG after the T-jump. The traces evidence different curvature and signal amplitude, indicating temperature dependence in the unfolding/folding dynamics of TZ2-TG.

The decrease of the TZ2-TG peptide relaxation time with increasing temperature is consistent with normal Arrhenius behavior and parallels the data that we have observed on several other (strand-centered) variants of TZ2-NG [17, 27] and helical peptides [31]. Unlike for TZ2-NG and its isotopic variants, the relaxation rates of the β-structure (cross-strand H-bonding) and the disordered structure did not show a clear differentiation for TZ2-TG. When the temperature is varied, the rates (relaxation times) vary in a similar manner for detection at both 1663 cm−1 and 1641 cm−1. Due to limitations in signal strength, the 1641 cm−1 data at lower temperatures did not allow reliable analysis of the transients and consequently are not reported.

For two of the three higher comparable temperature points (above 300 K), the relaxation times for the transients monitored at 1641 cm−1 (correlated with β-strand unfolding) and at 1663 cm−1 (monitoring the disordered fraction) are equivalent within the error, and one could conclude that the two processes have the same rate. Such high-temperature behavior was also seen previously for TZ2-NG, but only above 325 K.
Since TZ2-TG is destabilized, as seen by its lower Tm, the temperature region with equivalent rates also seems to have shifted toward lower temperatures. Overall, the observed relaxation times of TZ2-TG are slower compared with TZ2-NG [17, 27], although the turn mutation destabilizes the peptide secondary structure [25]. While it is difficult to draw conclusions about Arrhenius behavior from just a few high-temperature data points, the slopes for the relaxation rates obtained when probing at the two wavenumbers are in fact similar and suggest activation energies 2-3 times higher than seen for TZ2-NG, which is consistent with the slower rates we observe. At lower temperatures, the 1663 cm−1 data do suggest a deviation from this trend and a possible differentiation between the β-strand and disordered dynamics, but confirmation of this depends on our remeasuring the kinetics with improved S/N.

The meaning of this rate change from TZ2-NG to TZ2-TG must lie in the role of Asn in the turn. Tight turns in protein hairpin structures often involve Asn. Asn-Gly alone is not sufficient to nucleate a turn, but in combination with hydrophobic interaction between the strands, as in TZ2-NG, it is quite stable. The loss of stability on change to a TG turn would imply a reduction in the folding equilibrium constant. If we were to assume this to be an approximately two-state process at high temperatures and assume negligible impact on the unfolding rate constant, ku, then the folding rate constant, kf, would be reduced for TZ2-TG as compared to TZ2-NG. For simple unimolecular relaxation kinetics,

$$k_{\mathrm{obs}} = k_f + k_u, \qquad K_{\mathrm{eq}} = \frac{k_f}{k_u}, \tag{3.1}$$

where kobs (= 1/τobs) is the observed relaxation rate and Keq is the equilibrium constant. In such a situation, the reduction in Keq would lead to a reduction in kobs, which is just what we see experimentally. However, under different conditions, if the mechanism becomes more complex and no longer two-state, such a simple analysis will not be valid.

In the low-temperature range, where we could clearly conclude that the previously studied TZ2-NG variants exhibited multistate behavior [17, 27], we cannot obtain enough data for TZ2-TG to draw similar conclusions. However, we would also expect to find a differentiation of relaxation times for loss of β-strand and rise of disordered structure. Direct comparison of the low-temperature time constants between TZ2-NG and TZ2-TG might provide a better understanding of the slower overall dynamics of a hairpin whose thermal stability is reduced by the turn.

## 4. Conclusions

In the current study, we report on the conformational dynamics of the β-hairpin peptide TZ2-TG and the influence of its turn residues on stability. Although the TG-turn residues have a destabilizing impact as compared to an NG-turn, as seen in the decreased thermodynamic transition temperature, the conformational dynamics are unexpectedly slower. We attribute that to a modification of the multistate folding mechanism caused by the altered turn.

---

*Source: 102423-2012-07-11.xml*
102423-2012-07-11_102423-2012-07-11.md
17,003
Impact of β-Turn Sequence on β-Hairpin Dynamics Studied with Infrared-Detected Temperature Jump
Alexander Popp; Ling Wu; Timothy A. Keiderling; Karin Hauser
Spectroscopy: An International Journal (2012)
Physical Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2012/102423
102423-2012-07-11.xml
**Keywords:** Hairpin peptide; tryptophan; tryptophan zipper; temperature jump kinetics; infrared spectroscopy
2012
# Response of Patients with Taxane-Refractory Advanced Urothelial Cancer to Enfortumab Vedotin, a Microtubule-Disrupting Agent

**Authors:** Makito Miyake; Nobutaka Nishimura; Tatsuki Miyamoto; Takuto Shimizu; Kenta Ohnishi; Shunta Hori; Yosuke Morizawa; Daisuke Gotoh; Yasushi Nakai; Kazumasa Torimoto; Tomomi Fujii; Kiyohide Fujimoto
**Journal:** Case Reports in Urology (2023)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2023/1024239

---

## Abstract

Enfortumab vedotin (EV), a nectin-4-directed antibody conjugated to monomethyl auristatin E (MMAE), has been approved for patients with advanced urothelial carcinoma (aUC) previously treated with platinum-based chemotherapy and immune checkpoint inhibitors. Taxane agents and MMAE share antitumor mechanisms through microtubule disruption, raising a notable concern regarding cross-resistance between these drugs. This case report describes two patients with taxane-based chemotherapy-refractory aUC who responded well to EV. A 71-year-old man (case 1) with pT3N0M0 renal pelvic UC, previously treated with three cycles of paclitaxel plus gemcitabine chemotherapy, showed a partial response to EV in metastatic lesions of the bilateral lungs and right pelvic lymph nodes. A 53-year-old man (case 2) with cT3bN2M0 bladder UC underwent platinum-based neoadjuvant chemotherapy and subsequent radical cystectomy (ypTis ypN0). He developed bilateral lung metastases and, after 20 cycles of paclitaxel plus nedaplatin chemotherapy, showed a complete response to EV in the metastatic lesions. Our experience with these two cases demonstrates that a tumor response to EV can be expected in patients with taxane-refractory aUC.

---

## Body

## 1. Introduction

Urothelial cancer (UC) of the bladder is the 12th most common cancer worldwide, accounting for 573,278 new cases and 212,536 deaths annually [1]. Despite recent advancements in systemic therapy, the prognosis of patients with advanced, unresectable, or metastatic UC (aUC) remains poor. Platinum-based chemotherapy, immune checkpoint inhibitors, taxane-based chemotherapy, and FGFR-targeted therapy are currently available for patients with aUC [2]. Taxane agents, such as paclitaxel and docetaxel, exert anticancer activity by promoting the polymerization of tubulin dimers, stabilizing microtubules, and inhibiting cell division [3, 4]. An evaluation of 370 patients from eight phase 2 trials demonstrated that taxanes plus other chemotherapeutic agents were associated with prolonged overall survival as late-line systemic therapy following prior platinum-based therapy [5].

Recently, enfortumab vedotin (EV), a nectin-4-directed antibody conjugated to monomethyl auristatin E (MMAE), has been approved for patients with aUC previously treated with platinum-based chemotherapy and programmed cell death-1 (PD-1)/programmed death-ligand 1 (PD-L1) inhibitors [6]. MMAE is a synthetic derivative of dolastatin-10 and, similar to taxanes, disrupts microtubule dynamics, in its case through inhibition of tubulin polymerization [7, 8]. The two-dimensional structures of the three microtubule-disrupting anticancer agents are shown in Figure 1, demonstrating that the structure of MMAE is not similar to that of the two taxane agents. The treatment sequence in cancer management is vital for achieving long survival.

Figure 1: Two-dimensional structures of three microtubule-disrupting anticancer agents. These three agents share a similar anticancer mechanism: disruption of microtubule dynamics, by promoting (taxanes) or inhibiting (MMAE) tubulin polymerization.
(a) Monomethyl auristatin E (MMAE) is a synthetic derivative of dolastatin-10, isolated from the sea hare Dolabella auricularia. (b) Paclitaxel is the best-known naturally sourced cancer drug and is derived from the bark of the Pacific yew tree Taxus brevifolia. (c) Docetaxel is a taxoid derived from the needles of the European yew tree Taxus baccata.

One of the biggest clinical concerns is whether taxane-refractory tumors can respond to EV and whether EV-resistant tumors can respond to taxane agents. However, data regarding cross-resistance between taxane anticancer agents and MMAE in urothelial cancer are limited. This case report describes two patients with taxane-based chemotherapy-refractory aUC who responded well to EV.

## 2. Case Presentation

### 2.1. Case 1

The patient was a 71-year-old man with localized UC of the right renal pelvis (pT3pN0 in a nephroureterectomy specimen). He received three cycles of adjuvant gemcitabine plus cisplatin (GC) chemotherapy. One year after radical surgery, right iliac lymph node metastasis developed, and he was treated with three cycles of paclitaxel plus gemcitabine (PG) chemotherapy, consisting of paclitaxel 175 mg/m2 on day 1 and gemcitabine 1,000 mg/m2 on days 1 and 8, every three weeks. The metastatic lesion did not respond to taxane-based chemotherapy (Figure 2(a)). Multiple lung metastases subsequently developed, and the lymph node metastases further progressed and invaded the bladder; palliative radiotherapy was delivered to the bladder-invading lesion to control urinary bleeding. After he received a total of 23 doses of pembrolizumab and 13 cycles of M-VAC (50%-reduced doses of methotrexate, vinblastine, doxorubicin, and cisplatin) chemotherapy, the multiple lung metastases and lymph node metastasis progressed (Figure 2(a)). He was started on a 1.25 mg/kg dose of EV (on days 1, 8, and 15 of a 28-day cycle). Because he presented with grade 3 erythema multiforme during the first cycle, the dose of EV was reduced by 20% (to 1.00 mg/kg) thereafter. The metastatic lesions responded to EV (partial response) after three cycles (Figure 2(a)). The treatment is ongoing.

Figure 2: Clinical courses of the two cases treated with taxane-based chemotherapy and enfortumab vedotin. The details of the clinical courses are described in the main text. Both patients are alive, and EV treatment is ongoing. Yellow arrows indicate metastatic lesions. Abbreviations: GCa: gemcitabine and carboplatin combination chemotherapy; GC: gemcitabine and cisplatin combination chemotherapy; MVAC: methotrexate, vinblastine, doxorubicin, and cisplatin combination chemotherapy; Pem: pembrolizumab; EV: enfortumab vedotin; PG: paclitaxel and gemcitabine combination chemotherapy; PN: paclitaxel and nedaplatin combination chemotherapy; CR: complete response; PR: partial response; SD: stable disease; PD: progressive disease.

### 2.2. Case 2

A 53-year-old man presented with cT3bN2M0 muscle-invasive bladder UC. After he received one cycle of GC chemotherapy and one cycle of gemcitabine plus carboplatin chemotherapy in the neoadjuvant setting, laparoscopic radical cystectomy with lymph node dissection and an ileal conduit was performed, and the pathological diagnosis was ypTis ypN0. Because multiple lung metastases developed within 12 months after the last dose of perioperative chemotherapy, pembrolizumab was initiated, and he received a total of eight cycles [9].
Then, paclitaxel plus nedaplatin (PN) chemotherapy, consisting of paclitaxel 200 mg/m2 on day 1 and nedaplatin 100 mg/m2 on day 1, every three to four weeks, was administered against the pembrolizumab-refractory disease. After 20 cycles of PN, the multiple lung metastases progressed; in addition, he had developed hearing impairment due to the platinum agents (Figure 2(b)). He was started on a 1.25 mg/kg dose of EV. The metastatic lesions became undetectable (complete response) after four cycles of EV (Figure 2(b)). The treatment is still ongoing without any severe adverse events.

## 3. Discussion

We described the clinical courses of two patients in whom taxane-based chemotherapy-refractory metastatic lesions responded to EV. EV is a nectin-4-directed antibody-drug conjugate (ADC) approved as a salvage treatment for aUC. ADCs are an emerging class of drugs designed to increase selectivity for cancer cells and potentially reduce toxicity by conjugating cytotoxic agents to highly specific monoclonal antibodies [4]. Hoffman-Censits et al. performed an immunohistochemical staining analysis of nectin-4 expression and reported that 58% of muscle-invasive UC cases were positive for nectin-4 with a histoscore (H-score, 0 to 300) cutoff of 15 [10]. Conjugating MMAE to the nectin-4-binding antibody provided significant clinical benefit in the EV-201 and EV-301 trials [6, 11, 12]. However, EV treatment can cause severe adverse events, including skin reactions, hematologic toxicity, hyperglycemia, and peripheral neuropathy [13]. When patients are indicated for EV administration, physicians need to weigh the oncological benefit against the risk of potential adverse events.

The two patients described in this report responded well to EV even after they had acquired taxane resistance. Taxane agents and MMAE share antitumor mechanisms through microtubule disruption, which raises a significant concern regarding cross-resistance between these drugs. The molecular mechanisms underlying taxane resistance in UC are not fully understood. Activation of the fibroblast growth factor receptor signaling pathway and epithelial–mesenchymal transition play an important role in cancer progression and paclitaxel resistance [14, 15]. Chu et al. reported that EV sensitivity is strongly associated with the luminal subtype and nectin-4 expression [16]. Some studies have suggested that exposure to cisplatin-based chemotherapy could decrease nectin-4 expression in UC cells, indicating that pretreatment could induce EV resistance [17–19]. However, little is known about the molecular mechanisms underlying MMAE resistance, especially in UC. Chen et al. utilized a functional genomics approach to identify putative biomarkers of resistance to paclitaxel and MMAE in breast cancer and found that amplification of the chromosome 17q21 region encoding the ABCC3 drug transporter gene is highly associated with resistance to both drugs [20]. Further analyses comparing MMAE-resistant and MMAE-sensitive UC are required to determine the exact mechanism underpinning EV resistance and to develop combined interventions for patients with EV-resistant aUC. Another issue to be discussed is the difference in drug delivery efficiency between EV and standard chemotherapy drugs. EV is designed to deliver its payload, MMAE, efficiently by actively targeting nectin-4, which is highly expressed in UC. Whether enough paclitaxel reached the tumor tissue to exert an antitumor effect remains an open question in our cases, because tumor biopsies were not performed and the intratumoral concentration of paclitaxel was not measured during treatment.

## 4. Conclusion
This case report suggests that a tumor response to EV can be expected in taxane-refractory aUC. However, the reverse sequence, treatment with taxane agents for EV-refractory aUC, was not examined in this report. More clinical evidence should be accumulated to establish better treatment strategies combining the multiple available systemic treatments, such as platinum-based chemotherapy, immune checkpoint inhibitors, taxane-based chemotherapy, and EV.

---

*Source: 1024239-2023-01-14.xml*
1024239-2023-01-14_1024239-2023-01-14.md
14,482
Response of Patients with Taxane-Refractory Advanced Urothelial Cancer to Enfortumab Vedotin, a Microtubule-Disrupting Agent
Makito Miyake; Nobutaka Nishimura; Tatsuki Miyamoto; Takuto Shimizu; Kenta Ohnishi; Shunta Hori; Yosuke Morizawa; Daisuke Gotoh; Yasushi Nakai; Kazumasa Torimoto; Tomomi Fujii; Kiyohide Fujimoto
Case Reports in Urology (2023)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2023/1024239
1024239-2023-01-14.xml
2023
# Neural Network Control for the Probe Landing Based on Proportional Integral Observer

**Authors:** Yuanchun Li; Tianhao Ma; Bo Zhao
**Journal:** Mathematical Problems in Engineering (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/102424

---

## Abstract

For the probe to descend and land safely, a neural network control method based on a proportional integral observer (PIO) is proposed. First, the dynamics equation of the probe in the landing-site coordinate system is derived, and a nominal trajectory meeting the prescribed constraints on all three axes is preplanned. Then a PIO designed with the LMI technique is employed in the control law to compensate for the effect of the disturbance. Finally, the neural network control algorithm is used to guarantee the double-zero control of the probe (zero relative position and zero relative velocity at touchdown) and ensure that the probe can land safely. An illustrative design example demonstrates the effectiveness of the proposed control approach.

---

## Body

## 1. Introduction

The exploration of near-earth asteroids (NEAs) will be one of the most complex tasks in future deep space exploration [1, 2]. There has been a surge in NEA mission activity, with various space agencies around the world (e.g., NASA, the European Space Agency, and the Japan Aerospace Exploration Agency) commissioning research on NEAs to determine feasible exploration missions, including (1) the NEAR probe launched by NASA, which achieved a fly-around of 433 Eros, a potato-shaped asteroid with a size of 34.4 km × 11.2 km × 11.2 km, and verified the gravitational field model of 433 Eros and the stability of frozen orbits around the asteroid [3]; (2) the Hayabusa probe from JAXA, which successfully touched down on and sampled 25143 Itokawa (owing to the smaller size and mass of Itokawa, this mission explored the asteroid by hovering [4]); and (3) ROSETTA, implemented by ESA, which will arrive at the comet Churyumov-Gerasimenko in 2014 after a decade of interplanetary flight and will make comprehensive observations of the comet over a long period [5].

In view of the complex environment around a small body, together with the long distance between the probe and the earth [6], accurate physical parameters and motion information of small bodies cannot be obtained through ground-based optical or radio telescopes. In addition, complex process uncertainty, large time delay, nonlinearity, and multivariable coupling always exist in the probe dynamic model, so ground control is no longer appropriate for deep space exploration missions; as a consequence, this poses a new challenge to the autonomous navigation, guidance, and control (GNC) technology required for landing softly on a small body. To cope with these problems, scholars worldwide have paid a great deal of attention to the GNC problem of landing on small bodies. As is well known, accurate physical parameters and motion information of small bodies are important prerequisites for a soft landing. Misu et al. [6] proposed an autonomous optical navigation and guidance method, which extracted small visual features from the images taken by the navigation camera and tracked them robustly and accurately. Kawaguchi et al. [7] discussed an autonomous optical guidance and navigation strategy for approaching small bodies.
Horneman and Kluever [8] presented a terminal area energy management (TAEM) guidance methodology that employed a trajectory planning algorithm to compute a feasible path from the current state to the desired approach and landing target state, rather than relying on a precalculated, stored database of neighboring TAEM trajectories. However, even when accurate physical parameters and motion information of a small body are available, it remains difficult to design a controller that makes the probe meet the key performance requirements of a soft landing. To solve this problem, Furfaro et al. [9] presented a high-order sliding mode variable structure control method that drives the probe to the sliding surface in finite time and overcomes the chattering effect that generally occurs in conventional sliding mode control. Crassidis et al. [10] introduced a variable-structure controller based on a Gibbs vector parameterization, a modified-Rodrigues parameterization, and a quaternion parameterization. Blackmore [11] studied robust path planning and feedback control under uncertainty, through which the stability of the system is ensured. Meissinger and Greenstadt [12] proposed a soft landing scheme that used feedback control with a radar altimeter and a three-beam Doppler radar system to land a spacecraft at Eros' north polar region with a low impact velocity. In [13], a novel robust stability condition was obtained for sliding mode dynamics by using Lyapunov theory in the delta domain. Some other approaches for the analysis and design of sliding mode control were presented in [14–16]. Apart from the position and velocity of the probe, attitude dynamics analyses also play an important role in soft landing. Kumar and Shah [17] set up the general formulation of the spacecraft equations of motion in an equatorial eccentric orbit using the Lagrangian method and analyzed the stability; control laws for three-axis attitude control of spacecraft were then developed, and a closed-form solution of the system was derived. Liang and Li [18] designed a robust adaptive backstepping sliding mode control law to stabilize the attitude of the probe and make it respond accurately to the expectation in the presence of disturbances and parametric uncertainties. Nonetheless, these methods all treat the suppression of bounded disturbances implicitly within the autonomous GNC scheme rather than using the disturbance information effectively, so the designed controller cannot meet the control requirements of the system when a large disturbance is present.

As a result of the complex deep-space environment around small bodies and the coupling effects of the probe itself, a great deal of uncertainty exists in the dynamic model, and the system is subject to complex external disturbances. At present, the main approaches to handling external disturbances include disturbance decoupling, disturbance compensation, and robust control, with disturbance compensation being especially prominent. Many scholars have proposed observer-based stability control strategies for different plants. Chadli and Karimi [19] dealt with observer design for Takagi-Sugeno (T-S) fuzzy models subject to unknown inputs and disturbances affecting both the states and the outputs of the system. Chong et al.
[20] designed a robust circle criterion observer and applied it to neural mass models; Sun et al. [21] proposed a novel speed observation scheme using an artificial neural network (ANN) inverse method to effectively reject the influence of speed detection on system stability and precision for a bearingless induction motor. In all of these works, the observer suppresses the effect of the disturbance on the system by accurately estimating the unknown disturbance.

The main advantages of the presented approach can be summarized in two aspects. One is that it combines the characteristics of the probe dynamic model with the good estimation performance of the observer to eliminate the effect of the unknown disturbance and to avoid the chattering of the control signal caused by large disturbances; this paper designs a PIO using the LMI technique, which can estimate the system states and the unknown input disturbance simultaneously. The other is that a PID neural network control algorithm is introduced in the design of the controller. It combines the advantages of the traditional PID controller with the learning and memory capability of neural networks, so it improves the rate of convergence toward the ideal position while the convergence of the system is ensured and, compared with a sliding mode control strategy, avoids the effects of the nonlinear and strongly coupled features of the system over a wide range.

This paper proceeds as follows. In Section 2, the dynamics equation of the probe is derived in the landing-site coordinate system, and the disturbance external to the system is treated as a known bounded function. In Section 3, the nominal trajectories based on the theory of suboptimal fuel consumption are planned first; then the PIO is designed using the LMI technique to estimate the unknown disturbance; finally, the PID neural network control algorithm is used to design the controller to ensure the stability and control performance of the system. In Section 4, Eros 433 is employed to demonstrate the effectiveness of the proposed control approach. Conclusions are presented in Section 5.

## 2. Small Body and Probe Dynamic Model

In this section, the body-fixed coordinate system of the small body is set up, as shown in Figure 1. Let the $o_a\text{-}x_a y_a z_a$ coordinate system $\Sigma_a$ be fixed on the small body, with the origin coinciding with the mass center of the small body, the $x_a$-axis coinciding with the minimum inertia axis, the $z_a$-axis coinciding with the spin axis, and the $y_a$-axis chosen so that the $x_a$, $y_a$, and $z_a$ axes form a right-handed coordinate system. The $o_c\text{-}x_c y_c z_c$ coordinate system is fixed on the optical navigation camera (ONC); the image plane of the ONC is defined as $o_c x_c y_c$, and the $z_c$ axis is parallel to the optical axis of the ONC and directed toward the surface.

Figure 1: Geometrical relationship of the coordinate systems.

The dynamic equations of the probe in the body-fixed frame are given as [22]

$$\ddot{R} + 2\omega \times \dot{R} + \omega \times \left(\omega \times R\right) + \dot{\omega} \times R = a + U_R + f_d, \tag{1}$$

where $R$, $\dot{R}$, $\ddot{R}$, $\omega$, $a$, $U_R$, and $f_d$ are the position vector of the spacecraft from the mass center of the target small body, its first and second time derivatives with respect to the body-fixed rotating frame, the instantaneous rotation vector of the small body, the control acceleration, the gradient of the gravitational potential $U$, and the unmodeled perturbation accelerations, mainly from solar radiation pressure and solar gravitation.

Considering that the origin of the landing-site frame $\Sigma_l$ is located at $\rho$ in $\Sigma_a$, the vector from the mass center of the target small body to the landing site, the vector $R$ satisfies, in the body-fixed coordinate system $\Sigma_a$,

$$R = C_{la}\, r + \rho, \tag{2}$$

where $r$ is the vector from the landing site to the probe in $\Sigma_l$ and $C_{la}$ is the coordinate transformation matrix from $\Sigma_l$ to $\Sigma_a$, given by

$$C_{la} = \begin{bmatrix} \cos\varphi \sin\theta & -\sin\varphi & \cos\varphi \cos\theta \\ \sin\varphi \sin\theta & \cos\varphi & \sin\varphi \cos\theta \\ -\cos\theta & 0 & \sin\theta \end{bmatrix}. \tag{3}$$

Suppose that the small body rotates around the $z$-axis with a constant rotation velocity $\omega$; the final expression of the dynamic model is then

$$\begin{aligned} \ddot{x} &= \omega^2 x + 2\omega\dot{y} + U_x + a_{cx} + f_{dx},\\ \ddot{y} &= \omega^2 y - 2\omega\dot{x} + U_y + a_{cy} + f_{dy},\\ \ddot{z} &= U_z + a_{cz} + f_{dz}, \end{aligned} \tag{4}$$

where $f_{dx}$, $f_{dy}$, and $f_{dz}$ are the components of the unmodeled perturbation accelerations, mainly from solar radiation pressure and solar gravitation.

Generally, given the small size, irregular shape, and variable surface properties of small bodies, the orbital dynamics become complicated; thus it is difficult to obtain the gravitational field of a small body accurately. Considering that the gravitational potential depends on the distance, the latitude, and the longitude, it can be expanded into a series of spherical harmonics and, truncated at the second degree and order, expressed as

$$U = \frac{\mu}{r}\left[1 + \left(\frac{R_a}{r}\right)^2\left(\frac{1}{2} C_{20}\left(3\sin^2\varphi - 1\right) + 3 C_{22}\cos^2\varphi\cos 2\theta\right)\right], \tag{5}$$

where $\mu$, $R_a$, $\varphi$, $\theta$, and $r$ are the product of the gravitational constant and the mass of the target small body, the reference radius (similar to the large equatorial radius), the latitude and longitude in the coordinate system whose origin is at the body's mass center, and the distance from the mass center of the small body to the probe, respectively.

From the relationship between rectangular and polar coordinates, one obtains

$$\sin\varphi = \frac{z}{r}, \qquad \cos^2\varphi = \frac{x^2+y^2}{r^2}, \qquad \cos 2\theta = 1 - 2\sin^2\theta = \frac{x^2-y^2}{x^2+y^2}. \tag{6}$$

Introducing (6) into (5) yields

$$U = \frac{\mu}{r}\left[1 + \left(\frac{R_a}{r}\right)^2\left(\frac{1}{2} C_{20}\,\frac{2z^2-x^2-y^2}{r^2} + 3 C_{22}\,\frac{x^4-y^4}{r^2\left(x^2+y^2\right)}\right)\right]. \tag{7}$$

Furthermore, the derivatives of $U$ with respect to $x$, $y$, and $z$ can be computed explicitly as

$$\begin{aligned} U_x &= -\frac{\mu x}{r^3}\left[1 + \frac{3}{2} C_{20}\left(\frac{R_a}{r}\right)^2\left(\frac{5z^2}{r^2} - 1\right) + 3 C_{22}\left(\frac{R_a}{r}\right)^2\left(\frac{5\left(x^2-y^2\right)}{r^2} - 2\right)\right],\\ U_y &= -\frac{\mu y}{r^3}\left[1 + \frac{3}{2} C_{20}\left(\frac{R_a}{r}\right)^2\left(\frac{5z^2}{r^2} - 1\right) + 3 C_{22}\left(\frac{R_a}{r}\right)^2\left(\frac{5\left(x^2-y^2\right)}{r^2} + 2\right)\right],\\ U_z &= -\frac{\mu z}{r^3}\left[1 + \frac{3}{2} C_{20}\left(\frac{R_a}{r}\right)^2\left(\frac{5z^2}{r^2} - 3\right) + 15\, C_{22}\left(\frac{R_a}{r}\right)^2\frac{x^2-y^2}{r^2}\right]. \end{aligned} \tag{8}$$
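As a numerical sanity check on (5)-(8), the sketch below evaluates the truncated potential (7) and compares the analytic gradient (8) with central finite differences; the values used for μ, Ra, C20, and C22 are illustrative placeholders, not the parameters of any specific body.

```python
import numpy as np

# Illustrative second-degree-and-order gravity model (placeholder constants).
mu, Ra, C20, C22 = 4.463e-4, 16.0, -0.0878, 0.0439  # km^3/s^2, km, -, -

def potential(p):
    """Gravitational potential U of Eq. (7) in Cartesian coordinates."""
    x, y, z = p
    r2 = x*x + y*y + z*z
    r = np.sqrt(r2)
    return (mu / r) * (1.0 + (Ra / r)**2 * (
        0.5 * C20 * (2*z*z - x*x - y*y) / r2
        + 3.0 * C22 * (x*x - y*y) / r2))

def gradient(p):
    """Analytic gradient (Ux, Uy, Uz) of Eq. (8)."""
    x, y, z = p
    r2 = x*x + y*y + z*z
    r = np.sqrt(r2)
    q = (Ra / r)**2
    s = (x*x - y*y) / r2
    Ux = -mu*x/r**3 * (1 + 1.5*C20*q*(5*z*z/r2 - 1) + 3*C22*q*(5*s - 2))
    Uy = -mu*y/r**3 * (1 + 1.5*C20*q*(5*z*z/r2 - 1) + 3*C22*q*(5*s + 2))
    Uz = -mu*z/r**3 * (1 + 1.5*C20*q*(5*z*z/r2 - 3) + 15*C22*q*s)
    return np.array([Ux, Uy, Uz])

# Central finite differences should agree with the analytic gradient.
p, h = np.array([20.0, 12.0, 8.0]), 1e-6
num = np.array([(potential(p + h*e) - potential(p - h*e)) / (2*h) for e in np.eye(3)])
print(np.max(np.abs(num - gradient(p))))  # negligible (~1e-12 or below)
```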
The $o_c\text{-}x_cy_cz_c$ coordinate system is fixed on the optical navigation camera (ONC); the image plane of the ONC is $o_cx_cy_c$, and the $z_c$-axis is parallel to the optical axis of the ONC and directed toward the surface.

Figure 1: Geometrical relationship of the coordinate systems.

The dynamic equation of the probe in the body-fixed coordinate system is given as [22]

$$\ddot{R} + 2\omega\times\dot{R} + \omega\times\left(\omega\times R\right) + \dot{\omega}\times R = a + U_R + f_d, \tag{1}$$

where $R$, $\dot{R}$, $\ddot{R}$, $\omega$, $a$, $U_R$, and $f_d$ are the position vector of the spacecraft from the mass center of the target small body, its first and second time derivatives with respect to the body-fixed rotating frame, the instantaneous rotation vector of the small body, the control acceleration, the gradient of the gravitational potential $U$, and the unmodeled perturbation accelerations arising mainly from solar radiation pressure and solar gravitation, respectively.

Let the origin of $\Sigma_l$ be located by the vector $\rho$ in $\Sigma_a$, that is, the vector from the mass center of the target small body to the landing site. The vector $R$ then satisfies, in the body-fixed coordinate system $\Sigma_a$,

$$R = C_{la}r + \rho, \tag{2}$$

where $r$ is the vector from the landing site to the probe expressed in $\Sigma_l$ and $C_{la}$ is the coordinate transformation matrix from $\Sigma_l$ to $\Sigma_a$, given by

$$C_{la} = \begin{bmatrix} \cos\varphi\sin\theta & -\sin\varphi & \cos\varphi\cos\theta \\ \sin\varphi\sin\theta & \cos\varphi & \sin\varphi\cos\theta \\ -\cos\theta & 0 & \sin\theta \end{bmatrix}. \tag{3}$$

Suppose the small body rotates about the $z$-axis with constant rotation rate $\omega$; the final form of the dynamic model is then

$$\begin{aligned} \ddot{x} &= \omega^2x + 2\omega\dot{y} + U_x + a_{cx} + f_{dx},\\ \ddot{y} &= \omega^2y - 2\omega\dot{x} + U_y + a_{cy} + f_{dy},\\ \ddot{z} &= U_z + a_{cz} + f_{dz}, \end{aligned} \tag{4}$$

where $f_{dx}$, $f_{dy}$, and $f_{dz}$ are the components of the unmodeled perturbation accelerations arising mainly from solar radiation pressure and solar gravitation.

Because of the small size, irregular shape, and variable surface properties of small bodies, the orbital dynamics are complicated, and it is difficult to obtain the gravitational field of a small body accurately. Since the gravitational potential depends on the distance, the latitude, and the longitude, it can be expanded into a series of spherical harmonics:

$$U = \frac{\mu}{r}\left\{1 + \left(\frac{R_a}{r}\right)^2\left[\frac{1}{2}C_{20}\left(3\sin^2\varphi - 1\right) + 3C_{22}\cos^2\varphi\cos 2\theta\right]\right\}, \tag{5}$$

where $\mu$, $R_a$, $\varphi$, $\theta$, and $r$ are the product of the gravitational constant and the mass of the target small body, the reference radius (approximately the large equatorial radius), the latitude and longitude in the body-centered coordinate system, and the distance from the mass center of the small body to the probe, respectively.

From the relationship between rectangular and polar coordinates, one obtains

$$\sin\varphi = \frac{z}{r},\qquad \cos^2\varphi = \frac{x^2+y^2}{r^2},\qquad \cos 2\theta = 1 - 2\sin^2\theta = \frac{x^2-y^2}{x^2+y^2}. \tag{6}$$

Substituting (6) into (5) yields

$$U = \frac{\mu}{r}\left\{1 + \left(\frac{R_a}{r}\right)^2\left[\frac{1}{2}C_{20}\frac{2z^2 - x^2 - y^2}{r^2} + 3C_{22}\frac{x^4 - y^4}{r^2\left(x^2+y^2\right)}\right]\right\}. \tag{7}$$

Furthermore, the derivatives of $U$ with respect to $x$, $y$, and $z$ can be computed explicitly as

$$\begin{aligned} U_x &= -\frac{\mu x}{r^3}\left[1 + \frac{3}{2}C_{20}\left(\frac{R_a}{r}\right)^2\left(\frac{5z^2}{r^2}-1\right) + 3C_{22}\left(\frac{R_a}{r}\right)^2\left(\frac{5\left(x^2-y^2\right)}{r^2}-2\right)\right],\\ U_y &= -\frac{\mu y}{r^3}\left[1 + \frac{3}{2}C_{20}\left(\frac{R_a}{r}\right)^2\left(\frac{5z^2}{r^2}-1\right) + 3C_{22}\left(\frac{R_a}{r}\right)^2\left(\frac{5\left(x^2-y^2\right)}{r^2}-2\right)\right],\\ U_z &= -\frac{\mu z}{r^3}\left[1 + \frac{3}{2}C_{20}\left(\frac{R_a}{r}\right)^2\left(\frac{5z^2}{r^2}-1\right) + 3C_{22}\left(\frac{R_a}{r}\right)^2\left(\frac{5\left(x^2-y^2\right)}{r^2}-2\right)\right]. \end{aligned} \tag{8}$$
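Numerically, the three gradient components in (8), as printed, share a common bracketed factor, which makes them cheap to evaluate. The following Python sketch is our own illustration (function and variable names are not from the paper) of how (8) can be coded:

```python
import numpy as np

def gravity_gradient(pos, mu, Ra, C20, C22):
    """Evaluate the gravity-gradient components (U_x, U_y, U_z) of eq. (8)
    at position pos = (x, y, z) in the body-fixed frame. As printed in (8),
    the same bracketed factor multiplies each component."""
    x, y, z = pos
    r = np.sqrt(x * x + y * y + z * z)
    q = (Ra / r) ** 2
    factor = (1.0
              + 1.5 * C20 * q * (5.0 * z * z / r ** 2 - 1.0)
              + 3.0 * C22 * q * (5.0 * (x * x - y * y) / r ** 2 - 2.0))
    return -mu * np.array([x, y, z]) / r ** 3 * factor
```

With the 433 Eros coefficients quoted later in Table 1 ($C_{20} = -0.110$, $C_{22} = 0.0397$), this routine supplies the $U_x$, $U_y$, $U_z$ terms entering the dynamic model (4).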
## 3. Guidance Law and Control Law Design

Considering that the probe must achieve a vertical soft landing within the expected time $\tau$, this paper presents a nominal trajectory guidance law based on suboptimal fuel consumption, and the nominal trajectories along the three axes are planned in advance. A neural network control method based on the PIO is then used to track the planned nominal trajectories.

### 3.1. The Nominal Trajectory Planning

The desired descent altitude and velocity are planned so as to satisfy the requirements of a soft landing on the surface of the small body. The boundary conditions are defined as [23]

$$\dot{z}(0) = \dot{z}_0,\qquad z(0) = z_0,\qquad z(\tau) = \rho,\qquad \dot{z}(\tau) = 0, \tag{9}$$

where $z_0$ and $\dot{z}_0$ denote the initial altitude and altitude change rate and $\tau$ is the descent time. A cubic curve satisfying the boundary conditions is taken as

$$z_n(t) = z_0 + z_1t + z_2t^2 + z_3t^3, \tag{10}$$

where $z_0$, $z_1$, $z_2$, and $z_3$ are the cubic polynomial coefficients.

Using (9), the coefficients are determined and the descent curve becomes

$$z_n(t) = z_0 + \dot{z}_0t - \frac{3z_0 + 2\dot{z}_0\tau - 3\rho}{\tau^2}t^2 + \frac{2z_0 + \dot{z}_0\tau - 2\rho}{\tau^3}t^3, \tag{11}$$

where $\rho$ is the altitude of the landing site.

The time derivatives of (11) are

$$\dot{z}_n(t) = \dot{z}_0 - \frac{6z_0 + 4\dot{z}_0\tau - 6\rho}{\tau^2}t + \frac{6z_0 + 3\dot{z}_0\tau - 6\rho}{\tau^3}t^2, \tag{12}$$

$$\ddot{z}_n(t) = -\frac{6z_0 + 4\dot{z}_0\tau - 6\rho}{\tau^2} + \frac{12z_0 + 6\dot{z}_0\tau - 12\rho}{\tau^3}t. \tag{13}$$

The ideal nominal trajectories along the other two axes can be planned in the same way.
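Equations (11)–(13) are simple to implement and to check against the boundary conditions (9). The sketch below is an illustrative Python implementation under our own naming; the sample altitudes reuse the initial and terminal values of Table 1 in Section 4, while the descent time is an assumed value:

```python
import numpy as np

def nominal_z(t, z0, zdot0, rho, tau):
    """Cubic nominal descent (11) and its derivatives (12)-(13):
    z0, zdot0 are the initial altitude and altitude rate, rho the
    landing-site altitude, tau the prescribed descent time."""
    c2 = -(3.0 * z0 + 2.0 * zdot0 * tau - 3.0 * rho) / tau ** 2
    c3 = (2.0 * z0 + zdot0 * tau - 2.0 * rho) / tau ** 3
    zn = z0 + zdot0 * t + c2 * t ** 2 + c3 * t ** 3
    zn_dot = zdot0 + 2.0 * c2 * t + 3.0 * c3 * t ** 2
    zn_ddot = 2.0 * c2 + 6.0 * c3 * t
    return zn, zn_dot, zn_ddot

# Boundary-condition check (9): z_n(tau) = rho and dz_n/dt(tau) = 0.
tau = 600.0  # assumed descent time in seconds (illustrative only)
zn, zd, _ = nominal_z(tau, z0=11000.0, zdot0=-1.0, rho=8000.0, tau=tau)
assert abs(zn - 8000.0) < 1e-6 and abs(zd) < 1e-9
```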
### 3.2. Control Law Design

#### 3.2.1. Proportional Integral Observer Design

Let

$$x(t) = \left[x, y, z, \dot{x}, \dot{y}, \dot{z}\right]^T,\qquad u(t) = \left[0, 0, 0, u_x(t)-U_x, u_y(t)-U_y, u_z(t)-U_z\right]^T. \tag{14}$$

Then (4) can be written as the state-space system

$$\dot{x}(t) = Ax(t) + Bu(t) + f(t),\qquad y(t) = Cx(t), \tag{15}$$

where

$$A = \begin{bmatrix} 0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\\ \omega^2&0&0&0&2\omega&0\\ 0&\omega^2&0&-2\omega&0&0\\ 0&0&0&0&0&0 \end{bmatrix},\quad B = \begin{bmatrix}0\\0\\0\\b_1\\b_2\\b_3\end{bmatrix},\quad f(t) = \begin{bmatrix}0\\0\\0\\f_{dx}\\f_{dy}\\f_{dz}\end{bmatrix},\quad C = I_{6\times6}, \tag{16}$$

where $\omega$ is the rotation rate of the small body and $f_{dx}$, $f_{dy}$, and $f_{dz}$ are the components of the unmodeled perturbation accelerations arising mainly from solar radiation pressure and solar gravitation.

The PIO is designed as follows [24]:

$$\begin{aligned} \dot{\hat{x}}(t) &= A\hat{x}(t) + Bu(t) + \hat{f}(t) + K_p\left(y(t) - \hat{y}(t)\right),\\ \hat{y}(t) &= C\hat{x}(t),\\ \dot{\hat{f}}(t) &= K_I\left(y(t) - \hat{y}(t)\right), \end{aligned} \tag{17}$$

where $K_p$ is the observer gain matrix and $K_I$ is the integral gain of the estimated unknown disturbance.

Define the state estimation error and the disturbance estimation error as

$$e(t) = \hat{x}(t) - x(t),\qquad e_f(t) = \hat{f}(t) - f(t), \tag{18}$$

where $\hat{x}$ and $\hat{f}$ are the estimates of the state vector $x$ and the unknown disturbance $f$, respectively.

From (17) and (18), the error dynamics are

$$\begin{aligned} \dot{e}(t) &= \left(A - K_pC\right)e(t) + f(t) - \hat{f}(t),\\ \dot{e}_f(t) &= -\left[K_IC + K_vC\left(A - K_pC\right)\right]e(t) - K_vCe_f(t) - \dot{f}(t). \end{aligned} \tag{19}$$

The augmented estimation error system can be written as

$$\begin{bmatrix}\dot{e}(t)\\ \dot{e}_f(t)\end{bmatrix} = A_{ef}e_{ef} + B_{ef}\dot{f}(t), \tag{20}$$

where

$$A_{ef} = \begin{bmatrix} A - K_pC & I\\ -K_IC - K_vC\left(A - K_pC\right) & 0 \end{bmatrix},\qquad B_{ef} = \begin{bmatrix}0\\ -S\end{bmatrix}. \tag{21}$$

Lemma 1 (see [25]). Given $\gamma > 0$ and system (20), if there exist symmetric matrices $P$, $Q$ and two matrices $K_p$, $K_I$ of appropriate dimensions such that the LMI

$$\begin{bmatrix} A_{11} & A_{12} & 0\\ A_{12}^T & A_{22} & -QS\\ 0 & -\left(QS\right)^T & -\gamma I_1 \end{bmatrix} < 0 \tag{22}$$

holds, where

$$\begin{aligned} A_{11} &= P\left(A - K_pC\right) + \left(A - K_pC\right)^TP + I_e,\\ A_{12} &= P + \left[QS\left(K_IC + K_v\left(A - K_pC\right)\right)\right]^T,\\ A_{22} &= -QSK_vC - \left(QSK_vC\right)^T + I_s, \end{aligned} \tag{23}$$

then system (20) is stable and satisfies the corresponding performance index.

Theorem 2. For a given positive constant $\gamma > 0$ and system (20), if there exist symmetric matrices $P$, $Q$ and two matrices $K_p$, $K_I$ of appropriate dimensions such that the LMI (22) holds, then system (20) is robustly asymptotically stable and satisfies the performance index

$$\left\|e_{ef}\right\| \le \gamma\left\|\hat{f}\right\|_2 + V(0), \tag{24}$$

where

$$\left\|e_{ef}\right\|_2 = \int_0^{t_1}e_{ef}e_{ef}^T\,dt,\qquad \left\|\hat{f}\right\|_2 = \int_0^{t_1}\hat{f}\hat{f}^T\,dt, \tag{25}$$

and $I_e$, $I_s$, and $I_1$ are identity matrices of appropriate dimensions.

Proof. Choose the Lyapunov candidate

$$V = e_{ef}P_1e_{ef}^T; \tag{26}$$

then, taking the derivative of $V$ with respect to time along the trajectories of (19), one obtains

$$\dot{V} = e_{ef}\left(A_{ef}^TP_1 + P_1A_{ef}\right)e_{ef}^T + 2e_{ef}^TP_1B_{ef}\hat{f}. \tag{27}$$

Now define the performance index

$$J = \int_0^{t_1}\left(e_{ef}e_{ef}^T - \gamma\hat{f}\hat{f}^T\right)dt. \tag{28}$$

Then

$$\begin{aligned} J &= \int_0^{t_1}\left(e_{ef}e_{ef}^T - \gamma\hat{f}\hat{f}^T + \dot{V}\right)dt - \int_0^{t_1}\dot{V}\,dt\\ &\le \int_0^{t_1}\left(e_{ef}e_{ef}^T - \gamma\hat{f}\hat{f}^T + \dot{V}\right)dt + V(0)\\ &= \int_0^{t_1}\left[e_{ef}\left(A_{ef}^TP_1 + P_1A_{ef} + I\right)e_{ef}^T - e_{ef}^TP_1B_{ef}\dot{f} - \gamma\hat{f}\hat{f}^T\right]dt + V(0)\\ &= \int_0^{t_1}\begin{bmatrix}e_{ef} & \hat{f}\end{bmatrix}\begin{bmatrix} A_{ef}^TP_1 + P_1A_{ef} + I & P_1B_{ef}\\ B_{ef}^TP_1 & -\gamma I_1 \end{bmatrix}\begin{bmatrix}e_{ef} & \hat{f}\end{bmatrix}^Tdt + V(0). \end{aligned} \tag{29}$$

If

$$\begin{bmatrix} A_{ef}^TP_1 + P_1A_{ef} + I & P_1B_{ef}\\ B_{ef}^TP_1 & -\gamma I_1 \end{bmatrix} < 0, \tag{30}$$

then the $H_\infty$ tracking performance is satisfied. Next, take the symmetric positive-definite matrix

$$P_1 = \begin{bmatrix}P & 0\\ 0 & Q\end{bmatrix}. \tag{31}$$

Then

$$P_1B_{ef} = \begin{bmatrix}0\\ -QS\end{bmatrix},\qquad A_{ef}^TP_1 + P_1A_{ef} + I = \begin{bmatrix}A_{11} & A_{12}\\ A_{12}^T & A_{22}\end{bmatrix}, \tag{32}$$

where $A_{11}$, $A_{12}$, and $A_{22}$ are defined in (23); hence

$$\int_0^{t_1}e_{ef}e_{ef}^T\,dt \le \int_0^{t_1}\gamma\hat{f}\hat{f}^T\,dt + V(0). \tag{33}$$

Simplifying (33) yields (24). It is thus verified that the disturbance estimation error $e_f$ of the PIO converges to zero in finite time, that is, the estimated disturbance $\hat{f}(t)$ converges to the actual disturbance $f(t)$.
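A minimal sketch of how the observer (17) would be propagated in discrete time is given below (Python, explicit Euler; all names are ours, the structure of $A$ follows (16), and the exact shapes of $B$, $K_p$, and $K_I$ are assumptions since the paper leaves them partly implicit):

```python
import numpy as np

def build_A(omega):
    """State matrix A of (16) for the state [x, y, z, xdot, ydot, zdot]."""
    A = np.zeros((6, 6))
    A[0:3, 3:6] = np.eye(3)                       # position kinematics
    A[3, 0] = omega ** 2; A[3, 4] = 2.0 * omega   # centrifugal + Coriolis
    A[4, 1] = omega ** 2; A[4, 3] = -2.0 * omega
    return A

def pio_step(x_hat, f_hat, u, y, A, B, C, Kp, KI, dt):
    """One explicit-Euler step of the PIO (17). The innovation y - C x_hat
    drives the state estimate through Kp and, integrated through KI,
    reconstructs the lumped disturbance f(t). x_hat, f_hat, y are
    6-vectors; Kp, KI are gain matrices from the LMI design."""
    innov = y - C @ x_hat
    x_hat = x_hat + dt * (A @ x_hat + B @ u + f_hat + Kp @ innov)
    f_hat = f_hat + dt * (KI @ innov)
    return x_hat, f_hat
```

With $C$ taken as the identity in (16), the innovation equals the full state estimation error, which is what lets the integral channel (the third equation of (17)) reconstruct the lumped disturbance.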
#### 3.2.2. PID Neural Network Structure and Calculation Method

Since it is difficult to acquire the physical parameters and motion information of small bodies accurately, the dynamic model of the descent is highly nonlinear. The PID neural network control algorithm not only retains the advantages of a conventional PID controller but also possesses the parallel structure, the learning and memory capability, and the ability of multilayer networks to approximate arbitrary functions. The algorithm therefore offers good performance and stability in the control of the probe.

The PID neural network is introduced as follows. It is a three-layer feedforward network. Suppose the controlled plant has three inputs and three outputs, that is, a nonlinear, strongly coupled system with three variables. Between the input layer and the output layer lies a hidden layer comprising proportional, integral, and derivative neurons, and connection weights link the hidden layer to the output layer. Figure 2 shows the multivariable control structure based on the PID neural network for the powered descent phase of the probe.

Figure 2: Multivariable control structure based on a PID neural network for the powered descent phase of the probe.

(1) PID Neural Network Forward Algorithm. At any sampling instant $k$, the forward calculation of the PID neural network proceeds as follows.

(a) The input-output function of the input-layer neurons is

$$x_{si}(k) = u_{si}(k), \tag{34}$$

where $u_{si}$ and $x_{si}$ are the input and output values of the input-layer neurons, $s$ ($s = 1, 2, 3$) indexes the subnets, and $i$ ($i = 1, 2$) indexes the inputs of each subnet.

Define the position error along the $z$-axis as $e$; then

$$e = z(t) - z_n(t), \tag{35}$$

where $z(t)$ and $z_n(t)$ are the actual and nominal positions on the $z_a$-axis at time $t$, respectively.

Introduce a simple filtered error as a new state variable; the input of the input layer is then defined as

$$s = \dot{e} + \lambda e, \tag{36}$$

where $\lambda$ is a positive scalar.

(b) The hidden layer contains nine neurons (three proportional, three integral, and three derivative neurons); their input values are

$$\mathrm{net}_{sj}(k) = \sum_{i=1}^{2}\omega_{sij}x_{si}(k),\qquad j = 1, 2, 3. \tag{37}$$

For subnet $s$, the outputs of the hidden-layer neurons are

$$\begin{aligned} u_{s1}(k) &= \mathrm{net}_{s1}(k),\\ u_{s2}(k) &= \mathrm{net}_{s2}(k) + u_{s2}(k-1),\\ u_{s3}(k) &= \mathrm{net}_{s3}(k) - \mathrm{net}_{s3}(k-1), \end{aligned} \tag{38}$$

where $\mathrm{net}_{sj}(k)$ and $u_{sj}(k)$ are the input and output values of the hidden-layer neurons, $\omega_{sij}$ is the weight between the input layer and the hidden layer in each subnet, and $j$ ($j = 1, 2, 3$) indexes the hidden-layer neurons of the subnet.

(c) The output of each output-layer neuron is the weighted sum of the outputs of all hidden-layer neurons:

$$y_h(k) = \sum_{s=1}^{n}\sum_{j=1}^{3}\omega_{sjh}u_{sj}(k), \tag{39}$$

where $y_h(k)$ is the output value of the output-layer neurons, $\omega_{sjh}$ is the connection weight between the hidden layer and the output layer, and $h$ ($h = 1, 2, 3$) indexes the output-layer neurons.

(2) PID Neural Network Learning Algorithm. In this subsection, the multivariable probe control system based on the PID neural network is regarded as a generalized network, and the backpropagation (BP) learning algorithm is used to drive the criterion function below the required bound. The criterion function is given by [25]

$$J = E = \sum_{p=1}^{n}E_p = \frac{1}{2}\sum_{p=1}^{n}\left(r_p(k) - y_p(k)\right)^2 = \frac{1}{2}\sum_{p}e^2(k) \le \varepsilon. \tag{40}$$

The weights of the PID neural network are adjusted by gradient descent, trained over $k$ steps, and updated by the following equations.

(a) The iterative update of the weights between the input layer and the hidden layer is

$$\omega_{ij}(k+1) = \omega_{ij}(k) - \eta\frac{\partial J}{\partial\omega_{ij}} + \eta_1\left(\omega_{ij}(k) - \omega_{ij}(k-1)\right). \tag{41}$$

(b) The iterative update of the weights between the hidden layer and the output layer is

$$\omega_{jh}(k+1) = \omega_{jh}(k) - \eta\frac{\partial J}{\partial\omega_{jh}} + \eta_2\left(\omega_{jh}(k) - \omega_{jh}(k-1)\right). \tag{42}$$

Proof. Choose the Lyapunov candidate

$$V(J, \omega) = \alpha J + \frac{1}{2}\beta\left\|\frac{\partial J}{\partial\omega}\right\|^2, \tag{43}$$

where $\|\cdot\|$ denotes the quadratic norm of a vector and $\alpha$, $\beta$ are strictly positive constants that set the relative weighting of the two terms; then

$$\left\|\frac{\partial J}{\partial\omega}\right\|^2 = \frac{\partial J}{\partial\omega}\left(\frac{\partial J}{\partial\omega}\right)^T, \tag{44}$$

where

$$\frac{\partial J}{\partial\omega} = \left(\frac{\partial J}{\partial\omega_1}, \ldots, \frac{\partial J}{\partial\omega_l}\right). \tag{45}$$

Clearly, $J$ and $\|\partial J/\partial\omega\|^2$ vanish together only in the neighborhood of the minimum; moreover, $\alpha > 0$ and $\beta > 0$, so the function $V(J, \omega)$ is positive definite. Taking the derivative of $V(J, \omega)$ with respect to time, one obtains

$$\begin{aligned} \dot{V} &= \alpha\left(\frac{\partial J}{\partial\omega}\dot{\omega} + \frac{\partial J}{\partial x}\dot{x} + \frac{\partial J}{\partial t}\right) + \beta\frac{\partial J}{\partial\omega}\left(\frac{\partial^2 J}{\partial t\,\partial\omega^T} + \frac{\partial^2 J}{\partial\omega\,\partial\omega^T}\dot{\omega} + \frac{\partial^2 J}{\partial x\,\partial\omega^T}\dot{x}\right)\\ &= \frac{\partial J}{\partial\omega}\left(\alpha I_l + \beta\frac{\partial^2 J}{\partial\omega\,\partial\omega^T}\right)\dot{\omega} + \left(\alpha\frac{\partial J}{\partial x} + \beta\frac{\partial J}{\partial\omega}\frac{\partial^2 J}{\partial x\,\partial\omega^T}\right)\dot{x} + \alpha\frac{\partial J}{\partial t} + \beta\frac{\partial J}{\partial\omega}\frac{\partial^2 J}{\partial t\,\partial\omega^T}\\ &= \begin{cases}-u\left\|\dfrac{\partial J}{\partial\omega}\right\|^2 - V_J^2, & \dfrac{\partial J}{\partial\omega}\ne 0,\\[2mm] \alpha\dfrac{\partial J}{\partial t}, & \dfrac{\partial J}{\partial\omega} = 0.\end{cases} \end{aligned} \tag{46}$$

Since $V$ is positive definite and, by the piecewise expression above, $\dot{V}$ is nonpositive along the BP iterations, the BP learning algorithm drives the error toward its minimum.
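To fix ideas, the sketch below implements one subnet of the network in Python, seeded with the initial weights (48)–(49) quoted in Section 4. The backpropagated gradients required by (41)–(42) are assumed to be supplied by the caller, and all class and variable names are ours, not the authors':

```python
import numpy as np

class PIDSubnet:
    """One axis subnet of the PID neural network of Section 3.2.2:
    two inputs feed a hidden layer of one proportional, one integral,
    and one derivative neuron (eqs. (37)-(38)); the output is the
    weighted sum (39) restricted to this subnet. Illustrative sketch."""

    def __init__(self, eta=0.01, momentum=0.1):
        self.w_in = np.array([[0.1, -0.1, 0.2],   # input->hidden, cf. (48)
                              [0.1, -0.1, 0.2]])
        self.w_out = np.array([0.4, -0.5, 0.8])   # hidden->output, cf. (49)
        self.eta, self.momentum = eta, momentum
        self.prev_net = np.zeros(3)   # previous net inputs (for the D neuron)
        self.i_acc = 0.0              # integral neuron memory, eq. (38)
        self.dw_in = np.zeros_like(self.w_in)     # momentum buffers
        self.dw_out = np.zeros_like(self.w_out)

    def forward(self, u):
        """u = [reference, measurement] at sample k; returns the output."""
        net = u @ self.w_in                        # eq. (37)
        h = np.array([net[0],                      # proportional: pass-through
                      net[1] + self.i_acc,         # integral: running sum
                      net[2] - self.prev_net[2]])  # derivative: difference
        self.i_acc, self.prev_net = h[1], net
        self.h = h                                 # stored for backpropagation
        return float(h @ self.w_out)               # eq. (39)

    def update(self, grad_in, grad_out):
        """Momentum-style gradient step in the spirit of (41)-(42), with
        eta_1 = eta_2 = momentum; the gradients themselves come from BP."""
        self.dw_in = -self.eta * grad_in + self.momentum * self.dw_in
        self.dw_out = -self.eta * grad_out + self.momentum * self.dw_out
        self.w_in += self.dw_in
        self.w_out += self.dw_out
```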
## 4. Simulation Results

(a) According to Theorem 2, the PIO parameters obtained with the LMI toolbox are

$$K_p = \begin{bmatrix} 1.8820 & -11.2121 & 4.9258 & -11.2727 & -14.7548 & 5.8773\\ 0.5521 & 9.5623 & 4.9258 & 3.7746 & 0.2464 & -7.3346\\ -12.3470 & 3.2246 & 9.2758 & 25.0864 & 4.9084 & 28.9035\\ 5.2324 & -12.3869 & 9.2758 & -6.9024 & -3.7824 & -2.9357\\ 3.4196 & 24.1648 & -4.9735 & 13.9340 & 5.9835 & 12.3925\\ -5.2015 & 2.9738 & 7.1748 & 1.3869 & 9.0247 & -8.0924 \end{bmatrix},\qquad K_I = \begin{bmatrix}4.3778\\ 12.9673\\ -1.3782\\ 9.0357\\ -14.7893\\ 1.5044\end{bmatrix}. \tag{47}$$

The initial weights of the proportional, integral, and derivative neurons are set as

$$\omega_{s1j} = 0.1,\qquad \omega_{s2j} = -0.1,\qquad \omega_{s3j} = 0.2, \tag{48}$$

and the initial connection weights between the hidden layer and the output layer as

$$\omega_{sj1} = 0.4,\qquad \omega_{sj2} = -0.5,\qquad \omega_{sj3} = 0.8. \tag{49}$$

(b) The asteroid 433 Eros is taken as the target small body to verify the feasibility of the presented control scheme. The parameters of the small body are taken from [26] and listed in Table 1.

Table 1: Simulation parameters for 433 Eros.

| Parameter | Real world | Simulation |
|---|---|---|
| $\mu$ | $4.749\times10^{-4}$ | $4.800\times10^{-4}$ |
| Spin period (h) | 10.54 | 10.55 |
| $R_0$ (km) | 1.150 | 1.148 |
| Gravitational coefficient $C_{20}$ | $-0.113$ | $-0.110$ |
| Gravitational coefficient $C_{22}$ | 0.0396 | 0.0397 |
| Initial position (m) | | [400, 400, 11000] |
| Initial velocity (m/s) | | [−0.9, 1.2, −1] |
| Terminal site | | [100, 100, 8000] |

Compared with the perturbation uncertainties considered in [27], larger perturbation uncertainties are chosen here:

$$f_{dx} = 150\sin 2t,\qquad f_{dy} = 160\sin 1.5t,\qquad f_{dz} = 140\sin 3t. \tag{50}$$
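For reference, the test-case settings of Table 1 and the disturbance profile (50) can be collected as follows. This is an illustrative Python transcription; the unit conventions are those of the table, and the paper does not state the units of $\mu$ or of the disturbance amplitudes explicitly:

```python
import numpy as np

# Simulation column of Table 1 for 433 Eros.
mu = 4.800e-04              # gravitational parameter (units as in Table 1)
spin_period_h = 10.55       # spin period (h)
R0 = 1.148                  # reference radius (km)
C20, C22 = -0.110, 0.0397   # gravitational coefficients
omega = 2.0 * np.pi / (spin_period_h * 3600.0)  # spin rate (rad/s)

r_init = np.array([400.0, 400.0, 11000.0])   # initial position (m)
v_init = np.array([-0.9, 1.2, -1.0])         # initial velocity (m/s)
target = np.array([100.0, 100.0, 8000.0])    # terminal site

def disturbance(t):
    """Bounded perturbation accelerations of eq. (50)."""
    return np.array([150.0 * np.sin(2.0 * t),
                     160.0 * np.sin(1.5 * t),
                     140.0 * np.sin(3.0 * t)])
```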
Figure 3 shows that, under such a large disturbance, the trajectory produced by the sliding mode controller exhibits evident chattering; the inherent robustness of the sliding mode control algorithm is not sufficient to keep the actual trajectory on the desired one. In this situation, the neural network control algorithm based on the PIO compensates the unknown disturbance, eliminates the chattering of the trajectory, and tracks the desired position quickly.

Figure 3: Landing trajectory of the probe.

Figures 4–9 show the errors between the ideal and actual positions and the velocity histories along the three axes as functions of time. For a system with large initial errors and perturbation uncertainties, and provided the convergence of the system is ensured, the neural network control algorithm based on the PIO improves the convergence rate of the position and velocity errors compared with the sliding mode control algorithm of [27]; that is, the actual trajectory tracks the planned trajectory quickly and accurately despite parameter uncertainty, feedback state error, and large external disturbances. The probe can therefore land smoothly on the surface of the small body, avoiding a crash caused by excessive landing speed.

Figure 4: Position error component ($x$-axis) as a function of time.
Figure 5: Position error component ($y$-axis) as a function of time.
Figure 6: Position error component ($z$-axis) as a function of time.
Figure 7: Velocity component ($x$-axis) as a function of time.
Figure 8: Velocity component ($y$-axis) as a function of time.
Figure 9: Velocity component ($z$-axis) as a function of time.

## 5. Conclusion

This paper has presented a neural network control algorithm based on a PIO for the powered descent phase of a soft landing on a small body. The dynamic model of the probe is given in the body-fixed coordinate system of the small body, with attitude control neglected; solar radiation pressure and third-body gravity are treated as a perturbation, which is viewed as a bounded function. Nominal trajectories meeting the constraints on the three axes are planned in advance. The simulation results show that the neural network control algorithm based on the PIO ensures fast and accurate response in the presence of parameter uncertainty, feedback state error, and external disturbances. Moreover, under large disturbances it overcomes the chattering inherent in the sliding mode control algorithm and makes the position and velocity errors converge to small finite values, thereby achieving a soft landing.

---
*Source: 102424-2015-02-22.xml*
102424-2015-02-22_102424-2015-02-22.md
43,053
Neural Network Control for the Probe Landing Based on Proportional Integral Observer
Yuanchun Li; Tianhao Ma; Bo Zhao
Mathematical Problems in Engineering (2015)
Engineering & Technology
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2015/102424
102424-2015-02-22.xml
--- ## Abstract For the probe descending and landing safely, a neural network control method based on proportional integral observer (PIO) is proposed. First, the dynamics equation of the probe under the landing site coordinate system is deduced and the nominal trajectory meeting the constraints in advance on three axes is preplanned. Then the PIO designed by using LMI technique is employed in the control law to compensate the effect of the disturbance. At last, the neural network control algorithm is used to guarantee the double zero control of the probe and ensure the probe can land safely. An illustrative design example is employed to demonstrate the effectiveness of the proposed control approach. --- ## Body ## 1. Introduction The exploration mission to near-earth asteroids (NEAs) would be one of the most complex tasks in the future deep space exploration [1, 2]. There is surge in NEAs mission activities, for which various space agencies around the world (e.g., NASA, European Space Agency, Japan Aerospace Exploration Agency, etc.) were commissioning researches about NEAs to determine the feasible exploration missions, including the (1) NEAR probe launched by NASA which can realize the fly-around to 433 Eros whose shape is like a potato with a size of 34.4 km × 11.2 km × 11.2 km and which verified the gravitational field model of 433 Eros and the stability of frozen orbit around the asteroid [3]; (2) the Hayabusa probe from JAEA which had successfully achieved to be attached and sample to the 25143 Itokawa (due to the smaller size and quality of the Itokawa asteroid, this mission realized the detection to the asteroid by hovering way [4]); (3) ROSETTA implemented by ESA which will arrive in the Churyumov-Gerasimenko comet in 2014 after a decade of interstellar flight and will make the comprehensive observation of the comet for a long time [5].In view of the complex environment around the small body, together with the long distance between the probe and the surface of the earth [6], a variety of accurate physical parameters and motion information of small bodies cannot be obtained through optical telescopes on the ground or radio telescopes. In addition, the complex process uncertainty, large time delay, nonlinearity, and multivariable coupling always exist in the probe dynamic model, so ground control for deep space exploration mission has become no longer appropriate; as a consequence, it puts forward a new challenge to autonomous navigation, guidance, and control (GNC) technology of landing softly on a small body. To cope with these problems, both domestic and foreign scholars have paid a great deal of attention to the GNC problem of landing small objects. As is well known, the accurate physical parameters and motion information of small bodies are the important premises of the probe softly landing. Misu et al. [6] proposed an autonomous optical navigation and guidance method, which extracted visual small features from the images taken by the navigation camera and tracked them robustly and accurately. Kawaguchi et al. [7] discussed an autonomous optical guidance and navigation strategy to approaching small bodies. Horneman and Kluever [8] presented a terminal area energy management (TAEM) guidance methodology which employed a trajectory planning algorithm to compute a feasible path from the current state to the desired approach and landing target state rather than relying on a precalculated one, stored database of neighboring TAEM trajectories. 
However, even if the accurate physical parameters and motion information of small bodies are gained, the controller is difficult to be designed to make the probe system meet the key performance indicators of the probe softly landing. In order to solve this problem, Furfaro et al. [9] presented a high order sliding mode variable structure control method to make the probe reach the sliding surface in finite time and overcame the chattering effect, generally existing in the common sliding mode control. Crassidis et al. [10] introduced a variable-structure controller based on a Gibbs vector parameterization, a modified-Rodrigues parameterization, and a quaternion parameterization. Blackmore [11] studied the robust path and feedback control under the condition of existing uncertainty; through this control method, the stability of the system is ensured. Meissinger and Greenstadt [12] proposed a soft landing scheme, which used a feedback control with a radar altimeter and a three-beam Doppler radar system to achieve landing spacecraft at Eros’ north polar region with a low-impact velocity. In [13], a novel robust stability condition was obtained for sliding mode dynamics by using Lyapunov theory in delta domain. Some other approaches for analysis and design of sliding mode control were presented in [14–16]. Apart from the position and the velocity of the probe, the attitude dynamics analyses also play an important role in the probe softly landing. Kumar and Shah [17] set up the general formulation of the spacecraft equations of motion in an equatorial eccentric orbit using Lagrangian method and did some analysis on the stability. Then the control laws for three-axis attitude control of spacecrafts had been developed and a closed-form solution of the system had been derived. Liang and Li [18] designed a robust adaptive backstepping sliding mode control law to make the attitude of the probe stabilized and respond accurately to the expectation in the presence of disturbances and parametric uncertainties. Nonetheless, these methods of dealing with the interference all made the inhibition of bounded disturbances implicit in the above autonomous GNC rather than using the interference information effectively, so the designed controller cannot meet the control requirement of the system when there exists a larger interference in the system.As a result of the complex environment in the deep space around the small bodies and the coupling effect of the detector itself, it leads to a great deal of uncertainties in the dynamic model and makes the system include the complex external disturbance. At present, the main approaches to process the external disturbances include disturbance decoupling, disturbance compensation, and robust control, especially disturbance compensation. There are many scholars proposing a variety of stability control strategies based on the observer aimed at different objects. Chadli and Karimi [19] dealt with the observer design for Takagi-Sugeno (T-S) fuzzy models subject to unknown inputs and disturbance affecting both states and outputs of the system. Chong et al. [20] designed a robust circle criterion observer and applied it to neural mass models; Sun et al. [21] proposed a novel speed observation scheme using artificial neural network (ANN) inverse method to effectively reject the influence of speed detection on system stability and precision for a bearingless induction motor. 
Above all, the observer can inhibit the effect of disturbance in the system by accurately measuring the unknown disturbance.The main advantages of the presented approach are generalized into two aspects: one is that it combines the characteristics of the probe dynamic model and the good estimation performance of observer, to eliminate the effect of the unknown disturbance and to avoid the chattering of the control signal caused by the large disturbance. This paper designs PIO using LMI technique, which can estimate the system states and unknown input disturbance simultaneously. The other is that PID neural network control algorithm is introduced in the design of the controller. It combines the advantages of traditional PID controller and learning memory function of neural networks. So it improves the convergence rate close to the ideal position on the condition that the convergence of the system can be ensured and simultaneously avoids the effect of nonlinear and strong coupling features of the system in a wide range, compared with the sliding mode control strategy.This paper proceeds as follows. In Section2, the dynamics equation of the probe is deduced under the landing site coordinate system and the interference outside system is treated as the known bounded function. In Section 3, firstly, the nominal trajectories based on the theory of suboptimal fuel are planned. Then PIO is designed by using LMI technique to estimate the unknown disturbance. Finally, PID neural network control algorithm is used to design the controller to ensure the stability and control performance of the system. In Section 4, Eros 433 is employed to demonstrate the effectiveness of the proposed control approach. Conclusions are presented in Section 5. ## 2. Small Body and Probe Dynamic Model In this section, the body-fixed coordinate system of small body is set up, which is shown in Figure1. Let the oa-xayaza coordinate system be fixed on small body with the origin coinciding with the mass center of small body, xa-axis coinciding with the minimum inertia axis of small body, za-axis coinciding with the spin axis of small body, and ya-axis meeting the condition that the xa, ya, and za axes compose the right-handed coordinate system. The oc-xcyczc coordinate system is fixed on optical navigation camera (ONC), and the image plane of ONC is defined as ocxcyc, and zc axis is parallel to the optical axis of ONC and is directed to the surface.Figure 1 Geometrical relationship of coordinate systems.The dynamic equations of the probe in the fixed-body coordinate system are given as [22] (1)R¨+2ω×R˙+ω×ω×R+ω˙×R=a+UR+fd, where R, R˙, R¨, ω, a, UR, and fd are the position vector from the target small body mass center of the spacecraft, the first and second time derivatives with respect to the body-fixed rotating frame, the instantaneous rotation vector of the small body, the control acceleration, the gradient of the gravitational potential U, and the components of unmodeled perturbation accelerations mainly from the solar radiation pressure and the solar gravitation.Considering the origin ofΣl is the vector ρ in the Σa, which is the vector from the target small body mass center to the landing site, the vector R has the following satisfaction relation in the body-fixed coordinate system Σa: (2)R=Clar+ρ, where r and Cla are the vector from the landing site to the probe in the Σl and coordinate transform matrix from the Σl to the Σa. 
The transform matrix is given as follows: (3)Cla=cos⁡φsinθ-sinφcos⁡φcos⁡θsinφsinθcos⁡φsinφcos⁡θ-cos⁡θ0sinθ.Suppose that the small body rotates around thez-axis and rotation velocity ω is a constant; we can get the final expression of dynamic models as (4)x¨=ω2x+2ωy˙+Ux+acx+fdx,y¨=ω2y-2ωx˙+Uy+acy+fdy,z¨=Uz+acz+fdz, where fdx, fdy, and fdz are the components of unmodelled perturbation accelerations mainly from the solar radiation pressure and the solar gravitation.Generally, given the small size, irregular shape, and variable surface properties of small bodies, orbital dynamics became complicated; thus it is difficult to obtain the gravitational field of the small bodies accurately. Considering that the gravitational potential is related to the distance, the latitude, and the longitude, it can be expanded into a series of spherical harmonics and can be expressed as(5)U=μr1+Rar212C203sin2φ-1Rar2Rar212C203sin2φ-1Rar2+3C22cos⁡2φcos⁡2θ, where μ, Ra, φ, θ, and r are the product of the gravitational constant and the mass of the target small body, the referenced radius which is similar to the large equatorial radius, the latitude and longitude in the same coordinate system whose origins are at the center of body mass, and the distance from the mass center of small body to the probe, respectively.According to the relationship between the rectangular coordinate and polar coordinate, one obtains(6)sinφ=zr,cos⁡2φ=x2+y2r2,cos⁡2θ=1-2sin2θ=x2-y2x2+y2.Introduce (6) into (5), and one can obtain (7)U=μr1+Rar212C202z2-x2-y2r2x4-y4r2(x2+y2)Rar212C202z2-x2-y2r2+3C22x4-y4r2x2+y2.Furthermore, the derivatives ofU can be computed explicitly with respect to x, y, and z, respectively, as (8)Ux=-μxr31+32C20Rar25z2r2-1+3C22Rar25x2-y2r2-2,Uy=-μyr31+32C20Rar25z2r2-1+3C22Rar25x2-y2r2-2,Uz=-μzr31+32C20Rar25z2r2-1+3C22Rar25x2-y2r2-2. ## 3. Guidance Law and Control Law Design Considering the probe achieves the vertical soft landing within the expected timeτ, this paper presents the nominal trajectory guidance law based on the theory of suboptimal fuel, and the nominal trajectories of three-axis direction are preplanned. Then use neural network control method based on PIO to track the planned ideal nominal trajectories. ### 3.1. The Nominal Trajectory Planning The desired descent altitude and velocity are planned in order to satisfy the requirements of soft landing on the surface of small bodies. The constraint condition is defined as [23] (9)z˙0=z˙0,z(0)=z0,zτ=ρ,z˙τ=0, where z0 and z˙0 denote the initial altitude and altitude change rate, z˙0 and z0 are the planned altitude and altitude change rate, and τ is the descent time. The cubic curve to satisfy the boundary condition is given by (10)znt=z0+z1t+z2t2+z3t3, where z0, z1, z2, and z3 are the cubic function coefficients.Using (9), the coefficients are determined. The descent curve is given by (11)znt=z0+z˙0t-3z0+2z˙0τ-3ρτ2t2+2z0+z˙0τ-2ρτ3t3, where ρ is the altitude of the landing site.Next, the time derivatives of (12) are given by (12)z˙nt=z˙0-6z0+4z˙0τ-6ρτ2t+6z0+3z˙0τ-6ρτ3t2,(13)z¨nt=-6z0+4z˙0τ-6ρτ2+12z0+6z˙0τ-12ρτ3t.Similarly, the ideal nominal trajectories can be planned on the other two axis directions. ### 3.2. Control Law Design #### 3.2.1. 
Proportional Integral Observer Design Let(14)xt=x,y,z,x˙,y˙,z˙T,ut=000uxt-Uxuyt-Uyuzt-UzT.Convert (4) into nonlinear time-invariant system as (15)x˙t=Axt+But+ft,yt=Cxt, where (16)A=000100000010000001ω20002ω00ω20-2ω00000000,B=000b1b2b3,ft=000fdxfdyfdz,C=111111, where ω is the instantaneous rotation vector of the small body and fdx, fdy, and fdz are the components of unmodelled perturbation accelerations mainly from the solar radiation pressure and the solar gravitation.Next, the PIO is designed as follows [24]: (17)x^˙t=Ax^t+But+f^t+Kpyt-y^t,y^t=Cx^t,f^˙t=KIyt-y^t, where Kp and KI are the observer gain matrix and the integral coefficient of estimated unknown disturbance, respectively.Note the state error and unknown disturbance error as(18)et=x^t-xt,eft=f^t-ft, where x^ and f^ are the estimations of the state vector x and the unknown disturbance f, respectively.Using (17) and (18) the error dynamics are as follows: (19)e˙t=A-KPCet+ft-f^t,e˙f=-KIC+KvCA-KpCet-KvCef-f˙t.The augmented estimator system could be rewritten as(20)e˙te˙ft=Aefeef+Beff˙t, where (21)Aef=A-KPC1-KIC-KvCA+KvCKpC0,Bef=01.Lemma 1 (see [25]). Givenγ>0 and (20), if there exist symmetric matrixes P, Q and two matrices KP, KI of appropriate dimension as well as LMI such that (22)A11A120A12TA22-QS0-QSTγI1<0, where (23)A11=PA-KpC+A-KpCTP+Ie,A12=P+QSKiC+KvA-KpCT,A22=-QSKvC-KiFS+KvCQST+Is hold, then system (20) is stable and satisfies corresponding performance index.Theorem 2. For a given positive constantγ>0 and (20), if there exist symmetric matrixes P, Q and two matrices KP, KI of appropriate dimension such that (22) LMI holds, then system (20) is robust and asymptotically stable and satisfies the performance index as follows: (24)eef≤γf^2+V0, where (25)eef2=∫0t1eefeefTdt,f^2=∫0t1f^f^Tdt.Ie, Is, and I1 are unit matrixes with appropriate dimension.Proof. Choose the Lyapunov candidate as(26)V=eefP1eefT; then taking the derivative ofV with respect to time along the trajectories of (19), one obtains (27)V˙=eefAefTP1+P1AefeefT+2eefTP1Beff^. Now define performance indicators as follows:(28)J=∫0t1eefeefT-γf^f^Tdt. Then(29)J=∫0t1eefeefT-γf^f^T+V˙dt-∫0t1V˙dt≤∫0t1eefeefT-γf^f^T+V˙dt+V0=∫0t1eefAefTP1+P1Aef+IeefT-eefTP1Beff˙-γf^f^Tdt+V0=∫0t1eeff^AefTP1+P1Aef+IP1BefBefTP1-γI1eeff^Tdt+V0. If there exists(30)AefTP1+P1Aef+IP1BefBefTP1-γI1<0, then theH∞ tracking performance can be satisfied. Next, note symmetric positive-definite matrix(31)P1=P00Q. Thus(32)P1Bef=0-QS,AefTP1+P1Aef+I=A11A12A12TA22, where A11, A12, and A22 are defined as (23); then (33)∫0t1eefeefTdt≤∫0t1γf^f^Tdt+V0. Then (24) can be obtained by simplifying (33); thus it is verified that unknown disturbance observer error ef of PIO can converge to zero in finite time, as well as the estimated interference f^t converging to actual interference ft. #### 3.2.2. PID Neural Network Structure and Calculation Method As it is difficult to acquire the physical parameters and motion information of small bodies accurately, there exists a highly nonlinear dynamic model of the small body. PID neural network control algorithm not only has the advantages of conventional PID controller, but also owns parallel structure and function of learning and memory of neural network and the ability of multilayer networks to approximate arbitrary functions. Therefore, the algorithm shows good superiority and stability performances in the control for the probe.The PID neural network is introduced as follows. The PID neural network is a three-forward neural network. 
Suppose that the controlled object has three inputs and three outputs, which is a nonlinear and strong coupling system with three variables. There exists a three-layer neural network comprising proportional neurons, integral neurons, and derivative neurons between the input layer and hidden layers. In addition, connected weights exist between the hidden layer and output layer. Figure2 shows a multivariable control structure based on a PID neural network of probe power decline period.Figure 2 Multivariable control structure based on a PID neural network of probe power decline period.(1) PID Neural Network Forward Algorithm. At any sampling time k, the forward calculation equations of the PID neural network are as follows.(a) The input-output function of input-layer neurons is(34)xsik=usik, where xsi, ss=1,2,3, and ii=1,2 are input values of input-layer neurons, output values of input-layer neurons, and the number of the subnet input layers, respectively.Define the position error inz-axis orientation as e; then (35)e=zt-znt, where zt and znt are the actual position on the za-axis at time t and the nominal position on the za-axis at corresponding time t, respectively.Introduce a simple filters as new state variable, and the input of input layer is defined as follows: (36)s=e˙+λe, where λ is a positive scalar.(b) Hidden layer contains nine neurons (three proportional neurons, three integral neurons, and three derivative neurons); the input values of these neurons can be calculated as follows:(37)netsi(k)=∑i=12ωsijxsik,j=1,2,3,….For subnetworki, the formula of the output of hidden layer neurons is given by (38)us1k=nets1k,us2k=nets2k+us2k-1,us3k=nets3k-nets3k-1, where nets1(k), usj(k), ωsij, and j are input value of neurons in the hidden layer, the output value of neurons in the hidden layer, weight between input layer and hidden layer in each subnet, and the hidden layer neuron number in the subnet (j=1,2,3), respectively.(c) The input and output of output-layer neurons: the output of output-layer neurons is the sum of output weights of all hidden layer neurons as(39)yhk=∑s=1n∑j=13ωsjhusjk, where yhk, ωsjh, and s are output value of output-layer neurons, connected weight between hidden layer and output layer, and sequence number of output-layer neurons (s=1,2,3), respectively.(2) PID Neural Network Learning Algorithm. In this subsection, a multivariable probe control system based on the PID neural network algorithm is regarded as a generalized network, using the backpropagation (BP) learning algorithm to minimize the criterion function within the scope of the requirements. Criterion function is given by [25] (40)J=E=∑p=1nEp=12∑p=1nrpk-ypk2=12∑pe2k≤ε.The weight of the PID neural network can be adjusted by virtue of the gradient method, trained and learned throughk steps, and then determined depending on the following equation.(a) The iterative equation of weight between input layer and hidden layer is(41)ωijk+1=ωijk-η∂J∂ωij+η1ωijk-ωijk+1.(b) The iterative equation of weight between hidden layer and output layer is(42)ωjhk+1=ωjh(k)-η∂J∂ωjh+η2ωjhk-ωjhk+1.Proof. Choose the Lyapunov candidate as(43)VJ,ω=αJ+12β∂J∂ω2, where the symbol · signifies the quadratic norm of the vector and the parameters α, β are strict positive constants and they are utilized to determine the degree of proportion; then (44)∂J∂ωl2=∂J∂ωl∂J∂ωT=∂J∂ω∂J∂ωT, where ∂J/∂ω=∂J/∂ω1,…,∂J/∂ωl(45)∂J∂ωT=∂J∂ω1,…,∂J∂ωl. 
Clearly,J and ∂J/∂ω2 are equal to the minimal neighborhood; in addition, the parameters α>0, β>0, so the function VJ,ω is positive definite. Taking the derivative ofVJ,ω with respect to time, one can obtain (46)V˙=α∂J∂ωω˙+∂J∂xx˙+∂J∂t+β∂J∂ω∂2J∂t∂ωT+∂2J∂ω∂ωTω˙+∂2J∂x∂ωTx˙=∂J∂ωαIl+β∂2J∂ω∂ωTω˙+α∂J∂x+β∂J∂ω∂2J∂x∂ωTx˙=∂J∂ωαIl+β∂2J∂ω∂ωTω˙+α∂J∂t+β∂J∂ω∂2J∂t∂ωT=-u∂J∂ω2-VJ2,if=∂J∂ω≠0α∂J∂t,if=∂J∂ω=0. Above all, considering the functionV is positive definite and another function V˙ the BP learning algorithm, it is certified that the BP learning algorithm has the internality of making the error converge to the minimum. ## 3.1. The Nominal Trajectory Planning The desired descent altitude and velocity are planned in order to satisfy the requirements of soft landing on the surface of small bodies. The constraint condition is defined as [23] (9)z˙0=z˙0,z(0)=z0,zτ=ρ,z˙τ=0, where z0 and z˙0 denote the initial altitude and altitude change rate, z˙0 and z0 are the planned altitude and altitude change rate, and τ is the descent time. The cubic curve to satisfy the boundary condition is given by (10)znt=z0+z1t+z2t2+z3t3, where z0, z1, z2, and z3 are the cubic function coefficients.Using (9), the coefficients are determined. The descent curve is given by (11)znt=z0+z˙0t-3z0+2z˙0τ-3ρτ2t2+2z0+z˙0τ-2ρτ3t3, where ρ is the altitude of the landing site.Next, the time derivatives of (12) are given by (12)z˙nt=z˙0-6z0+4z˙0τ-6ρτ2t+6z0+3z˙0τ-6ρτ3t2,(13)z¨nt=-6z0+4z˙0τ-6ρτ2+12z0+6z˙0τ-12ρτ3t.Similarly, the ideal nominal trajectories can be planned on the other two axis directions. ## 3.2. Control Law Design ### 3.2.1. Proportional Integral Observer Design Let(14)xt=x,y,z,x˙,y˙,z˙T,ut=000uxt-Uxuyt-Uyuzt-UzT.Convert (4) into nonlinear time-invariant system as (15)x˙t=Axt+But+ft,yt=Cxt, where (16)A=000100000010000001ω20002ω00ω20-2ω00000000,B=000b1b2b3,ft=000fdxfdyfdz,C=111111, where ω is the instantaneous rotation vector of the small body and fdx, fdy, and fdz are the components of unmodelled perturbation accelerations mainly from the solar radiation pressure and the solar gravitation.Next, the PIO is designed as follows [24]: (17)x^˙t=Ax^t+But+f^t+Kpyt-y^t,y^t=Cx^t,f^˙t=KIyt-y^t, where Kp and KI are the observer gain matrix and the integral coefficient of estimated unknown disturbance, respectively.Note the state error and unknown disturbance error as(18)et=x^t-xt,eft=f^t-ft, where x^ and f^ are the estimations of the state vector x and the unknown disturbance f, respectively.Using (17) and (18) the error dynamics are as follows: (19)e˙t=A-KPCet+ft-f^t,e˙f=-KIC+KvCA-KpCet-KvCef-f˙t.The augmented estimator system could be rewritten as(20)e˙te˙ft=Aefeef+Beff˙t, where (21)Aef=A-KPC1-KIC-KvCA+KvCKpC0,Bef=01.Lemma 1 (see [25]). Givenγ>0 and (20), if there exist symmetric matrixes P, Q and two matrices KP, KI of appropriate dimension as well as LMI such that (22)A11A120A12TA22-QS0-QSTγI1<0, where (23)A11=PA-KpC+A-KpCTP+Ie,A12=P+QSKiC+KvA-KpCT,A22=-QSKvC-KiFS+KvCQST+Is hold, then system (20) is stable and satisfies corresponding performance index.Theorem 2. For a given positive constantγ>0 and (20), if there exist symmetric matrixes P, Q and two matrices KP, KI of appropriate dimension such that (22) LMI holds, then system (20) is robust and asymptotically stable and satisfies the performance index as follows: (24)eef≤γf^2+V0, where (25)eef2=∫0t1eefeefTdt,f^2=∫0t1f^f^Tdt.Ie, Is, and I1 are unit matrixes with appropriate dimension.Proof. 
## 4. Simulation Results

(a) According to Theorem 2, the PIO parameters can be derived by using the LMI toolbox as follows:

$$(47)\qquad K_{p}=\begin{pmatrix}1.8820 & -11.2121 & 4.9258 & -11.2727 & -14.7548 & 5.8773\\ 0.5521 & 9.5623 & 4.9258 & 3.7746 & 0.2464 & -7.3346\\ -12.3470 & 3.2246 & 9.2758 & 25.0864 & 4.9084 & 28.9035\\ 5.2324 & -12.3869 & 9.2758 & -6.9024 & -3.7824 & -2.9357\\ 3.4196 & 24.1648 & -4.9735 & 13.9340 & 5.9835 & 12.3925\\ -5.2015 & 2.9738 & 7.1748 & 1.3869 & 9.0247 & -8.0924\end{pmatrix},\qquad K_{I}=\begin{pmatrix}4.3778\\ 12.9673\\ -1.3782\\ 9.0357\\ -14.7893\\ 1.5044\end{pmatrix}.$$

The initial values of the proportional, integral, and derivative neurons are implemented as follows:

$$(48)\qquad \omega_{s1j}=0.1,\quad \omega_{s2j}=-0.1,\quad \omega_{s3j}=0.2.$$

The initial values of the connected weights between the hidden layer and the output layer are defined, respectively, as follows:

$$(49)\qquad \omega_{sj1}=0.4,\quad \omega_{sj2}=-0.5,\quad \omega_{sj3}=0.8.$$

(b) The asteroid Eros 433 is taken as the target small body for simulation to verify the feasibility of the presented control scheme. The parameters of the small body are taken from [26] and are shown in Table 1.

Table 1

| Parameter | Real world | Simulation |
| --- | --- | --- |
| μ | 4.749E−04 | 4.800E−04 |
| Spin period (h) | 10.54 | 10.55 |
| R0 (km) | 1.150 | 1.148 |
| Gravitational coefficient C20 | −0.113 | −0.110 |
| Gravitational coefficient C22 | 0.0396 | 0.0397 |
| Initial position (m) | | [400, 400, 11000] |
| Initial velocity (m/s) | | [−0.9, 1.2, −1] |
| Terminal site (m) | | [100, 100, 8000] |

In this paper, compared with the perturbation uncertainties proposed in [27], larger perturbation uncertainties are chosen as follows:

$$(50)\qquad f_{dx}=150\sin 2t,\quad f_{dy}=160\sin 1.5t,\quad f_{dz}=140\sin 3t.$$
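As an illustration of the planning step that generates the reference tracked in these simulations, the following is a minimal Python sketch of the cubic nominal descent profile (10)-(13) under the boundary conditions (9). The boundary values follow Table 1 (initial altitude 11000 m, initial rate −1 m/s, landing-site altitude 8000 m), while the descent time τ = 600 s is an assumed placeholder, since this section does not state the value used in the simulation.

```python
def cubic_descent(z0, zdot0, rho, tau):
    """Cubic nominal altitude profile (11) satisfying the boundary
    conditions (9): z(0) = z0, z'(0) = zdot0, z(tau) = rho, z'(tau) = 0."""
    z2 = -(3.0 * z0 + 2.0 * zdot0 * tau - 3.0 * rho) / tau**2
    z3 = (2.0 * z0 + zdot0 * tau - 2.0 * rho) / tau**3

    def zn(t):        # (11) nominal altitude
        return z0 + zdot0 * t + z2 * t**2 + z3 * t**3

    def zn_dot(t):    # (12) nominal altitude rate
        return zdot0 + 2.0 * z2 * t + 3.0 * z3 * t**2

    def zn_ddot(t):   # (13) nominal acceleration
        return 2.0 * z2 + 6.0 * z3 * t

    return zn, zn_dot, zn_ddot

# Table 1-style values; tau = 600 s is an assumed descent time.
zn, zn_dot, zn_ddot = cubic_descent(z0=11000.0, zdot0=-1.0, rho=8000.0, tau=600.0)
for t in (0.0, 300.0, 600.0):
    print(f"t={t:6.1f}  z={zn(t):9.2f}  zdot={zn_dot(t):7.3f}")
# The endpoints reproduce the constraints in (9); the same planning
# is applied on the other two axes.
```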
From Figure 3 it can be seen that the actual trajectory of the probe exhibits evident chattering when the system suffers a larger disturbance. The inherent robustness of the sliding mode control algorithm is not sufficient to guarantee that the actual trajectory tracks the desired one. In this case, the PIO-based neural network control algorithm of this paper is utilized to compensate the unknown disturbance and eliminate the chattering of the trajectory, while tracking the desired position quickly.

Figure 3 Landing trajectory curve of the probe.

Figures 4, 5, 6, 7, 8, and 9 show the error curves between the ideal and actual locations and the velocity curves as a function of time on the three axis directions. For a system exhibiting large initial error and perturbation uncertainties, provided that the convergence of the system can be ensured, the PIO-based neural network control algorithm improves the convergence rate of the position and velocity errors compared with the sliding mode control algorithm [27]; namely, the actual trajectory can quickly and accurately track the planned trajectory in the presence of parameter uncertainty, feedback state error, and large external disturbance. Thereby the probe can land smoothly on the surface of the small body, avoiding a crash caused by excessive landing speed.

Figure 4 Position error component (x-axis) as a function of time.
Figure 5 Position error component (y-axis) as a function of time.
Figure 6 Position error component (z-axis) as a function of time.
Figure 7 Velocity component (x-axis) as a function of time.
Figure 8 Velocity component (y-axis) as a function of time.
Figure 9 Velocity component (z-axis) as a function of time.

## 5. Conclusion

This paper has presented a neural network control algorithm based on a PIO. For the powered descent phase of soft landing on small bodies, the dynamic models of the small bodies under the body-fixed coordinate system are given, with attitude control ignored. The solar radiation pressure and the third body's gravity are treated as the perturbation, which is viewed as a bounded function. The nominal trajectories meeting the constraints on the three axes are preplanned. The simulation results show that the PIO-based neural network control algorithm can ensure a fast and accurate response to parameter uncertainty, feedback state error, and external disturbances. Moreover, for a system exhibiting larger interference, it can overcome the inherent chattering problem of the sliding mode control algorithm and make the position error and the velocity error converge to small finite values, realizing soft landing.

---
*Source: 102424-2015-02-22.xml*
2015
# ACEA Attenuates Oxidative Stress by Promoting Mitophagy via CB1R/Nrf1/PINK1 Pathway after Subarachnoid Hemorrhage in Rats

**Authors:** Binbing Liu; Yang Tian; Yuchen Li; Pei Wu; Yongzhi Zhang; Jiaolin Zheng; Huaizhang Shi
**Journal:** Oxidative Medicine and Cellular Longevity (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1024279

---

## Abstract

Background and Purpose. Oxidative stress plays a pivotal role in early brain injury (EBI) after subarachnoid hemorrhage (SAH). The CB1R agonist ACEA has been reported to have a neuroprotective effect in many central nervous system diseases. Our study was aimed at exploring the effect and mechanism of ACEA in an experimental SAH model. Methods. Endovascular perforation was performed to establish a SAH model in rats. ACEA was administered intraperitoneally 1 h after SAH. The CB1R antagonist AM251 was injected intraperitoneally 1 h before SAH induction. Adenoassociated virus- (AAV-) Nrf1 shRNA was infused into the lateral ventricle 3 weeks before SAH induction. Neurological tests, immunofluorescence, DHE, TUNEL, Nissl staining, transmission electron microscopy (TEM), and Western blot were performed. Results. The expression of CB1R, Nrf1, PINK1, Parkin, and LC3II increased and peaked at 24 h after SAH. ACEA treatment exhibited antioxidative and antiapoptotic effects after SAH. In addition, ACEA treatment increased the expression of Nrf1, PINK1, Parkin, LC3II, and Bcl-xl but repressed the expression of Romo-1, Bax, and cleaved caspase-3. Moreover, the TEM results demonstrated that ACEA promoted the formation of mitophagosomes and maintained the normal mitochondrial morphology of neurons. The protective effect of ACEA was reversed by AM251 and Nrf1 shRNA, respectively. Conclusions. This study demonstrated that ACEA alleviated oxidative stress and neurological dysfunction by promoting mitophagy after SAH, at least in part via the CB1R/Nrf1/PINK1 signaling pathway.

---

## Body

## 1. Introduction

Subarachnoid hemorrhage (SAH) is a severe subtype of stroke with high morbidity and mortality [1]. Although early diagnosis and treatment methods have improved in the past few decades, not every patient achieves a good clinical prognosis, because of early brain injury (EBI) [2, 3]. Recent studies reported that oxidative stress plays a pivotal role in EBI after SAH [4–6]. Thus, alleviating oxidative stress injury may be an efficacious treatment for improving the prognosis of SAH.

Dysfunctional mitochondria are the primary source of intracellular reactive oxygen species (ROS) due to the disruption of the electron transfer chain and the transition of mitochondrial membrane permeability following SAH [4, 7], leading to oxidative stress injury and subsequent neuronal death [6]. Consequently, timely clearance of impaired mitochondria would be an effective treatment strategy for SAH patients. Mitophagy is a selective autophagy process that specifically degrades damaged mitochondria to maintain mitochondrial homeostasis and cellular survival [8, 9]. Accumulating evidence has indicated that mitophagy is a potential therapeutic target to protect against EBI after SAH [10–13].
Furthermore, our previous research proved that promoting mitophagy alleviated oxidative stress as well as subsequent neuronal death after SAH [14].

The endocannabinoid system consists of lipid-based mediators, the endocannabinoids (eCB), their target receptors, associated synthesizing and metabolizing enzymes, and transporter proteins. It has been reported that the levels of cannabinoid receptors and endocannabinoids increase after stroke [15]. Cannabinoid receptor 1 (CB1R) is a G-protein-coupled receptor involved in the modulation of neuronal activity, synaptic plasticity, and cell metabolism [16–18]. Accumulating evidence has demonstrated that activation of CB1R provides neuroprotection in stroke [19–21].

Nuclear respiratory factor 1 (Nrf1) is a critical transcription factor that regulates genes necessary for mitochondrial biogenesis and function [22–24]. In recent years, it has been discovered that Nrf1 target genes are not restricted to genes involved in mitochondrial function, which indicates that Nrf1 has additional potential functions [25]. For example, Nrf1 positively regulates the expression of the PINK1 and Parkin genes and participates in mitochondrial quality control by regulating PINK1/Parkin-mediated mitophagy [26]. Arachidonyl-2-chloroethylamide (ACEA), a highly selective CB1R agonist, was reported to protect neurons against ischemic injury by increasing the expression of Nrf1 and inducing mitochondrial biogenesis [27]. However, whether the protective effect of ACEA is mediated by regulating mitophagy remains unknown.

Thus, our study was aimed at verifying the hypothesis that ACEA attenuates oxidative stress by regulating mitophagy via the CB1R/Nrf1/PINK1 pathway after SAH in rats.

## 2. Materials and Methods

### 2.1. Animals and SAH Model

All experimental procedures were approved by the Institutional Animal Care and Use Committees of the First Affiliated Hospital of Harbin Medical University and were in accordance with the NIH Guidelines for the Care and Use of Laboratory Animals. Adult male Sprague-Dawley rats (weight 280-320 g) were housed at constant humidity (55±5%) and temperature (22±2°C) in a 12 h light/dark cycle room. The animals were raised with free access to food and water.

The endovascular perforation method was employed to induce the SAH model in rats, as previously described [28]. Briefly, after rats were fully anesthetized, the external carotid artery (ECA) and internal carotid artery (ICA) were fully exposed. A sharp 4–0 nylon suture was inserted into the left ICA from the cut ECA stump until resistance was felt. The suture was advanced further to puncture the artery for several seconds and then withdrawn immediately. In Sham rats, all procedures were identical except for the puncture of the vessel.

### 2.2. Drug Administration

The highly selective CB1R agonist ACEA (Cat. No. 1319, Tocris Bioscience, Bristol, UK) was diluted in 5% dimethyl sulfoxide (DMSO). ACEA was administered intraperitoneally (i.p.) in different groups at doses of 0.5 mg/kg, 1.5 mg/kg, and 4.5 mg/kg 1 h after SAH [27]. The CB1R antagonist AM251 (Cat. No. A6226, Sigma-Aldrich, MO, USA), dissolved in 5% DMSO, was injected intraperitoneally at a dose of 1.0 mg/kg 1 h before SAH induction [29]. The control groups were injected with the same volumes of the respective solvents.

### 2.3. Intracerebroventricular Injection
An adenoassociated virus (AAV; GeneChem, Shanghai, China) system was used to knock down the expression of Nrf1 according to the manufacturer's instructions. A nontargeting scrambled shRNA served as the negative control. Animals were injected intraperitoneally with pentobarbital (40 mg/kg) and placed in a stereotaxic apparatus. Next, a 10 μL syringe was inserted into the left ventricle at the following coordinates relative to bregma: 1.0 mm lateral, 1.5 mm posterior, and 3.1 mm below the dural surface. A total of 3 μL of Nrf1 shRNA or scrambled shRNA was injected intraventricularly at a rate of 0.3 μL/min, 3 weeks before SAH induction. To improve the knockdown efficiency, three different shRNA duplexes were designed and mixed. Their sequences are as follows: shRNA1: 5′-GCCTGGTCCAGATCCCTGTGACGAATCACAGGGATCTGGACCAGGCTTTTT-3′, shRNA2: 5′-GGACAGCGCAGTCACCATGGACGAATCCATGGTGACTGCGCTGTCCTTTTT-3′, and shRNA3: 5′-GGAGGTGGTGACGTTGGAACACGAATGTTCCAACGTCACCACCTCCTTTTT-3′.

### 2.4. Experimental Design (Supplemental Figure 1)

#### 2.4.1. Experiment 1

36 rats (n=6 per group) were randomly assigned into 6 groups: Sham and 3, 6, 12, 24, and 72 hours after SAH. Expression of CB1R, Nrf1, PINK1, Parkin, and LC3II/LC3I was analyzed by Western blot. Additionally, 3 Sham rats and 3 SAH-24 h rats were used to detect the cellular localization of CB1R by immunofluorescence staining.

#### 2.4.2. Experiment 2

In a short-term outcome assessment, 30 rats (n=6 per group) were equally assigned into 5 groups: Sham, SAH+vehicle, SAH+ACEA (0.5 mg/kg), SAH+ACEA (1.5 mg/kg), and SAH+ACEA (4.5 mg/kg) for neurological tests. According to the neurobehavioral test results, 1.5 mg/kg was chosen as the best dosage for subsequent experiments.

#### 2.4.3. Experiment 3

In a long-term outcome assessment, 18 rats (n=6 per group) were assigned into 3 groups: Sham, SAH+vehicle, and SAH+ACEA. The rotarod test, foot fault test, and Nissl staining were performed.

#### 2.4.4. Experiment 4

To verify the neuroprotective effect of ACEA, 30 rats (n=6 per group) were assigned into 5 groups: Sham, SAH+vehicle, SAH+ACEA, SAH+AM251, and SAH+ACEA+AM251 for Western blot. Additionally, 30 rats (n=6 per group) were used for neurological tests, DHE, TUNEL, immunofluorescence staining, and transmission electron microscopy (TEM). Samples for determination of MDA, SOD, GSH/GSSG, and GSH-Px levels were shared with Western blot in each group.

#### 2.4.5. Experiment 5

To verify the hypothetical molecular mechanism, 24 rats (n=6 per group) were assigned into 4 groups: SAH+vehicle, SAH+ACEA, SAH+ACEA+scrambled shRNA, and SAH+ACEA+Nrf1 shRNA for Western blot. Additionally, 24 rats (n=6 per group) were used for neurological tests, DHE, TUNEL, immunofluorescence staining, and TEM. Samples for determination of MDA, SOD, GSH/GSSG, and GSH-Px levels were shared with Western blot in each group.

### 2.5. Severity of SAH

The severity of SAH was estimated with the SAH grading scale as previously described [30]. Briefly, the animals were euthanized at 24 h after SAH, and the basal cistern of the rat brain was divided into 6 sections. Based on the amount of blood clotting, each section was given a grade from 0 to 3. The total score of the six sections ranged from 0 to 18, and rats with an SAH score ≤ 8 were excluded, as illustrated in the sketch below.
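As a concrete illustration of this grading and exclusion rule, a minimal Python sketch follows; the function name and the example per-section scores are hypothetical.

```python
def sah_grade(section_scores):
    """Total SAH grade per the scale above: the basal cistern is divided
    into six sections, each scored 0-3, giving a total of 0-18."""
    assert len(section_scores) == 6
    assert all(0 <= s <= 3 for s in section_scores)
    return sum(section_scores)

scores = [2, 3, 1, 2, 2, 1]     # hypothetical per-section grades
total = sah_grade(scores)
included = total > 8            # rats with a total score <= 8 are excluded
print(total, included)          # -> 11 True
```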
### 2.6. Evaluation of Short-Term Neurofunctional Outcomes

Short-term neurofunctional outcome was estimated with the modified Garcia score and beam balance test as previously described [31, 32]. A higher score represented a better neurofunctional outcome.

### 2.7. Evaluation of Long-Term Neurofunctional Outcomes

Long-term neurofunctional outcome was estimated with the rotarod test and foot fault test [33, 34]. Briefly, the rotarod test was conducted on days 7, 14, and 21 after SAH; animals were placed on a rotating horizontal cylinder. The rotating speed started at 5 revolutions per minute (RPM) or 10 RPM and was gradually accelerated by 2 RPM every 5 seconds. The falling latency was recorded. The foot fault test was also conducted on days 7, 14, and 21 after SAH; animals were required to walk on a steel grid. A paw falling through the grid was recorded as a foot fault. A total of 50 steps were recorded for the right forelimb. The percentage of foot faults was expressed as faults/(steps + faults) × 100%.

### 2.8. Western Blot and Isolation of Mitochondria

Western blot was performed as previously described [35]. Briefly, rats were euthanized and transcardially perfused with 150 mL of cold PBS (0.01 M, pH 7.4). The left hemispheres were collected and homogenized in RIPA lysis buffer (P0013B, Beyotime, Shanghai, China). After centrifugation at 14000×g for 30 min at 4°C, the supernatant was collected. Equal amounts of protein (30 μg) were loaded onto 8%-12% SDS-PAGE gels, electrophoresed, and transferred to 0.2 μm nitrocellulose membranes, which were blocked with 5% nonfat milk and incubated with the following primary antibodies overnight at 4°C: anti-CB1R (1:1000, ab259323, Abcam, MA, USA), anti-Nrf1 (1:2000, ab175932, Abcam, MA, USA), anti-PINK1 (1:1000, ab186303, Abcam, MA, USA), anti-Parkin (1:1000, ab77924, Abcam, MA, USA), anti-LC3B (1:2000, ab192890, Abcam, MA, USA), anti-COX IV (1:1000, ab33985, Abcam, MA, USA), anti-Bcl-XL (1:1000, ab32370, Abcam, MA, USA), anti-Bax (1:1000, ab32503, Abcam, MA, USA), anti-cleaved caspase-3 (1:500, 9661, Cell Signaling Technology Inc., MA, USA), anti-Romo-1 (1:500, NBP2-45607, NOVUS Biologicals, CO, USA), and anti-β-actin (1:1000, ab8227, Abcam, MA, USA). The next day, the membranes were incubated with the corresponding secondary antibodies for 2 h at room temperature. Immunoblots were visualized with the BeyoECL Star chemiluminescence reagent kit (Beyotime, Shanghai, China) and quantified by densitometry using ImageJ software. COX IV and β-actin were used as internal controls.

Mitochondrial proteins were extracted using the Tissue Mitochondria Isolation Kit (Beyotime, Shanghai, China) according to the manufacturer's instructions.

### 2.9. Immunofluorescence Staining

Immunofluorescence staining was performed as previously described [36]. Briefly, after being blocked with 5% donkey serum in 0.3% Triton X-100 for 60 min at room temperature, the brain slices were incubated at 4°C overnight with the following primary antibodies: anti-CB1R (1:100, ab259323, Abcam, MA, USA), anti-NeuN (1:1000, ab104224, Abcam, MA, USA), anti-GFAP (1:50, ab4648, Abcam, MA, USA), anti-Iba1 (1:100, ab5076, Abcam, MA, USA), anti-LC3B (1:1000, ab192890, Abcam, MA, USA), and anti-TOMM20 (1:100, ab56783, Abcam, MA, USA). Then, the slices were incubated with the appropriate fluorescence-conjugated secondary antibody (1:500, Abcam, MA, USA) at 37°C for 1 h.

### 2.10. Transmission Electron Microscopy (TEM)

The morphology of mitochondria was observed by TEM. Tissue blocks of 1 mm³ were cut from the brain of each group and fixed with 2.5% glutaraldehyde for 4 h.
After dehydration, samples were embedded in araldite and then cut into 60 nm slices with an ultramicrotome (Leica, Wetzlar, Germany). Finally, after staining, the slices were fixed to nickel grids. Images were acquired using a transmission electron microscope (Carl Zeiss, Thornwood, NY, USA).

### 2.11. Evaluation of Oxidative Stress

#### 2.11.1. Determination of MDA, SOD, GSH/GSSG, and GSH-Px Levels

Homogenates of the left hemispheric cortex tissues were collected. The levels of cellular MDA, SOD, GSH/GSSG, and GSH-Px in cortex tissues were detected with the Lipid Peroxidation MDA Assay Kit (S0131S, Beyotime, Shanghai, China), SOD Assay Kit (S0103, Beyotime, Shanghai, China), GSH and GSSG Assay Kit (S0053, Beyotime, Shanghai, China), and Total Glutathione Peroxidase Assay Kit (S0058, Beyotime, Shanghai, China), respectively, according to the manufacturer's instructions.

#### 2.11.2. Dihydroethidium (DHE) Staining

To detect the reactive oxygen species (ROS) level of brain tissues, the brain slices were incubated with 2 μmol/L DHE (Thermo Fisher Scientific, MA, USA) at 37°C for 30 min. Images were photographed with a fluorescence microscope, and the DHE-positive cells were quantified using ImageJ software.

### 2.12. Evaluation of Neuronal Damage

#### 2.12.1. TUNEL Staining

Neuronal apoptosis was detected with a TUNEL staining kit (11684795910, Roche, USA) according to the manufacturer's protocols. Briefly, after being blocked with 5% donkey serum in 0.3% Triton X-100 for 60 min at room temperature, the brain slices were incubated at 4°C overnight with the primary antibody anti-NeuN (1:1000, ab104224, Abcam, MA, USA). Then, the slices were incubated with a fluorescence-conjugated secondary antibody (1:1000, ab150120, Abcam, MA, USA) for 1 h at 37°C. Lastly, the slices were incubated with the TUNEL reaction mixture for 1 h at 37°C before DAPI nuclear staining. Under a fluorescence microscope, the TUNEL-positive neurons in the left temporal cortex were quantified using ImageJ software.

#### 2.12.2. Nissl Staining

Hippocampal neuron degeneration was assessed with Nissl staining on the 28th day after SAH. Briefly, 16 μm coronal slices were incubated with 0.5% crystal violet solution for 15 min. Under an optical microscope, the Nissl-positive cells in the hippocampal cornu ammonis (CA) 1, CA3, and dentate gyrus (DG) were quantified using ImageJ software.

### 2.13. Statistical Analysis

Data were represented as mean ± standard deviation (SD) or median with interquartile range, based on the normality and homogeneity of variance. For data with a normal distribution, one-way analysis of variance (ANOVA) followed by the Tukey post hoc test was used. For nonnormally distributed data, the Kruskal-Wallis test followed by the Dunn post hoc test was used. A value of p<0.05 was considered statistically significant. GraphPad Prism software and SPSS software (version 24.0) were used for statistical analyses.
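A minimal sketch of this test-selection logic with SciPy is shown below (SciPy ≥ 1.8 is assumed for `stats.tukey_hsd`). The group data are random placeholders, not the study's measurements, and Dunn's post hoc test, which is not part of SciPy, is only referenced in a comment.

```python
import numpy as np
from scipy import stats  # SciPy >= 1.8 for stats.tukey_hsd

# Placeholder data: three experimental groups of n = 6, as in this study.
rng = np.random.default_rng(0)
groups = [rng.normal(loc, 1.0, size=6) for loc in (0.0, 0.5, 1.5)]

# Choose the test family from normality (Shapiro-Wilk) and homogeneity
# of variance (Levene), mirroring the criterion described above.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups)
homogeneous = stats.levene(*groups).pvalue > 0.05

if normal and homogeneous:
    print("one-way ANOVA p =", stats.f_oneway(*groups).pvalue)
    print(stats.tukey_hsd(*groups))       # Tukey post hoc test
else:
    print("Kruskal-Wallis p =", stats.kruskal(*groups).pvalue)
    # Dunn's post hoc test is not in SciPy; e.g., scikit-posthocs'
    # posthoc_dunn can be used at this step.
```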
## 3. Results

### 3.1. Mortality and SAH Severity

There were 33 rats in the Sham group and 209 rats in the SAH group, of which 44 rats died due to SAH induction (21.05%). None of the Sham-operated rats died, and 13 rats with an SAH score ≤ 8 were excluded from this research (Supplemental Figure 2(a)). In the SAH group, blood clots were mainly distributed around the circle of Willis (Supplemental Figure 2(b)). No significant differences in SAH grade were observed among the SAH groups (Supplemental Figure 2(c)).

### 3.2. Time Course Expression of CB1R, Nrf1, PINK1, Parkin, and LC3II after SAH

Western blot revealed that the expression of CB1R and Nrf1 in the cytoplasm and of PINK1, Parkin, and LC3II in mitochondria began to increase at 6 h and reached a peak at 24 h after SAH, compared with the Sham group (p<0.05; Figures 1(a) and 1(b)). Consistently, immunofluorescence staining confirmed the increased expression of CB1R after SAH. It also showed that CB1R was mainly located in neurons of the cerebral cortex, with only a few microglia and astrocytes positive (Figure 1(d)).

Figure 1 Time course expression of CB1R, Nrf1, PINK1, Parkin, and LC3II and cellular localization of CB1R after SAH. (a) Representative Western blot images of the time course and (b) quantitative analyses of CB1R, Nrf1, PINK1, Parkin, and LC3II. n=6 per group. Data were represented as mean±SD. ∗p<0.05 vs. the Sham group. (c) Representative picture indicating the location of immunofluorescence staining (small black box). (d) Representative microphotographs of immunofluorescence staining for CB1R (green) with neurons (NeuN, red), astrocytes (GFAP, red), and microglia (Iba-1, red) in the left temporal cortex at 24 h after SAH. Nuclei were stained with DAPI (blue). n=3 per group. Scale bar = 50 μm. (a)(b)(c)(d)

### 3.3. ACEA Attenuated Short-Term Neurological Deficits

Modified Garcia and beam balance scores indicated that SAH caused significant neurological deficits compared with the Sham group. ACEA treatment at the dose of 1.5 mg/kg significantly attenuated the neurological deficits compared with the SAH+vehicle group (p<0.05, Supplemental Figures 3(a) and 3(b)). According to the neurobehavioral test results, 1.5 mg/kg was chosen as the best dosage for subsequent experiments.

### 3.4. ACEA Attenuated Long-Term Neurological Deficits and Hippocampal Neuron Degeneration

The rotarod test indicated that, at both 5 RPM and 10 RPM, the falling latency in the SAH+vehicle group was significantly shorter than that in the Sham group.
Such poor performance induced by SAH was remarkably improved by ACEA treatment (p<0.05, Figures 2(a) and 2(b)).

Figure 2 ACEA attenuated long-term neurological deficits and hippocampal neuronal degeneration after SAH. Rotarod test at 5 RPM (a) and 10 RPM (b) in the first, second, and third week after SAH, n=6 per group. (c) Foot fault test during the three weeks after SAH, n=6 per group. (d) Representative microphotographs of Nissl staining in the hippocampal CA1, CA3, and DG regions. Scale bar = 50 μm. (e) Areas of interest in the CA1, CA3, and DG regions of the left hippocampus. Scale bar = 200 μm. (f) Quantification of the Nissl-positive neurons, n=6 per group. Data of the rotarod test were represented as the median with interquartile range. Other data were represented as mean±SD. ∗p<0.05 and ∗∗p<0.01 vs. the Sham group; #p<0.05 and ##p<0.01 vs. the SAH+vehicle group. (a)(b)(c)(d)(e)(f)

The foot fault test showed that the foot fault rate in the SAH+vehicle group was dramatically higher than that in the Sham group in all three weeks. ACEA treatment significantly reduced the foot fault rate (p<0.01, Figure 2(c)).

Nissl staining revealed that Nissl-positive neurons in the SAH+vehicle group were remarkably fewer than those in the Sham group in the CA1, CA3, and DG areas of the ipsilateral hippocampus (p<0.05, Figures 2(d) and 2(f)). ACEA treatment significantly attenuated hippocampal neuron degeneration on the 28th day after SAH (p<0.05, compared with the SAH+vehicle group, Figures 2(d) and 2(f)).

### 3.5. ACEA Treatment Attenuated Neurological Deficits and Neuronal Apoptosis, whereas the Neuroprotective and Antiapoptotic Effects of ACEA Were Reversed by AM251 and Nrf1 shRNA

In the modified Garcia score and beam balance test results, ACEA treatment ameliorated neurological deficits compared with the SAH+vehicle group (p<0.05, Figures 3(a) and 3(b)). Compared with the SAH+ACEA group, AM251 abolished the neuroprotective effect of ACEA in the SAH+ACEA+AM251 group (p<0.05, Figures 3(a) and 3(b)). Compared with the SAH+ACEA+scrambled shRNA group, Nrf1 shRNA abolished the neuroprotective effect of ACEA in the SAH+ACEA+Nrf1 shRNA group (p<0.05, Figures 4(a) and 4(b)).

Figure 3 ACEA attenuated neurological deficits and neuronal apoptosis, which was reversed by AM251. (a) Modified Garcia and (b) beam balance scores, n=6 per group. (c) Representative microphotographs of TUNEL staining and quantification of TUNEL-positive neurons. Scale bar = 100 μm. n=3 per group. (d) Representative Western blot images. (e) Quantitative analyses of Bcl-xl, Bax, and cleaved caspase-3. n=6 per group. Data of modified Garcia and beam balance scores were represented as the median with interquartile range. Other data were represented as mean±SD. ∗p<0.05 and ∗∗p<0.01 vs. the Sham group; #p<0.05 and ##p<0.01 vs. the SAH+vehicle group; @p<0.05 and @@p<0.01 vs. the SAH+ACEA group. (a)(b)(c)(d)(e)

Figure 4 Nrf1 shRNA abolished the neuroprotective and antiapoptotic effects of ACEA. (a) Modified Garcia and (b) beam balance scores, n=6 per group. (c) Representative microphotographs of TUNEL staining and quantification of TUNEL-positive neurons. Scale bar = 100 μm. n=3 per group. (d) Representative Western blot images. (e) Quantitative analyses of Bcl-xl, Bax, and cleaved caspase-3. n=6 per group. Data of modified Garcia and beam balance scores were represented as the median with interquartile range. Other data were represented as mean±SD. ∗p<0.05 and ∗∗p<0.01 vs. the SAH+vehicle group; #p<0.05 and ##p<0.01 vs. the SAH+ACEA+scrambled shRNA group. (a)(b)(c)(d)(e)
TUNEL staining results revealed that ACEA reduced the number of apoptotic neurons compared with the SAH+vehicle group (p<0.01, Figure 3(c)). Compared with the SAH+ACEA group, AM251 abolished the antiapoptotic effect of ACEA in the SAH+ACEA+AM251 group (p<0.01, Figure 3(c)). Compared with the SAH+ACEA+scrambled shRNA group, Nrf1 shRNA abolished the antiapoptotic effect of ACEA in the SAH+ACEA+Nrf1 shRNA group (p<0.01, Figure 4(c)).

Western blot showed that the expression of Bax and cleaved caspase-3 significantly increased and the expression of Bcl-xl decreased in the SAH+vehicle group compared with the Sham group (p<0.05, Figures 3(d) and 3(e)). AM251 eliminated the antiapoptotic effect of ACEA, with upregulation of Bax and cleaved caspase-3 and downregulation of Bcl-xl in the SAH+ACEA+AM251 group compared with the SAH+ACEA group (p<0.05, Figures 3(d) and 3(e)). Nrf1 shRNA also eliminated the antiapoptotic effect of ACEA, with upregulation of Bax and cleaved caspase-3 and downregulation of Bcl-xl in the SAH+ACEA+Nrf1 shRNA group compared with the SAH+ACEA+scrambled shRNA group (p<0.05, Figures 4(d) and 4(e)).

### 3.6. ACEA Treatment Attenuated Oxidative Stress, whereas the Antioxidative Stress Effect of ACEA Was Reversed by AM251 and Nrf1 shRNA

Western blot results demonstrated that Romo-1 (reactive oxygen species modulator 1), a reactive oxygen species-related marker protein, significantly increased after SAH compared with the Sham group (p<0.05, Figure 5(a)). ACEA dramatically reduced the expression of Romo-1 in the SAH+ACEA group compared with the SAH+vehicle group (p<0.05, Figure 5(a)). Compared with the SAH+ACEA group, AM251 increased the expression of Romo-1 in the SAH+ACEA+AM251 group (p<0.05, Figure 5(a)). Compared with the SAH+ACEA+scrambled shRNA group, Nrf1 shRNA also increased the expression of Romo-1 in the SAH+ACEA+Nrf1 shRNA group (p<0.05, Figure 6(a)).

Figure 5 ACEA attenuated oxidative stress, which was reversed by AM251. (a) Representative Western blot images and quantitative analysis of Romo-1, n=6 per group. (b) Quantification of the levels of MDA, SOD, GSH-Px, and the GSH/GSSG ratio in the cortex of the ipsilateral hemisphere, n=6 per group. (c) Representative microphotographs of DHE staining and quantification of DHE-positive cells. Scale bar = 50 μm. n=3 per group. Data were represented as mean±SD. ∗p<0.05 and ∗∗p<0.01 vs. the Sham group; #p<0.05 and ##p<0.01 vs. the SAH+vehicle group; @p<0.05 and @@p<0.01 vs. the SAH+ACEA group. (a)(b)(c)

Figure 6 Nrf1 shRNA abolished the antioxidative stress effect of ACEA. (a) Representative Western blot images and quantitative analysis of Romo-1, n=6 per group. (b) Quantification of the levels of MDA, SOD, GSH-Px, and the GSH/GSSG ratio in the cortex of the ipsilateral hemisphere, n=6 per group. (c) Representative microphotographs of DHE staining and quantification of DHE-positive cells. Scale bar = 50 μm. n=3 per group. Data were represented as mean±SD. ∗p<0.05 and ∗∗p<0.01 vs. the SAH+vehicle group; #p<0.05 and ##p<0.01 vs. the SAH+ACEA+scrambled shRNA group. (a)(b)(c)

Determination of MDA, SOD, GSH/GSSG, and GSH-Px levels showed that the level of MDA dramatically increased in the SAH+vehicle group compared with the Sham group (p<0.05, Figure 5(b)), whereas ACEA treatment reduced the level of MDA compared with the SAH+vehicle group (p<0.05, Figure 5(b)).
Additionally, the levels of SOD and GSH-Px and the GSH/GSSG ratio decreased as a result of oxidative stress injury in the SAH+vehicle group compared with the Sham group (p<0.05, Figure 5(b)), while ACEA treatment reinforced the activity of these antioxidative factors compared with the SAH+vehicle group (p<0.05, Figure 5(b)). However, the antioxidative effect of ACEA was reversed by AM251 and Nrf1 shRNA, respectively (p<0.05, compared with the SAH+ACEA group and the SAH+ACEA+scrambled shRNA group, Figures 5(b) and 6(b)).

Moreover, ACEA treatment reduced the number of DHE-positive cells compared with the SAH+vehicle group (p<0.01, Figure 5(c)), but this effect was likewise reversed by AM251 and Nrf1 shRNA, respectively (p<0.01, compared with the SAH+ACEA group and the SAH+ACEA+scrambled shRNA group, Figures 5(c) and 6(c)).

### 3.7. ACEA Promoted Mitophagy and Improved Mitochondrial Morphology after SAH

For Western blot, we extracted mitochondrial proteins to detect the levels of PINK1, Parkin, and LC3II in mitochondria. ACEA treatment increased the expression of PINK1, Parkin, and LC3II compared with the SAH+vehicle group (p<0.05, Figures 7(c) and 7(d)), which indicated that ACEA activated PINK1/Parkin-mediated mitophagy.

Figure 7 ACEA promoted mitophagy and improved mitochondrial morphology, which was reversed by AM251. (a) Representative immunofluorescence colocalization of Tomm20 (mitochondrial marker, green) with LC3 (autophagosome marker, red) and quantification of the ratio of LC3-associated Tomm20 to total Tomm20. Scale bar = 50 μm. n=3 per group. (b) Neuronal and mitochondrial structures observed by TEM. Red arrow: normal mitochondria; red triangle: swollen mitochondria; red circle: mitophagosome; red star: mitochondrial vacuolization. Scale bar = 1 μm. (c) Representative Western blot images. (d) Quantitative analyses of CB1R, Nrf1, PINK1, Parkin, and LC3II. n=6 per group. Data were expressed as mean±SD. ∗p<0.05 and ∗∗p<0.01 vs. the Sham group; #p<0.05 and ##p<0.01 vs. the SAH+vehicle group; @p<0.05 and @@p<0.01 vs. the SAH+ACEA group. (a)(b)(c)(d)

Consistently, immunofluorescence colocalization staining results confirmed that ACEA treatment increased the colocalization of the mitochondrial protein TOMM20 with the autophagosome marker LC3 (p<0.01, compared with the SAH+vehicle group, Figure 7(a)).

Mitochondrial ultrastructural morphology was observed by transmission electron microscopy (TEM). Compared with the Sham group, there were many swollen mitochondria with broken cristae in the SAH+vehicle group, which indicated that the mitochondrial structure of neurons had been destroyed after SAH (Figure 7(b)). ACEA treatment promoted the formation of mitophagosomes and maintained the normal mitochondrial morphology of neurons (Figure 7(b)).

### 3.8. AM251 and Nrf1 shRNA Abolished the Promoting Effect of ACEA on Mitophagy

Western blot revealed that the expression of PINK1, Parkin, and LC3II in mitochondria significantly decreased in the SAH+ACEA+AM251 group (p<0.05, compared with the SAH+ACEA group, Figures 7(c) and 7(d)), which indicated that the promoting effect of ACEA on mitophagy was eliminated by AM251. Compared with the SAH+ACEA+scrambled shRNA group, Nrf1 shRNA reduced the expression of PINK1, Parkin, and LC3II in mitochondria (p<0.05, Figures 8(c) and 8(d)), which indicated that the promoting effect of ACEA on mitophagy was also eliminated by Nrf1 shRNA.

Figure 8 Nrf1 shRNA abolished the promoting effect of ACEA on mitophagy.
### 3.8. AM251 and Nrf1 shRNA Abolished the Promoting Effect of ACEA on Mitophagy

Western blot revealed that the expression of PINK1, Parkin, and LC3II in mitochondria significantly decreased in the SAH+ACEA+AM251 group (p<0.05, compared with the SAH+ACEA group, Figures 7(c) and 7(d)), which indicated that the promoting effect of ACEA on mitophagy was eliminated by AM251. Compared with the SAH+ACEA+scrambled shRNA group, Nrf1 shRNA reduced the expression of PINK1, Parkin, and LC3II in mitochondria (p<0.05, Figures 8(c) and 8(d)), which indicated that the promoting effect of ACEA on mitophagy was also eliminated by Nrf1 shRNA.

Figure 8: Nrf1 shRNA abolished the promoting effect of ACEA on mitophagy. (a) Representative immunofluorescence colocalization of Tomm20 (mitochondrial marker, green) with LC3 (autophagosome marker, red) and quantification of the ratio of LC3-associated Tomm20 to total Tomm20. Scale bar = 50 μm, n=3 per group. (b) Neuronal and mitochondrial structures observed by TEM. Red arrow: normal mitochondria; red triangle: swollen mitochondria; red circle: mitophagosome; red star: mitochondrial vacuolization. Scale bar = 1 μm. (c) Representative Western blot images. (d) Quantitative analyses of CB1R, Nrf1, PINK1, Parkin, and LC3II, n=6 per group. Data were expressed as mean±SD. ∗p<0.05 and ∗∗p<0.01 vs. the SAH+vehicle group; #p<0.05 and ##p<0.01 vs. the SAH+ACEA+scrambled shRNA group.

Consistently, immunofluorescence colocalization staining confirmed that the colocalization of TOMM20 and LC3 decreased in the SAH+ACEA+AM251 group (p<0.01, compared with the SAH+ACEA group, Figure 7(a)). Additionally, Nrf1 shRNA also reduced the colocalization of TOMM20 and LC3 in the SAH+ACEA+Nrf1 shRNA group (p<0.01, compared with the SAH+ACEA+scrambled shRNA group, Figure 8(a)).

Examination of mitochondrial morphology showed many swollen mitochondria with broken mitochondrial cristae in the SAH+ACEA+AM251 group, indicating that AM251 abolished the protective effect of ACEA on mitochondria (compared with the SAH+ACEA group, Figure 7(b)). Compared with the SAH+ACEA+scrambled shRNA group, Nrf1 shRNA resulted in marked mitochondrial destruction and even mitochondrial vacuolization in the SAH+ACEA+Nrf1 shRNA group (Figure 8(b)).

### 3.9. ACEA Promoted Mitophagy via Activation of the CB1R/Nrf1/PINK1 Signaling Pathway after SAH

Western blot demonstrated that the expression of CB1R, Nrf1, PINK1, Parkin, and LC3II markedly increased after SAH compared with the Sham group (p<0.05, Figures 1(a) and 1(b)). ACEA treatment significantly increased the expression of Nrf1, PINK1, Parkin, and LC3II compared with the SAH+vehicle group (p<0.05, Figures 7(c) and 7(d)). The CB1R antagonist AM251 was injected 1 h before SAH induction to evaluate whether CB1R is involved in the promoting effect of ACEA on mitophagy after SAH. The results showed that pretreatment with AM251 markedly reduced the expression of the downstream molecules Nrf1, PINK1, Parkin, and LC3II compared with the SAH+ACEA group (p<0.05, Figures 7(c) and 7(d)). Administration of Nrf1 shRNA strongly suppressed the expression of Nrf1 in the SAH+ACEA+Nrf1 shRNA group compared with the SAH+ACEA+scrambled shRNA group (p<0.05, Figures 8(c) and 8(d)). In addition, Nrf1 knockdown significantly suppressed the expression of PINK1, Parkin, and LC3II in the SAH+ACEA+Nrf1 shRNA group compared with the SAH+ACEA+scrambled shRNA group (p<0.05, Figures 8(c) and 8(d)).
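The Western blot comparisons in Sections 3.7-3.9 rest on densitometric quantification in ImageJ, normalized to loading controls (COX IV for mitochondrial fractions and β-actin for cytoplasmic fractions, per Section 2.8). The snippet below is a minimal sketch of that normalization step, expressing each group relative to Sham; the band intensities are hypothetical placeholders, not measured values.

```python
# Illustrative sketch: normalize target-band densitometry to a loading control
# and rescale to Sham = 1, as described in Section 2.8. Intensities are
# hypothetical placeholders, not measured data.

def normalized_expression(band: dict[str, float],
                          control: dict[str, float]) -> dict[str, float]:
    """Target intensity / loading-control intensity, rescaled so Sham = 1."""
    ratios = {group: band[group] / control[group] for group in band}
    return {group: ratio / ratios["Sham"] for group, ratio in ratios.items()}

pink1 = {"Sham": 820.0, "SAH+vehicle": 1450.0, "SAH+ACEA": 2100.0}
cox4 = {"Sham": 1000.0, "SAH+vehicle": 980.0, "SAH+ACEA": 1020.0}  # mito loading control

print(normalized_expression(pink1, cox4))
# -> approximately {'Sham': 1.0, 'SAH+vehicle': 1.80, 'SAH+ACEA': 2.51}
```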
## 4. Discussion

In our research, we investigated the antioxidative and antiapoptotic effects of ACEA, as well as the underlying mechanism involving the CB1R/Nrf1/PINK1 signaling pathway, after SAH in rats (Figure 9). We found that the protein levels of CB1R, Nrf1, PINK1, Parkin, and LC3II began to increase at 6 h and peaked at 24 h after SAH, and that CB1R was mainly located in neurons of the cerebral cortex after SAH. ACEA treatment promoted mitophagy and exerted antioxidative and antiapoptotic effects, which ultimately contributed to the improvement of neurological deficits. Inhibition of CB1R with AM251 eliminated the antioxidative and antiapoptotic effects of ACEA after SAH. Mechanistically, activation of CB1R upregulated the expression of Nrf1, PINK1, Parkin, LC3II, and Bcl-xl and downregulated the expression of Bax, cleaved caspase-3, and Romo-1 after SAH, whereas inhibition of CB1R reversed these changes. Furthermore, knockdown of Nrf1 abolished the promoting effect of ACEA on mitophagy, accompanied by downregulation of PINK1, Parkin, and LC3II, which ultimately eliminated the antioxidative and antiapoptotic effects of ACEA. In summary, our research revealed that ACEA promoted mitophagy and attenuated oxidative stress as well as neurological deficits after SAH, at least in part via the CB1R/Nrf1/PINK1 signaling pathway.

Figure 9: Graphical abstract. ACEA treatment attenuates oxidative stress by promoting mitophagy through the CB1R/Nrf1/PINK1 signaling pathway after SAH.

Accumulating evidence indicates that oxidative stress mediated by damaged mitochondria is closely related to the mechanism of EBI after SAH [37, 38]. Specifically, mitochondrial dysfunction caused by SAH leads to an excess of ROS, resulting in oxidative stress injury [7] and ultimately activation of the apoptotic signaling pathway [39]. Consequently, timely clearance of damaged mitochondria would be an efficacious way to attenuate oxidative stress as well as subsequent apoptosis after SAH [14].

Mitophagy is a selective autophagy process that specifically degrades damaged mitochondria to maintain mitochondrial homeostasis and cellular survival [8]. Recently, many studies have shown that promoting mitophagy can alleviate neuroinflammation, oxidative stress, and neuronal apoptosis after SAH [11, 12, 14].
Consistent with these previous studies, we found that promoting mitophagy with ACEA treatment protected against oxidative stress injury after SAH. Oxidative stress measurements showed that ACEA treatment increased the SOD, GSH-Px, and GSH/GSSG levels and decreased the MDA level after SAH. DHE and TUNEL staining showed that ACEA treatment reduced the numbers of both DHE-positive cells and TUNEL-positive neurons. Nissl staining reflected the protective effect of ACEA against degeneration of hippocampal neurons, consistent with the improvement in neurobehavioral function. These results provide new support for mitophagy as a potential therapeutic strategy for SAH.

Cannabinoid receptor 1 (CB1R), a G-protein-coupled receptor, has been reported to be a potential therapeutic target for many central nervous system diseases, such as ischemic stroke, epilepsy, and Parkinson's and Alzheimer's disease [29, 40–42]. CB1R is widely expressed in different organs, especially in the central nervous system (e.g., cerebral cortex, hippocampus, striatum, and cerebellum) [43]. Our immunofluorescence colocalization staining revealed that CB1R was mainly located in neurons of the cerebral cortex, with a minor fraction in microglia and astrocytes. An autopsy study showed that the expression of CB1R increased after ischemic stroke [15]. Similarly, our results demonstrated that CB1R increased to a peak at 24 h after SAH. Therefore, we speculate that the upregulation of CB1R may act as a self-protective mechanism that plays a neuroprotective role in EBI after SAH.

ACEA, a highly selective CB1R agonist, has been reported to provide a neuroprotective effect in ischemic stroke, Parkinson's disease, Alzheimer's disease, and epilepsy [29, 41, 42, 44]. However, no published study has attempted to treat SAH with ACEA. In our study, we found that ACEA has antioxidative and antiapoptotic abilities that protect against EBI, and we identified a plausible mechanism by which ACEA improves neurological deficits after SAH in rats. Moreover, the ability of ACEA to readily cross the blood-brain barrier is conducive to clinical application. To determine the optimal dosage of ACEA, we tested three different dosages and found that 1.5 mg/kg was the best dosage for treating SAH in rats.

Nrf1 was initially discovered as an important transcription factor that regulates genes necessary for mitochondrial function [45]. Recently, accumulating evidence has indicated that NRF1 target genes are also involved in the regulation of extramitochondrial biological processes, such as DNA damage repair, RNA metabolism, and ubiquitin-mediated protein degradation, all of which are essential to cell growth, differentiation, and survival [25]. In our research, Nrf1 knockdown significantly suppressed the expression of PINK1, Parkin, and LC3II, indicating that Nrf1 positively regulates PINK1/Parkin-mediated mitophagy, which is consistent with a previous study [46].

The PINK1-Parkin pathway is a primary mechanism of mitophagy and depends on ubiquitination. Specifically, following mitochondrial membrane depolarization, PINK1 is stabilized on the outer mitochondrial membrane (OMM). Next, PINK1 is activated via autophosphorylation and recruits the E3 ubiquitin ligase Parkin from the cytoplasm to the damaged mitochondria. Subsequently, Parkin ubiquitinates multiple OMM proteins, leading to their recognition by autophagy adaptors. Finally, damaged mitochondria are engulfed by phagophores and eventually fuse with lysosomes for degradation [47].
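Because the cascade just described is strictly ordered (depolarization, PINK1 stabilization, Parkin recruitment, ubiquitination, engulfment, degradation), it can be restated as a simple state progression. The toy sketch below merely encodes the steps of the preceding paragraph as a mnemonic; it is an illustrative sketch, not a model from the paper, and contains no experimental data.

```python
# Toy sketch: the canonical PINK1/Parkin mitophagy sequence from the text,
# encoded as an ordered state progression for one damaged mitochondrion.
# Purely illustrative; not a quantitative model.
from dataclasses import dataclass

@dataclass
class Mitochondrion:
    depolarized: bool
    pink1_on_omm: bool = False
    parkin_recruited: bool = False
    omm_ubiquitinated: bool = False
    engulfed_by_phagophore: bool = False
    degraded_in_lysosome: bool = False

def run_mitophagy(m: Mitochondrion) -> Mitochondrion:
    """Advance a mitochondrion through the PINK1-Parkin cascade, in order."""
    if not m.depolarized:
        return m  # polarized mitochondria import and turn over PINK1; no mitophagy
    m.pink1_on_omm = True            # PINK1 is stabilized on the OMM
    m.parkin_recruited = True        # autophosphorylated PINK1 recruits Parkin
    m.omm_ubiquitinated = True       # Parkin ubiquitinates multiple OMM proteins
    m.engulfed_by_phagophore = True  # autophagy adaptors link cargo to phagophores
    m.degraded_in_lysosome = True    # autophagosome fuses with a lysosome
    return m

print(run_mitophagy(Mitochondrion(depolarized=True)))
```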
In our research, we found that ACEA treatment increased the expression of PINK1 and Parkin in mitochondria, which indicated that ACEA might promote mitophagy through the PINK1-Parkin pathway. Consistently, immunofluorescence colocalization staining confirmed that ACEA treatment increased the colocalization of the mitochondrial protein TOMM20 with the autophagosome marker LC3. These results were further supported by transmission electron microscopy, which showed that ACEA promoted the formation of mitophagosomes and maintained the normal mitochondrial morphology of neurons. Taken together, ACEA may activate PINK1/Parkin-mediated mitophagy by upregulating the expression of Nrf1 after SAH.

To our knowledge, our study is the first to attempt to treat SAH with ACEA, and we found that ACEA has antioxidative and antiapoptotic abilities that protect against EBI. More importantly, this is the first study to elucidate the molecular mechanism by which ACEA promotes mitophagy, and the first to confirm the positive regulatory effect of Nrf1 on PINK1/Parkin-mediated mitophagy in an in vivo model of SAH. However, our study has some limitations. First, it is difficult to mimic the pathological process of SAH in vitro, so we verified our hypothesized mechanism only in vivo. Second, a previous study showed that ACEA alleviated cerebral ischemia/reperfusion injury through the CB1R-Drp1 pathway [48], so we cannot exclude other signaling pathways that improve the prognosis of SAH. These limitations will be addressed in future studies.

## 5. Conclusion

In summary, we showed that ACEA attenuated oxidative stress by promoting mitophagy via the CB1R/Nrf1/PINK1 signaling pathway after SAH. Therefore, ACEA may be a novel treatment for SAH patients.

---
*Source: 1024279-2022-02-24.xml*
--- ## Abstract Background and Purpose. Oxidative stress plays a pivotal role in early brain injury (EBI) after subarachnoid hemorrhage (SAH). The CB1R agonist ACEA has been reported to have a neuroprotective effect in many central nervous system diseases. Our study was aimed at exploring the effect and mechanism of ACEA in an experimental SAH model. Method. Endovascular perforation was performed to establish a SAH model of rats. ACEA was administered intraperitoneally 1 h after SAH. The CB1R antagonist AM251 was injected intraperitoneally 1 h before SAH induction. Adenoassociated virus- (AAV-) Nrf1 shRNA was infused into the lateral ventricle 3 weeks before SAH induction. Neurological tests, immunofluorescence, DHE, TUNEL, Nissl staining, transmission electron microscopy (TEM), and Western blot were performed. Results. The expression of CB1R, Nrf1, PINK1, Parkin, and LC3II increased and peaked at 24 h after SAH. ACEA treatment exhibited the antioxidative stress and antiapoptosis effects after SAH. In addition, ACEA treatment increased the expression of Nrf1, PINK1, Parkin, LC3II, and Bcl-xl but repressed the expression of Romo-1, Bax, and cleaved caspase-3. Moreover, the TEM results demonstrated that ACEA promoted the formation of mitophagosome and maintained the normal mitochondrial morphology of neurons. The protective effect of ACEA was reversed by AM251 and Nrf1 shRNA, respectively. Conclusions. This study demonstrated that ACEA alleviated oxidative stress and neurological dysfunction by promoting mitophagy after SAH, at least in part via the CB1R/Nrf1/PINK1 signaling pathway. --- ## Body ## 1. Introduction Subarachnoid hemorrhage (SAH) is a severe subtype of stroke with high morbidity and mortality [1]. Although the early diagnosis and treatment methods have been improved in the past few decades, not every patient can achieve a good clinical prognosis because of early brain injury (EBI) [2, 3]. Recent studies reported that oxidative stress plays a pivotal role in EBI after SAH [4–6]. Thus, alleviating oxidative stress injury may be an efficacious treatment for improving the prognosis of SAH.Dysfunctional mitochondria are the primary source of intracellular reactive oxygen species (ROS) due to the disruption of the electron transfer chain and the transition of mitochondrial membrane permeability following SAH [4, 7], leading to oxidative stress injury and subsequent neuronal death [6]. Consequently, the clearance of impaired mitochondria in time would be an effective treatment strategy for SAH patients. Mitophagy is a selective autophagy process that specifically degrades damaged mitochondria to maintain mitochondrial homeostasis and cellular survival [8, 9]. Accumulating evidence has indicated that mitophagy is a potential therapeutic target to protect against EBI after SAH [10–13]. Furthermore, our previous research proved that promoting mitophagy alleviated oxidative stress as well as subsequent neuronal death after SAH [14].The endocannabinoid system consists of lipid-based mediators, endocannabinoids (eCB), their target receptors, associated synthesizing and metabolizing enzymes, and transporter proteins. It is reported that the level of cannabinoid receptors and endocannabinoids increased after stroke [15]. Cannabinoid receptor 1 (CB1R) is a G-protein-coupled receptor, which is involved in modulation of neuronal activity, synaptic plasticity, and cell metabolism [16–18]. 
Accumulating evidence has demonstrated that activation of CB1R is beneficial to provide neuroprotection for stroke [19–21].Nuclear respiratory factor 1 (Nrf-1) is a critical transcription factor that regulates genes necessary for mitochondrial biogenesis and function [22–24]. In recent years, it has been discovered that NRF1 target genes are not restricted to the genes involved in mitochondrial function, which indicates that Nrf-1 has more potential functions [25]. For example, Nrf-1 has a positive regulatory effect on the expression of PINK1 and Parkin genes, and it participates in mitochondrial quality control by regulating the PINK1/Parkin-mediated mitophagy [26]. Arachidonyl-2-chloroethylamide (ACEA), a highly selective CB1R agonist, was reported to protect neurons against ischemic injury by increasing the expression of Nrf1 and inducing mitochondrial biogenesis [27]. However, whether the protective effect of ACEA is mediated by regulating mitophagy remains unknown.Thus, our study was aimed at verifying the hypothesis that ACEA attenuates oxidative stress by regulating mitophagy via the CB1R/Nrf1/PINK1 pathway after SAH in rats. ## 2. Materials and Methods ### 2.1. Animals and SAH Model All experimental procedures were approved by the Institutional Animal Care and Use Committees of the First Affiliated Hospital of Harbin Medical University and were in accordance with the NIH Guidelines for the Care and Use of Laboratory Animals. Adult male Sprague-Dawley rats (weight 280-320 g) were housed at a constant humidity (55±5%) and temperature (22±2°C) in a 12 h light and dark cycle room. The animals were raised with free access to food and water.The endovascular perforation method was employed to induce a SAH model in rats, as previously described [28]. Briefly, after rats were fully anesthetized, the external carotid artery (ECA) and internal carotid artery (ICA) were fully exposed. A sharp 4–0 nylon suture was inserted into the left ICA from the cut of the ECA stump until resistance was felt. The suture was further advanced to puncture the artery for several seconds, then withdrawn immediately. In Sham rats, all procedures were identical except the puncture of the vessel. ### 2.2. Drug Administration The highly selective CB1R agonist ACEA (Cat. No.1319, Tocris Bioscience, Bristol, UK) was diluted in 5% dimethyl sulfoxide (DMSO). ACEA was administered intraperitoneally (i.p.) in different groups at doses of 0.5 mg/kg, 1.5 mg/kg, and 4.5 mg/kg 1 h after SAH [27]. The CB1R antagonist AM251 (Cat. No. A6226, Sigma-Aldrich, MO, USA), dissolved in 5% DMSO, was injected intraperitoneally at a dose of 1.0 mg/kg 1 h before SAH induction [29]. The control groups were injected the same volume of solvents, respectively. ### 2.3. Intracerebroventricular Injection An adenoassociated virus (AAV; GeneChem, Shanghai, China) system was used to knock down the expression of Nrf1 according to the manufacturer’s instructions. A nontargeting scrambled negative matched shRNA was used as a control. Animals were injected intraperitoneally with pentobarbital (40 mg/kg) and placed in a stereotaxic apparatus. Next, a 10μL syringe was inserted into the left ventricle at the specific coordinates relative to the bregma: 1.0 mm lateral, 1.5 mm posterior, and 3.1 mm below the dural surface. A total of 3 μL of Nrf1 shRNA or scrambled shRNA was injected intraventricularly at a rate of 0.3 μL/min in 3 weeks before SAH induction. To improve the knockdown efficiency, three different shRNA duplexes were designed and mixed. 
Their sequences are provided as follows: shRNA1: 5′-GCCTGGTCCAGATCCCTGTGACGAATCACAGGGATCTGGACCAGGCTTTTT-3′, shRNA2: 5′-GGACAGCGCAGTCACCATGGACGAATCCATGGTGACTGCGCTGTCCTTTTT-3′, and shRNA3: 5′-GGAGGTGGTGACGTTGGAACACGAATGTTCCAACGTCACCACCTCCTTTTT-3′. ### 2.4. Experimental Design (Supplemental Figure1) #### 2.4.1. Experiment 1 36 rats (n=6 per group) were randomly assigned into 6 groups: Sham and 3, 6, 12, 24, and 72 hours, after SAH. Expression of CB1R, Nrf1, PINK1, Parkin, and LC3II/LC3I was analyzed by Western blot. Additionally, 3 Sham rats and 3 SAH-24 h rats were used to detect the cellular localization of CB1R by immunofluorescence staining. #### 2.4.2. Experiment 2 In a short-term outcome assessment, 30 rats (n=6 per group) were equally assigned into 5 groups: Sham, SAH+vehicle, SAH+ACEA (0.5 mg/kg), SAH+ACEA (1.5 mg/kg), and SAH+ACEA (4.5 mg/kg) for neurological tests. According to neurobehavioral test results, 1.5 mg/kg was chosen as the best dosage for subsequent experiments. #### 2.4.3. Experiment 3 In a long-term outcome assessment, 18 rats (n=6 per group) were assigned into 3 groups: Sham, SAH+vehicle, and SAH+ACEA. Rotarod test, foot fault test, and Nissl staining were performed. #### 2.4.4. Experiment 4 To verify the neuroprotective effect of ACEA, 30 rats (n=6 per group) were assigned into 5 groups: Sham, SAH+vehicle, SAH+ACEA, SAH+AM251, and SAH+ACEA+AM251 for Western blot. Additionally, 30 rats (n=6 per group) were used for neurological tests, DHE, TUNEL, immunofluorescence staining, and transmission electron microscopy (TEM). Samples for determination of MDA, SOD, GSH/GSSG, and GSH-Px levels were shared with Western blot in each group. #### 2.4.5. Experiment 5 To verify the hypothetical molecular mechanism, 24 rats (n=6 per group) were assigned into 4 groups: SAH+vehicle, SAH+ACEA, SAH+ACEA+scrambled shRNA, and SAH+ACEA+Nrf1 shRNA for Western blot. Additionally, 24 rats (n=6 per group) were used for neurological tests, DHE, TUNEL, immunofluorescence staining, and TEM. Samples for determination of MDA, SOD, GSH/GSSG, and GSH-Px Levels were shared with Western blot in each group. ### 2.5. Severity of SAH The severity of SAH was estimated with the SAH grading scale as previously described [30]. Briefly, the animals were euthanized at 24 h after SAH, and the basal cistern of the rat brain was divided into 6 sections. Based on the amount of blood clotting, each section was recorded with a grade from 0 to 3. The total score of six sections ranged from 0 to 18, and rats with SAH score≤8 were excluded. ### 2.6. Evaluation of Short-Term Neurofunctional Outcomes Short-term neurofunctional outcome was estimated with the modified Garcia score and beam balance test as previously described [31, 32]. The higher score represented better neurofunctional outcome. ### 2.7. Evaluation of Long-Term Neurofunctional Outcomes Long-term neurofunctional outcome was estimated with the rotarod test and foot fault test [33, 34]. Briefly, the rotarod test was conducted on days 7, 14, and 21 after SAH; animals were placed on a rotating horizontal cylinder. The rotating speed was started at 5 revolutions per minute (RPM) or 10 RPM and was gradually accelerated by 2 RPM every 5 seconds. The falling latency was recorded. The foot fault test was also conducted on days 7, 14, and 21 after SAH; animals were required to walk on a steel grid. A paw falling through the grid was recorded as a foot fault. A total of 50 steps were recorded for the right forelimb. 
Percentage of foot faults was expressed as faults/steps+faults×100%. ### 2.8. Western Blot and Isolation of Mitochondria Western blot was performed as previously described [35]. Briefly, rats were euthanized and transcardially perfused with 150 mL of cold PBS (0.01 M, pH 7.4). The left hemispheres were collected and homogenized in RIPA lysis buffer (P0013B, Beyotime, Shanghai, China). After centrifuging at 14000×g for 30 min at 4°C, the supernatant was collected. Equal amounts of protein (30 μg) were loaded onto 8%-12% SDS-PAGE gel, then electrophoresed and transferred to 0.2 μm nitrocellulose membranes, which were blocked with 5% nonfat milk and incubated with the following primary antibodies overnight at 4°C: anti-CB1R (1: 1000, ab259323, Abcam, MA, USA), anti-Nrf1 (1: 2000, ab175932, Abcam, MA, USA), anti-PINK1 (1: 1000, ab186303, Abcam, MA, USA), anti-Parkin (1: 1000, ab77924, Abcam, MA, USA), anti-LC3B (1: 2000, ab192890, Abcam, MA, USA), anti-COX IV(1: 1000, ab33985, Abcam, MA, USA), anti-Bcl-XL (1: 1000, ab32370, Abcam, MA, USA), anti-Bax (1: 1000, ab32503, Abcam, MA, USA), anti-cleaved caspase-3 (1: 500, 9661, Cell Signaling Technology Inc., MA, USA), anti-Romo-1 (1: 500, NBP2-45607, NOVUS Biologicals, CO, USA), and anti-β-actin (1: 1000, ab8227, Abcam, MA, USA). The next day, the membranes were incubated with the corresponding second antibodies for 2 h at room temperature. Immunoblots were visualized with BeyoECL Star chemiluminescence reagent kit (Beyotime, Shanghai, China) and quantified by densitometry using the ImageJ software. COX IV and β-actin were used as internal control.Mitochondrial proteins were extracted using the Tissue Mitochondria Isolation Kit (Beyotime, Shanghai, China) according to the manufacturer’s instructions. ### 2.9. Immunofluorescence Staining Immunofluorescence staining was performed as previously described [36]. Briefly, after being blocked with 5% donkey serum in 0.3% Triton X-100 for 60 min at room temperature, the brain slices were incubated at 4°C overnight with the following primary antibodies: anti-CB1R (1: 100, ab259323, Abcam, MA, USA), anti-NeuN (1: 1000, ab104224, Abcam, MA, USA), anti-GFAP (1: 50, ab4648, Abcam, MA, USA), anti-Iba1 (1: 100, ab5076, Abcam, MA, USA), anti-LC3B (1: 1000, ab192890, Abcam, MA, USA), and anti-TOMM20 (1: 100, ab56783, Abcam, MA, USA). Then, the slices were incubated with the appropriate fluorescence-conjugated secondary antibody (1: 500, Abcam, MA, USA) at 37° C for 1 h. ### 2.10. Transmission Electron Microscopy (TEM) The morphology of mitochondria was observed by TEM. 1 mm3 tissue was cut from the brain of each group and fixed with 2.5% glutaraldehyde for 4 h. After dehydration, samples were embedded into araldite and then cut into 60 nm slices with an ultramicrotome (Leica, Wetzlar, Germany). At last, the slices were fixed to nickel grids after staining. Images were acquired using a transmission electron microscope (Carl Zeiss, Thornwood, NY, USA). ### 2.11. Evaluation of Oxidative Stress #### 2.11.1. Determination of MDA, SOD, GSH/GSSG, and GSH-Px Levels The homogenate of the left hemispherical cortex tissues was collected. 
The levels of cellular MDA, SOD, GSH/GSSG, and GSH-Px in cortex tissues were, respectively, detected with Lipid Peroxidation MDA Assay Kit (S0131S, Beyotime, Shanghai, China), SOD Assay Kit (S0103, Beyotime, Shanghai, China), GSH and GSSG Assay Kit (S0053, Beyotime, Shanghai, China), and Total Glutathione Peroxidase Assay Kit (S0058, Beyotime, Shanghai, China) according to the manufacturer’s instructions. #### 2.11.2. Dihydroethidium (DHE) Staining To detect the reactive oxygen species (ROS) level of brain tissues, the brain slices were incubated with 2μmol/L DHE (Thermo Fisher Scientific, MA, USA) at 37°C for 30 min. Images were photographed by a fluorescence microscope, and the DHE-positive cells were quantified by using ImageJ software. ### 2.12. Evaluation of Neuronal Damage #### 2.12.1. TUNEL Staining Neuronal apoptosis was detected with TUNEL staining kit (11684795910, Roche, USA) according to the manufacturer’s protocols. Briefly, after being blocked with 5% donkey serum in 0.3% Triton X-100 for 60 min at room temperature, the brain slices were incubated at 4°C overnight with the primary antibodies: anti-NeuN (1: 1000, ab104224, Abcam, MA, USA). Then, the slices were incubated with fluorescence-conjugated secondary antibody (1: 1000, ab150120, Abcam, MA, USA) for 1 h at 37° C. Lastly, the slices were incubated with TUNEL reaction mixture for 1 h at 37° C before DAPI nuclear staining. Under a fluorescence microscope, the TUNEL-positive neurons in the left temporal cortex were quantified by using ImageJ software. #### 2.12.2. Nissl Staining Hippocampal neuron degeneration was assessed with Nissl staining on the 28th day after SAH. Briefly, the 16μm coronal slices were incubated with 0.5% crystal violet solution for 15 min. Under an optical microscope, the Nissl-positive cells in the hippocampal cornu ammonis (CA)1, CA3, and dentate gyrus (DG) were quantified by using ImageJ software. ### 2.13. Statistical Analysis Data were represented asmean±standarddeviation (SD) or median with interquartile range based on the normality and homogeneity of variance. For the data with a normal distribution, one-way analysis of variance (ANOVA) followed by the Tukey post hoc test was used for statistics. For nonnormally distributed data, the Kruskal-Wallis test followed by the Dunn post hoc test was used for statistics. A value of p<0.05 was considered statistically significant. GraphPad Prism software and SPSS software (version 24.0) were used for statistical analyses. ## 2.1. Animals and SAH Model All experimental procedures were approved by the Institutional Animal Care and Use Committees of the First Affiliated Hospital of Harbin Medical University and were in accordance with the NIH Guidelines for the Care and Use of Laboratory Animals. Adult male Sprague-Dawley rats (weight 280-320 g) were housed at a constant humidity (55±5%) and temperature (22±2°C) in a 12 h light and dark cycle room. The animals were raised with free access to food and water.The endovascular perforation method was employed to induce a SAH model in rats, as previously described [28]. Briefly, after rats were fully anesthetized, the external carotid artery (ECA) and internal carotid artery (ICA) were fully exposed. A sharp 4–0 nylon suture was inserted into the left ICA from the cut of the ECA stump until resistance was felt. The suture was further advanced to puncture the artery for several seconds, then withdrawn immediately. In Sham rats, all procedures were identical except the puncture of the vessel. ## 2.2. 
Drug Administration The highly selective CB1R agonist ACEA (Cat. No.1319, Tocris Bioscience, Bristol, UK) was diluted in 5% dimethyl sulfoxide (DMSO). ACEA was administered intraperitoneally (i.p.) in different groups at doses of 0.5 mg/kg, 1.5 mg/kg, and 4.5 mg/kg 1 h after SAH [27]. The CB1R antagonist AM251 (Cat. No. A6226, Sigma-Aldrich, MO, USA), dissolved in 5% DMSO, was injected intraperitoneally at a dose of 1.0 mg/kg 1 h before SAH induction [29]. The control groups were injected the same volume of solvents, respectively. ## 2.3. Intracerebroventricular Injection An adenoassociated virus (AAV; GeneChem, Shanghai, China) system was used to knock down the expression of Nrf1 according to the manufacturer’s instructions. A nontargeting scrambled negative matched shRNA was used as a control. Animals were injected intraperitoneally with pentobarbital (40 mg/kg) and placed in a stereotaxic apparatus. Next, a 10μL syringe was inserted into the left ventricle at the specific coordinates relative to the bregma: 1.0 mm lateral, 1.5 mm posterior, and 3.1 mm below the dural surface. A total of 3 μL of Nrf1 shRNA or scrambled shRNA was injected intraventricularly at a rate of 0.3 μL/min in 3 weeks before SAH induction. To improve the knockdown efficiency, three different shRNA duplexes were designed and mixed. Their sequences are provided as follows: shRNA1: 5′-GCCTGGTCCAGATCCCTGTGACGAATCACAGGGATCTGGACCAGGCTTTTT-3′, shRNA2: 5′-GGACAGCGCAGTCACCATGGACGAATCCATGGTGACTGCGCTGTCCTTTTT-3′, and shRNA3: 5′-GGAGGTGGTGACGTTGGAACACGAATGTTCCAACGTCACCACCTCCTTTTT-3′. ## 2.4. Experimental Design (Supplemental Figure1) ### 2.4.1. Experiment 1 36 rats (n=6 per group) were randomly assigned into 6 groups: Sham and 3, 6, 12, 24, and 72 hours, after SAH. Expression of CB1R, Nrf1, PINK1, Parkin, and LC3II/LC3I was analyzed by Western blot. Additionally, 3 Sham rats and 3 SAH-24 h rats were used to detect the cellular localization of CB1R by immunofluorescence staining. ### 2.4.2. Experiment 2 In a short-term outcome assessment, 30 rats (n=6 per group) were equally assigned into 5 groups: Sham, SAH+vehicle, SAH+ACEA (0.5 mg/kg), SAH+ACEA (1.5 mg/kg), and SAH+ACEA (4.5 mg/kg) for neurological tests. According to neurobehavioral test results, 1.5 mg/kg was chosen as the best dosage for subsequent experiments. ### 2.4.3. Experiment 3 In a long-term outcome assessment, 18 rats (n=6 per group) were assigned into 3 groups: Sham, SAH+vehicle, and SAH+ACEA. Rotarod test, foot fault test, and Nissl staining were performed. ### 2.4.4. Experiment 4 To verify the neuroprotective effect of ACEA, 30 rats (n=6 per group) were assigned into 5 groups: Sham, SAH+vehicle, SAH+ACEA, SAH+AM251, and SAH+ACEA+AM251 for Western blot. Additionally, 30 rats (n=6 per group) were used for neurological tests, DHE, TUNEL, immunofluorescence staining, and transmission electron microscopy (TEM). Samples for determination of MDA, SOD, GSH/GSSG, and GSH-Px levels were shared with Western blot in each group. ### 2.4.5. Experiment 5 To verify the hypothetical molecular mechanism, 24 rats (n=6 per group) were assigned into 4 groups: SAH+vehicle, SAH+ACEA, SAH+ACEA+scrambled shRNA, and SAH+ACEA+Nrf1 shRNA for Western blot. Additionally, 24 rats (n=6 per group) were used for neurological tests, DHE, TUNEL, immunofluorescence staining, and TEM. Samples for determination of MDA, SOD, GSH/GSSG, and GSH-Px Levels were shared with Western blot in each group. ## 2.4.1. 
Experiment 1 36 rats (n=6 per group) were randomly assigned into 6 groups: Sham and 3, 6, 12, 24, and 72 hours, after SAH. Expression of CB1R, Nrf1, PINK1, Parkin, and LC3II/LC3I was analyzed by Western blot. Additionally, 3 Sham rats and 3 SAH-24 h rats were used to detect the cellular localization of CB1R by immunofluorescence staining. ## 2.4.2. Experiment 2 In a short-term outcome assessment, 30 rats (n=6 per group) were equally assigned into 5 groups: Sham, SAH+vehicle, SAH+ACEA (0.5 mg/kg), SAH+ACEA (1.5 mg/kg), and SAH+ACEA (4.5 mg/kg) for neurological tests. According to neurobehavioral test results, 1.5 mg/kg was chosen as the best dosage for subsequent experiments. ## 2.4.3. Experiment 3 In a long-term outcome assessment, 18 rats (n=6 per group) were assigned into 3 groups: Sham, SAH+vehicle, and SAH+ACEA. Rotarod test, foot fault test, and Nissl staining were performed. ## 2.4.4. Experiment 4 To verify the neuroprotective effect of ACEA, 30 rats (n=6 per group) were assigned into 5 groups: Sham, SAH+vehicle, SAH+ACEA, SAH+AM251, and SAH+ACEA+AM251 for Western blot. Additionally, 30 rats (n=6 per group) were used for neurological tests, DHE, TUNEL, immunofluorescence staining, and transmission electron microscopy (TEM). Samples for determination of MDA, SOD, GSH/GSSG, and GSH-Px levels were shared with Western blot in each group. ## 2.4.5. Experiment 5 To verify the hypothetical molecular mechanism, 24 rats (n=6 per group) were assigned into 4 groups: SAH+vehicle, SAH+ACEA, SAH+ACEA+scrambled shRNA, and SAH+ACEA+Nrf1 shRNA for Western blot. Additionally, 24 rats (n=6 per group) were used for neurological tests, DHE, TUNEL, immunofluorescence staining, and TEM. Samples for determination of MDA, SOD, GSH/GSSG, and GSH-Px Levels were shared with Western blot in each group. ## 2.5. Severity of SAH The severity of SAH was estimated with the SAH grading scale as previously described [30]. Briefly, the animals were euthanized at 24 h after SAH, and the basal cistern of the rat brain was divided into 6 sections. Based on the amount of blood clotting, each section was recorded with a grade from 0 to 3. The total score of six sections ranged from 0 to 18, and rats with SAH score≤8 were excluded. ## 2.6. Evaluation of Short-Term Neurofunctional Outcomes Short-term neurofunctional outcome was estimated with the modified Garcia score and beam balance test as previously described [31, 32]. The higher score represented better neurofunctional outcome. ## 2.7. Evaluation of Long-Term Neurofunctional Outcomes Long-term neurofunctional outcome was estimated with the rotarod test and foot fault test [33, 34]. Briefly, the rotarod test was conducted on days 7, 14, and 21 after SAH; animals were placed on a rotating horizontal cylinder. The rotating speed was started at 5 revolutions per minute (RPM) or 10 RPM and was gradually accelerated by 2 RPM every 5 seconds. The falling latency was recorded. The foot fault test was also conducted on days 7, 14, and 21 after SAH; animals were required to walk on a steel grid. A paw falling through the grid was recorded as a foot fault. A total of 50 steps were recorded for the right forelimb. Percentage of foot faults was expressed as faults/steps+faults×100%. ## 2.8. Western Blot and Isolation of Mitochondria Western blot was performed as previously described [35]. Briefly, rats were euthanized and transcardially perfused with 150 mL of cold PBS (0.01 M, pH 7.4). 
The left hemispheres were collected and homogenized in RIPA lysis buffer (P0013B, Beyotime, Shanghai, China). After centrifuging at 14000×g for 30 min at 4°C, the supernatant was collected. Equal amounts of protein (30 μg) were loaded onto 8%-12% SDS-PAGE gel, then electrophoresed and transferred to 0.2 μm nitrocellulose membranes, which were blocked with 5% nonfat milk and incubated with the following primary antibodies overnight at 4°C: anti-CB1R (1: 1000, ab259323, Abcam, MA, USA), anti-Nrf1 (1: 2000, ab175932, Abcam, MA, USA), anti-PINK1 (1: 1000, ab186303, Abcam, MA, USA), anti-Parkin (1: 1000, ab77924, Abcam, MA, USA), anti-LC3B (1: 2000, ab192890, Abcam, MA, USA), anti-COX IV(1: 1000, ab33985, Abcam, MA, USA), anti-Bcl-XL (1: 1000, ab32370, Abcam, MA, USA), anti-Bax (1: 1000, ab32503, Abcam, MA, USA), anti-cleaved caspase-3 (1: 500, 9661, Cell Signaling Technology Inc., MA, USA), anti-Romo-1 (1: 500, NBP2-45607, NOVUS Biologicals, CO, USA), and anti-β-actin (1: 1000, ab8227, Abcam, MA, USA). The next day, the membranes were incubated with the corresponding second antibodies for 2 h at room temperature. Immunoblots were visualized with BeyoECL Star chemiluminescence reagent kit (Beyotime, Shanghai, China) and quantified by densitometry using the ImageJ software. COX IV and β-actin were used as internal control.Mitochondrial proteins were extracted using the Tissue Mitochondria Isolation Kit (Beyotime, Shanghai, China) according to the manufacturer’s instructions. ## 2.9. Immunofluorescence Staining Immunofluorescence staining was performed as previously described [36]. Briefly, after being blocked with 5% donkey serum in 0.3% Triton X-100 for 60 min at room temperature, the brain slices were incubated at 4°C overnight with the following primary antibodies: anti-CB1R (1: 100, ab259323, Abcam, MA, USA), anti-NeuN (1: 1000, ab104224, Abcam, MA, USA), anti-GFAP (1: 50, ab4648, Abcam, MA, USA), anti-Iba1 (1: 100, ab5076, Abcam, MA, USA), anti-LC3B (1: 1000, ab192890, Abcam, MA, USA), and anti-TOMM20 (1: 100, ab56783, Abcam, MA, USA). Then, the slices were incubated with the appropriate fluorescence-conjugated secondary antibody (1: 500, Abcam, MA, USA) at 37° C for 1 h. ## 2.10. Transmission Electron Microscopy (TEM) The morphology of mitochondria was observed by TEM. 1 mm3 tissue was cut from the brain of each group and fixed with 2.5% glutaraldehyde for 4 h. After dehydration, samples were embedded into araldite and then cut into 60 nm slices with an ultramicrotome (Leica, Wetzlar, Germany). At last, the slices were fixed to nickel grids after staining. Images were acquired using a transmission electron microscope (Carl Zeiss, Thornwood, NY, USA). ## 2.11. Evaluation of Oxidative Stress ### 2.11.1. Determination of MDA, SOD, GSH/GSSG, and GSH-Px Levels The homogenate of the left hemispherical cortex tissues was collected. The levels of cellular MDA, SOD, GSH/GSSG, and GSH-Px in cortex tissues were, respectively, detected with Lipid Peroxidation MDA Assay Kit (S0131S, Beyotime, Shanghai, China), SOD Assay Kit (S0103, Beyotime, Shanghai, China), GSH and GSSG Assay Kit (S0053, Beyotime, Shanghai, China), and Total Glutathione Peroxidase Assay Kit (S0058, Beyotime, Shanghai, China) according to the manufacturer’s instructions. ### 2.11.2. Dihydroethidium (DHE) Staining To detect the reactive oxygen species (ROS) level of brain tissues, the brain slices were incubated with 2μmol/L DHE (Thermo Fisher Scientific, MA, USA) at 37°C for 30 min. 
Images were photographed by a fluorescence microscope, and the DHE-positive cells were quantified by using ImageJ software. ## 2.11.1. Determination of MDA, SOD, GSH/GSSG, and GSH-Px Levels The homogenate of the left hemispherical cortex tissues was collected. The levels of cellular MDA, SOD, GSH/GSSG, and GSH-Px in cortex tissues were, respectively, detected with Lipid Peroxidation MDA Assay Kit (S0131S, Beyotime, Shanghai, China), SOD Assay Kit (S0103, Beyotime, Shanghai, China), GSH and GSSG Assay Kit (S0053, Beyotime, Shanghai, China), and Total Glutathione Peroxidase Assay Kit (S0058, Beyotime, Shanghai, China) according to the manufacturer’s instructions. ## 2.11.2. Dihydroethidium (DHE) Staining To detect the reactive oxygen species (ROS) level of brain tissues, the brain slices were incubated with 2μmol/L DHE (Thermo Fisher Scientific, MA, USA) at 37°C for 30 min. Images were photographed by a fluorescence microscope, and the DHE-positive cells were quantified by using ImageJ software. ## 2.12. Evaluation of Neuronal Damage ### 2.12.1. TUNEL Staining Neuronal apoptosis was detected with TUNEL staining kit (11684795910, Roche, USA) according to the manufacturer’s protocols. Briefly, after being blocked with 5% donkey serum in 0.3% Triton X-100 for 60 min at room temperature, the brain slices were incubated at 4°C overnight with the primary antibodies: anti-NeuN (1: 1000, ab104224, Abcam, MA, USA). Then, the slices were incubated with fluorescence-conjugated secondary antibody (1: 1000, ab150120, Abcam, MA, USA) for 1 h at 37° C. Lastly, the slices were incubated with TUNEL reaction mixture for 1 h at 37° C before DAPI nuclear staining. Under a fluorescence microscope, the TUNEL-positive neurons in the left temporal cortex were quantified by using ImageJ software. ### 2.12.2. Nissl Staining Hippocampal neuron degeneration was assessed with Nissl staining on the 28th day after SAH. Briefly, the 16μm coronal slices were incubated with 0.5% crystal violet solution for 15 min. Under an optical microscope, the Nissl-positive cells in the hippocampal cornu ammonis (CA)1, CA3, and dentate gyrus (DG) were quantified by using ImageJ software. ## 2.12.1. TUNEL Staining Neuronal apoptosis was detected with TUNEL staining kit (11684795910, Roche, USA) according to the manufacturer’s protocols. Briefly, after being blocked with 5% donkey serum in 0.3% Triton X-100 for 60 min at room temperature, the brain slices were incubated at 4°C overnight with the primary antibodies: anti-NeuN (1: 1000, ab104224, Abcam, MA, USA). Then, the slices were incubated with fluorescence-conjugated secondary antibody (1: 1000, ab150120, Abcam, MA, USA) for 1 h at 37° C. Lastly, the slices were incubated with TUNEL reaction mixture for 1 h at 37° C before DAPI nuclear staining. Under a fluorescence microscope, the TUNEL-positive neurons in the left temporal cortex were quantified by using ImageJ software. ## 2.12.2. Nissl Staining Hippocampal neuron degeneration was assessed with Nissl staining on the 28th day after SAH. Briefly, the 16μm coronal slices were incubated with 0.5% crystal violet solution for 15 min. Under an optical microscope, the Nissl-positive cells in the hippocampal cornu ammonis (CA)1, CA3, and dentate gyrus (DG) were quantified by using ImageJ software. ## 2.13. Statistical Analysis Data were represented asmean±standarddeviation (SD) or median with interquartile range based on the normality and homogeneity of variance. 
## 3. Results

### 3.1. Mortality and SAH Severity

There were 33 rats in the Sham group and 209 rats in the SAH group, of which 44 died due to SAH induction (21.05%). None of the Sham-operated rats died, and 13 rats with an SAH score ≤ 8 were excluded from this research (Supplemental Figure 2(a)). In the SAH group, blood clots were mainly distributed around the circle of Willis (Supplemental Figure 2(b)). No significant differences in SAH grade were observed among the SAH groups (Supplemental Figure 2(c)).

### 3.2. Time Course Expression of CB1R, Nrf1, PINK1, Parkin, and LC3II after SAH

Western blot revealed that the expression of CB1R and Nrf1 in the cytoplasm and of PINK1, Parkin, and LC3II in mitochondria began to increase at 6 h and peaked at 24 h after SAH, compared with the Sham group (p < 0.05; Figures 1(a) and 1(b)). Consistently, immunofluorescence staining confirmed the increased expression of CB1R after SAH. It also showed that CB1R was mainly located in neurons of the cerebral cortex, with only minor localization in microglia and astrocytes (Figure 1(d)).

Figure 1: Time course expression of CB1R, Nrf1, PINK1, Parkin, and LC3II and cellular localization of CB1R after SAH. (a) Representative Western blot images of the time course and (b) quantitative analyses of CB1R, Nrf1, PINK1, Parkin, and LC3II. n = 6 per group. Data are represented as mean ± SD. ∗p < 0.05 vs. the Sham group. (c) Representative picture indicating the location of immunofluorescence staining (small black box). (d) Representative microphotographs of immunofluorescence staining for CB1R (green) with neurons (NeuN, red), astrocytes (GFAP, red), and microglia (Iba-1, red) in the left temporal cortex at 24 h after SAH. Nuclei were stained with DAPI (blue). n = 3 per group. Scale bar = 50 μm.

### 3.3. ACEA Attenuated Short-Term Neurological Deficits

Modified Garcia and beam balance scores indicated that SAH caused significant neurological deficits compared with the Sham group. ACEA treatment at 1.5 mg/kg significantly attenuated the neurological deficits compared with the SAH+vehicle group (p < 0.05, Supplemental Figures 3(a) and 3(b)). On the basis of the neurobehavioral test results, 1.5 mg/kg was chosen as the optimal dose for subsequent experiments.

### 3.4. ACEA Attenuated Long-Term Neurological Deficits and Hippocampal Neuron Degeneration

The rotarod test indicated that, at both 5 RPM and 10 RPM, the falling latency in the SAH+vehicle group was significantly shorter than that in the Sham group. This SAH-induced impairment was markedly improved by ACEA treatment (p < 0.05, Figures 2(a) and 2(b)).

Figure 2: ACEA attenuated long-term neurological deficits and hippocampal neuronal degeneration after SAH. Rotarod test at 5 RPM (a) and 10 RPM (b) in the first, second, and third week after SAH, n = 6 per group. (c) Foot fault test during the three weeks after SAH, n = 6 per group. (d) Representative microphotographs of Nissl staining in the hippocampal CA1, CA3, and DG regions. Scale bar = 50 μm. (e) Regions of interest in the CA1, CA3, and DG of the left hippocampus. Scale bar = 200 μm.
(f) Quantification of Nissl-positive neurons, n = 6 per group. Data from the rotarod test are represented as median with interquartile range; other data are represented as mean ± SD. ∗p < 0.05 and ∗∗p < 0.01 vs. the Sham group; #p < 0.05 and ##p < 0.01 vs. the SAH+vehicle group.

The foot fault test showed that the foot fault rate in the SAH+vehicle group was significantly higher than that in the Sham group in all three weeks. ACEA treatment significantly reduced the foot fault rate (p < 0.01, Figure 2(c)). Nissl staining revealed that Nissl-positive neurons in the SAH+vehicle group were markedly fewer than those in the Sham group in the CA1, CA3, and DG areas of the ipsilateral hippocampus (p < 0.05, Figures 2(d) and 2(f)). ACEA treatment significantly attenuated hippocampal neuron degeneration on the 28th day after SAH (p < 0.05, compared with the SAH+vehicle group, Figures 2(d) and 2(f)).

### 3.5. ACEA Treatment Attenuated Neurological Deficits and Neuronal Apoptosis, whereas the Neuroprotective and Antiapoptotic Effects of ACEA Were Reversed by AM251 and Nrf1 shRNA

In the modified Garcia and beam balance tests, ACEA treatment ameliorated neurological deficits compared with the SAH+vehicle group (p < 0.05, Figures 3(a) and 3(b)). Compared with the SAH+ACEA group, AM251 abolished the neuroprotective effect of ACEA in the SAH+ACEA+AM251 group (p < 0.05, Figures 3(a) and 3(b)). Compared with the SAH+ACEA+scrambled shRNA group, Nrf1 shRNA abolished the neuroprotective effect of ACEA in the SAH+ACEA+Nrf1 shRNA group (p < 0.05, Figures 4(a) and 4(b)).

Figure 3: ACEA attenuated neurological deficits and neuronal apoptosis, which was reversed by AM251. (a) Modified Garcia and (b) beam balance scores, n = 6 per group. (c) Representative microphotographs of TUNEL staining and quantification of TUNEL-positive neurons. Scale bar = 100 μm. n = 3 per group. (d) Representative Western blot images. (e) Quantitative analyses of Bcl-xl, Bax, and cleaved caspase-3. n = 6 per group. Data for modified Garcia and beam balance scores are represented as median with interquartile range; other data are represented as mean ± SD. ∗p < 0.05 and ∗∗p < 0.01 vs. the Sham group; #p < 0.05 and ##p < 0.01 vs. the SAH+vehicle group; @p < 0.05 and @@p < 0.01 vs. the SAH+ACEA group.

Figure 4: Nrf1 shRNA abolished the neuroprotective and antiapoptotic effects of ACEA. (a) Modified Garcia and (b) beam balance scores, n = 6 per group. (c) Representative microphotographs of TUNEL staining and quantification of TUNEL-positive neurons. Scale bar = 100 μm. n = 3 per group. (d) Representative Western blot images. (e) Quantitative analyses of Bcl-xl, Bax, and cleaved caspase-3. n = 6 per group. Data for modified Garcia and beam balance scores are represented as median with interquartile range; other data are represented as mean ± SD. ∗p < 0.05 and ∗∗p < 0.01 vs. the SAH+vehicle group; #p < 0.05 and ##p < 0.01 vs. the SAH+ACEA+scrambled shRNA group.

TUNEL staining revealed that ACEA reduced the number of apoptotic neurons compared with the SAH+vehicle group (p < 0.01, Figure 3(c)). Compared with the SAH+ACEA group, AM251 abolished the antiapoptotic effect of ACEA in the SAH+ACEA+AM251 group (p < 0.01, Figure 3(c)).
Compared with the SAH+ACEA+scrambled shRNA group, Nrf1 shRNA abolished the antiapoptotic effect of ACEA in the SAH+ACEA+Nrf1 shRNA group (p < 0.01, Figure 4(c)). Western blot showed that the expression of Bax and cleaved caspase-3 significantly increased, and the expression of Bcl-xl decreased, in the SAH+vehicle group compared with the Sham group (p < 0.05, Figures 3(d) and 3(e)). AM251 eliminated the antiapoptotic effect of ACEA, with upregulation of Bax and cleaved caspase-3 and downregulation of Bcl-xl in the SAH+ACEA+AM251 group compared with the SAH+ACEA group (p < 0.05, Figures 3(d) and 3(e)). Nrf1 shRNA likewise eliminated the antiapoptotic effect of ACEA, with upregulation of Bax and cleaved caspase-3 and downregulation of Bcl-xl in the SAH+ACEA+Nrf1 shRNA group compared with the SAH+ACEA+scrambled shRNA group (p < 0.05, Figures 4(d) and 4(e)).

### 3.6. ACEA Treatment Attenuated Oxidative Stress, whereas the Antioxidative Stress Effect of ACEA Was Reversed by AM251 and Nrf1 shRNA

Western blot demonstrated that Romo-1 (reactive oxygen species modulator 1), a reactive oxygen species-related marker protein, significantly increased after SAH compared with the Sham group (p < 0.05, Figure 5(a)). ACEA markedly reduced the expression of Romo-1 in the SAH+ACEA group compared with the SAH+vehicle group (p < 0.05, Figure 5(a)). Compared with the SAH+ACEA group, AM251 increased the expression of Romo-1 in the SAH+ACEA+AM251 group (p < 0.05, Figure 5(a)). Compared with the SAH+ACEA+scrambled shRNA group, Nrf1 shRNA also increased the expression of Romo-1 in the SAH+ACEA+Nrf1 shRNA group (p < 0.05, Figure 6(a)).

Figure 5: ACEA attenuated oxidative stress, which was reversed by AM251. (a) Representative Western blot images and quantitative analysis of Romo-1, n = 6 per group. (b) Quantification of the levels of MDA, SOD, and GSH-Px and the GSH/GSSG ratio in the cortex of the ipsilateral hemisphere, n = 6 per group. (c) Representative microphotographs of DHE staining and quantification of DHE-positive cells. Scale bar = 50 μm. n = 3 per group. Data are represented as mean ± SD. ∗p < 0.05 and ∗∗p < 0.01 vs. the Sham group; #p < 0.05 and ##p < 0.01 vs. the SAH+vehicle group; @p < 0.05 and @@p < 0.01 vs. the SAH+ACEA group.

Figure 6: Nrf1 shRNA abolished the antioxidative stress effect of ACEA. (a) Representative Western blot images and quantitative analysis of Romo-1, n = 6 per group. (b) Quantification of the levels of MDA, SOD, and GSH-Px and the GSH/GSSG ratio in the cortex of the ipsilateral hemisphere, n = 6 per group. (c) Representative microphotographs of DHE staining and quantification of DHE-positive cells. Scale bar = 50 μm. n = 3 per group. Data are represented as mean ± SD. ∗p < 0.05 and ∗∗p < 0.01 vs. the SAH+vehicle group; #p < 0.05 and ##p < 0.01 vs. the SAH+ACEA+scrambled shRNA group.

Measurements of MDA, SOD, GSH/GSSG, and GSH-Px showed that the MDA level was markedly increased in the SAH+vehicle group compared with the Sham group (p < 0.05, Figure 5(b)), whereas ACEA treatment reduced the MDA level compared with the SAH+vehicle group (p < 0.05, Figure 5(b)). Additionally, the SOD and GSH-Px levels and the GSH/GSSG ratio decreased as a result of oxidative stress injury in the SAH+vehicle group compared with the Sham group (p < 0.05, Figure 5(b)), while ACEA treatment reinforced the activity of these antioxidative factors compared with the SAH+vehicle group (p < 0.05, Figure 5(b)).
However, the antioxidative effect of ACEA was reversed by AM251 and by Nrf1 shRNA, respectively (p < 0.05, compared with the SAH+ACEA group and the SAH+ACEA+scrambled shRNA group, Figures 5(b) and 6(b)). Moreover, ACEA treatment reduced the number of DHE-positive cells compared with the SAH+vehicle group (p < 0.01, Figure 5(c)), but this effect was likewise reversed by AM251 and by Nrf1 shRNA (p < 0.01, compared with the SAH+ACEA group and the SAH+ACEA+scrambled shRNA group, Figures 5(c) and 6(c)).

### 3.7. ACEA Promoted Mitophagy and Improved Mitochondrial Morphology after SAH

For Western blotting, we extracted mitochondrial proteins to measure the levels of PINK1, Parkin, and LC3II in mitochondria. ACEA treatment increased the expression of PINK1, Parkin, and LC3II compared with the SAH+vehicle group (p < 0.05, Figures 7(c) and 7(d)), indicating that ACEA activated PINK1/Parkin-mediated mitophagy.

Figure 7: ACEA promoted mitophagy and improved mitochondrial morphology, which was reversed by AM251. (a) Representative immunofluorescence colocalization of Tomm20 (mitochondrial marker, green) with LC3 (autophagosome marker, red) and quantification of the ratio of LC3-associated Tomm20 to total Tomm20. Scale bar = 50 μm. n = 3 per group. (b) Neuronal and mitochondrial structures observed by TEM. Red arrow: normal mitochondria; red triangle: swollen mitochondria; red circle: mitophagosome; red star: mitochondrial vacuolization. Scale bar = 1 μm. (c) Representative Western blot images. (d) Quantitative analyses of CB1R, Nrf1, PINK1, Parkin, and LC3II. n = 6 per group. Data are expressed as mean ± SD. ∗p < 0.05 and ∗∗p < 0.01 vs. the Sham group; #p < 0.05 and ##p < 0.01 vs. the SAH+vehicle group; @p < 0.05 and @@p < 0.01 vs. the SAH+ACEA group.

Consistently, immunofluorescence colocalization staining confirmed that ACEA treatment increased the colocalization of the mitochondrial protein TOMM20 with the autophagosome marker LC3 (p < 0.01, compared with the SAH+vehicle group, Figure 7(a)). Mitochondrial ultrastructural morphology was observed by transmission electron microscopy (TEM). Compared with the Sham group, there were numerous swollen mitochondria with broken cristae in the SAH+vehicle group, indicating that the mitochondrial structure of neurons had been destroyed after SAH (Figure 7(b)). ACEA treatment promoted the formation of mitophagosomes and maintained the normal mitochondrial morphology of neurons (Figure 7(b)).

### 3.8. AM251 and Nrf1 shRNA Abolished the Promoting Effect of ACEA on Mitophagy

Western blot revealed that the expression of PINK1, Parkin, and LC3II in mitochondria significantly decreased in the SAH+ACEA+AM251 group (p < 0.05, compared with the SAH+ACEA group, Figures 7(c) and 7(d)), indicating that the promoting effect of ACEA on mitophagy was eliminated by AM251. Compared with the SAH+ACEA+scrambled shRNA group, Nrf1 shRNA reduced the expression of PINK1, Parkin, and LC3II in mitochondria (p < 0.05, Figures 8(c) and 8(d)), indicating that the promoting effect of ACEA on mitophagy was also eliminated by Nrf1 shRNA.

Figure 8: Nrf1 shRNA abolished the promoting effect of ACEA on mitophagy. (a) Representative immunofluorescence colocalization of Tomm20 (mitochondrial marker, green) with LC3 (autophagosome marker, red) and quantification of the ratio of LC3-associated Tomm20 to total Tomm20. Scale bar = 50 μm. n = 3 per group. (b) Neuronal and mitochondrial structures observed by TEM.
Red arrow: normal mitochondria; red triangle: swollen mitochondria; red circle: mitophagosome; red star: mitochondrial vacuolization. Scale bar = 1 μm. (c) Representative Western blot images. (d) Quantitative analyses of CB1R, Nrf1, PINK1, Parkin, and LC3II. n = 6 per group. Data are expressed as mean ± SD. ∗p < 0.05 and ∗∗p < 0.01 vs. the SAH+vehicle group; #p < 0.05 and ##p < 0.01 vs. the SAH+ACEA+scrambled shRNA group.

Consistently, immunofluorescence colocalization staining confirmed that the colocalization of TOMM20 and LC3 decreased in the SAH+ACEA+AM251 group (p < 0.01, compared with the SAH+ACEA group, Figure 7(a)). Additionally, Nrf1 shRNA also reduced the colocalization of TOMM20 and LC3 in the SAH+ACEA+Nrf1 shRNA group (p < 0.01, compared with the SAH+ACEA+scrambled shRNA group, Figure 8(a)). Examination of mitochondrial morphology showed many swollen mitochondria with broken cristae in the SAH+ACEA+AM251 group, indicating that AM251 abolished the protective effect of ACEA on mitochondria (compared with the SAH+ACEA group, Figure 7(b)). Compared with the SAH+ACEA+scrambled shRNA group, Nrf1 shRNA resulted in marked mitochondrial destruction and even mitochondrial vacuolization in the SAH+ACEA+Nrf1 shRNA group (Figure 8(b)).

### 3.9. ACEA Promoted Mitophagy via Activation of the CB1R/Nrf1/PINK1 Signaling Pathway after SAH

Western blot demonstrated that the expression of CB1R, Nrf1, PINK1, Parkin, and LC3II markedly increased after SAH compared with the Sham group (p < 0.05, Figures 1(a) and 1(b)).
ACEA treatment significantly increased the expression of Nrf1, PINK1, Parkin, and LC3II compared with the SAH+vehicle group (p < 0.05, Figures 7(c) and 7(d)). The CB1R antagonist AM251 was injected 1 h before SAH induction to evaluate whether CB1R is involved in the promoting effect of ACEA on mitophagy after SAH. The results showed that pretreatment with AM251 markedly reduced the expression of the downstream molecules Nrf1, PINK1, Parkin, and LC3II compared with the SAH+ACEA group (p < 0.05, Figures 7(c) and 7(d)). The administration of Nrf1 shRNA dramatically suppressed the expression of Nrf1 in the SAH+ACEA+Nrf1 shRNA group compared with the SAH+ACEA+scrambled shRNA group (p < 0.05, Figures 8(c) and 8(d)). Furthermore, Nrf1 knockdown significantly suppressed the expression of PINK1, Parkin, and LC3II in the SAH+ACEA+Nrf1 shRNA group compared with the SAH+ACEA+scrambled shRNA group (p < 0.05, Figures 8(c) and 8(d)).

## 4. Discussion

In this study, we investigated the antioxidative and antiapoptotic effects of ACEA, as well as the underlying mechanism involving the CB1R/Nrf1/PINK1 signaling pathway, after SAH in rats (Figure 9). We found that the protein levels of CB1R, Nrf1, PINK1, Parkin, and LC3II began to increase at 6 h and peaked at 24 h after SAH. CB1R was mainly located in neurons of the cerebral cortex after SAH. ACEA treatment promoted mitophagy and exerted antioxidative and antiapoptotic effects, which ultimately contributed to the improvement of neurological deficits. Inhibition of CB1R with AM251 eliminated the antioxidative and antiapoptotic effects of ACEA after SAH. Mechanistically, activation of CB1R upregulated the expression of Nrf1, PINK1, Parkin, LC3II, and Bcl-xl and downregulated the expression of Bax, cleaved caspase-3, and Romo-1 after SAH, whereas inhibition of CB1R reversed these changes. Furthermore, knockdown of Nrf1 abolished the promoting effect of ACEA on mitophagy, accompanied by downregulation of PINK1, Parkin, and LC3II, which ultimately eliminated the antioxidative and antiapoptotic effects of ACEA. In summary, our research showed that ACEA promoted mitophagy and attenuated oxidative stress as well as neurological deficits after SAH, at least in part via the CB1R/Nrf1/PINK1 signaling pathway.

Figure 9: Graphical abstract. ACEA treatment attenuates oxidative stress by promoting mitophagy through the CB1R/Nrf1/PINK1 signaling pathway after SAH.

Accumulating evidence indicates that oxidative stress mediated by damaged mitochondria is closely related to the mechanism of EBI after SAH [37, 38]. Specifically, mitochondrial dysfunction caused by SAH leads to an excess of ROS, resulting in oxidative stress injury [7] and ultimately activation of the apoptotic signaling pathway [39]. Consequently, timely clearance of damaged mitochondria would be an efficacious way to attenuate oxidative stress and subsequent apoptosis after SAH [14]. Mitophagy is a selective autophagy process that specifically degrades damaged mitochondria to maintain mitochondrial homeostasis and cellular survival [8]. Recently, many studies have shown that promoting mitophagy can alleviate neuroinflammation, oxidative stress, and neuronal apoptosis after SAH [11, 12, 14]. Consistent with previous studies, we found that promoting mitophagy with ACEA treatment protected against oxidative stress injury after SAH.
The oxidative stress measurements showed that ACEA treatment led to an increase in the SOD and GSH-Px levels and the GSH/GSSG ratio and a decrease in the MDA level after SAH. DHE and TUNEL staining showed that ACEA treatment reduced the numbers of both DHE-positive cells and TUNEL-positive neurons. Nissl staining reflected the protective effect of ACEA against the degeneration of hippocampal neurons, consistent with the improvement in neurobehavioral function. These results provide new insights into mitophagy as a potential therapeutic strategy for SAH.

Cannabinoid receptor 1 (CB1R), a G-protein-coupled receptor, has been reported to be a potential therapeutic target for many central nervous system diseases, such as ischemic stroke, epilepsy, and Parkinson's and Alzheimer's disease [29, 40–42]. CB1R is widely expressed in different organs, especially in the central nervous system (e.g., cerebral cortex, hippocampus, striatum, and cerebellum) [43]. Our immunofluorescence colocalization staining revealed that CB1R was mainly located in neurons of the cerebral cortex, with only minor localization in microglia and astrocytes. An autopsy study showed that the expression of CB1R increased after ischemic stroke [15]. Similarly, our results demonstrated that CB1R increased to a peak at 24 h after SAH. Therefore, we speculate that the upregulation of CB1R may act as a self-protective mechanism with a neuroprotective role in EBI after SAH.

ACEA, a highly selective CB1R agonist, has been reported to provide a neuroprotective effect in ischemic stroke, Parkinson's disease, Alzheimer's disease, and epilepsy [29, 41, 42, 44]. However, no published study has attempted to treat SAH with ACEA. In our study, we found that ACEA exerts antioxidative and antiapoptotic effects that protect against EBI. In addition, we identified a plausible mechanism by which ACEA improves neurological deficits after SAH in rats. Moreover, ACEA readily crosses the blood-brain barrier, which favors clinical application. To determine the optimal dosage of ACEA, we tested three doses and found 1.5 mg/kg to be the most effective for treating SAH in rats.

Nrf1 was initially identified as an important transcription factor that regulates genes necessary for mitochondrial function [45]. Recently, accumulating evidence has indicated that Nrf1 target genes are also involved in the regulation of extramitochondrial biological processes, such as DNA damage repair, RNA metabolism, and ubiquitin-mediated protein degradation, all of which are essential to cell growth, differentiation, and survival [25]. In our research, Nrf1 knockdown significantly suppressed the expression of PINK1, Parkin, and LC3II, indicating that Nrf1 positively regulates PINK1/Parkin-mediated mitophagy, consistent with a previous study [46].

The PINK1-Parkin pathway is a primary mechanism of mitophagy and depends on ubiquitination. Specifically, following mitochondrial membrane depolarization, PINK1 is stabilized on the outer mitochondrial membrane (OMM). Next, PINK1 is activated via autophosphorylation and recruits the E3 ubiquitin ligase Parkin from the cytoplasm to the damaged mitochondria. Subsequently, Parkin ubiquitinates multiple OMM proteins, leading to their recognition by autophagy adaptors. Finally, damaged mitochondria are engulfed by phagophores and eventually fuse with lysosomes for degradation [47].
In our research, ACEA treatment increased the expression of PINK1 and Parkin in mitochondria, indicating that ACEA may promote mitophagy through the PINK1-Parkin pathway. Consistently, immunofluorescence colocalization staining confirmed that ACEA treatment increased the colocalization of the mitochondrial protein TOMM20 with the autophagosome marker LC3. These findings were further supported by transmission electron microscopy, which showed that ACEA promoted mitophagosome formation and preserved the normal mitochondrial morphology of neurons. Taken together, ACEA may activate PINK1/Parkin-mediated mitophagy by upregulating the expression of Nrf1 after SAH.

To our knowledge, ours is the first study to attempt to treat SAH with ACEA. We found that ACEA protects against EBI through its antioxidative and antiapoptotic actions. More importantly, this is the first study to elucidate the molecular mechanism by which ACEA promotes mitophagy, and the first to confirm that Nrf1 positively regulates PINK1/Parkin-mediated mitophagy in an in vivo model of SAH. However, our study has some limitations. First, it is difficult to mimic the pathological process of SAH in vitro, so we verified our hypothesized mechanism only in vivo. Second, a previous study showed that ACEA alleviated cerebral ischemia/reperfusion injury through the CB1R-Drp1 pathway [48], so we cannot exclude contributions from other signaling pathways to the improved prognosis of SAH. These limitations will be addressed in future studies.

## 5. Conclusion

In summary, we showed that ACEA attenuated oxidative stress by promoting mitophagy via the CB1R/Nrf1/PINK1 signaling pathway after SAH. Therefore, ACEA may be a novel treatment for SAH patients.

---

*Source: 1024279-2022-02-24.xml*
2022
# The Application of GeneXpert MTB/RIF for Smear-Negative TB Diagnosis as a Fee-Paying Service at a South Asian General Hospital

**Authors:** Poojan Shrestha; Amit Arjyal; Maxine Caws; Krishna Govinda Prajapati; Abhilasha Karkey; Sabina Dongol; Saruna Pathak; Shanti Prajapati; Buddha Basnyat

**Journal:** Tuberculosis Research and Treatment (2015)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2015/102430

---

## Abstract

The GeneXpert MTB/RIF assay (Xpert) is a novel automated diagnostic tool for tuberculosis, but its optimal placement in the healthcare system has not been determined. The objective of this study was to determine the possibility of additional case detection for pulmonary tuberculosis (PTB) by offering Xpert to smear-negative patients in a low-HIV burden setting with no Mycobacterium tuberculosis (M.tb.) culture facilities. Patients routinely presenting with symptoms suggestive of PTB with negative smears were offered a single Xpert test on a fee-paying basis. Data were retrospectively reviewed to determine case detection in patients tested from February to December 2013. Symptoms associated with a positive test were analysed to determine whether refinement of clinical criteria would reduce unnecessary testing. 258 smear-negative patients were included, and M.tb. was detected in 55 (21.32%, n = 55/258). Using standard clinical assessment for selection, testing 5 patients detected one case of smear-negative PTB. These results demonstrate that a fee-paying Xpert service in a low-income setting can increase TB case confirmation substantially, and further systematic studies of the health economic implications should be conducted to determine optimal implementation models for increasing access to Xpert in low- and middle-income countries.

---

## Body

## 1. Introduction

Tuberculosis (TB) continues to be a major public health problem, with 8 million cases and 1.3 million deaths each year [1]. One of the key challenges for TB control is to increase case detection and early treatment, thereby interrupting transmission chains and reducing individual morbidity. The most widely used test for TB, sputum smear microscopy, has a sensitivity of only around 50% for active cases, which contributes to delayed diagnosis and continued transmission [2, 3]. In an advanced diagnostic setting, facilities such as culture, drug susceptibility testing (DST), and commercial molecular diagnostics may be available, but these facilities are lacking in most hospitals in high burden countries. Sputum smears with chest X-ray (CXR), where available, are the tests routinely applied for TB diagnosis. It is crucial to implement improved diagnostics in these endemic settings if the targets for case detection and for reductions in mortality and disease prevalence are to be reached [4, 5].

Derivations from case burden estimates suggest that only around 70% of active TB cases are diagnosed using current strategies [1]. The development and implementation of the Xpert assay (Cepheid, Sunnyvale, CA, USA), which has high sensitivity for the detection of smear-negative TB, has raised hopes of increased case detection in low- and middle-income countries (LMICs). Xpert is the only fully automated real-time DNA-based test which can detect both TB and rifampicin resistance (RR) [6, 7]. The test is based on a heminested PCR assay which detects the presence of Mycobacterium tuberculosis complex bacilli [8].
The PCR target, an 81-base-pair region of the rpoB gene, is the rifampicin resistance determining region (RRDR) [6, 9]. The reactions take place in a single-use cartridge, making the test easy to operate without cross contamination and giving a result in as little as 2 hours, thus potentially decreasing default due to delayed diagnosis [7, 10].

In the most recent meta-analysis, Xpert as an initial replacement for smear microscopy showed a pooled sensitivity of 89% (95% credible interval, CrI, 85% to 92%) and a pooled specificity of 99% (95% CrI 98% to 99%); as an add-on test following a negative smear microscopy, pooled sensitivity and specificity were 67% (95% CrI 60% to 74%) and 99% (95% CrI 98% to 99%), respectively [11]. For RR detection, it achieved a pooled sensitivity of 95% (95% CrI 90% to 97%) and a pooled specificity of 98% (95% CrI 97% to 99%) [11]. The test was first endorsed by WHO in 2010 [11, 12], and revised recommendations in 2013 included a recommendation that Xpert be used in all suspected TB cases, with the acknowledgement that this has substantial resource limitations [13].

However, despite a substantial negotiated price reduction for LMICs, at 10 USD per cartridge the test remains expensive for low-income countries, which often have a per capita healthcare expenditure of <30 USD [14]. There have been limited reports of Xpert implementation on a fee-paying basis in such settings, with the majority of projects implementing it free at the point of care, a model which is dependent on sustained aid funding. Although the test cost is high to the patient, a confirmed diagnosis can avert often lengthy differential investigations and exploratory treatment which may cost substantially more. We therefore examined the impact on case detection of implementing Xpert as an optional fee-paying service for patients with suspected smear-negative TB.

In Nepal, the estimated incidence of TB is 163 per 100,000, with a prevalence of 241 per 100,000 population. In 2012-13, the Nepal Tuberculosis Programme (NTP) registered 17,788 sputum smear-positive cases and 8,367 sputum smear-negative cases [15]. In common with many LMICs, many pulmonary TB cases treated in the private sector go unreported, and therefore there is likely to be a discrepancy between official figures and the true numbers being diagnosed and treated. Although multidrug resistant tuberculosis (MDR TB) rates remain relatively low, at 2.2% in new and 17.2% among retreatment cases, only 262 patients, one-quarter of the estimated MDR cases, were treated via the national programme in 2012-13 [15]. The positive predictive value of rifampicin resistance detection using Xpert in an unselected patient population will be low, and confirmation of MDR is necessary before MDR treatment is initiated.

A recent Cochrane review concluded that more research would be helpful in evaluating the use of Xpert in TB programmes in high TB burden settings [11]. We report here the experience of implementing Xpert at a general hospital in Kathmandu, Nepal, and determine the number needed to test in order to detect one active case of smear-negative TB.

## 2. Materials and Methods

Patan Hospital is a 450-bed government general hospital providing emergency and elective outpatient and inpatient services, located in the Lalitpur area of the Kathmandu valley. The laboratory services are limited, with acid-fast bacilli smear available but no provision for M.tb. culture. From February 18, 2013, an Xpert machine was made available for testing.
Clinical practitioners both from within and outside the hospital were invited to refer patients with symptoms consistent with TB and three negative sputum smears for testing by Xpert. Patients offered the test were informed about the test evaluation and asked for consent to collect baseline data on demographics and symptoms at presentation. This study was approved by the Institutional Review Board of Patan Hospital.

Patients with 3 negative Ziehl-Neelsen (ZN) sputum smears were referred for Xpert testing at the treating clinician's discretion. A new sputum sample was collected for Xpert testing. Data were collected on history of persistent cough (≥2 weeks), fever, drenching night sweats, weight loss (>1.5 kg in a month), loss of appetite, malaise, and shortness of breath or chest pain.

CXR reports were collected where available. For further analysis, CXR features were classified by the study investigators according to the radiologist's report as upper lobe infiltrates, pleural effusion, diffuse infiltrates, cavitary lesions, other infiltrates, consolidation, other abnormalities, or normal. CXRs reported as showing patchy infiltrates, bilateral infiltrates, or infiltrates without a specified location were classified as "other infiltrates." "Other abnormalities" comprised pneumothorax, hyperinflation, lung abscess, mediastinal opacity, nodular opacity, mass lesion, and pleural thickening.

At Patan Hospital, acid-fast bacilli microscopy is performed by experienced laboratory personnel following the guidelines set by the NTP and is monitored by the NTP external quality assessment (EQA) scheme. The sputum sample for Xpert was processed according to the manufacturer's guidelines. Patients were charged NPR 2000 (~20 USD) per test. This cost included freight charges (~$3 per cartridge), staffing, annual machine calibration ($450 per year), and the annual maintenance charge (~$2,700 per year). HIV status was reported by the patient, but no confirmatory HIV testing was performed. Data on treatment outcomes were not collected.

We further examined whether restricting the tested population would miss cases of TB or would improve the efficiency of testing and reduce unnecessary patient charges. We determined the number needed to test if the tested population was restricted to (1) those with 2 or more symptoms consistent with TB (fever, night sweats, persistent cough (≥2 weeks), weight loss (≥1.5 kg in a month), malaise, shortness of breath, or chest pain), (2) those with CXR abnormalities and 2 or more symptoms, or (3) those with only the CXR abnormalities found in the Xpert-positive group.

Categorical variables were compared between patient groups testing positive and negative by Xpert using Fisher's exact test, with a P value of ≤0.05 considered significant.
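As an illustration of the group comparison just described, the snippet below applies Fisher's exact test to a 2×2 contingency table. The counts are taken from the shortness of breath/chest pain row of Table 1 in Section 3 (30 of 53 Xpert-positive versus 143 of 198 Xpert-negative patients reporting the symptom); SciPy is an assumed tooling choice, not the software used in the study.

```python
# Illustrative Fisher's exact test on one symptom from Table 1.
# Rows: symptom reported / not reported; columns: Xpert positive / Xpert negative.
from scipy.stats import fisher_exact

table = [[30, 143],   # shortness of breath / chest pain reported
         [23,  55]]   # symptom not reported

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")  # reported as P = 0.04 in Table 1
```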
## 3. Results

Between February 18, 2013, and December 30, 2013, 258 smear-negative patients were tested by Xpert on sputum. Eighty-nine patients were referred from other hospitals throughout the Kathmandu valley, while 169 were from Patan Hospital clinics. Data were not systematically collected on patients who were smear-negative but not offered the test during the study period, but no patient declined the test when offered at Patan Hospital. No patient declined consent for data collection, and we therefore included all Xpert-tested patients in the analysis.

During the study period, 2,222 new patients were tested by smear microscopy at Patan Hospital. Of these, 1,070 had 3 negative sputum smears and 196 had at least one positive sputum smear. The remaining 956 patients did not complete 3 sputum smears. Therefore, 169/1070 (15.8%) of diagnostic subjects were offered an Xpert test. In the majority of cases, this was due to an alternative diagnosis being reached or symptom resolution/improvement during the smear and CXR diagnostic process, but this was not systematically evaluated.

Xpert was positive for M.tb. in 55 (n = 55/258, 21.3%) patients. Therefore, 4.7 patients needed to be tested to detect one active TB case. All patients with a positive Xpert test were referred for free treatment at the appropriate NTP DOTS facility.

Just over a third of patients (n = 93/258) were female, and the median age was 52 (interquartile range, IQR, 33–68). Four children were included (13, 15, and two 17 years of age). Two patients (n = 2/258, 0.8%) had a positive result for rifampicin-resistant TB by the Xpert test. The first patient was referred for sputum culture and phenotypic drug susceptibility testing at a tertiary centre (GENETUP, Nepal). The second rifampicin-resistant patient could not be traced for follow-up using the phone number provided. No patient reported known HIV infection.

Table 1 shows the clinical features of the patients. The only clinical feature to show a statistically significant difference between the groups was shortness of breath, which was more common among Xpert-negative patients (56.6% versus 72.2%, P = 0.04). As this finding nevertheless occurred in over half of Xpert-positive patients, it was not appropriate for refining the group of patients tested.

Table 1: Comparison of clinical features of 251 patients with suspected smear-negative TB testing positive and negative by Xpert. Values are n (%).

| Clinical features | Xpert positive | Xpert negative | P value |
|---|---|---|---|
| Cough | 38 (71.7) | 143 (72.2) | 1 |
| Fever | 28 (52.8) | 90 (45.5) | 0.36 |
| Night sweats | 23 (43.4) | 61 (30.8) | 0.10 |
| Loss of appetite | 39 (73.6) | 140 (70.7) | 0.74 |
| Weight loss | 31 (58.5) | 106 (53.5) | 0.54 |
| Malaise | 42 (79.3) | 164 (82.8) | 0.55 |
| Shortness of breath/chest pain | 30 (56.6) | 143 (72.2) | 0.04 |
| Total | 53 (100) | 198 (100) | |

∗A clinical questionnaire was not available for 7 patients. kg = kilogram.

CXR was available for 187 patients (Table 2). Thirteen patients (n = 13/187, 6.9%) had no abnormality detected on CXR. Upper lobe infiltrates (P = 0.03) and cavitary lesions (P = 0.03) were more common in the Xpert-positive group. Consolidation (P = 0.04) was not seen in the Xpert-positive group, while it was reported in 9.4% of Xpert-negative patients.

Table 2: Comparison of chest X-ray features of 187 patients testing positive or negative by Xpert. Values are n (%).

| Chest X-ray findings | Xpert positive | Xpert negative | P value |
|---|---|---|---|
| Upper lobe infiltrates | 12 (31.57) | 22 (14.86) | 0.03 |
| Cavitary lesions | 5 (10.52) | 5 (4.05) | 0.03 |
| Pleural effusion | 9 (29.68) | 27 (18.24) | 0.49 |
| Diffuse infiltration | 7 (18.42) | 26 (17.56) | 1 |
| Consolidation | 0 | 14 (9.45) | 0.04 |
| Others | 0 | 11 (6.75) | 0.12 |
| Normal | 1 (2.6) | 12 (8.10) | 0.30 |
| Other infiltrates | 6 (13.15) | 24 (16.89) | 1 |
| Infiltrates not in upper lobes | 5 (13.15) | 21 (14.18) | 1 |
| Total | 39 (100) | 148 (100) | |

∗The sum of the categories is greater than the total number of patients, as one patient may appear in more than one category.

No clinical questionnaire was available for 7 patients, leaving 251 patients in the symptom analysis (Table 3). If Xpert testing had been restricted to patients with 2 or more symptoms, the number needed to test would have been essentially unchanged at 4.6 (n = 50/232, 21.6% positive), and three cases of TB would have been missed.
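The number needed to test under each referral criterion, summarized in Table 3 below, is simply the ratio of patients tested to Xpert-positive results. A minimal sketch of that arithmetic, using the counts reported in this section and in Table 3, follows; the labels and data structure are illustrative only.

```python
# Number needed to test (NNT) to detect one Xpert-positive TB case
# under each referral criterion, using counts reported in the text.
criteria = {
    "All smear-negative referrals":       (258, 55),
    ">=2 symptoms":                       (232, 50),
    ">=2 symptoms + abnormal CXR":        (163, 36),
    "Specific CXR abnormalities only":    (152, 38),
}

for name, (tested, positive) in criteria.items():
    nnt = tested / positive                 # patients tested per case detected
    yield_pct = 100 * positive / tested     # test positivity rate
    print(f"{name}: yield {yield_pct:.1f}%, NNT = {nnt:.1f}")
```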
Table 3: Number of patients detected as positive by GeneXpert using different inclusion criteria for testing.

| Criterion for testing | Patients with information, n (%) | Patients tested | Xpert positive, n (%) | Number needed to test | Patients not meeting criteria | Xpert positive in group not meeting criteria, n (%) |
|---|---|---|---|---|---|---|
| At least 2 symptoms∗ | 251 (93.7) | 232 | 50 (21.6) | 4.6 | 19 | 3 (15.8) |
| At least 2 symptoms + CXR∗∗ | 187 (72.5) | 163 | 36 (22.1) | 4.5 | 24 | 3 (12.5) |
| At least one of the 4 listed CXR abnormalities | 187 (72.5) | 152 | 38 (25.0) | 4.0 | 35 | 1 (2.9) |

∗7 patients had no available information on symptoms and were therefore excluded. ∗∗71 patients had no CXR report and were therefore excluded.

187 patients had both an available questionnaire and a CXR report. If CXR abnormalities plus 2 clinical symptoms had been required for Xpert testing, the number needed to test would have been 4.5 (n = 36/163, 22.1%), and three cases of TB would have been missed. 187 patients had a CXR available. If testing had been restricted to only those with an abnormal CXR showing at least one of the four characteristics (upper lobe infiltrates, cavitary lesions, pleural effusion, or other infiltrates), the number needed to test would have been 4 (n = 38/152, 25%), and one case would have been missed.

## 4. Discussion

This report demonstrates that offering Xpert testing as a fee-paying service in a general hospital can substantially increase the yield of confirmed TB cases. Five patients need to be tested to detect one TB case when simple routine criteria are applied to select smear-negative TB suspects. During the same period, there were 196 smear-positive TB cases at Patan Hospital, and the implementation of Xpert therefore resulted in a 28.1% (n = 55/196) increase in confirmed TB cases.

We assumed that the specificity of Xpert is high, as every study and meta-analysis performed to date has indicated a consistently high specificity of 99% [10, 11, 16]. Although there was no culture confirmation for comparison, this was not an evaluation of diagnostic accuracy, which has been comprehensively reported elsewhere; it can reasonably be assumed that these cases are genuine and that the rate of false-positive diagnosis is unlikely to exceed that of culture.

In areas of high TB prevalence such as Nepal, the majority of suspected TB cases are assessed by sputum smear microscopy and, where available, by CXR. Patients are often placed on TB treatment on the basis of persistent cough or an abnormal CXR alone. Modelling studies have indicated that Xpert scale-up may not increase the overall number of cases initiated on treatment because of these pragmatic empirical treatment practices. Instead, Xpert will divert treatment away from "false cases" to "true" smear-negative TB cases, thereby increasing the accuracy of treatment and cost-effectiveness while reducing the burdens of toxicity and the opportunity cost of treating patients who do not in fact have TB [17, 18]. The ability of Xpert to rapidly confirm TB in smear-negative cases offers the possibility of improving early TB case detection, but due to costs, recommendations for high burden, low-resource settings have so far focused on applying Xpert to HIV-infected individuals (in whom the sensitivity of smear is especially low and the complications of missed diagnosis are more severe) and patients with risk factors for MDR TB. Although WHO recommends applying the test to all smear-negative cases, the recommendation comes with the caveat that this is not financially feasible in most settings [13]. The low positive predictive value of Xpert for rifampicin resistance in a test population with low MDR prevalence must also be considered.
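To make this last point concrete, the sketch below applies Bayes' rule to the pooled accuracy figures cited in the Introduction (sensitivity 95% and specificity 98% for rifampicin resistance detection) together with the 2.2% resistance prevalence reported among new cases in Nepal. This is a back-of-envelope illustration of why confirmatory testing is needed, not a calculation performed in the study.

```python
# Back-of-envelope positive predictive value (PPV) of Xpert rifampicin
# resistance calls in a low-prevalence, unselected patient population.
sensitivity = 0.95   # pooled sensitivity for RR detection (meta-analysis cited above)
specificity = 0.98   # pooled specificity for RR detection (meta-analysis cited above)
prevalence = 0.022   # RR prevalence among new cases in Nepal (2.2%)

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
ppv = true_pos / (true_pos + false_pos)
print(f"PPV = {ppv:.1%}")  # roughly 52%: about half of RR calls would be false positives
```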
There has been considerable debate around the possibility of providing Xpert on a fee-paying basis to patients in low-income, high-burden settings [19]. This study demonstrates that both uptake and case detection can be high in a targeted approach, but further studies should be performed to determine the health economic impacts of such an application, both for individual patients (including the opportunity costs of alternative health investigations and the burden of testing costs on household finances) and for hospital resource allocation.

The impact of delayed TB detection is threefold: firstly, morbidity to the individual is increased and in many cases will persist beyond the disease episode as severe permanent lung damage; secondly, undetected smear-negative TB will become progressively more infectious and transmit within the community; and finally, the economic impact on the household is magnified by repeated visits to healthcare facilities, differential diagnostic testing and treatments, and loss of earnings due to healthcare seeking and morbidity.

The Xpert assay has also increased early detection of MDR-TB, particularly when applied to high-risk groups in accordance with WHO recommendations. Prior to the application of this assay, patients at high risk for MDR TB had to be referred to a tertiary setting and wait 6 to 8 weeks for the results of phenotypic drug susceptibility testing, resulting in high loss to follow-up and delays in treatment initiation. The line probe assays for MDR diagnosis have also largely been limited to tertiary centres in LMICs. In an unselected patient group without MDR risk factors and a low background prevalence of MDR TB, the positive predictive value of Xpert for rifampicin resistance is low and the result should be confirmed by a second test. However, early identification of possible MDR cases is key to reducing community transmission and reducing the incidence of MDR TB. The first patient positive for rifampicin-resistant TB by Xpert in this study was referred to a tertiary centre for confirmatory testing by phenotypic DST according to the national guidelines. The phenotypic DST showed the patient was infected with TB resistant to isoniazid, rifampicin, streptomycin, and ethambutol. Without Xpert testing, this patient would have received a first-line treatment regimen for a minimum of 5 months before being tested by phenotypic DST. The loss to follow-up of the second case despite the use of a rapid test demonstrates the problems associated with TB control beyond the need for accurate tests. This case could not be confirmed as a true positive for MDR and may not have received any TB treatment.

It is not clear how many of the patients testing positive for M.tb. by Xpert in this study would have been started on TB treatment based on the judgement of the treating clinician had Xpert testing not been available [20]. In the previous year (2011-2012) at Patan Hospital, 21 patients were empirically treated for TB, approximately one-third of the number detected by Xpert during the study period (1.75/month versus 5.65/month, respectively). It is possible that the availability of Xpert increased the recognition of TB symptoms, although TB awareness is consistently high among clinicians in this endemic setting. Conversely, it is possible that a negative Xpert test may have discouraged treatment in some cases which would otherwise have been treated, although clinicians were informed that a negative Xpert test does not exclude a TB diagnosis.
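This last warning can also be quantified with a rough, hypothetical calculation (ours, not the authors'): taking the add-on pooled sensitivity of ~67% and specificity of ~99% [11], and crudely using the ~21% positivity observed in this cohort as the pre-test probability, a non-negligible residual probability of TB remains after a negative result.

```python
# Hypothetical illustration: residual probability of TB after a negative Xpert test.
def residual_risk(sensitivity: float, specificity: float, pretest: float) -> float:
    false_neg = (1 - sensitivity) * pretest  # TB cases missed by the test
    true_neg = specificity * (1 - pretest)   # non-TB correctly testing negative
    return false_neg / (false_neg + true_neg)

# Add-on (post-smear) pooled sensitivity ~67%, specificity ~99% [11];
# pre-test probability crudely approximated by the cohort's 21% positivity.
print(f"P(TB | negative Xpert) ~ {residual_risk(0.67, 0.99, 0.21):.0%}")  # ~8%
```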
Patients with a strong possibility of pulmonary TB despite a negative Xpert test were referred for further investigations, including culture at a tertiary centre. The rate of smear positivity at Patan is 9.28%, which is consistent with the results in other areas of similar endemicity; it is therefore unlikely that the Xpert diagnosis was inflated due to low smear microscopy confirmation [21–23].

The cost to the patients of the assay was 20 USD (equivalent to the price of eight CXRs in our hospital), including the full cost to the hospital of performing the assay and maintaining the service. Despite this, no clinician reported patients refusing the test for financial reasons, although this was not systematically evaluated. Studies of the patient acceptability of testing should be performed to determine the relative cost benefits for patients of receiving a fee-paying Xpert test versus the traditional diagnostic route.

Scale-up of Xpert testing is underway in Nepal, but funding is not available to provide Xpert testing free to all patients with suspected smear-negative TB and is unlikely to be so in the near to medium term. This is also true in other LMICs. In India, it was calculated that providing Xpert to all smear-negative TB suspects would consume the entire national healthcare budget [17].

Our study is consistent with other studies which have suggested the benefit of Xpert in smear-negative patients in developing countries [11, 24, 25]. The majority of these studies have been carried out in Africa, where there is a substantially higher HIV burden than in South Asia. Three studies have been reported from low-HIV prevalence regions, including Peru and two hospitals (Hinduja Hospital and Christian Medical College Hospital) in India [8, 18, 25].

We examined the clinical features of the groups testing positive and negative by Xpert to determine whether further refinement of the eligibility criteria could guide the application of Xpert and reduce unnecessary testing, and thereby costs to the patient. The number needed to test was reduced to 4 by restricting testing to only those patients with specific CXR criteria consistent with TB, while only a single case was missed. This strategy should be evaluated in a larger multicentre study to determine the optimal referral algorithm for Xpert testing in the general hospital setting.

Using only the classical broad definition of TB symptoms and CXR interpretation, the number needed to test for a single confirmed case of active TB was as low as five in this general hospital. Although smear microscopy, due to its simplicity, speed, and low cost, is used widely in low-resource settings, its low sensitivity precludes it from being an ideal test. The demand for a rapid, simple TB diagnostic is evidenced by the widespread application of commercial serological tests, which are inaccurate. These tests are widely provided at a cost to the patient and used to determine medical treatment. The government of India has recently banned the use of these tests following systematic evaluation by WHO [26].
Although Xpert has several limitations, including the requirement for a stable electricity supply, a limited operating temperature range, the availability of maintenance, and bulky consumables, wider availability of this accurate assay may counter the use of such serological tests by providing a viable alternative to the patient and healthcare provider.

It is highly probable that a small number of cases of TB were missed in this study, as Xpert is less sensitive than culture. There is a danger that clinicians will “exclude” a TB diagnosis on the basis of a negative Xpert test, and it is important that clinicians are educated about the test's limitations before it is implemented. However, it is not sustainable to implement TB culture facilities at general hospitals in South Asia, and the long turnaround time for results means that loss to follow-up in the diagnostic pathway is high. This study was not an assessment of Xpert sensitivity and specificity, as these have been comprehensively evaluated in comparison with culture by others.

## 5. Conclusion

Our study has shown that by applying Xpert to test for smear-negative TB on a fee-paying basis in a general hospital in Nepal, five patients need to be tested to detect one case of active TB. Restricting the testing criteria using CXR features can reduce the number needed to test to four. Further research evaluating the cost-effectiveness, patient acceptability, and impact on overall out-of-pocket expenditure of Xpert testing provided at a direct cost to the patient is needed to determine the optimal sustainable use of this technology while maximizing equality of access.

---

*Source: 102430-2015-04-08.xml*
# Resveratrol, MicroRNAs, Inflammation, and Cancer

**Authors:** Esmerina Tili; Jean-Jacques Michaille

**Journal:** Journal of Nucleic Acids (2011)

**Publisher:** SAGE-Hindawi Access to Research

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.4061/2011/102431

---

## Abstract

MicroRNAs are short noncoding RNAs that regulate the expression of many target genes posttranscriptionally and are thus implicated in a wide array of cellular and developmental processes. The expression of miR-155 or miR-21 is upregulated during the course of the inflammatory response, but these microRNAs are also considered oncogenes due to their upregulation of expression in several types of tumors. Furthermore, it is now well established that inflammation is associated with the induction or the aggravation of nearly 25% of cancers. Therefore, the above microRNAs are thought to link inflammation and cancer. Recently, resveratrol (trans-3,4′,5-trihydroxystilbene), a natural polyphenol with antioxidant, anti-inflammatory, and anticancer properties, currently at the stage of preclinical studies for human cancer prevention, has been shown to induce the expression of miR-663, a tumor-suppressor and anti-inflammatory microRNA, while downregulating miR-155 and miR-21. In this paper we will discuss how the use of resveratrol in therapeutics may benefit from prior analysis of the expression status of miR-155 or miR-21 as well as of TGFβ1. In addition, we will discuss how resveratrol activity might possibly be enhanced by simultaneously manipulating the levels of its key target microRNAs, such as miR-663.

---

## Body

## 1. Inflammation and Cancer

Inflammation represents a complex, nonspecific immune response of the body to pathogens, damaged cells, tissue injury, allergens, toxic compounds, or irritant molecules [1]. While it is normally self-contained, it may become permanent and chronic. It may also escape the original tissue and spread via the circulatory and/or the lymphatic system to other parts of the body, producing a systemic inflammatory response syndrome such as sepsis in the case of an infection. Chronic inflammation is associated with simultaneous destruction and healing of the tissue affected by the inflammatory process and has been linked to a number of pathologies, including cancer, chronic asthma, rheumatoid arthritis, multiple sclerosis, inflammatory bowel diseases, and psoriasis, as well as several types of neurological disorders. The inflammatory response is coordinated by a large range of mediators that form complex regulatory networks [1]. The recruitment of leukocytes into the peripheral tissues is the hallmark of inflammation. It is mediated by several types of chemokines [1–3], which act through their receptors located at the surface of the cytoplasmic membrane of leukocytes. The production of chemokines is induced by inflammatory stimuli such as bacterial lipopolysaccharide (LPS), interleukin (IL)-1, or tumor necrosis factor (TNF). In addition, these chemokines have a clear role in angiogenesis and wound repair [4]. Finely tuned molecular mechanisms exist that ensure that the immune response required for the defense of the body has a limited duration and does not exceed a certain maximum activation level, thus preventing it from becoming harmful to the organism. It is well acknowledged that an unresolved immune response results in inflammation.

Epidemiological studies suggest that as many as 25% of all cancers may be due to chronic inflammation [5–7].
The connection between inflammation and cancer consists of an extrinsic pathway, driven by inflammatory conditions that increase cancer risk, and an intrinsic pathway, driven by genetic alterations that cause inflammation and neoplasia [6]. Inflammatory mediators released by cancer-related inflammation induce genetic instability, leading to the accumulation of random genetic alterations in cancer cells [7]. The activation of Toll-like receptors (TLRs), a group of pattern recognition receptors functioning as sensors of pathogens and tissue damage, leads to the nuclear translocation of NF-κB and the production of cytokines such as TNF, IL-1α, IL-1β, IL-6, and IL-8. However, TLR activation has been shown to accelerate the growth of adoptively transferred tumors [8–11]. Accordingly, the stimulation of TLRs leads to increased survival and proliferation of several cell lines [12, 13], and the intratumoral injection of Listeria monocytogenes induces TLR2 signaling in tumor cells, thus promoting their growth [14]. TLR signaling also enhances tumor cell invasion and metastasis by regulating metalloproteinases and integrins [15]. Chemokines also affect several tumor progression pathways, such as leukocyte recruitment and function, cellular proliferation, survival, or senescence, as well as invasion and metastasis, and are the targets of a number of anticancer agents [5].

The tumor microenvironment contains various inflammatory cell types infiltrating the tumor area in response to inflammatory stimuli, such as macrophages, neutrophils, and mast cells [16, 17]. Tumor-associated macrophages (TAMs) are thought to play key roles in the production of various growth factors, angiogenic factors, proteinases, chemokines, and cytokines, through crosstalk with cancer cells and other tumor stromal cells [18–20]. Factors secreted by TAMs stimulate cell migration/motility, proliferation, survival, angiogenesis, and metastasis, resulting in a dynamic environment that favors the progression of cancer, thus affecting the clinical outcome of malignant tumors. TAMs have thus been described as “obligate partners for tumor-cell migration, invasion and metastasis” [21, 22]. Namely, in a genetic model of breast cancer in macrophage-deficient mice, the tumors developed normally but were unable to form pulmonary metastases in the absence of macrophages [23]. As tumor metastasis is responsible for approximately 90% of all cancer-related deaths, a better understanding of the regulatory mechanisms of inflammation may potentially allow optimization of the use of anticancer drugs that lower the tumor-specific inflammatory response [18].

Finally, transforming growth factor β (TGFβ) regulates the immune response as well as the effects of the immune system on tumor progression or regression in vivo [24]. TGFβ has been shown to suppress the antitumor activity of T cells, natural killer (NK) cells, neutrophils, monocytes, and macrophages, which together are able to promote or repress tumor progression depending on the cellular context [25–27]. Importantly, TGFβ1, the most abundant and ubiquitously expressed isoform of TGFβ, is usually considered a tumor suppressor, due to its cytostatic activity in epithelia. However, at advanced tumor stages, TGFβ1 behaves as a tumor promoter, due to its capability to enhance angiogenesis, epithelial-to-mesenchymal transition, cell motility, and metastasis [28–30].
## 2. MicroRNAs and Inflammation

MicroRNAs (miRNAs) are short noncoding RNAs which regulate the translation and/or degradation of target messenger RNAs [31–33]. They have been implicated in the regulation of a number of fundamental processes, including muscle, cardiac, neural, and lymphocyte development, and the regulation of both the innate and adaptive immune responses [34, 35]. miRNAs originate from primary transcripts (pri-miRNAs) converted in the nucleus into precursor miRNAs (pre-miRNAs) by the RNase III Drosha, associated with DGCR8 to form the small microprocessor complex [36]. Pre-miRNAs are then exported into the cytoplasm, where the miRNA hairpin is cleaved by the RNase III Dicer within the RISC loading complex. The guide strand, which corresponds to the mature miRNA, is then incorporated into the RISC complex [36]. miRNAs and their transcriptional regulators usually form autoregulatory loops aimed at controlling their respective levels [37]. miRNAs participate in many gene regulatory networks whose molecular malfunctions are associated with major pathologies such as cancer [34] or autoimmune diseases [38–40].

Several miRNAs have been implicated in both inflammation and cancer [38–41]. The most prominent are miR-155, miR-21, and miR-125b. Thus the expression of miR-155 is strongly elevated in several human leukemias and lymphomas ([40] and references therein). Transgenic mice with B cells overexpressing miR-155 develop B-cell leukemia, and sustained expression of miR-155 in hematopoietic stem cells causes a myeloproliferative disorder [40]. On the other hand, miR-155 has been implicated in the regulation of myelopoiesis and erythropoiesis, Th1 differentiation, B-cell maturation, IgG1 production, somatic hypermutation, gene conversion, class switch recombination, and B- and T-cell homeostasis, as well as in the regulation of the innate immune response [40]. Thus, miR-155 levels increase following LPS treatment of Raw-264 macrophages, and miR-155 transgenic mice show enhanced sensitivity to LPS-induced endotoxin shock [42]. In contrast, miR-155-knockout mice are unable to mount a proper T-cell or B-cell immune response [40]. The expression of another miRNA, miR-125b, was repressed within 1 hour of LPS challenge in Raw-264 cells [42]. In atopic eczema, miR-125b expression was reduced in regions of the skin that were inflamed, while that of miR-21 was enhanced [43]. Similarly, miR-21 expression was increased by inflammation due to ulcerative colitis [44] and also by IL-13 and by specific antigens in OVA- and Aspergillus fumigatus antigen-induced asthma models [45]. The expression of miR-21 changes dynamically during antigen-induced T-cell differentiation, with the highest levels of expression in effector T cells [46]. MiR-21 induction upon T-cell receptor (TCR) stimulation is believed to be involved in a negative feedback loop regulating TCR signaling [46]. MiR-663 has drawn recent attention due to its role not only as an anti-inflammatory miRNA but also as a tumor-suppressor miRNA. Thus, miR-663 impairs the upregulation of miR-155 by inflammatory stimuli [47]. In addition, the expression of this microRNA is lost in certain cancers such as gastric or pancreatic cancer, and it induces mitotic catastrophe and growth arrest when its expression is restored in these cells [48].

## 3. MicroRNAs as Oncogenes and Tumor-Suppressor Genes

miRNAs participate in many gene regulatory networks whose molecular malfunctions are associated with cancers [35, 41].
Depending on the effects of their downregulation or overexpression, miRNAs have been described either as oncogenic (onco-miRs) or as tumor suppressors. Thus, the miR-17-92 cluster on chromosome 13, which contains six miRNAs (miR-17, -18a, -19a, -20a, -19b-1, and -92a-1), is amplified and overexpressed in B-cell lymphomas and solid tumors such as breast or small-cell lung cancers, where it may enhance oncogenesis by potentially targeting E2F1, p21/CDKN1A, and BCL2L11/BIM [49]. On the other hand, loss of function of miR-17-92 miRNAs might be advantageous for cancer cells in certain settings. Namely, loss of heterozygosity at the 13q31.3 locus has been observed in multiple tumor types, and a genome-wide analysis of copy number alterations in cancer revealed that the miR-17-92 cluster was deleted in 16.5% of ovarian cancers, 21.9% of breast cancers, and 20% of melanomas [50]. In contrast, the twelve members of the human let-7 gene family are frequently downregulated in cancers like lung, colon, or other solid tumors [34, 35] and are therefore considered tumor-suppressor miRNAs in these types of cancers. In particular, let-7 miRNAs target oncogenes of the Ras family [51] and c-Myc, and their expression in colon tumors results in reduced levels of both Ras and c-Myc [52]. miR-21 is overexpressed in several cancers, including colorectal carcinomas (CRCs) and gliomas, as well as breast, gastric, prostate, pancreas, lung, thyroid, and cervical cancers [53, 54]. miR-21 has been shown to function as an onco-miR, due to its targeting of transcripts encoding key regulators of cell proliferation and apoptosis such as PTEN and PDCD4 [53]. Beside miR-21, several miRNAs are overexpressed in CRCs, including miR-17, miR-25, miR-26a, and miR-181a [54, 55].

In humans, the levels of both miR-155 and BIC transcripts (that is, miR-155 primary RNAs) are elevated in diffuse large B-cell lymphoma (DLBCL), Hodgkin's lymphoma, and primary mediastinal B-cell lymphoma [40]. In contrast, very weak expression of miR-155 is found in most non-Hodgkin's lymphoma subtypes, including Burkitt lymphoma. In addition, high levels of BIC and miR-155 expression were reported in B-cell chronic lymphocytic leukemia (B-CLL) and in B-CLL proliferation centers [56]. Furthermore, miR-155 was also upregulated in a subset of patients with acute myelomonocytic leukemia and acute monocytic leukemia. Accordingly, transgenic mice whose B cells overexpress miR-155 developed polyclonal preleukemic pre-B-cell proliferation followed by B-cell malignancy [40]. On the other hand, it was reported that BIC cooperates with c-Myc in avian lymphomagenesis and erythroleukemogenesis [57]. Besides liquid malignancies, high levels of miR-155 expression were found in solid tumors such as breast, colon, and lung cancers [40]. miR-155 was recently shown to induce a mutator phenotype by targeting Wee-1, a kinase regulating the G2/M phase transition during the cell cycle [58]. Furthermore, it was shown that miR-155 increases genomic instability by targeting transcripts encoding components of the DNA mismatch repair machinery [59]. As a consequence, the simultaneous miR-155-driven suppression of a number of tumor suppressor genes combined with a mutator phenotype might shorten the steps required for tumorigenesis, and might also explain how chronic inflammation associated with high levels of miR-155 induces cancer.

Finally, miRNAs have been implicated in metastasis.
For example, it has been established that the downregulation of both miR-103-1 and miR-103-2 induces epithelial-to-mesenchymal transition by targeting Dicer1 transcripts [60]. Furthermore, several miRNAs, including miR-21, have been shown to activate metastasis by acting on multiple signaling pathways and targeting various proteins that are involved in this process. Thus, in breast cancer, which represents the most common malignancy among women in the world, miRNAs such as miR-9, miR-10b, miR-21, miR-103/107, miR-132, miR-373, and miR-520 stimulate metastasis, while miR-7, miR-30, miR-31, miR-126, miR-145, miR-146, miR-200, miR-205, miR-335, miR-661, and miRNAs of the let-7 family in contrast impair the different steps of the metastatic process, from epithelial-to-mesenchymal transition to local invasion to colonisation and angiogenesis [61].

Numerous reports have provided strong evidence that all of the above miRNAs potentially target a myriad of transcripts, including those encoding transcription factors, cytokines, enzymes, and kinases implicated in both cancer and inflammation.

## 4. Anti-Inflammatory and Antitumor Properties of Resveratrol

Resveratrol (trans-3,4′,5-trihydroxystilbene) is a natural polyphenolic, nonflavonoid antioxidant found in grapes and other berries, produced by plants in response to infection by the pathogen Botrytis cinerea [62]. Recent studies have documented that resveratrol has various health benefits, such as cardiovascular- and cancer-preventive properties [63–65], and this compound is currently at the stage of preclinical studies for human cancer prevention [66, 67]. Resveratrol was first shown to inhibit both tumor promotion and tumor progression in a mouse skin cancer model [68]. Resveratrol is also being tested for preventing and/or treating obesity and diabetes [69, 70]. Fortunately, resveratrol toxicity is minimal, and even proliferating tissues such as bone marrow or the intestinal tract are not adversely affected [71].

Resveratrol exerts its effects at multiple levels. Both its m-hydroxyquinone and 4-hydroxystyryl moieties have been shown to be important determinants of the inhibitory properties of resveratrol toward various enzymes. These include lipoxygenases and cyclooxygenases that synthesize proinflammatory mediators from arachidonic acid, protein kinases such as PKCs and PKD, receptor tyrosine kinases, and lipid kinases, as well as IKKα, an activator of the NF-κB pathway, which establishes a strong link between inflammation and tumorigenesis [72]. Also, resveratrol inhibition of P450/CYP19A1/Aromatase, by limiting the amount of available estrogens and consequently the activity of estrogen receptors, has been proposed to contribute to the protection against several types of cancer, including breast cancer [72, 73]. Of note, resveratrol also inhibits the formation of estrogen-DNA adducts, which are elevated in women at high risk for breast cancer [74].

Resveratrol in addition regulates apoptosis and cell proliferation. It induces growth arrest followed by apoptotic cell death and interferes with cell survival by upregulating the expression of proapoptotic genes while simultaneously downregulating the expression of antiapoptotic genes [75]. Resveratrol induces the redistribution of CD95 and other death receptors into lipid rafts, thus contributing to their sensitization to death receptor agonists [75]. It also causes growth arrest at the G1 and G1/S phases of the cell cycle by inducing the expression of the CDK inhibitors p21/CDKN1A and p27/CDKN1B [63].
In addition, resveratrol directly inhibits DNA synthesis by inhibiting ribonucleotide reductase and DNA polymerase [72, 76, 77]. Altogether, the antiproliferative activities of resveratrol involve the differential regulation of multiple cell-cycle targets in a cell-type-dependent manner [72, 75].

One possible mechanism for the protective activities of resveratrol is the downregulation of inflammatory responses [78]. This includes the inhibition of the synthesis and release of proinflammatory mediators, modifications of eicosanoid synthesis, and the inhibition of enzymes such as cyclooxygenase-1 (COX-1/PTGS1) or -2 (COX-2/PTGS2), which are responsible for the synthesis of proinflammatory mediators, through the inhibitory effect of resveratrol on transcription factors like NF-κB or activator protein-1 (AP-1) [78, 79]. Of note, constitutive COX-2 expression generally predicts aggressiveness of tumors, hence the use of nonsteroidal anti-inflammatory drugs that inhibit COX-2 in cancer treatment. However, cytoplasmic COX-2 can relocalize to the nucleus. This nuclear relocalization of COX-2 is induced by resveratrol, and exposure of resveratrol-treated cells to a specific COX-2 inhibitor blocked resveratrol-induced apoptosis, indicating that COX-2 displays proapoptotic activity in the nucleus, which may be associated with the formation of complexes of COX-2 and ERK1/2 mitogen-activated protein kinases. In mouse macrophages, resveratrol also displays antioxidant activity, decreasing the production of reactive oxygen species and reactive nitrogen species and inhibiting nitric oxide synthase (NOS)-2 and COX-2 synthesis as well as prostaglandin E2 production [80].

Furthermore, in 3T3-L1 adipocytes, resveratrol inhibits the production of the TNF-induced monocyte chemoattractant protein (MCP)-1/CCL2. MCP-1 plays an essential role in the early events of macrophage infiltration into adipose tissue, which results in chronic low-grade inflammation, a key feature of obesity and type 2 diabetes characterized by adipose tissue macrophage infiltration and abnormal cytokine production [81]. Finally, it is also well established that some anti-inflammatory effects of resveratrol arise from its capability to upregulate the activity of the histone deacetylase sirtuin 1 (SIRT1), which also has antitumor and anti-inflammatory capabilities [82]. Altogether, it is clear that the key antitumor properties of resveratrol [77–82] are linked to its anti-inflammatory effects.

The fact that resveratrol targets, directly or indirectly, so many different factors, exerts such a wide influence on cell homeostasis, and provides such a range of health benefits suggested that some of its effects should arise from its capability to modulate the activity of global regulators. Furthermore, the ability of each miRNA to potentially regulate the levels, and therefore the activity, of tens to hundreds of target genes strongly suggested that resveratrol should be able to modify the composition of miRNA populations. Indeed, it has recently been shown that resveratrol decreases the levels of several proinflammatory and/or oncogenic miRNAs and upregulates miRNAs with anti-inflammatory and/or antitumor potential [47, 83].

## 5. MiR-663 as a Mediator of Resveratrol Anti-Inflammatory Activity

Affymetrix microarrays and RNase-protection assays showed that resveratrol treatment of human THP-1 monocytic cells upregulated the expression of LOC284801 transcripts, which contain the sequence of pre-miR-663 and thus represent miR-663 primary transcripts.
MiRNA microarrays and RNase-protection assays accordingly confirmed these data [47]. Interestingly, in silico analysis using TargetScan (http://www.targetscan.org/) suggested that miR-663 may potentially target transcripts encoding factors implicated in (i) the mounting of the immune response, especially JunB, JunD, and FosB, which encode AP-1 factors known to activate many cytokine genes in partnership with NFAT factors [84], (ii) TLR signaling, such as the kinases RIPK1 and IRAK2, and (iii) the differentiation of monocytes, Th1 lymphocytes, and granulocytes.

An antisense miR-663 inhibitory RNA (663-I) proved capable of increasing global AP-1 activity in unchallenged THP-1 cells, showing that miR-663 indeed targets transcripts encoding AP-1 factors in these cells. These effects were in particular directed toward JunB and JunD transcripts [47]. In agreement with previous results [79], resveratrol blocked the surge of AP-1 activity that occurs following LPS challenge, which reflects the fact that JunB transcripts peak within the first hour, leading to the accumulation of JunB over the next few hours [85]. This inhibitory effect of resveratrol on AP-1 activity was partly impaired by 663-I, indicating that it arises at least in part from the upregulation of miR-663 by resveratrol [47]. Western blots showed that resveratrol impaired JunB neosynthesis, while still allowing the phosphorylation, that is, the activation, of JunB following LPS treatment to take place, at least to a certain extent. Given that AP-1 factors include c-Jun, JunB, JunD, FosB, Fra-1, and Fra-2, as well as the Jun dimerization partners JDP1 and JDP2 and the closely related ATF2, LRF1/ATF3, and B-ATF, so that potentially about 18 different dimeric combinations may be formed, the capability of resveratrol to specifically target a subset of AP-1 dimers through the upregulation of miR-663 might have profound effects on the transcriptional activity of promoters for binding to which different AP-1 factors can compete. Due to the many roles of AP-1 factors in both inflammation and cancer [86, 87], the specific targeting of genes encoding a subset of AP-1 factors, by changing the composition of AP-1 dimers on key promoters, may possibly explain some of the multiple anti-inflammatory and anticancer properties of resveratrol.

Of note, miR-155 had been shown to be under the control of AP-1 activity in activated B cells [88]. Accordingly, miR-663 reduced the upregulation of miR-155 by LPS [42], which may be due to miR-663 targeting of transcripts encoding JunB and JunD and also possibly FosB and KSRP, an RNA binding protein implicated in the LPS-induced maturation of miR-155 from its primary transcript BIC [89]. This is of primary importance, for miR-155 upregulation is the hallmark of the inflammatory response following LPS treatment of macrophages/monocytes [42]. Resveratrol also dramatically impaired the upregulation of miR-155 by LPS, an effect partly inhibited by 663-I [47]. Altogether, these results indicate that the anti-inflammatory properties of resveratrol arise, at least in part, from its upregulation of miR-663 and its downregulating effects on miR-155, and that miR-663 might possibly qualify as an anti-inflammatory miRNA.
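Target predictions of the kind obtained above with TargetScan rest on seed matching: a candidate site in a 3′-UTR is complementary to the miRNA "seed" (nucleotides 2–8). The deliberately simplified sketch below illustrates this core idea only; real TargetScan additionally scores site type, conservation, and sequence context. The miR-663 sequence is quoted from miRBase (readers should verify), and the example UTR is invented for demonstration.

```python
# Simplified sketch of seed-based miRNA target-site scanning (illustrative only;
# tools such as TargetScan also weigh site type, conservation, and context).

RNA_COMPLEMENT = str.maketrans("AUGC", "UACG")

def seed_match_sites(mirna: str, utr: str) -> list[int]:
    """Return 0-based positions in `utr` (RNA alphabet) that match the
    reverse complement of the miRNA seed (nucleotides 2-8)."""
    seed = mirna[1:8]                            # nucleotides 2-8 of the miRNA
    site = seed.translate(RNA_COMPLEMENT)[::-1]  # reverse complement of the seed
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna_663 = "AGGCGGGGCGCCGCGGGACCGC"     # mature hsa-miR-663 per miRBase (assumed)
mock_utr = "AAUUCCCCGCCGAAUUAGCGAUACCG"  # invented 3'-UTR with one 7-mer seed match
print(seed_match_sites(mirna_663, mock_utr))  # -> [4]
```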
MicroRNAs as Mediators of Resveratrol Anticancer Effects The results reported above also suggested that, owing to its targeting of AP-1 factors, known to play a role in tumorigenesis and cell invasion [86, 87], and to its downregulation of miR-155, whose levels increase in solid as well as liquid tumors [40], miR-663 may also provide resveratrol with some of its anticancer properties. Indeed, miR-663 was found to be downregulated in hormone-refractory prostate cancer cells, along with miR-146a and miR-146b [90], further supporting the hypothesis that this miRNA is a tumor-suppressor gene, one of whose functions is to keep the expression level of oncogenic miR-155 low [47, 48]. CRC is the third most common malignancy and the fourth leading cause of cancer mortality worldwide [91, 92]. Despite the increased use of screening strategies such as fecal occult blood testing, sigmoidoscopy, and colonoscopy, more than one-third of patients with colorectal cancer will ultimately develop metastatic disease [92]. The TGFβ signaling pathway, in turn, is one of the most commonly altered cellular signaling pathways in human cancers [93]. Among the three TGFβ isoforms expressed in mammalian epithelia (TGFβ1, TGFβ2, and TGFβ3), TGFβ1 is the most abundant and ubiquitously expressed. TGFβ signaling is initiated by the binding of TGFβ ligands to type II receptors (TGFβR2). Once bound by TGFβ, TGFβR2 recruits, phosphorylates, and thus activates the type I TGFβ receptor (TGFβR1). TGFβR1 then phosphorylates two transcriptional regulators, SMAD2 and SMAD3, which subsequently bind to SMAD4. This results in the nuclear translocation of SMAD complexes, allowing SMADs to interact with transcription factors controlling the expression of a multitude of TGFβ-responsive genes [94]. The expression of TGFβ1 in both tumor and plasma was found to be significantly higher in patients with metastatic colorectal cancer, and increasing colorectal tumor stage correlated with higher TGFβ1 expression in tumor tissues [95]. miRNA microarrays recently showed that resveratrol treatment of SW480 human colon cancer cells significantly increased the levels of 22 miRNAs while decreasing those of 26 others [83]. Among the miRNAs downregulated by resveratrol, miR-17, miR-21, miR-25, miR-92a-2, miR-103-1, and miR-103-2 have been shown to behave as onco-miRNAs, at least in certain contexts. Thus, genomic amplification and overexpression of miR-17-92 miRNAs are found in B-cell lymphomas as well as in breast and lung cancers [34, 54]. MiR-21 is overexpressed in several cancers, including CRCs and gliomas, as well as breast, gastric, prostate, pancreas, lung, thyroid, and cervical cancers [53–55]. MiR-17, miR-25, miR-26a, and miR-181a are also overexpressed in CRCs [34, 54]. In addition, several miRNAs, including miR-21, have been shown to promote metastasis by acting on multiple signaling pathways and targeting various proteins that are key players in this process [53].
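As an aside on the microarray screen mentioned above, lists such as "22 miRNAs up, 26 down" are typically obtained by combining a fold-change cutoff with a significance test across replicate arrays. The following Python sketch illustrates the principle on invented data; it is not the pipeline used in [83], and all intensities, effect sizes, and thresholds are placeholders.

```python
# Illustrative selection of differentially expressed miRNAs from
# replicate array intensities (log2 scale). Invented data; not the
# normalization or thresholds of the published screen [83].
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mirnas = ["miR-663", "miR-21", "miR-155", "miR-17"]
true_shift = np.array([[1.5], [-1.2], [-1.4], [0.05]])  # assumed effects

control = rng.normal(8.0, 0.2, size=(4, 3))             # 3 replicates each
treated = control + true_shift + rng.normal(0, 0.2, size=(4, 3))

for name, c, t in zip(mirnas, control, treated):
    log2_fc = t.mean() - c.mean()                       # log2 fold change
    p = stats.ttest_ind(t, c, equal_var=True).pvalue
    if abs(log2_fc) >= 1.0 and p < 0.05:                # common cutoffs
        print(f"{name}: log2FC = {log2_fc:+.2f}, p = {p:.4f} "
              f"({'up' if log2_fc > 0 else 'down'})")
```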
Furthermore, the lower metastatic propensity of SW480 cells as compared with SW620 human colon cancer cells, derived respectively from the primary tumor and a metastasis of the same patient [96], was associated with a lower level of expression of miR-103-1 and miR-103-2, two miRNAs that induce epithelial-to-mesenchymal transition by targeting Dicer1 transcripts [60]. In silico analysis using TargetScan showed that miRNAs downregulated by resveratrol in SW480 cells potentially target transcripts encoding known tumor-suppressor factors, such as the two antiproliferation factors PDCD4 and PTEN; the mismatch repair components MLH3, MSH2, and MSH3; DICER1, the RNase III that produces mature miRNAs from their immediate precursors in the cytoplasm; and several effectors and regulators of the TGFβ signaling pathway [83]. Indeed, resveratrol treatment of SW480 cells led to a greater accumulation of TGFβR1, TGFβR2, PDCD4, PTEN, and E-CADHERIN (a component of adherens junctions implicated in the maintenance of the epithelial phenotype) [83]. Of note, among the miRNAs upregulated by resveratrol, miR-663 was the only one to target TGFβ1 transcripts. Luciferase assays and Western blots showed that resveratrol downregulated TGFβ1 in both a miR-663-dependent and a miR-663-independent manner [83]. Resveratrol treatment also decreased the transcriptional activity of SMADs under TGFβ1 signaling, an effect seemingly independent of miR-663 [83]. Interestingly, it has recently been shown that GAM/ZNF512B, a vertebrate-specific developmental regulator first described in chicken [97], impairs the upregulation of miRNAs of the miR-17-92 cluster by TGFβ1 and that TGFβ1 in turn downregulates GAM, at least in part through the upregulation of miR-17-92 miRNAs [98]. The facts that GAM transcripts contain three consensus target sites for miR-663 and that GAM is sensitive to resveratrol treatment (Tili et al., unpublished results) raise the possibility of a gene regulatory network allowing miR-663 to relieve the repressive activity of GAM on the TGFβ1 signaling pathway when TGFβ1 works as a tumor suppressor, that is, at the early stages of tumorigenesis, but no longer when this pathway starts to favor tumorigenesis and metastasis, that is, in advanced stages of cancer. It is important to emphasize that TGFβ1 has been shown to enhance the maturation of oncogenic miR-21 through the binding of SMAD3 to miR-21 primary RNAs [99] and also to increase the expression of miR-155, which is under the control of TGFβ/SMAD activity [100]. Thus, by targeting AP-1 factors as well as TGFβ1 and possibly SMAD3, miR-663 might inhibit two of the pathways that upregulate miR-155 expression. The TGFβ signaling pathway presents multiple levels of regulation: sequestration of ligands into inactive precursor forms, ligand traps, decoy receptors, and inhibitory SMADs, not to mention SMAD-independent pathways and their interactions with many other critical signaling pathways that also play a role in cancer. It is thus not surprising that the short (307-nt) 3′-UTR of TGFβ1 transcripts contains a potential consensus target site for only 28 miRNAs. Of note, the TGFβ1 3′-UTR contains two target sites for two of these miRNAs and only one target site for 25 of the others.
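Site counts of this kind are obtained by scanning the 3′-UTR for matches to the miRNA "seed" (nucleotides 2–8). The following minimal Python sketch illustrates such a scan; the sequences are placeholders rather than the real TGFβ1 3′-UTR or miR-663, and real predictors such as TargetScan additionally score site context and conservation.

```python
# Minimal seed-match scan illustrating how miRNA target sites in a
# 3'-UTR are counted. Sequences below are placeholders, NOT the real
# TGFbeta1 3'-UTR or miR-663.

def revcomp(seq: str) -> str:
    """Reverse complement of an RNA sequence."""
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[nt] for nt in reversed(seq))

def seed_sites(mirna: str, utr: str) -> list[int]:
    """Start positions of 7mer matches to the miRNA seed (nt 2-8)."""
    seed = mirna[1:8]          # nucleotides 2-8 of the miRNA (5'->3')
    site = revcomp(seed)       # UTR sequence that base-pairs with the seed
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna = "ACCCGCGCUUCCCGCCCGGC"                              # placeholder
utr = "AAGCGGGCGGAAGCGCGGGUUUAGCGGGCGGAAUCCGAAGCGGGCGGAA"   # placeholder

positions = seed_sites(mirna, utr)
print(f"{len(positions)} seed match(es) at position(s) {positions}")
```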
Therefore, the fact that miR-663 may potentially target 5 different sites in the TGFβ1 3′-UTR suggests that this miRNA could represent a critical TGFβ1 regulator, one that may be called into action in emergency situations, such as when cells begin to proliferate uncontrollably or when a stronger immune response is required. The multiplicity of miR-663 target sites in the TGFβ1 3′-UTR further suggests that the effects of this miRNA might be both dose- and context-dependent, so that the effects of resveratrol on the TGFβ1 signaling pathway might also be context-dependent. Finally, it is probable that, depending on the cellular context, resveratrol might either increase the level of TGFβ signaling (by inhibiting miRNAs targeting its main effectors) when this is beneficial to the organism, that is, when it works to maintain the integrity of epithelia and impair cell proliferation, or in contrast decrease TGFβ1 signaling (by decreasing its production through the upregulation of miR-663) when TGFβ1 starts to favor epithelial-to-mesenchymal transition and metastasis. For example, the targeting of both TGFβ1 and SMAD3 transcripts might allow resveratrol to impair the TGFβ1-induced, SMAD3-dependent promotion of cell motility and invasiveness in advanced stages of gastric cancer [101, 102], or in later stages of human CRC, where SMAD2 and SMAD3 phosphorylated at both linker and COOH-terminal regions transmit the malignant TGFβ1 signal [103]. ## 7. Conclusions It is notable that striking phenotypes are often driven by small changes in the cellular concentration of key factors. For example, in the B-cell compartment, miR-150 curtails the activity of the c-Myb transcription factor in a dose-dependent fashion over a narrow range of miRNA and c-Myb concentrations [104]. Thus, even slight effects of resveratrol on a handful of key miRNAs might well prove critical to its anti-inflammatory, anticancer, and antimetastatic properties. In addition, the fact that miR-663, miR-21, miR-155, and TGFβ1 have all been implicated in the regulation of cell proliferation, tumor onset and development, metastasis formation, and innate immunity strongly suggests that the capability of resveratrol to act simultaneously as an antitumor, antimetastatic, antiproliferative, and anti-inflammatory agent most probably arises from its effects on the expression of a small set of critical endogenous miRNAs with the ability to impact the cell proteome globally. Finally, miRNAs hold promise as biomarkers for different stages of cancer, for both diagnosis and prognosis. Furthermore, the discovery that resveratrol can modulate the levels of miRNAs targeting proinflammatory and/or protumor factors opens the possibility of optimizing resveratrol treatments by manipulating in parallel the expression levels of a few critical miRNAs. For example, from the experiments reported above, it is becoming clear that the use of resveratrol should be especially beneficial in cancers in which the TGFβ pathway is implicated. Of course, resveratrol use would have to be carefully matched to cancer stage, given that TGFβ has two faces, that is, anti- and prometastatic. As a last remark, it should be noted that while the antitumor potential of resveratrol has been documented primarily in human cell culture systems, evidence that resveratrol can inhibit carcinogenesis at several organ sites has emerged from cancer prevention and therapy studies in laboratory animal models [68].
Given that miR-663 has so far been found only in primates, the reports by Tili et al. [47, 83] come as a warning that animal studies may not always accurately predict the molecular effects of resveratrol in humans, especially where miRNAs are concerned. --- *Source: 102431-2011-08-10.xml*
# Nephrotoxicity Evaluation on Cisplatin Combined with 5-HT3 Receptor Antagonists: A Retrospective Study **Authors:** Wen Kou; Hongyan Qin; Shahbaz Hanif; Xinan Wu **Journal:** BioMed Research International (2018) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2018/1024324 --- ## Abstract Objective. The 5-HT3 receptor antagonist ondansetron has been reported to have a nephrotoxic effect when combined with cisplatin in mice; however, little evidence exists regarding its nephrotoxic effects in patients. The aim of the present study was to investigate whether 5-HT3 receptor antagonists enhance or aggravate the incidence of cisplatin-induced nephrotoxicity in patients. Methods. We retrospectively reviewed 600 tumor patients who were treated with cisplatin (⩾60 mg/m2) as first-time chemotherapy combined with a 5-HT3 receptor antagonist (ondansetron, tropisetron, or ramosetron; 200 cases per antagonist) between January 2010 and December 2015. Cisplatin dosing, baseline creatinine clearance, and other potential risk factors for nephrotoxicity, such as patient age, sex, PS score, and weight, were evaluated in a multivariable model. Results. The incidence of Grade ⩾ 2 serum creatinine elevation in the cisplatin + ondansetron group was significantly higher than in the cisplatin + tropisetron group (P=0.04), but no significant difference was found between the cisplatin + ondansetron and cisplatin + ramosetron groups (P=0.3). Cisplatin dosage and tumor type were also found to be independent risk factors for the development of nephrotoxicity. Conclusion. Higher cisplatin dosage and regular use of ondansetron combined with cisplatin are more likely to increase the incidence of nephrotoxicity; tropisetron showed a relatively mild effect on kidney function, suggesting that it is a preferable alternative during cisplatin chemotherapy. --- ## Body ## 1. Introduction Cisplatin is one of the most widely used platinum drugs in chemotherapy regimens for patients with lung cancer and other malignancies, such as ovarian, endometrial, bladder, head and neck, cervical, stomach, and prostate cancers, Hodgkin's and non-Hodgkin's lymphomas, multiple myeloma, melanoma, and mesothelioma [1]. The most severe adverse effect caused by cisplatin is nephrotoxicity. Cisplatin kidney injury is dose-, duration-, and frequency-dependent and has been reported to involve acute or chronic renal impairment in 28%–42% of patients treated with cisplatin [2–5]. Currently, the pathogenic mechanism of cisplatin-induced nephrotoxicity remains unclear. It is well known that cisplatin is mainly excreted into the urine during the first 24 h after administration and that the concentration of cisplatin in renal cells is much higher than that in plasma [6–8]. It has therefore been speculated that cisplatin damages the proximal tubule cells of the kidney and thus increases the serum creatinine level [9]. Some researchers have also demonstrated that certain transporters, such as OCT2 (organic cation transporter 2) and MATE1 (multidrug and toxin extrusion protein 1), may play an important role in the accumulation of cisplatin in renal proximal tubules [10–12]. In addition, it has been reported that cimetidine, an OCT2 inhibitor, can reduce the nephrotoxicity of cisplatin in wild-type mice and in Oct1/2 knockout mice [13, 14].
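The protective effect of an OCT2 inhibitor such as cimetidine can be rationalized with simple competitive-inhibition kinetics, in which the inhibitor raises the apparent Km of transporter-mediated cisplatin uptake. The sketch below illustrates this idea only; all parameter values are hypothetical, and it is not a pharmacokinetic model from the cited studies.

```python
# Michaelis-Menten uptake with competitive inhibition:
#     v = Vmax * S / (Km * (1 + I / Ki) + S)
# Parameter values are hypothetical and for illustration only.

def uptake_rate(s, i, vmax=100.0, km=10.0, ki=5.0):
    """Rate of transporter-mediated substrate uptake (e.g., cisplatin
    via OCT2) with a competitive inhibitor (e.g., cimetidine) at
    concentration i; same arbitrary units throughout."""
    return vmax * s / (km * (1.0 + i / ki) + s)

cisplatin = 2.0  # hypothetical substrate concentration
for inhibitor in (0.0, 5.0, 50.0):
    # Higher inhibitor concentration -> lower cisplatin uptake rate
    print(f"inhibitor = {inhibitor:5.1f} -> uptake = "
          f"{uptake_rate(cisplatin, inhibitor):5.1f}")
```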
Based on the above evidence, it is possible that OCT2- or MATE1-mediated cisplatin accumulation in renal proximal tubules contributes to the nephrotoxicity of cisplatin. 5-HT3 antagonists are widely used as antiemetic agents for patients receiving highly emetogenic cisplatin-based regimens [15]. 5-HT3 receptor antagonists such as ondansetron and tropisetron have been found to be substrates of OCT2 and MATE1 [16, 17]. Are these 5-HT3 antagonists risk factors for cisplatin-induced nephrotoxicity? Recently, one study addressed this question, finding that ondansetron significantly enhanced renal accumulation of cisplatin and cisplatin-induced nephrotoxicity in mice [18]. Until now, clinical research concerning the potential nephrotoxic effects of cisplatin combined with 5-HT3 receptor antagonists has been absent. This study aims to retrospectively compare nephrotoxicity in patients treated with cisplatin combined with the 5-HT3 receptor antagonists ondansetron, tropisetron, and ramosetron. The results will provide useful evidence on how to select a 5-HT3 receptor antagonist when patients are treated with cisplatin. ## 2. Patients and Methods ### 2.1. Patients A retrospective study was conducted in the First Hospital of Lanzhou University in July 2016 (the data were analyzed anonymously, so informed consent was not obtained). We examined the clinical data of patients (between January 2010 and December 2015) who received first-line chemotherapy including a high dose (⩾60 mg/m2) of cisplatin combined with a 5-HT3 receptor antagonist (ondansetron, tropisetron, or ramosetron; 200 cases per group). Patients were included if they had pathologically confirmed malignancies, an Eastern Cooperative Oncology Group performance status (PS) of 0 to 2, and a serum creatinine level in the normal range before chemotherapy. Patients were excluded if they had a history of cisplatin treatment or had more than one cancer. ### 2.2. Hydration and Treatment Methods Cisplatin (⩾60 mg/m2) was administered over 60 min on Day 1 in combination with other chemotherapeutic agents, mannitol, and 2000 mL of hydration. There was no difference between the groups with respect to the volume of hydration. Antiemetic prophylaxis consisted of a 5-HT3 receptor antagonist (ondansetron: 8 mg, tropisetron: 5 mg, or ramosetron: 0.3 mg). ### 2.3. Nephrotoxicity Evaluation Renal function was evaluated based on the serum creatinine (SCr, μmol/L) level, and changes in creatinine clearance (Ccr) were used as the measure of nephrotoxicity. In this study, nephrotoxicity arising from the cisplatin-containing regimen was defined as Grade 1 or Grade ⩾ 2 SCr elevation according to the Common Terminology Criteria for Adverse Events (CTCAE), version 4.0. We evaluated the association between the incidence of Grade ⩾ 2 SCr elevation during first-time chemotherapy and the type of 5-HT3 receptor antagonist. Ccr was estimated using the Cockcroft-Gault formula.
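For reference, the Cockcroft-Gault estimate takes the usual form Ccr (mL/min) = (140 − age) × weight (kg) / (72 × SCr (mg/dL)), multiplied by 0.85 for women; with SCr reported in μmol/L, as here, division by 88.4 converts to mg/dL. A minimal sketch follows, together with a rough baseline-relative grading helper that paraphrases the CTCAE v4.0 creatinine criteria (the full criteria also reference the upper limit of normal). The example patient values are assumptions, not study data.

```python
# Cockcroft-Gault creatinine clearance, as used in Section 2.3, with
# SCr taken in umol/L (as reported here) and converted to mg/dL
# (1 mg/dL = 88.4 umol/L).

def cockcroft_gault(age_years: float, weight_kg: float,
                    scr_umol_l: float, female: bool) -> float:
    scr_mg_dl = scr_umol_l / 88.4
    ccr = (140 - age_years) * weight_kg / (72 * scr_mg_dl)
    return 0.85 * ccr if female else ccr

def ctcae_grade(scr: float, baseline: float) -> int:
    """Baseline-relative CTCAE v4.0 grading of creatinine increase,
    paraphrased; the full criteria also reference the ULN."""
    ratio = scr / baseline
    if ratio > 3.0:
        return 3
    if ratio > 1.5:
        return 2
    if ratio > 1.0:
        return 1
    return 0

# Hypothetical patient near the cohort means (56 y, 63 kg); the SCr
# values are assumed for illustration, not taken from the study.
print(f"Ccr = {cockcroft_gault(56, 63, 80.0, female=False):.1f} mL/min")
print(f"CTCAE grade = {ctcae_grade(scr=170.0, baseline=80.0)}")
```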
### 2.4. Statistical Analysis To identify risk factors potentially associated with the use of the different 5-HT3 receptor antagonists and differences in the incidence of Grade ⩾ 2 nephrotoxicity among the three groups, all patients were divided into cisplatin + ondansetron, cisplatin + tropisetron, and cisplatin + ramosetron groups. Factors in the analysis included age (⩾70 vs. <70 years), PS (2 vs. 0 or 1), sex (male vs. female), weight (⩾70 vs. <70 kg), cisplatin dose, baseline Ccr (mL/min), Ccr after treatment (mL/min), and tumor type; ranges and mean values are shown for these factors (except sex, PS, and tumor type). Risk factors were evaluated in a multivariable analysis with a Poisson regression model, and the risk ratio with 95% confidence interval (CI) was calculated for the independent prognostic factors. To investigate the effect of the different 5-HT3 receptor antagonists on cisplatin-induced nephrotoxicity, an unpaired Student's t-test was used to compare the mean percentage change from baseline in Ccr between groups.
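A hedged sketch of these two analyses on synthetic data follows (statsmodels for the Poisson model, scipy for the unpaired t-test); all covariates, effect sizes, and group values are invented and are not the study data.

```python
# Sketch of the Section 2.4 analyses on synthetic data: a Poisson
# regression for nephrotoxicity risk factors and an unpaired
# Student's t-test on the percentage Ccr change between two groups.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
n = 600

age_ge70 = rng.integers(0, 2, n)      # 1 if age >= 70 (invented)
male = rng.integers(0, 2, n)          # 1 if male (invented)
dose_mg = rng.uniform(60, 120, n)     # cisplatin dose (invented)
# Synthetic count outcome with a built-in dose effect; a binary
# incidence outcome could be modeled similarly.
nephrotox = rng.poisson(np.exp(-3.0 + 0.02 * dose_mg))

X = sm.add_constant(np.column_stack([age_ge70, male, dose_mg]))
fit = sm.GLM(nephrotox, X, family=sm.families.Poisson()).fit()
print("risk ratios:", np.exp(fit.params))  # exponentiated coefficients
print("p values:   ", fit.pvalues)

# Unpaired Student's t-test on % Ccr change between two groups
pct_ond = rng.normal(-5.0, 10.0, 200)
pct_tro = rng.normal(-10.0, 10.0, 200)
t, p = stats.ttest_ind(pct_ond, pct_tro, equal_var=True)
print(f"t = {t:.2f}, p = {p:.4f}")
```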
## 3. Results A total of 600 patients who received chemotherapy including high-dose cisplatin were eligible for the analysis. The mean age was 56 years (range: 18–81); 375 patients were male and 225 were female; most patients had a good PS of 0-1. The most common malignancy was bronchial cancer (28.8%). The mean baseline Ccr was 99.5 mL/min (range: 45.3–205.8 mL/min) and, after the first cycle of chemotherapy, the mean Ccr was 86.3 mL/min (range: 42.2–181.6 mL/min); the mean cisplatin dose was 76.7 mg (range: 60–120 mg) (Table 1).

Table 1: Baseline characteristics of the 600 study patients.

| Characteristic | All patients (n=600) | Cisplatin + ondansetron (n=200) | Cisplatin + tropisetron (n=200) | Cisplatin + ramosetron (n=200) |
| --- | --- | --- | --- | --- |
| Sex: male | 375 (62.5%) | 108 (54%) | 114 (57%) | 153 (76.5%) |
| Sex: female | 225 (37.5%) | 92 (46%) | 86 (43%) | 47 (23.5%) |
| PS 0-1 | 554 (92.3%) | 185 (92.5%) | 189 (94.5%) | 180 (90%) |
| PS 2 | 46 (7.7%) | 15 (7.5%) | 11 (5.5%) | 20 (10%) |
| Weight (kg), mean | 63 | 62 | 65 | 61 |
| Weight (kg), range | 42–90 | 45–86 | 42–90 | 45–78 |
| Weight ≥70 kg | 535 (89.2%) | 174 (87%) | 185 (92.5%) | 176 (88.0%) |
| Weight <70 kg | 65 (10.8%) | 26 (13%) | 15 (7.5%) | 24 (12%) |
| Baseline Ccr (mL/min), mean | 99.5 | 97.1 | 104.7 | 96.6 |
| Baseline Ccr (mL/min), range | 45.3–205.8 | 51.3–158.6 | 45.3–205.8 | 54.9–196.0 |
| Ccr during treatment (mL/min), mean | 86.3 | 92.4 | 93.8 | 72.8 |
| Ccr during treatment (mL/min), range | 42.2–181.6 | 48.9–147.4 | 42.2–181.6 | 43.5–121.1 |
| Cisplatin dose (mg), mean | 76.7 | 74 | 76 | 80 |
| Cisplatin dose (mg), range | 60–120 | 60–120 | 60–110 | 60–100 |
| Age (years), mean | 56 | 54 | 56 | 58 |
| Age (years), range | 18–81 | 36–75 | 28–81 | 18–75 |
| Age ≥70 years | 55 (9.2%) | 16 (8.0%) | 21 (10.5%) | 18 (9.0%) |
| Age <70 years | 545 (90.8%) | 184 (92%) | 179 (89.5%) | 182 (91%) |
| Tumor type: esophageal | 87 (14.5%) | 10 (5.0%) | 15 (7.5%) | 62 (31%) |
| Tumor type: lung | 164 (27.3%) | 36 (18%) | 41 (20.5%) | 87 (43.5%) |
| Tumor type: gastric | 31 (5.2%) | 4 (2.0%) | 7 (3.5%) | 20 (10.0%) |
| Tumor type: cervical | 87 (14.5%) | 32 (16.0%) | 47 (23.5%) | 8 (4.0%) |
| Tumor type: endometrial | 27 (4.5%) | 20 (10.0%) | 4 (2.0%) | 3 (1.5%) |
| Tumor type: bronchial | 173 (28.8%) | 99 (49.5%) | 47 (23.5%) | 27 (13.5%) |
| Tumor type: others | 31 (5.2%) | 12 (6.0%) | 15 (7.5%) | 4 (2.0%) |

Cisplatin-induced nephrotoxicity was observed in 270 of the 600 enrolled patients, including 195 patients with Grade 1 nephrotoxicity and 75 patients with Grade ⩾ 2 nephrotoxicity. In the cisplatin + ondansetron, cisplatin + tropisetron, and cisplatin + ramosetron groups, 76, 66, and 68 patients developed Grade 1 nephrotoxicity and 28, 13, and 19 developed Grade ⩾ 2 nephrotoxicity, respectively. The incidence of SCr elevation was highest in the cisplatin + ondansetron group, and for Grade ⩾ 2 nephrotoxicity there was a trend toward a higher incidence in the cisplatin + ondansetron group than in the cisplatin + tropisetron group. To assess the contribution of each risk factor to cisplatin-induced nephrotoxicity, we performed a multivariable analysis; the results showed that cisplatin dosage was the factor most strongly associated with nephrotoxicity (Table 2).

Table 2: Risk ratios in the multivariable analysis of potential predisposing factors for cisplatin-induced nephrotoxicity (n = 270).

| Factor | Risk ratio | 95% CI | P value |
| --- | --- | --- | --- |
| Age (≥60 vs. ≤60) | 0.131 | 0.036–0.326 | 0.202 |
| Sex (male vs. female) | 0.057 | 1.79–3.83 | 0.474 |
| PS (2 vs. 0 or 1) | 0.119 | 2.77–5.07 | 0.542 |
| Weight | 0.287 | 0.015–11.22 | 0.051 |
| Baseline Cr | 0.11 | 4.74–7.69 | 0.632 |
| Cisplatin dose | 0.057 | 2.40–7.46 | <0.001 |
| Tumor type: esophageal cancer | 1.000 | | |
| Tumor type: lung cancer | 0.845 | 5.56–8.01 | 0.717 |
| Tumor type: gastric cancer | 1.316 | 4.45–10.46 | 0.400 |
| Tumor type: cervical cancer | 1.119 | 2.21–6.10 | 0.349 |
| Tumor type: endometrial cancer | 0.838 | 3.38–12.24 | 0.238 |
| Tumor type: bronchial cancer | 0.870 | 0.05–7.66 | 0.053 |

To investigate the effect of 5-HT3 receptor antagonists on cisplatin-induced nephrotoxicity, we evaluated the mean percentage change from baseline in Ccr during the first course of cisplatin chemotherapy and observed a trend toward a larger Ccr change in the group receiving cisplatin and ondansetron than in the other two groups, consistent with the possibility that ondansetron aggravates cisplatin-induced nephrotoxicity (Figure 1).

Figure 1: Box-and-whisker plot of the mean change in creatinine clearance during the first course of cisplatin chemotherapy for cisplatin combined with the different 5-HT3 receptor antagonists.

## 4. Discussion In this study, we analyzed the effects of three 5-HT3 receptor antagonists (ondansetron, ramosetron, and tropisetron) on cisplatin-induced nephrotoxicity in patients treated with cisplatin-containing chemotherapy; our results showed that the incidence of Grade ⩾ 2 serum creatinine elevation was highest in the cisplatin + ondansetron group. We also noted that cisplatin dosage and tumor type were independent risk factors for cisplatin-induced nephrotoxicity. Transporters play an important role in drug-drug interactions, which can lead to accumulation of the victim drug in the kidney and consequently cause adverse effects [19]. Cisplatin has been characterized as a substrate for OCTs and MATEs both in vivo and in vitro, while 5-HT3 receptor antagonists such as ondansetron and tropisetron can inhibit OCT and MATE function. 5-HT3 receptor antagonists are commonly used during cisplatin chemotherapy, and a recent study revealed that ondansetron enhances the renal accumulation of cisplatin and cisplatin-induced nephrotoxicity in mice. Moreover, although the different 5-HT3 receptor antagonists have similar chemical structures, they show notable differences in selectivity, potency, and pharmacokinetics [20]. To our knowledge, this is the first study assessing the interaction between cisplatin and 5-HT3 receptor antagonists in patients. We investigated 600 tumor patients who were treated with cisplatin (⩾60 mg/m2) as first-time chemotherapy combined with a 5-HT3 receptor antagonist. We found that the incidence of Grade ⩾ 2 serum creatinine elevation in the cisplatin + ondansetron group was significantly higher than in the cisplatin + tropisetron group, but this difference was not observed between the cisplatin + ondansetron and cisplatin + ramosetron groups or between the cisplatin + ramosetron and cisplatin + tropisetron groups.
When comparing the mean change of Ccr before and after chemotherapy, we observed a trend towards a larger reduction of Ccr in the ramosetron group than in the other two groups; although the incidence of Grade ⩾ 2 serum creatinine elevation in the ramosetron group was lower than in the ondansetron group, ramosetron still had more influence on renal function than tropisetron, suggesting that tropisetron is the preferable choice during cisplatin chemotherapy. We used a multivariable analysis to assess the potential risk factors for cisplatin-induced nephrotoxicity; the results showed that cisplatin dosage is an independent risk factor for the development of nephrotoxicity, consistent with another study demonstrating that a higher cumulative dose increases the risk for future kidney injury [21]. This finding also suggests that patients treated with high-dose cisplatin chemotherapy combined with ondansetron should be monitored closely for nephrotoxicity.

Our study has several limitations. First, this was a retrospective study limited to a single department, so observation bias cannot be excluded. Second, patients with malignant tumors were typically in poor physical condition (in Traditional Chinese Medicine, the "syndrome of deficiency of both yin and yang of the kidney"); their renal function may already have been abnormal even though their serum creatinine level was within the normal range. Third, the physiological status of tumor patients might influence the clearance of cisplatin and make them more susceptible to nephrotoxicity. Multicenter controlled studies with larger samples are still needed to clarify the associations between the dosing of cisplatin and ondansetron and nephrotoxicity.

---
*Source: 1024324-2018-05-30.xml*
# Interval Type-2 Recurrent Fuzzy Neural System for Nonlinear Systems Control Using Stable Simultaneous Perturbation Stochastic Approximation Algorithm

**Authors:** Ching-Hung Lee; Feng-Yu Chang
**Journal:** Mathematical Problems in Engineering (2011)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2011/102436

---

## Abstract

This paper proposes a new type of fuzzy neural system, denoted IT2RFNS-A (interval type-2 recurrent fuzzy neural system with asymmetric membership function), for nonlinear system identification and control. To enhance the performance and approximation ability, a triangular asymmetric fuzzy membership function (AFMF) and a TSK-type consequent part are adopted for the IT2RFNS-A. The gradient information of the IT2RFNS-A is not easy to obtain because of the asymmetric membership functions and interval-valued sets. The corresponding stable learning is derived via the simultaneous perturbation stochastic approximation (SPSA) algorithm, which guarantees the convergence and stability of the closed-loop system. Simulation and comparison results for chaotic system identification and the control of Chua's chaotic circuit illustrate the feasibility and effectiveness of the proposed method.

---

## Body

## 1. Introduction

In the past few decades, the fuzzy neural network (FNN), which offers the advantages of both neural networks and fuzzy systems, has been successfully applied to nonlinear system identification and control [1–4]. In FNNs, symmetric and fixed membership functions (MFs) are commonly adopted to simplify the design procedure; however, a large number of fuzzy rules is then needed to achieve a specified performance [5, 6]. The asymmetric fuzzy membership function (AFMF) has been proposed to solve this problem, and analyses have shown that it can effectively improve accuracy and reduce the number of fuzzy rules [7, 8].

Recently, type-2 fuzzy sets (T2 FSs) have been gaining popularity [9, 10]. T2 FSs are described by MFs characterized by more parameters than type-1 fuzzy sets (T1 FSs), and hence they provide more design degrees of freedom. Nevertheless, owing to the computational complexity of general T2 FSs, most work adopts only interval type-2 fuzzy sets (IT2 FSs) [10]; the computations associated with IT2 FSs are very manageable, which makes them quite practical [11]. In our previous research [12–15], we proposed a type-2 fuzzy neural network with asymmetric membership functions (T2FNN-A), which combines an interval type-2 fuzzy logic system with a neural network. Then, to improve the efficiency of the T2FNN-A, we proposed an interval type-2 fuzzy neural system with AFMFs (IT2FNS-A), which utilizes several enhancements such as a Takagi-Sugeno-Kang fuzzy logic system (TSK FLS) and an embedded type-reduction network layer [16]. These modifications not only improve the approximation accuracy of the T2FNN-A but also achieve the specified performance with fewer fuzzy rules. However, a major drawback of the IT2FNS-A is that its application domain is limited to static problems because of its feedforward network structure, so using the IT2FNS-A for dynamic problems is inefficient. Many results have shown that recurrent systems can learn and memorize information and thereby provide better performance [2, 17–20].
In this paper, we propose an interval type-2 recurrent fuzzy neural system with AFMFs (IT2RFNS-A), which provides memory elements and extends the ability of the IT2FNS-A to dynamic problems. In addition, since the feedback layer captures the dynamic response of the system, the approximation accuracy of the network is improved.

In training neural networks, the back-propagation (BP) algorithm is widely used. However, the differential information of the system is difficult to obtain because of the piecewise continuous property of the triangular AFMFs and the many adjustable parameters of the IT2RFNS-A. Herein, we adopt the simultaneous perturbation stochastic approximation (SPSA) algorithm to derive the update laws of the proposed IT2RFNS-A. The SPSA algorithm approximates the gradient from measurements of the objective function alone [21–23], which saves a great deal of computational effort. However, owing to the stochastic character of the SPSA algorithm, we cannot guarantee that every search step length is appropriate, which may lead to an invalid search. To overcome this, we employ Lyapunov stability analysis to derive the optimal learning step length guaranteeing the stability of the closed-loop system; efficient training is thereby also ensured.

The remainder of this paper is organized as follows. In Section 2, the construction of triangular AFMFs and the SPSA algorithm are introduced. Section 3 presents the proposed IT2RFNS-A and the stable SPSA learning algorithm. Simulation results on chaotic system identification and control of Chua's chaotic circuit are shown in Section 4. Finally, the conclusion is given.

## 2. Preliminaries

In this section, we briefly introduce some prerequisite material, including the interval type-2 asymmetric fuzzy membership function (IT2 AFMF) and the simultaneous perturbation stochastic approximation (SPSA) algorithm.

### 2.1. Interval Type-2 Asymmetric Fuzzy Membership Function (IT2 AFMF)

The interval type-2 membership function (IT2 MF) is a special case of a general type-2 membership function (T2 MF) that simplifies the computational effort significantly [10, 11]. In general, a symmetric MF is used for simplicity. However, a symmetric MF can capture either an uncertain mean or an uncertain variance, but not both, and tuning the parameters of MFs symmetrically may result in low precision. The AFMF can treat these problems.

In this paper, triangular fuzzy MFs are used to construct the interval type-2 asymmetric fuzzy membership functions (IT2 AFMFs) because of their low computational effort. Each IT2 AFMF consists of an upper and a lower MF, as shown in Figure 1. The upper MF is defined as

$$\bar{\mu}_{\tilde{F}}(x)=\begin{cases}\dfrac{x-\bar{a}}{\bar{b}-\bar{a}}, & \bar{a}\le x\le \bar{b},\\[4pt] \dfrac{\bar{c}-x}{\bar{c}-\bar{b}}, & \bar{b}\le x\le \bar{c},\\[4pt] 0, & \text{otherwise},\end{cases} \tag{2.1}$$

where $\bar{a}$, $\bar{b}$, and $\bar{c}$ denote the positions of the three corners, satisfying $\bar{a}\le\bar{b}\le\bar{c}$. Similarly, the lower MF is defined as

$$\underline{\mu}_{\tilde{F}}(x)=\begin{cases}\lambda\cdot\dfrac{x-\underline{a}}{\underline{b}-\underline{a}}, & \underline{a}\le x\le \underline{b},\\[4pt] \lambda\cdot\dfrac{\underline{c}-x}{\underline{c}-\underline{b}}, & \underline{b}\le x\le \underline{c},\\[4pt] 0, & \text{otherwise},\end{cases} \tag{2.2}$$

where $\underline{a}$, $\underline{b}$, and $\underline{c}$ denote the positions of the three corners, satisfying $\underline{a}\le\underline{b}\le\underline{c}$, and $\lambda$ denotes the magnitude of the lower MF, which should be limited between 0.5 and 1 to avoid an invalid result (a small firing strength). Following this description, the following restrictions must hold to avoid unreasonable IT2 AFMFs: $\bar{a}\le\bar{b}\le\bar{c}$, $\underline{a}\le\underline{b}\le\underline{c}$, $\bar{a}\le\underline{a}$, $\underline{c}\le\bar{c}$, and $\bar{a}+\lambda(\bar{b}-\bar{a})\le\underline{b}\le\bar{c}-\lambda(\bar{c}-\bar{b})$.

Figure 1: Construction of an interval type-2 asymmetric fuzzy membership function.
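As a concrete illustration of (2.1) and (2.2), the short Python sketch below evaluates the interval membership grade of a triangular IT2 AFMF; the function names and corner values are illustrative choices, not taken from the paper.

```python
# Minimal sketch of a triangular interval type-2 asymmetric MF,
# following (2.1)-(2.2); corner names mirror the text (a <= b <= c).
def upper_mf(x, a, b, c):
    """Upper triangular MF of (2.1)."""
    if a <= x <= b:
        return (x - a) / (b - a)
    if b < x <= c:
        return (c - x) / (c - b)
    return 0.0

def lower_mf(x, a, b, c, lam):
    """Lower triangular MF of (2.2), scaled by lam in [0.5, 1]."""
    if a <= x <= b:
        return lam * (x - a) / (b - a)
    if b < x <= c:
        return lam * (c - x) / (c - b)
    return 0.0

def it2_afmf(x, upper_corners, lower_corners, lam):
    """Interval membership grade [mu_lower, mu_upper]."""
    return (lower_mf(x, *lower_corners, lam), upper_mf(x, *upper_corners))

# Example: upper corners (0, 2, 5), lower corners (0.5, 2.5, 4), lam = 0.8.
print(it2_afmf(1.5, (0.0, 2.0, 5.0), (0.5, 2.5, 4.0), 0.8))  # (0.4, 0.75)
```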
### 2.2. Simultaneous Perturbation Stochastic Approximation (SPSA) Algorithm

This section briefly introduces the SPSA algorithm; a detailed description can be found in [23]. Consider an optimization problem with objective function $f(W)$. The SPSA algorithm updates $W$ by

$$W(k+1)=W(k)-a_k\,g(W(k)), \tag{2.3}$$

where $g(\cdot)$ is the estimated gradient of the objective function $f(\cdot)$ with respect to $W$, that is, $\partial f(W)/\partial W\approx g(W)$. Here $a_k$ denotes the learning step length, which decreases over iterations as $a_k=a/(k+A)^{\alpha}$, where $a$, $A$, and $\alpha$ are positive configuration coefficients [23]. The SPSA approach estimates the gradient $g(\cdot)$ as follows. Assume that the dimension of the parameter vector $W$ is $p$, and let $\Delta_k=[\Delta_{k1}\ \Delta_{k2}\ \cdots\ \Delta_{kp}]$ be a $p$-dimensional vector whose elements are mutually independent zero-mean random variables. Then the gradient estimate at the $k$th iteration is

$$g(W(k))=\frac{f(W(k)+c_k\Delta_k)-f(W(k))}{c_k}\,\bigl[\Delta_{k1}^{-1}\ \Delta_{k2}^{-1}\ \cdots\ \Delta_{kp}^{-1}\bigr]^{T}, \tag{2.4}$$

where $c_k$ is a gain sequence that also decreases, $c_k=c/(k+1)^{\gamma}$, with nonnegative configuration coefficients $c$ and $\gamma$ [23]. Note that all elements of $W$ are perturbed simultaneously, so only two measurements of the objective function are needed to estimate the gradient. In addition, $\Delta_k$ is usually drawn from a Bernoulli $\pm 1$ distribution with equal probability for each value.

In general, the gradient information of a neuro-fuzzy system is not easy to obtain because of the piecewise continuous property of the AFMFs and the large number of adjustable parameters. Herein, we adopt the SPSA algorithm to derive a stable learning scheme that guarantees the convergence and stability of the closed-loop system.
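The following Python sketch implements one SPSA iteration exactly as in (2.3) and (2.4) on a toy quadratic objective; the coefficient values are illustrative defaults, not ones prescribed by the paper.

```python
# Minimal sketch of one SPSA iteration per (2.3)-(2.4).
import numpy as np

rng = np.random.default_rng(0)

def spsa_step(f, w, k, a=0.2, A=10.0, alpha=0.602, c=0.1, gamma=0.101):
    a_k = a / (k + A) ** alpha                    # decreasing step length
    c_k = c / (k + 1) ** gamma                    # decreasing perturbation gain
    delta = rng.choice([-1.0, 1.0], size=w.size)  # Bernoulli +/-1 vector
    # Two objective measurements give the whole gradient estimate (2.4);
    # since delta_i is +/-1, dividing by delta equals multiplying by it.
    g = (f(w + c_k * delta) - f(w)) / (c_k * delta)
    return w - a_k * g                            # update (2.3)

w = np.array([2.0, -1.0])
for k in range(1, 500):
    w = spsa_step(lambda v: float(np.sum(v ** 2)), w, k)
print(w)  # drifts toward the minimizer [0, 0]
```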
## 3. Interval Type-2 Recurrent Fuzzy Neural System with Asymmetric Membership Function (IT2RFNS-A)

### 3.1. Fuzzy Reasoning of IT2RFNS-A

The proposed IT2RFNS-A realizes the fuzzy inference within the network structure. Assume that an IT2RFNS-A system has $M$ rules and $n$ inputs; the $j$th rule can be expressed as

$$R^j:\ \text{IF } u_{1j} \text{ is } \tilde{F}_{1j},\ \ldots,\ u_{nj} \text{ is } \tilde{F}_{nj},\ \text{THEN } Y_j=C_{j0}+C_{j1}x_1+C_{j2}x_2+\cdots+C_{jn}x_n, \tag{3.1}$$

where $u_{ij}$ is the input linguistic variable of the $j$th rule, $\tilde{F}_{ij}$ are interval type-2 antecedent fuzzy sets, $C_{ji}$ are consequent interval fuzzy sets, $x_i$ is the network input, and $Y_j$ is the output of the $j$th rule. Note that the input linguistic variable $u_{ij}$ contains the system input term and the past information introduced in Section 3.2. As in the above description of T2 FLSs, the membership grade is an interval-valued set consisting of the lower and upper membership grades, that is,

$$\mu_{\tilde{F}_{ij}}(u_{ij})=\bigl[\underline{\mu}_{\tilde{F}_{ij}}(u_{ij}),\ \bar{\mu}_{\tilde{F}_{ij}}(u_{ij})\bigr], \tag{3.2}$$

and the consequent part is

$$C_{ji}=[c_{ji}-s_{ji},\ c_{ji}+s_{ji}], \tag{3.3}$$

where $c_{ji}$ denotes the center of $C_{ji}$ and $s_{ji}$ its spread. Therefore, using the product t-norm, the firing strength associated with the $j$th rule is

$$F_j(u)=\bigl[\underline{f}_j(u),\ \bar{f}_j(u)\bigr], \tag{3.4}$$

where $\underline{f}_j(u)=\underline{\mu}_{\tilde{F}_{1j}}(u_{1j})\times\cdots\times\underline{\mu}_{\tilde{F}_{nj}}(u_{nj})$ and $\bar{f}_j(u)=\bar{\mu}_{\tilde{F}_{1j}}(u_{1j})\times\cdots\times\bar{\mu}_{\tilde{F}_{nj}}(u_{nj})$. Thus, the consequent part of the $j$th rule is

$$y_j^l=\Bigl(c_{j0}+\sum_{i=1}^{n}c_{ji}u_{ij}\Bigr)-\Bigl(s_{j0}+\sum_{i=1}^{n}s_{ji}|u_{ij}|\Bigr),\qquad y_j^r=\Bigl(c_{j0}+\sum_{i=1}^{n}c_{ji}u_{ij}\Bigr)+\Bigl(s_{j0}+\sum_{i=1}^{n}s_{ji}|u_{ij}|\Bigr). \tag{3.5}$$

By applying the extension principle [24], the output of the FLS is

$$Y_{TSK}(u)=[y_l,\ y_r]=\int_{y^1\in[y_1^l,y_1^r]}\cdots\int_{y^M\in[y_M^l,y_M^r]}\int_{f^1\in[\underline{f}_1,\bar{f}_1]}\cdots\int_{f^M\in[\underline{f}_M,\bar{f}_M]}1\Bigg/\frac{\sum_{j=1}^{M}f^j y^j}{\sum_{j=1}^{M}f^j}. \tag{3.6}$$

To compute $Y_{TSK}(u)$, we need to compute its two end points $y_l$ and $y_r$ by a type-reduction operation.
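Before turning to type reduction, the sketch below computes the per-rule quantities of (3.4) and (3.5): the interval firing strength under the product t-norm and the two TSK consequent end points. The input values and parameters are placeholders for illustration.

```python
# Minimal sketch of the interval firing strength (3.4) and the TSK
# consequent end points (3.5) for a single rule j.
import numpy as np

def rule_interval(u, lower_grades, upper_grades, c, s):
    """Return ((f_lower, f_upper), (y_l, y_r)) for one rule.

    u: antecedent inputs u_ij; lower/upper_grades: membership grades;
    c, s: consequent centers/spreads [c_j0, c_j1, ...], [s_j0, s_j1, ...].
    """
    f_lower = float(np.prod(lower_grades))   # product t-norm, lower bound
    f_upper = float(np.prod(upper_grades))   # product t-norm, upper bound
    center = c[0] + float(np.dot(c[1:], u))          # c_j0 + sum c_ji u_ij
    spread = s[0] + float(np.dot(s[1:], np.abs(u)))  # s_j0 + sum s_ji |u_ij|
    return (f_lower, f_upper), (center - spread, center + spread)

(fl, fu), (yl, yr) = rule_interval(
    u=np.array([0.4, -0.2]),
    lower_grades=[0.3, 0.5], upper_grades=[0.6, 0.9],
    c=np.array([1.0, 0.5, -0.3]), s=np.array([0.1, 0.05, 0.05]))
print(fl, fu, yl, yr)  # 0.15 0.54 1.13 1.39
```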
Karnik and Mendel developed an iterative algorithm, known as the KM algorithm, to compute these two end points [24, 25]. A type reducer combines all fired-rule output sets, just as a type-2 defuzzifier combines the type-1 rule output sets, which leads to a T1 FS called the type-reduced set. In the KM algorithm, the left-end and right-end points are represented as

$$y_l=\frac{\sum_{i=1}^{L}\bar{f}^i y^i+\sum_{i=L+1}^{M}\underline{f}^i y^i}{\sum_{i=1}^{L}\bar{f}^i+\sum_{i=L+1}^{M}\underline{f}^i},\qquad y_r=\frac{\sum_{i=1}^{R}\underline{f}^i y^i+\sum_{i=R+1}^{M}\bar{f}^i y^i}{\sum_{i=1}^{R}\underline{f}^i+\sum_{i=R+1}^{M}\bar{f}^i}. \tag{3.7}$$

The proper switch points $L$ and $R$ must then be found by an iterative procedure; further details can be found in [24, 25]. However, this iterative search for the switch points is time consuming. Hence, in the proposed IT2RFNS-A, a simple weighted-average method is used to approximate $L$ and $R$ so that the iterative procedure becomes unnecessary. That is, we calculate the left-most and right-most firing strengths by

$$f_j^l=\frac{\bar{\omega}_j^l \bar{f}_j+\underline{\omega}_j^l \underline{f}_j}{\bar{\omega}_j^l+\underline{\omega}_j^l},\qquad f_j^r=\frac{\bar{\omega}_j^r \bar{f}_j+\underline{\omega}_j^r \underline{f}_j}{\bar{\omega}_j^r+\underline{\omega}_j^r}, \tag{3.8}$$

where $\bar{\omega}_j^l$, $\underline{\omega}_j^l$, $\bar{\omega}_j^r$, and $\underline{\omega}_j^r$ are adjustable weights. We then obtain the left-end and right-end points of the output of the interval type-2 fuzzy inference system:

$$y_l=\frac{\sum_{j=1}^{M}f_j^l y_j^l}{\sum_{j=1}^{M}f_j^l},\qquad y_r=\frac{\sum_{j=1}^{M}f_j^r y_j^r}{\sum_{j=1}^{M}f_j^r}. \tag{3.9}$$

Note that this simplified type-reduction technique is adopted in layer 4 (the left-most and right-most layer) and in the feedback layer of the IT2RFNS-A, which reduces the computational effort of the type-reduction procedure. Finally, we defuzzify the type-reduced set to obtain a crisp output:

$$y(u)=\frac{y_l+y_r}{2}. \tag{3.10}$$
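A compact sketch of the weighted-average type reduction (3.8)-(3.9) and the defuzzification (3.10) follows; the array names and example values are illustrative.

```python
# Minimal sketch of the simplified type reduction (3.8)-(3.10) that
# replaces the iterative KM switch-point search with weighted averages.
import numpy as np

def simplified_type_reduction(f_lo, f_up, y_l, y_r,
                              w_l_lo, w_l_up, w_r_lo, w_r_up):
    """All arguments are length-M arrays (one entry per rule)."""
    f_l = (w_l_up * f_up + w_l_lo * f_lo) / (w_l_up + w_l_lo)   # (3.8)
    f_r = (w_r_up * f_up + w_r_lo * f_lo) / (w_r_up + w_r_lo)
    yl = np.sum(f_l * y_l) / np.sum(f_l)                        # (3.9)
    yr = np.sum(f_r * y_r) / np.sum(f_r)
    return 0.5 * (yl + yr)                                      # (3.10)

# Two-rule example with arbitrary firing intervals and consequents.
out = simplified_type_reduction(
    f_lo=np.array([0.15, 0.30]), f_up=np.array([0.54, 0.70]),
    y_l=np.array([1.13, -0.50]), y_r=np.array([1.39, 0.10]),
    w_l_lo=np.array([0.4, 0.4]), w_l_up=np.array([0.6, 0.6]),
    w_r_lo=np.array([0.3, 0.3]), w_r_up=np.array([0.7, 0.7]))
print(out)
```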
### 3.2. Network Structure of IT2RFNS-A

The proposed IT2RFNS-A consists of six feed-forward layers and one feedback layer. Layer 1 accepts the input variables. Layer 2 calculates the IT2 AFMF grades; the feedback layer embedded in layer 2 stores the past information. Layer 3 forms the fuzzy rule base. Layer 4 implements the simplified type-reduction scheme and is called the left-most and right-most layer. The TSK-type consequent part is implemented in layer 5, and layer 6 is the output layer.

Next, we describe the signal propagation and the operation function of the nodes in each layer. For convenience, the multi-input-single-output case is considered here. The schematic diagram of the proposed IT2RFNS-A is shown in Figure 2. In the following description, $O_i^{(l)}$ denotes the $i$th output node in layer $l$.

Figure 2: Diagram of the proposed IT2RFNS-A system.

Layer 1 (input layer): For the $i$th node of layer 1, the net input and output are

$$O_i^{(1)}=x_i, \tag{3.11}$$

where $x_i$ represents the $i$th input to the $i$th node. The nodes in this layer simply transmit the input directly to the next layer.

Layer 2 (membership layer): Each node in this layer evaluates a triangular IT2 AFMF, that is,

$$O_{ij}^{(2)}=\mu_{\tilde{F}_{ij}}\bigl(O_i^{(1)}+O_i^{(f)}\bigr)=\bigl[\underline{O}_{ij}^{(2)},\ \bar{O}_{ij}^{(2)}\bigr]^{T}=\bigl[\underline{\mu}_{\tilde{F}_{ij}}(O_i^{(1)}+O_i^{(f)}),\ \bar{\mu}_{\tilde{F}_{ij}}(O_i^{(1)}+O_i^{(f)})\bigr]^{T}, \tag{3.12}$$

where the subscript $ij$ indicates the $j$th term of the $i$th input and $\mu_{\tilde{F}_{ij}}$ is an IT2 AFMF as shown in Figure 1 and (3.2). Note that the output of layer 2 and the feedback weights are interval values. Following the above description and the results of [26], the type reduction is embedded in the network, and the output of the feedback layer is expressed as

$$O_i^{(f)}(k)=\frac{\underline{O}_{ij}^{(2)}(k-1)\,\underline{\theta}_{ij}+\bar{O}_{ij}^{(2)}(k-1)\,\bar{\theta}_{ij}}{\underline{\theta}_{ij}+\bar{\theta}_{ij}}, \tag{3.13}$$

where $\underline{\theta}_{ij}$ and $\bar{\theta}_{ij}$ denote the link weights of the feedback layer. Clearly, the input of this layer contains the memory terms $\underline{O}_{ij}^{(2)}(k-1)$ and $\bar{O}_{ij}^{(2)}(k-1)$, which store the past information of the network.

Layer 3 (rule layer): This layer computes the firing strength of each fuzzy rule. From (3.4), we obtain

$$\underline{f}_j=\underline{\mu}_{\tilde{F}_{1j}}(O_1^{(1)}+O_1^{(f)})\times\cdots\times\underline{\mu}_{\tilde{F}_{nj}}(O_n^{(1)}+O_n^{(f)}),\qquad \bar{f}_j=\bar{\mu}_{\tilde{F}_{1j}}(O_1^{(1)}+O_1^{(f)})\times\cdots\times\bar{\mu}_{\tilde{F}_{nj}}(O_n^{(1)}+O_n^{(f)}), \tag{3.14}$$

where $\underline{\mu}_{\tilde{F}_{ij}}(\cdot)$ and $\bar{\mu}_{\tilde{F}_{ij}}(\cdot)$ are the lower and upper membership grades. Therefore, the operation function of this layer is

$$O_j^{(3)}=\bigl[\underline{f}_j,\ \bar{f}_j\bigr]^{T}=\bigl[\underline{O}_j^{(3)},\ \bar{O}_j^{(3)}\bigr]^{T}=\Bigl[\prod_{i=1}^{n}\underline{O}_{ij}^{(2)},\ \prod_{i=1}^{n}\bar{O}_{ij}^{(2)}\Bigr]^{T}. \tag{3.15}$$

Layer 4 (left-most and right-most layer): Similar to the feedback layer, the type reduction is integrated into the network structure by calculating the left-most and right-most values, so that a complicated type-reduction method such as the KM algorithm can be reduced to

$$O_j^{(4)}=\bigl[O_{jl}^{(4)},\ O_{jr}^{(4)}\bigr]^{T}=\Bigl[\frac{\bar{\omega}_j^l\bar{O}_j^{(3)}+\underline{\omega}_j^l\underline{O}_j^{(3)}}{\bar{\omega}_j^l+\underline{\omega}_j^l},\ \frac{\bar{\omega}_j^r\bar{O}_j^{(3)}+\underline{\omega}_j^r\underline{O}_j^{(3)}}{\bar{\omega}_j^r+\underline{\omega}_j^r}\Bigr]^{T}, \tag{3.16}$$

where the link weights are $\underline{\omega}^l=[\underline{\omega}_1^l\ \cdots\ \underline{\omega}_M^l]^T$, $\bar{\omega}^l=[\bar{\omega}_1^l\ \cdots\ \bar{\omega}_M^l]^T$, $\underline{\omega}^r=[\underline{\omega}_1^r\ \cdots\ \underline{\omega}_M^r]^T$, and $\bar{\omega}^r=[\bar{\omega}_1^r\ \cdots\ \bar{\omega}_M^r]^T$. These link weights are adjusted by the proposed stable SPSA learning algorithm. Note that $\underline{\omega}_j^l<\bar{\omega}_j^l$ and $\underline{\omega}_j^r<\bar{\omega}_j^r$.

Layer 5 (TSK layer): From (3.5), the TSK-type consequent part is

$$T_j=\bigl[T_j^l,\ T_j^r\bigr]^{T}=\Bigl[\Bigl(c_{j0}+\sum_{i=1}^{n}c_{ji}x_i\Bigr)-\Bigl(s_{j0}+\sum_{i=1}^{n}s_{ji}|x_i|\Bigr),\ \Bigl(c_{j0}+\sum_{i=1}^{n}c_{ji}x_i\Bigr)+\Bigl(s_{j0}+\sum_{i=1}^{n}s_{ji}|x_i|\Bigr)\Bigr]^{T}. \tag{3.17}$$

The output of this layer is then

$$O^{(5)}=\bigl[O_l^{(5)},\ O_r^{(5)}\bigr]^{T}=\Bigl[\frac{\sum_{j=1}^{M}O_{jl}^{(4)}T_j^l}{\sum_{j=1}^{M}O_{jl}^{(4)}},\ \frac{\sum_{j=1}^{M}O_{jr}^{(4)}T_j^r}{\sum_{j=1}^{M}O_{jr}^{(4)}}\Bigr]^{T}. \tag{3.18}$$

Layer 6 (output layer): Layer 6 implements the defuzzification operation, so the crisp output is

$$O^{(6)}=\frac{O_l^{(5)}+O_r^{(5)}}{2}. \tag{3.19}$$

The design parameters of the IT2RFNS-A are thus $\bar{a}$, $\bar{b}$, $\bar{c}$, $\underline{a}$, $\underline{b}$, $\underline{c}$, $\lambda$, $\underline{\omega}_j^l$, $\bar{\omega}_j^l$, $\underline{\omega}_j^r$, $\bar{\omega}_j^r$, $c_{ji}$, and $s_{ji}$. These parameters are adjusted by the proposed stable SPSA learning algorithm, which guarantees the convergence of the IT2RFNS-A system.

### 3.3. Training of IT2RFNS-A by the Stable SPSA Algorithm

Consider the nonlinear control problem: our goal is to generate a proper control sequence $u(k)$ such that the system output $y(k)$ follows the desired trajectory $y_r(k)$, where $k$ is the discrete-time index. The IT2RFNS-A with the stable SPSA algorithm plays the role of the controller for the nonlinear plant; the adaptive control scheme is shown in Figure 3. For convenience, we consider the single-output case, and the tracking error is defined as

$$e(k)=y_r(k)-y(k). \tag{3.20}$$

We then define the objective function (error cost function) as

$$E(k)=\tfrac{1}{2}e^{2}(k)=\tfrac{1}{2}\bigl(y_r(k)-y(k)\bigr)^{2}. \tag{3.21}$$

The control objective is to generate the control signal $u(k)$ such that the tracking error approaches zero, that is, to minimize the objective function $E(k)$. From the well-known gradient descent method, the parameter update law can be written as

$$W(k+1)=W(k)+\Delta W(k)=W(k)+a_k\Bigl(-\frac{\partial E(k)}{\partial W}\Bigr), \tag{3.22}$$

where $W$ collects the tuning parameters of the IT2RFNS-A. Then,

$$\frac{\partial E(k)}{\partial W}=-e(k)\frac{\partial e(k)}{\partial W}. \tag{3.23}$$

Herein, we adopt the SPSA algorithm to reduce the computational complexity. The parameter update law can be represented as

$$W(k+1)=W(k)+a_k e(k)\,g(W(k)), \tag{3.24}$$

where

$$g(W(k))=\frac{e(W(k)+c_k\Delta_k)-e(W(k))}{c_k}\,\bigl[\Delta_{k1}^{-1}\ \Delta_{k2}^{-1}\ \cdots\ \Delta_{kp}^{-1}\bigr]^{T}, \tag{3.25}$$

and $e(W(k)+c_k\Delta_k)$ denotes the tracking error between the desired output and the system output produced by the IT2RFNS-A with perturbed tuning parameters. Note that (3.25) requires no system sensitivity or gradient functions, which simplifies the computational effort.

Figure 3: Adaptive control scheme for the nonlinear system.

Note that, in training neural networks, it may not be possible to update all estimated parameters with a single gradient approximation (3.25). We therefore partition the estimated parameters $W$ into several parts, that is, each kind of parameter has its own estimate (e.g., $W_{\bar{\theta}}$ for the link weights of the feedback layer), and the estimated parameters are updated separately by the SPSA algorithm.
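The sketch below assembles (3.24)-(3.25) into one controller update step. For brevity it uses the plain tracking error as the gradient signal and the optimal step length $a_k^{*}=1/|g(k)|^{2}$ derived later in Section 3.4, omitting the $(e+\Delta e)/e$ correction of Remark 3.1; `run_plant` is a hypothetical closure returning the tracking error for a given parameter vector.

```python
# Minimal sketch of one stable SPSA controller update, per (3.24)-(3.25),
# with the optimal step length (3.29); simplified for illustration.
import numpy as np

rng = np.random.default_rng(1)

def stable_spsa_control_step(run_plant, w, k, c=0.05, gamma=0.101):
    c_k = c / (k + 1) ** gamma
    delta = rng.choice([-1.0, 1.0], size=w.size)
    e = run_plant(w)                      # nominal tracking error
    e_pert = run_plant(w + c_k * delta)   # error with perturbed parameters
    g = (e_pert - e) / (c_k * delta)      # SPSA gradient estimate (3.25)
    gg = float(np.dot(g, g))
    if gg < 1e-12:                        # guard a degenerate perturbation
        return w
    a_k = 1.0 / gg                        # optimal step length (3.29)
    return w + a_k * e * g                # update law (3.24)
```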
Remark 3.1. As in the previous results of [1, 2, 4], the adaptive control laws can be obtained by multiplying by the system sensitivity $\partial y/\partial u$, that is,

$$\frac{\partial E(k)}{\partial W}=e(k)\frac{\partial\bigl(y_r(k)-y(k)\bigr)}{\partial W}=-e(k)\frac{\partial y(k)}{\partial W}=-e(k)\cdot\frac{\partial y}{\partial u}\cdot\frac{\partial u}{\partial W}, \tag{3.26}$$

where $u$ is the control input produced by the IT2RFNS-A, that is, $u=O^{(6)}$. Hence, the gradient of the IT2RFNS-A would have to be calculated, and an additional identifier (an FNN or another IT2RFNS-A) would have to be developed to estimate the unknown system sensitivity (details can be found in [1, 2]). In that case, the approximation accuracy must be guaranteed before training the IT2RFNS-A controller, and the computational effort is large. Therefore, the approximation $\partial E(k)/\partial W\cong -(e+\Delta e)\,\partial u/\partial W$ and the optimal learning step length are derived to enhance efficiency and guarantee stability, where $\Delta e$ denotes $\dot e$ in the continuous-time case and $e(k)-e(k-1)$ in the discrete-time case.

Remark 3.2. The proposed approach is also valid for nonlinear system identification. The series-parallel architecture shown in Figure 4 is adopted: the inputs of the IT2RFNS-A are $u$ and $y(k-1)$, and the IT2RFNS-A output is the estimated output $\hat{y}(k)$. The parameters of the IT2RFNS-A are tuned by the proposed stable SPSA algorithm, and the parameter update law is

$$W(k+1)=W(k)+a_k\bigl[y(k)-\hat{y}(k)\bigr]\cdot\frac{O^{(6)}(W(k)+c_k\Delta_k)-O^{(6)}(W(k))}{c_k}\,\bigl[\Delta_{k1}^{-1}\ \Delta_{k2}^{-1}\ \cdots\ \Delta_{kp}^{-1}\bigr]^{T}, \tag{3.27}$$

where $O^{(6)}(W(k)+c_k\Delta_k)$ denotes the output of the IT2RFNS-A with perturbed tuning parameters.

Figure 4: Series-parallel training architecture for system identification.

### 3.4. Stability Analysis

In this section, a convergence theorem for selecting an appropriate learning step length $a_k$ is introduced. The choice of the learning step length is very important for convergence: if a small value is chosen, convergence of the IT2RFNS-A is guaranteed but may be slow; if a large value is chosen, the system may become unstable. Hence, we employ the Lyapunov stability approach to obtain the condition for convergence and to find the optimal learning step length for the IT2RFNS-A.

Theorem 3.3. Let $a_k$ be the learning step length of the tuning parameters for the IT2RFNS-A controller. For the nonlinear control problem using the IT2RFNS-A (shown in Figure 3), the asymptotic convergence of the closed-loop system is guaranteed if the learning step length satisfies

$$0<a_k<\frac{2}{|g(k)|^{2}},\quad \forall k, \tag{3.28}$$

where $g(W(k))=-\dfrac{e+\Delta e}{e}\cdot\dfrac{O^{(6)}(W(k)+c_k\Delta_k)-O^{(6)}(W(k))}{c_k}\,\bigl[\Delta_{k1}^{-1}\ \cdots\ \Delta_{kp}^{-1}\bigr]^{T}$ is the gradient estimate from the SPSA approach. In addition, faster convergence is obtained with the optimal time-varying learning step length

$$a_k^{*}=\frac{1}{|g(k)|^{2}}. \tag{3.29}$$

Proof. First, define the discrete-time Lyapunov function

$$V(k)=E(k)=\tfrac{1}{2}e^{2}(k)=\tfrac{1}{2}\bigl(y_r(k)-y(k)\bigr)^{2}, \tag{3.30}$$

where $e(k)$ represents the tracking error. The change of the Lyapunov function is

$$\Delta V(k)=V(k+1)-V(k)=\tfrac{1}{2}\bigl(e^{2}(k+1)-e^{2}(k)\bigr). \tag{3.31}$$

According to the Lyapunov stability theorem, if the change of the positive definite Lyapunov function satisfies $\Delta V(k)<0$ for all $k$, then asymptotic stability is guaranteed [1, 2, 27]. Hence, our objective is to select the learning step length such that $\Delta V(k)<0$ for all $k$, which implies that $V(k)$ converges to zero as $k$ approaches infinity. By [1, 2, 28], the error difference can be represented as

$$\Delta e(k)=e(k+1)-e(k)\cong\Bigl[\frac{\partial e(k)}{\partial W}\Bigr]^{T}\Delta W, \tag{3.32}$$

where $\Delta W$ denotes the change of $W$. From (3.22) and (3.24), we obtain

$$\Delta W\equiv -a_k e(k)\frac{\partial e(k)}{\partial W}\cong a_k e(k)\,g(k). \tag{3.33}$$
Thus, (3.34)ΔV(k)=12[e2(k+1)-e2(k)]=12[e(k+1)-e(k)]⋅[e(k+1)+e(k)]=12Δe(k)⋅[2e(k)+Δe(k)]=Δe(k)⋅(e(k)+12Δe(k))=[∂e(k)∂W]Take(k)g(k)⋅{e(k)+12[∂e(k)∂W]Take(k)g(k)}=-ake2(k)|g(k)|2(1-12ak|g(k)|2). Note that P(k) is positive for all k>0, and let P(k)=ak|g(k)|2. Thus, (3.35)ΔV(k)=-e2(k)P(k)(1-12P(k)). Recall that the asymptotical stability of the IT2RFNS-A is guaranteed if ΔV(k)<0, for all k>0. Thus, (1-(1/2)P(k)) should be positive such that ΔV(k)<0, for all k. Therefore, we obtain the stability condition for ak: (3.36)0<ak<2|g(k)|2. The asymptotic stability is guaranteed if ak is chosen to satisfy (3.36). In addition, we would like to find a condition for ak that guarantees the fast convergence. From (3.34) and (3.35), we have (3.37)e2(k+1)=e2(k)-e2(k)P(k)(2-P(k))=e2(k)[1-2P(k)+P2(k)]=e2(k)[P(k)-1]2. The minimum of e2(k+1) is achieved when P(k)=1. Hence, the time-varying optimal learning step length is (3.38)ak*=1|g(k)|2.For the system identification problem, a similar convergence theorem can be obtained.Theorem 3.4. LetaIk be the learning step length of tuning parameters for the IT2RFNS-A. Consider the nonlinear identification problem by series-parallel architecture using IT2RFNS-A (shown in Figure 4), and the parameters update laws are shown in (3.27). The asymptotic convergence of the IT2RFNS-A system is guaranteed if the chosen learning step length is satisfied: (3.39)0<aIk<2|gI(k)|2,∀k, where gI(k)=(O(6)(W(k)+ckΔk)-O(6)(W(k))/ck)[Δk1-1Δk2-1⋯Δkp-1]T is the gradient estimation of IT2RFNS-A by SPSA approach. In addition, the faster convergence can be obtained by using the following optimal time-varying learning step length: (3.40)aIk*=1|gI(k)|2.Proof. Herein, we omitted it due to that the proof of Theorem3.4 is similar to the proof of Theorem 3.3. Only the express of the estimated gradient function g(k) is replaced by gI(k).Remark 3.5. As above description, the SPSA algorithm has the stochastic property in the estimation of gradient functionsg(k) and gI(k) due to the random values ck and Δk. Therefore, the stability conditions (3.28) and (3.39) are important for training. In addition, the time-varying optimal learning step lengths ak* and aIk*, shown in (3.29) and (3.40), have the ability of guaranteeing the high-speed convergence. This reduces the effect of random values. Simulation results in Section 4 demonstrate the performance of our approach. ## 3.1. Fuzzy Reasoning of IT2RFNS-A The proposed IT2RFNS-A realizes the fuzzy inference into the network structure. Assume that an IT2RFNS-A system hasM rules and n inputs, the jth rule can be expressed as(3.1)Rj:ifu1jisF̃1j,…,unjisF̃nj,thenYj=Cj0+Cj1x1+Cj2xj+⋯+Cjnxn, where uij is input linguistic variable of the jth rule, F̃ij are interval type-2 antecedent fuzzy sets, Cji are consequent interval fuzzy set, xi is the network input, and Yj is the output of the jth rule. Note that the input linguistic variable uij has the system input term and the past information which is introduced in Section 3.2. As the above description of T2 FLSs, the membership grade is an interval valued set which consists of the lower and upper membership grades, that is,(3.2)μF̃ij(uij)=[μ̲F̃ij(uij)μ̅F̃ij(uij)] and the consequent part is (3.3)Cji=[cji-sjicji+sji], where cji denotes the center of Cji and sji denotes the spread of Cji. Therefore, using the productt-norm, the firing strength associated with the jth rule is(3.4)Fj(u)=[f̲j(u)f¯j(u)], where f̲j(u)=μ̲F̃1j(u1j)×⋯×μ̲F̃nj(unj) and f¯j(u)=μ̅F̃1j(u1j)×⋯×μ̅F̃nj(unj). 
## 4. Simulation Results

Simulation results and comparisons, covering nonlinear system identification and nonlinear system control, are presented to verify the performance of the proposed approach.

### 4.1. Example 1: Nonlinear System Identification of a Chaotic System

Consider a nonlinear system described by

$$y(k)=f\big(y(k-1),\ldots,y(k-n);\ u(k-1),\ldots,u(k-m)\big),\tag{4.1}$$

where $u$ denotes the input, $y$ the output, $m$ and $n$ are positive integers, and $f:\Re^{n+m}\to\Re$ is an unknown function.
Our purpose is to identify the nonlinear system (4.1) using the IT2RFNS-A with the proposed stable SPSA algorithm. Following [2], the chaotic system is described by

$$y(k+1)=-P\,y^{2}(k)+Q\,y(k-1)+1.0,\tag{4.2}$$

where $P=1.4$ and $Q=0.3$, which produces the chaotic strange attractor shown in Figure 5(a). We first choose 1000 pairs of training data randomly from the system over the interval $[-1.5,\ 1.5]$, with the initial point set to $[y(1)\ y(0)]=[0.4\ 0.4]$; a data-generation sketch is given after this example. The IT2RFNS-A is then trained to approximate the chaotic system by the stable SPSA algorithm with the coefficients listed in Table 1. The phase plane after 100 training epochs is shown in Figure 5(b). To show the benefit of the optimal learning step length, the same simulation is also run without it, and we further compare against other networks: the fuzzy neural network (FNN) [1], the recurrent fuzzy neural network (RFNN) [2], and the interval type-2 fuzzy neural system with asymmetric membership functions (IT2FNS-A) [16]. For statistical analysis, the learning process is repeated over 10 independent runs. The descending trend of the mean square error (MSE) is shown in Figure 6 (solid black line: IT2RFNS-A with the optimal learning step length $a_{Ik}^{*}$; solid red line: IT2RFNS-A without it; dashed line: IT2FNS-A; dotted line: RFNN; dash-dotted line: FNN). The worst, average, and best MSE achieved by each network are listed in Table 2. The IT2RFNS-A with the optimal learning step length clearly achieves the best performance, and the comparison between using and not using the optimal step length shows a large improvement; Figure 6 confirms the better convergence behavior. We also count the number of epochs (out of 100) in which the MSE decreased, listed in Table 3; the result shows that the optimal learning step length ensures efficient training of the IT2RFNS-A.

Table 1: Configuration coefficients for Example 1 (system identification).

| Coefficient | FNN | RFNN | IT2FNS-A | IT2RFNS-A | IT2RFNS-A (optimal $a_{Ik}^{*}$) |
| --- | --- | --- | --- | --- | --- |
| $a$ | 0.1 | 0.1 | 0.01 | 0.01 | optimal learning step length (3.40) |
| $\alpha$ | 0.602 | 0.602 | 0.602 | 0.602 | |
| $A$ | 10 | 10 | 10 | 10 | |
| $c$ | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
| $\gamma$ | 0.101 | 0.101 | 0.101 | 0.101 | 0.101 |
| Rule number | 8 | 8 | 2 | 2 | 2 |
| Epochs | 100 | 100 | 100 | 100 | 100 |

Table 2: Comparison of MSE for Example 1.

| MSE | FNN | RFNN | IT2FNS-A | IT2RFNS-A | IT2RFNS-A (optimal $a_{Ik}^{*}$) |
| --- | --- | --- | --- | --- | --- |
| Worst | 0.0048 | 0.0017 | 6.9579 × 10⁻⁵ | 1.9464 × 10⁻⁵ | 5.0267 × 10⁻⁶ |
| Average | 0.0028 | 6.6921 × 10⁻⁴ | 3.4154 × 10⁻⁵ | 4.8760 × 10⁻⁶ | 1.3319 × 10⁻⁶ |
| Best | 0.0018 | 2.2731 × 10⁻⁴ | 1.1265 × 10⁻⁵ | 6.0900 × 10⁻⁸ | 1.0091 × 10⁻²⁷ |

Table 3: Number of epochs (out of 100) in which the MSE decreased, over 10 independent runs.

| | IT2RFNS-A | IT2RFNS-A with optimal $a_{Ik}^{*}$ |
| --- | --- | --- |
| Worst | 38 | 68 |
| Average | 36 | 89 |
| Best | 55 | 98 |

Figure 5: Phase-plane plots of (a) the chaotic system and (b) the output of the IT2RFNS-A trained with the optimal learning step length.

Figure 6: Comparison of MSE for Example 1.
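For concreteness, the following minimal Python sketch generates the Example 1 data. The names are illustrative, and it assumes one reading of the setup above, namely that the 1000 training pairs are drawn uniformly at random from $[-1.5,\ 1.5]$, while the trajectory from $[y(1)\ y(0)]=[0.4\ 0.4]$ is used for the phase-plane plot:

```python
import numpy as np

def chaotic_step(y_k, y_km1, P=1.4, Q=0.3):
    """One iteration of the chaotic map (4.2)."""
    return -P * y_k ** 2 + Q * y_km1 + 1.0

def training_pairs(n=1000, lo=-1.5, hi=1.5, seed=0):
    """n randomly chosen input/target pairs over [lo, hi] (an assumption)."""
    rng = np.random.default_rng(seed)
    y_k = rng.uniform(lo, hi, n)
    y_km1 = rng.uniform(lo, hi, n)
    inputs = np.column_stack([y_k, y_km1])   # network inputs [y(k), y(k-1)]
    targets = chaotic_step(y_k, y_km1)       # desired outputs y(k+1)
    return inputs, targets

def trajectory(n_steps=1000, y1=0.4, y0=0.4):
    """Orbit from [y(1) y(0)] = [0.4 0.4] for the Figure 5 phase plane."""
    ys = [y0, y1]
    for _ in range(n_steps):
        ys.append(chaotic_step(ys[-1], ys[-2]))
    return np.asarray(ys)
```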
### 4.2. Nonlinear System Control of Chua's Chaotic Circuit

Consider an $n$th-order nonlinear dynamic system in the companion (controllability canonical) form

$$x^{(n)}=f(\mathbf{x})+g(\mathbf{x})\,u+d,\qquad y=x,\tag{4.3}$$

where $u$ and $y$ are the control input and the output of the nonlinear system, $f(\cdot)$ and $g(\cdot)$ are unknown nonlinear continuous functions, and $d$ is the external disturbance or system uncertainty. Our purpose is to develop an IT2RFNS-A controller that generates a control signal such that the system output $y$ follows a given reference trajectory $y_r$.

We consider the typical Chua's chaotic circuit, which consists of one inductor, two capacitors, one linear resistor, and one piecewise-linear resistor [29, 30]. This circuit has been shown to exhibit very rich nonlinear dynamics, such as chaos and bifurcations. The dynamic equations of Chua's circuit are

$$\frac{dv_{C1}}{dt}=\frac{1}{C_1}\Big(\frac{v_{C2}-v_{C1}}{R}-\gamma\Big),\qquad\frac{dv_{C2}}{dt}=\frac{1}{C_2}\Big(\frac{v_{C1}-v_{C2}}{R}-i_L\Big),\qquad\frac{di_L}{dt}=\frac{1}{L}\big(-v_{C1}-R_O\,i_L\big),\tag{4.4}$$

where the voltages $v_{C1}$, $v_{C2}$ and the current $i_L$ are the state variables, $R_O$ is a constant, and $\gamma$ denotes the nonlinear-resistor term, a function of the voltage across the two terminals of $C_1$, defined by the cubic function

$$\gamma=a\,v_{C1}+c\,v_{C1}^{3}\qquad(a>0,\ c<0).\tag{4.5}$$

According to [30], (4.4) can be transformed into the canonical form

$$\begin{bmatrix}\dot x_1\\\dot x_2\\\dot x_3\end{bmatrix}=\begin{bmatrix}0&1&0\\0&0&1\\0&0&0\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}+\begin{bmatrix}0\\0\\1\end{bmatrix}\big(f+g\,u+d\big),\qquad y=\begin{bmatrix}1&0&0\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix},\tag{4.6}$$

where $f=\frac{14}{1805}x_1-\frac{168}{9025}x_2+\frac{1}{38}x_3-\frac{2}{45}\big(\frac{28}{361}x_1+\frac{7}{95}x_2+x_3\big)^3$, $g=1$, and $d$ is the external disturbance, assumed to be a square wave with amplitude $\pm0.5$ and period $2\pi$. The objective is to track the reference trajectory $y_r=1.5\sin(t)$. We choose the initial state $x(0)=[0\ 0\ 1]^T$ and a sampling time of 0.01 s; a simulation sketch of this plant is given at the end of the example.

For comparison, the same simulations are performed with the IT2FNS-A and with the IT2RFNS-A without the optimal learning step length; the coefficients used are listed in Table 4. The resulting trajectories are shown in Figure 7: Figure 7(a) shows the phase-plane trajectory of the controlled system, and Figures 7(b)-7(d) show the state trajectories $x_1$, $x_2$, and $x_3$, respectively (solid line: actual system output; dashed line: reference trajectory). Figure 8 depicts the control effort of the IT2RFNS-A controller. These results show that the proposed method successfully controls a nonlinear dynamic system. The comparison over 10 independent runs is given in Table 5, from which the IT2RFNS-A clearly outperforms the other networks, showing its ability to cope with dynamic problems. Comparing the runs with and without the optimal learning step length in Table 5 again demonstrates that the optimal step length ensures efficient training of the IT2RFNS-A.

Table 4: Selected coefficients for Example 2.

| Coefficient | FNN | RFNN | IT2FNS-A | IT2RFNS-A | IT2RFNS-A (optimal $a_k^{*}$) |
| --- | --- | --- | --- | --- | --- |
| $a$ | 0.01 | 0.01 | 0.01 | 0.01 | optimal learning step length (3.29) |
| $\alpha$ | 0.602 | 0.602 | 0.602 | 0.602 | |
| $A$ | 200 | 200 | 200 | 200 | |
| $c$ | 1 | 1 | 1 | 1 | 1 |
| $\gamma$ | 0.101 | 0.101 | 0.101 | 0.101 | 0.101 |
| Rule number | 12 | 12 | 4 | 4 | 4 |
| Time (s) | 20 | 20 | 20 | 20 | 20 |

Table 5: Comparison of control performance (MSE) for Example 2.

| MSE | FNN | RFNN | IT2FNS-A | IT2RFNS-A | IT2RFNS-A (optimal $a_k^{*}$) |
| --- | --- | --- | --- | --- | --- |
| Worst | 2.6046 | 2.5321 | 0.3906 | 0.7264 | 0.0682 |
| Average | 1.2580 | 1.2005 | 0.3146 | 0.2768 | 0.0476 |
| Best | 0.7599 | 0.6532 | 0.2605 | 0.0713 | 0.0307 |

Figure 7: Simulation results of Example 2: (a) phase-plane trajectory under the IT2RFNS-A controller, (b) state $x_1$ and reference trajectory $y_r$, (c) state $x_2$ and reference trajectory $\dot y_r$, and (d) state $x_3$ and reference trajectory $\ddot y_r$.

Figure 8: Control effort of the IT2RFNS-A controller for Chua's chaotic circuit.
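As a rough illustration of this plant, the sketch below integrates the canonical form (4.6) with a forward-Euler step of 0.01 s. The `controller(t, x)` callback stands in for the trained IT2RFNS-A controller and is an assumption of this sketch, as is the use of simple Euler integration:

```python
import numpy as np

def chua_f(x):
    """Nonlinear term f of the canonical form (4.6)."""
    s = (28.0 / 361.0) * x[0] + (7.0 / 95.0) * x[1] + x[2]
    return ((14.0 / 1805.0) * x[0] - (168.0 / 9025.0) * x[1]
            + x[2] / 38.0 - (2.0 / 45.0) * s ** 3)

def simulate(controller, T=20.0, dt=0.01):
    """Forward-Euler integration of the canonical plant (4.6) with g = 1."""
    x = np.array([0.0, 0.0, 1.0])               # initial state x(0)
    log = []
    for k in range(int(T / dt)):
        t = k * dt
        d = 0.5 * np.sign(np.sin(t))            # square wave, period 2*pi
        u = controller(t, x)                    # e.g., the IT2RFNS-A output
        dx = np.array([x[1], x[2], chua_f(x) + u + d])
        x = x + dt * dx                         # Euler step
        log.append((t, x[0], 1.5 * np.sin(t)))  # output y and reference y_r
    return np.asarray(log)
```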
## 5. Conclusion

In this paper, we have proposed an interval type-2 recurrent fuzzy neural system with triangular asymmetric membership functions (IT2RFNS-A) and its training scheme using the proposed stable SPSA algorithm.
We adopt the Lyapunov theorem to derive the appropriate range of the SPSA learning step length, which guarantees the stability of the closed-loop system for nonlinear control and the convergence of the IT2RFNS-A for system identification. In addition, we obtain the optimal learning step length, which ensures efficient training of the IT2RFNS-A. The feasibility and effectiveness of the proposed method have been demonstrated by two illustrative examples, the identification of a chaotic system and the control of Chua's chaotic circuit. Both show the following advantages of the proposed approach: (a) the IT2RFNS-A achieves the specified performance, or better, with few rules; (b) the IT2RFNS-A captures the dynamic response of the system, that is, it can cope with dynamic problems; (c) with the proposed stable SPSA algorithm, gradient information is unnecessary, so a great deal of computational effort is saved; (d) for control problems, the proposed method avoids approximating the system sensitivity, which would introduce inaccuracy; (e) adopting the optimal learning step length improves the training performance of the IT2RFNS-A.

---

*Source: 102436-2011-06-21.xml*
102436-2011-06-21_102436-2011-06-21.md
57,622
Interval Type-2 Recurrent Fuzzy Neural System for Nonlinear Systems Control Using Stable Simultaneous Perturbation Stochastic Approximation Algorithm
Ching-Hung Lee; Feng-Yu Chang
Mathematical Problems in Engineering (2011)
Engineering & Technology
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2011/102436
102436-2011-06-21.xml
---

## Abstract

This paper proposes a new type of fuzzy neural system, denoted IT2RFNS-A (interval type-2 recurrent fuzzy neural system with asymmetric membership function), for nonlinear system identification and control. To enhance the performance and approximation ability, triangular asymmetric fuzzy membership functions (AFMFs) and a TSK-type consequent part are adopted for the IT2RFNS-A. The gradient information of the IT2RFNS-A is not easy to obtain because of the asymmetric membership functions and the interval-valued sets. The corresponding stable learning is derived with the simultaneous perturbation stochastic approximation (SPSA) algorithm, which guarantees the convergence and stability of the closed-loop system. Simulation and comparison results for chaotic system identification and the control of Chua's chaotic circuit illustrate the feasibility and effectiveness of the proposed method.

---

## Body

## 1. Introduction

In the past few decades, the fuzzy neural network (FNN), which combines the advantages of neural networks and fuzzy systems, has been successfully applied to nonlinear system identification and control [1–4]. In the FNN, symmetric and fixed membership functions (MFs) are commonly adopted to simplify the design procedure; however, a large number of fuzzy rules is then needed to achieve a specified performance [5, 6]. The asymmetric fuzzy membership function (AFMF) has been proposed to solve this problem; it has been shown that AFMFs can effectively improve accuracy and reduce the number of fuzzy rules [7, 8].

Recently, type-2 fuzzy sets (T2 FSs) have been gaining popularity [9, 10]. T2 FSs are described by MFs that are characterized by more parameters than type-1 fuzzy sets (T1 FSs); hence, T2 FSs provide more design degrees of freedom. Nevertheless, because of the computational complexity of general T2 FSs, most work adopts only interval type-2 fuzzy sets (IT2 FSs) [10]; the computations associated with IT2 FSs are very manageable, which makes them quite practical [11]. In our previous research [12–15], we proposed a type-2 fuzzy neural network with asymmetric membership functions (T2FNN-A), which combines an interval type-2 fuzzy logic system with a neural network. To improve the efficiency of the T2FNN-A, we then proposed an interval type-2 fuzzy neural system with AFMFs (IT2FNS-A), which utilizes several enhancements such as a Takagi-Sugeno-Kang fuzzy logic system (TSK FLS) and an embedded type-reduction network layer [16]. These modifications not only improve the approximation accuracy of the T2FNN-A but also achieve the specified performance with fewer fuzzy rules. A major drawback of the IT2FNS-A, however, is that its application domain is limited to static problems because of its feedforward network structure, so using the IT2FNS-A for dynamic problems is inefficient. Many results have shown that recurrent systems can learn and memorize information and thereby provide better performance [2, 17–20]. In this paper, we propose an interval type-2 recurrent fuzzy neural system with AFMFs (IT2RFNS-A), which provides memory elements and extends the ability of the IT2FNS-A to dynamic problems. In addition, since the feedback layer captures the dynamic response of the system, the approximation accuracy of the network is improved.

In training neural networks, the back-propagation (BP) algorithm is widely used.
However, the differential information of the system is difficult to obtain because of the piecewise-continuous nature of the triangular AFMFs and the many adjustable parameters of the IT2RFNS-A. Here, we adopt the simultaneous perturbation stochastic approximation (SPSA) algorithm to derive the update laws of the proposed IT2RFNS-A. The SPSA algorithm approximates the gradient from measurements of the objective function alone [21–23]; hence, a great deal of computational effort is saved. However, because of the stochastic character of the SPSA algorithm, we cannot guarantee that every search step length is appropriate, which may lead to invalid search steps. To overcome this, we employ Lyapunov stability analysis to derive the optimal learning step length, which guarantees the stability of the closed-loop system; efficient training is ensured as well.

The remainder of this paper is organized as follows. In Section 2, the construction of triangular AFMFs and the SPSA algorithm are introduced. Section 3 presents the proposed IT2RFNS-A and the stable SPSA learning algorithm. The simulation results on chaotic system identification and the control of Chua's chaotic circuit are shown in Section 4. Finally, the conclusion is given.

## 2. Preliminaries

In this section, we briefly introduce some prerequisite material, including the interval type-2 asymmetric fuzzy membership function (IT2 AFMF) and the simultaneous perturbation stochastic approximation (SPSA) algorithm.

### 2.1. Interval Type-2 Asymmetric Fuzzy Membership Function (IT2 AFMF)

The interval type-2 membership function (IT2 MF) is a special case of the general type-2 membership function (T2 MF) that simplifies the computational effort significantly [10, 11]. In general, symmetric MFs are used for simplicity; however, a symmetric MF can capture either an uncertain mean or an uncertain variance, but not both, and tuning the MF parameters symmetrically may result in low precision. AFMFs can treat these problems.

In this paper, triangular fuzzy MFs are used to construct the interval type-2 asymmetric fuzzy membership functions (IT2 AFMFs) because of their low computational cost. Each IT2 AFMF consists of an upper and a lower MF, as shown in Figure 1. The upper MF is defined as

$$\overline{\mu}_{\tilde F}(x)=\begin{cases}\dfrac{x-\overline{a}}{\overline{b}-\overline{a}}, & \overline{a}\le x\le\overline{b},\\[2mm]\dfrac{\overline{c}-x}{\overline{c}-\overline{b}}, & \overline{b}\le x\le\overline{c},\\[2mm]0, & \text{otherwise},\end{cases}\tag{2.1}$$

where $\overline{a}$, $\overline{b}$, and $\overline{c}$ denote the positions of the three corners and satisfy $\overline{a}\le\overline{b}\le\overline{c}$. Similarly, the lower MF is defined as

$$\underline{\mu}_{\tilde F}(x)=\begin{cases}\lambda\cdot\dfrac{x-\underline{a}}{\underline{b}-\underline{a}}, & \underline{a}\le x\le\underline{b},\\[2mm]\lambda\cdot\dfrac{\underline{c}-x}{\underline{c}-\underline{b}}, & \underline{b}\le x\le\underline{c},\\[2mm]0, & \text{otherwise},\end{cases}\tag{2.2}$$

where $\underline{a}$, $\underline{b}$, and $\underline{c}$ denote the positions of the three corners and satisfy $\underline{a}\le\underline{b}\le\underline{c}$, and $\lambda$ denotes the height of the lower MF, which should be limited to between 0.5 and 1 to avoid an invalid result (an overly small firing strength). In addition, the following restrictions must hold to avoid unreasonable IT2 AFMFs: $\overline{a}\le\overline{b}\le\overline{c}$, $\underline{a}\le\underline{b}\le\underline{c}$, $\overline{a}\le\underline{a}$, $\underline{c}\le\overline{c}$, and $\overline{a}+\lambda(\overline{b}-\overline{a})\le\underline{b}\le\overline{c}-\lambda(\overline{c}-\overline{b})$.

Figure 1: Construction of an interval type-2 asymmetric fuzzy membership function.

### 2.2. Simultaneous Perturbation Stochastic Approximation (SPSA) Algorithm

This section introduces the SPSA algorithm briefly; a detailed description can be found in [23]. Consider an optimization problem with objective function $f(W)$. The SPSA algorithm updates $W$ by

$$W(k+1)=W(k)-a_k\,g\big(W(k)\big),\tag{2.3}$$

where $g(\cdot)$ is the estimated gradient of the objective function $f(\cdot)$ with respect to $W$, that is, $\partial f(W)/\partial W\approx g(W)$.
Here, $a_k$ denotes the learning step length, which decreases over the iterations as $a_k=a/(k+A)^{\alpha}$, where $a$, $A$, and $\alpha$ are positive configuration coefficients [23]. The SPSA approach estimates the gradient $g(\cdot)$ as follows. Assume that the dimension of the parameter vector $W$ is $p$, and let $\Delta_k=[\Delta_{k1}\ \Delta_{k2}\ \cdots\ \Delta_{kp}]$ be a $p$-dimensional vector whose elements are mutually independent zero-mean random variables. The estimate of the gradient at the $k$th iteration is then

$$g\big(W(k)\big)=\frac{f\big(W(k)+c_k\Delta_k\big)-f\big(W(k)\big)}{c_k}\,\big[\Delta_{k1}^{-1}\ \Delta_{k2}^{-1}\ \cdots\ \Delta_{kp}^{-1}\big]^T,\tag{2.4}$$

where $c_k$ is a gain sequence that also decreases, as $c_k=c/(k+1)^{\gamma}$, with nonnegative configuration coefficients $c$ and $\gamma$ [23]. Note that all elements of $W$ are perturbed simultaneously, so only two measurements of the objective function are needed to estimate the gradient. $\Delta_k$ is usually drawn from a Bernoulli $\pm1$ distribution with equal probability for each value.

In general, the gradient information of a neural fuzzy system is not easy to obtain because of the piecewise-continuous nature of the AFMFs and the large number of adjustable parameters. Here, we adopt the SPSA algorithm to derive a stable learning scheme that guarantees the convergence and stability of the closed-loop system.
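As a minimal, self-contained illustration of (2.3)-(2.4), the following Python sketch minimizes a toy objective with SPSA. The gain values are illustrative defaults in the style of the configurations of Section 4, not the paper's implementation:

```python
import numpy as np

def spsa_minimize(f, W, iters=100, a=0.1, A=10.0, alpha=0.602,
                  c=0.1, gamma=0.101, seed=0):
    """Plain SPSA minimization loop implementing (2.3)-(2.4)."""
    rng = np.random.default_rng(seed)
    for k in range(iters):
        ak = a / (k + A) ** alpha              # step-length sequence a_k
        ck = c / (k + 1) ** gamma              # perturbation sequence c_k
        delta = rng.choice([-1.0, 1.0], size=W.shape)  # Bernoulli +/-1
        g = (f(W + ck * delta) - f(W)) / (ck * delta)  # estimate (2.4)
        W = W - ak * g                         # update law (2.3)
    return W

# Illustration: minimize a quadratic objective with minimum at w = 3
W_star = spsa_minimize(lambda w: float(np.sum((w - 3.0) ** 2)),
                       np.zeros(4))
```

Only two objective evaluations per iteration are needed, regardless of the dimension of `W`, which is the source of the computational savings noted above.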
## 3. Interval Type-2 Recurrent Fuzzy Neural System with Asymmetric Membership Function (IT2RFNS-A)

### 3.1. Fuzzy Reasoning of IT2RFNS-A

The proposed IT2RFNS-A realizes the fuzzy inference within the network structure. Assume that an IT2RFNS-A system has $M$ rules and $n$ inputs; the $j$th rule can then be expressed as

$$R^j:\ \text{if }u_{1j}\text{ is }\tilde F_{1j},\ \ldots,\ u_{nj}\text{ is }\tilde F_{nj},\ \text{then }Y_j=C_{j0}+C_{j1}x_1+C_{j2}x_2+\cdots+C_{jn}x_n,\tag{3.1}$$

where $u_{ij}$ is the input linguistic variable of the $j$th rule, the $\tilde F_{ij}$ are interval type-2 antecedent fuzzy sets, the $C_{ji}$ are consequent interval fuzzy sets, $x_i$ is the network input, and $Y_j$ is the output of the $j$th rule. Note that the input linguistic variable $u_{ij}$ contains both the system input term and the past information introduced in Section 3.2. As in the above description of T2 FLSs, the membership grade is an interval-valued set consisting of the lower and upper membership grades, that is,

$$\mu_{\tilde F_{ij}}(u_{ij})=\big[\underline{\mu}_{\tilde F_{ij}}(u_{ij}),\ \overline{\mu}_{\tilde F_{ij}}(u_{ij})\big],\tag{3.2}$$

and the consequent part is

$$C_{ji}=\big[c_{ji}-s_{ji},\ c_{ji}+s_{ji}\big],\tag{3.3}$$

where $c_{ji}$ denotes the center of $C_{ji}$ and $s_{ji}$ its spread. Using the product $t$-norm, the firing strength associated with the $j$th rule is

$$F_j(\mathbf{u})=\big[\underline{f}_j(\mathbf{u}),\ \overline{f}_j(\mathbf{u})\big],\tag{3.4}$$

where $\underline{f}_j(\mathbf{u})=\underline{\mu}_{\tilde F_{1j}}(u_{1j})\times\cdots\times\underline{\mu}_{\tilde F_{nj}}(u_{nj})$ and $\overline{f}_j(\mathbf{u})=\overline{\mu}_{\tilde F_{1j}}(u_{1j})\times\cdots\times\overline{\mu}_{\tilde F_{nj}}(u_{nj})$. Thus, the consequent part of the $j$th rule is

$$y_j^l=\Big(c_{j0}+\sum_{i=1}^{n}c_{ji}u_{ij}\Big)-\Big(s_{j0}+\sum_{i=1}^{n}s_{ji}\,|u_{ij}|\Big),\qquad y_j^r=\Big(c_{j0}+\sum_{i=1}^{n}c_{ji}u_{ij}\Big)+\Big(s_{j0}+\sum_{i=1}^{n}s_{ji}\,|u_{ij}|\Big).\tag{3.5}$$

By applying the extension principle [24], the output of the FLS is

$$Y_{TSK}(\mathbf{u})=[y_l,\ y_r]=\int_{y^1\in[y_1^l,\,y_1^r]}\cdots\int_{y^M\in[y_M^l,\,y_M^r]}\int_{f^1\in[\underline{f}_1,\,\overline{f}_1]}\cdots\int_{f^M\in[\underline{f}_M,\,\overline{f}_M]}1\Big/\ \frac{\sum_{j=1}^{M}f^j y^j}{\sum_{j=1}^{M}f^j}.\tag{3.6}$$

To compute $Y_{TSK}(\mathbf{u})$, its two end-points $y_l$ and $y_r$ must be found by a type-reduction operation. Karnik and Mendel developed an iterative algorithm, known as the KM algorithm, for computing these two end-points [24, 25]. A type reducer combines all fired-rule output sets, just as a type-1 defuzzifier combines type-1 rule output sets; this yields a T1 FS called the type-reduced set.
That is, we can calculate the left-most and right-most point of firing strength by(3.8)fjl=ω̅jlfj¯+ω̲jlf̲jω̅jl+ω̲jl,fjr=ω̅jrfj¯+ω̲jrf̲jω̅jr+ω̲jr, where ω̅jl, ω̲jl, ω̅jr, and ω̲jr are adjustable weights. Then, we can obtain the following left-end point and right-end point of the output of interval type-2 fuzzy inference system(3.9)yl=∑j=1Mfjlyjl∑j=1Mfjl,yr=∑j=1Mfjryjr∑j=1Mfjr. Note that the above simplified type-reduction technique is adopted in layer 4 (left-most and right-most layer) and feedback layer of IT2RFNS-A system. This simplifies the computational effort in type-reduction procedure. Finally, we defuzzify the type-reduced set to get a crisp output, that is, (3.10)y(u)=yl+yr2. ### 3.2. Network Structure of IT2RFNS-A The proposed IT2RFNS-A consists of six feed-forward layers and a feedback one. Layer 1 accepts input variables. Layer 2 is used to calculate the IT2 AFMF grade. The feedback layer embedded in Layer 2 is used to store the past information. Layer 3 forms the fuzzy rule base. Layer 4 introduces the simplified type-reduction scheme, called left-most and right-most layer. The TSK-type consequent part is implemented into Layer 5. Layer 6 is the output layer.Next, we indicate the signal propagation and the operation functions of the node in each layer. For convenience, the multi-input-single-output case is considered here. The schematic diagram of the proposed IT2RFNS-A is shown in Figure2. In the following description, Oi(l) denotes the ith output node in layer l.Figure 2 Diagram of the proposed IT2RFNS-A system.Layer 1: Input Layer For theith node of layer 1, the net input and output are represented as (3.11)Oi(1)=xi,wherexi represents the ith input to the ith node. The nodes in this layer only transmit input to the next layer directly.Layer 2: Membership Layer In this layer, each node performs a triangular IT2 AFMF, that is,(3.12)Oij(2)=μF̃ij(Oi(1)+Oi(f))=[O̲ij(2)O̅ij(2)]T=[μ̲F̃ij(Oi(1)+Oi(f))μ̅F̃ij(Oi(1)+Oi(f))]T, where the subscript ij indicates the jth term of the ith input and μF̃ij is an IT2 AFMF as shown in Figure 1 and (3.2). Note that the output of layer 2 and the feedback weight are interval values. According to the above description and the results of literature [26], the type reduction is embedded in the network. Herein, the output of feedback layer is expressed as (3.13)Oi(f)(k)=O̲ij(2)(k-1)⋅θ̲ij+O̅ij(2)(k-1)⋅θ̅ijθ̲ij+θ̅ij, where θ̲ij and θ̅ij denote the link weight of the feedback layer. Clearly, the input of this layer contains the memory terms O̲ij(2)(k-1) and O̅ij(2)(k-1) which store the past information of the network.Layer 3: Rule Layer This layer is used for computing firing strength of fuzzy rule. From (3.4), we obtain (3.14)f̲j=μ̲F̃1j(O1(1)+O1(f))×⋯×μ̲F̃nj(On(1)+On(f)),fj¯=μ̅F̃1j(O1(1)+O1(f))×⋯×μ̅F̃nj(On(1)+On(f)), where μ̲F̃ij(·) and μ̅F̃ij(·) are the lower and upper membership grades. Therefore, the operation function in this layer is (3.15)Oj(3)=[f̲jf¯j]T=[O̲j(3)O̅j(3)]T=[∏i=1nO̲ij(2)∏i=1nO̅ij(2)]T.Layer 4: Left-Most & Right-Most layer Similar to the feedback layer, the type reduction is integrated into the network structure by calculating their left-most and right-most values. That is, the complicated type-reduction method such as KM algorithm can be reduced as(3.16)Oj(4)=[Ojl(4)Ojr(4)]T=[ω̅jlO̅j(3)+ω̲jlO̲j(3)ω̅jl+ω̲jlω̅jrO̅j(3)+ω̲jrO̲j(3)ω̅jr+ω̲jr]T, where the link weights are ω̲l=[ω̲1l⋯ω̲Ml]T, ω̅l=[ω̅1l⋯ω̅Ml]T, ω̲r=[ω̲1r⋯ω̲Mr]T, and ω̅r=[ω̅1r⋯ω̅Mr]T. These link weights are adjusted by the proposed stable SPSA learning algorithm. 
Layer 5 (TSK Layer): from (3.5), the TSK-type consequent part is

$$T_j=\big[T_j^l,\ T_j^r\big]^T=\Bigg[\Big(c_{j0}+\sum_{i=1}^{n}c_{ji}x_i\Big)-\Big(s_{j0}+\sum_{i=1}^{n}s_{ji}\,|x_i|\Big),\ \Big(c_{j0}+\sum_{i=1}^{n}c_{ji}x_i\Big)+\Big(s_{j0}+\sum_{i=1}^{n}s_{ji}\,|x_i|\Big)\Bigg]^T.\tag{3.17}$$

The output of this layer is then

$$O^{(5)}=\big[O_l^{(5)},\ O_r^{(5)}\big]^T=\Bigg[\frac{\sum_{j=1}^{M}O_{jl}^{(4)}T_j^l}{\sum_{j=1}^{M}O_{jl}^{(4)}},\ \frac{\sum_{j=1}^{M}O_{jr}^{(4)}T_j^r}{\sum_{j=1}^{M}O_{jr}^{(4)}}\Bigg]^T.\tag{3.18}$$

Layer 6 (Output Layer): this layer implements the defuzzification operation, so the crisp output is

$$O^{(6)}=\frac{O_l^{(5)}+O_r^{(5)}}{2}.\tag{3.19}$$

The design parameters of the IT2RFNS-A are thus $\overline{a}$, $\overline{b}$, $\overline{c}$, $\underline{a}$, $\underline{b}$, $\underline{c}$, $\lambda$, $\underline{\omega}_{jl}$, $\overline{\omega}_{jl}$, $\underline{\omega}_{jr}$, $\overline{\omega}_{jr}$, $c_{ji}$, and $s_{ji}$, all adjusted by the proposed stable SPSA learning algorithm, which guarantees the convergence of the IT2RFNS-A.

### 3.3. Training of IT2RFNS-A by the Stable SPSA Algorithm

Consider the nonlinear control problem: our goal is to generate a proper control sequence $u(k)$ such that the system output $y(k)$ follows the desired trajectory $y_r(k)$, where $k$ is the discrete-time index. The IT2RFNS-A trained with the stable SPSA algorithm plays the role of the controller for the nonlinear plant; the adaptive control scheme is shown in Figure 3. For convenience, we consider the single-output case and define the tracking error as

$$e(k)=y_r(k)-y(k).\tag{3.20}$$

The objective function (error cost function) is defined as

$$E(k)=\tfrac{1}{2}e^{2}(k)=\tfrac{1}{2}\big(y_r(k)-y(k)\big)^{2}.\tag{3.21}$$

The control objective is to generate a control signal $u(k)$ that drives the tracking error to zero, that is, that minimizes the objective function $E(k)$. From the well-known gradient-descent method, the parameter update law can be written as

$$W(k+1)=W(k)+\Delta W(k)=W(k)+a_k\Big(-\frac{\partial E(k)}{\partial W}\Big),\tag{3.22}$$

where $W$ collects the tuning parameters of the IT2RFNS-A, and

$$\frac{\partial E(k)}{\partial W}=-e(k)\frac{\partial e(k)}{\partial W}.\tag{3.23}$$

Here we adopt the SPSA algorithm to reduce the computational complexity. The parameter update law becomes

$$W(k+1)=W(k)+a_k\,e(k)\,g\big(W(k)\big),\tag{3.24}$$

where

$$g\big(W(k)\big)=\frac{e\big(W(k)+c_k\Delta_k\big)-e\big(W(k)\big)}{c_k}\,\big[\Delta_{k1}^{-1}\ \Delta_{k2}^{-1}\ \cdots\ \Delta_{kp}^{-1}\big]^T\tag{3.25}$$

and $e\big(W(k)+c_k\Delta_k\big)$ denotes the tracking error between the desired output and the system output produced by the IT2RFNS-A with perturbed tuning parameters. Note that (3.25) requires neither the system sensitivity nor gradient functions, which simplifies the computational effort.

Figure 3: Adaptive control scheme for the nonlinear system.

Note that, in training neural networks, it may not be possible to update all parameters with a single gradient-approximation function (3.25). We therefore partition the parameters $W$ into several groups, one for each kind of parameter (e.g., $W_{\overline{\theta}}$ for the link weights of the feedback layer), and the groups are updated separately by the SPSA algorithm.
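To make the update law concrete, here is a minimal sketch of one SPSA controller update per (3.24)-(3.25). The `track_error` callback, which evaluates the closed-loop tracking error for a given parameter vector, is an assumption of this sketch, and the gain defaults mirror the configuration style of Section 4:

```python
import numpy as np

def spsa_control_update(W, k, track_error, a=0.01, A=200.0, alpha=0.602,
                        c=1.0, gamma=0.101, rng=None):
    """One SPSA update of the controller parameters, cf. (3.24)-(3.25)."""
    if rng is None:
        rng = np.random.default_rng()
    ak = a / (k + A) ** alpha                      # decaying step length a_k
    ck = c / (k + 1) ** gamma                      # perturbation gain c_k
    delta = rng.choice([-1.0, 1.0], size=W.shape)  # Bernoulli +/-1 vector
    e0 = track_error(W)                            # e(W(k))
    e1 = track_error(W + ck * delta)               # e(W(k) + ck * Delta_k)
    g = (e1 - e0) / (ck * delta)                   # gradient estimate (3.25)
    return W + ak * e0 * g                         # update law (3.24)
```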
Therefore, an approximation term of ∂E(k)/∂W≅-(e+Δe)∂u/∂W and the optimal learning step length are derived to enhance the efficiency and guarantee the stability, where Δe denotes ė and e(k)-e(k-1) for continuous and discrete time case, respectively.Remark 3.2. Our proposed approach is also valid for the nonlinear system identification. The series-parallel architecture shown in Figure4 is adopted. Hence, the inputs of IT2RFNS-A are u and y(k-1) and the IT2RFNS-A output is the estimated output ŷ(k). The parameters of IT2RFNS-A are tuned by the proposed stable SPSA algorithm. Thus, the parameter update law is (3.27)W(k+1)=W(k)+ak[y(k)-ŷ(k)]⋅O(6)(W(k)+ckΔk)-O(6)(W(k))ck[Δk1-1Δk2-1⋯Δkp-1]T, where O(6)(W(k)+ckΔk) denotes the output of the IT2RFNS-A with perturbed tuning parameters.Figure 4 Series-parallel training architecture for system identification. ### 3.4. Stability Analysis In this section, the convergence theorem for selecting appropriate learning step lengthak is introduced. The choice of learning step length is very important for convergence. If a small value is given for the learning step length, then the convergence of the IT2RFNS-A is guaranteed. However, the convergent speed may be slow. On the other hand, if a large value is selected, then the system may be unstable. Hence, we employ the Lyapunov stability approach to have the condition for convergence and find the optimal learning step length for IT2RFNS-A.Theorem 3.3. Letak be the learning step length of tuning parameters for the IT2RFNS-A controller. Consider the nonlinear control problem using IT2RFNS-A (shown in Figure 3), the asymptotic convergence of the closed-loop system is guaranteed if the learning step length is chosen satisfying (3.28)0<ak<2|g(k)|2,∀k, where g(W(k))=-(e+Δe/e)·(O(6)(W(k)+ckΔk)-O(6)(W(k))/ck)[Δk1-1Δk2-1⋯Δkp-1]T is the gradient estimation using SPSA approach. In addition, the faster convergence can be obtained by the following optimal time-varying learning step length (3.29)ak*=1|g(k)|2.Proof. First, we define the discrete-time Lyapunov function as follows:(3.30)V(k)=E(k)=12e2(k)=12(yr(k)-y(k))2, where e(k) represents the tracking error. Then, the change of the Lyapunov function is (3.31)ΔV(k)=V(k+1)-V(k)=12(e2(k+1)-e2(k)). According to the Lyapunov stability theorem, if the change of the positive definite Lyapunov function, denoted ΔV(k), satisfies the condition ΔV(k)<0, for all k, then the asymptotical stability is guaranteed [1, 2, 27]. Hence, our objective is to select the proper learning step length such that ΔV(k)<0, for all k. This implies that V(k) will converge to zero when k approaches to infinity. By [1, 2, 28], the error difference can be represented as (3.32)Δe(k)=e(k+1)-e(k)≅[∂e(k)∂W]TΔW, where ΔW denotes the change of W. From (3.22) and (3.24), we obtain (3.33)ΔW≡-ake(k)∂e(k)∂W≅ake(k)g(k). Thus, (3.34)ΔV(k)=12[e2(k+1)-e2(k)]=12[e(k+1)-e(k)]⋅[e(k+1)+e(k)]=12Δe(k)⋅[2e(k)+Δe(k)]=Δe(k)⋅(e(k)+12Δe(k))=[∂e(k)∂W]Take(k)g(k)⋅{e(k)+12[∂e(k)∂W]Take(k)g(k)}=-ake2(k)|g(k)|2(1-12ak|g(k)|2). Note that P(k) is positive for all k>0, and let P(k)=ak|g(k)|2. Thus, (3.35)ΔV(k)=-e2(k)P(k)(1-12P(k)). Recall that the asymptotical stability of the IT2RFNS-A is guaranteed if ΔV(k)<0, for all k>0. Thus, (1-(1/2)P(k)) should be positive such that ΔV(k)<0, for all k. Therefore, we obtain the stability condition for ak: (3.36)0<ak<2|g(k)|2. The asymptotic stability is guaranteed if ak is chosen to satisfy (3.36). In addition, we would like to find a condition for ak that guarantees the fast convergence. 
From (3.34) and (3.35), we have (3.37)e2(k+1)=e2(k)-e2(k)P(k)(2-P(k))=e2(k)[1-2P(k)+P2(k)]=e2(k)[P(k)-1]2. The minimum of e2(k+1) is achieved when P(k)=1. Hence, the time-varying optimal learning step length is (3.38)ak*=1|g(k)|2.For the system identification problem, a similar convergence theorem can be obtained.Theorem 3.4. LetaIk be the learning step length of tuning parameters for the IT2RFNS-A. Consider the nonlinear identification problem by series-parallel architecture using IT2RFNS-A (shown in Figure 4), and the parameters update laws are shown in (3.27). The asymptotic convergence of the IT2RFNS-A system is guaranteed if the chosen learning step length is satisfied: (3.39)0<aIk<2|gI(k)|2,∀k, where gI(k)=(O(6)(W(k)+ckΔk)-O(6)(W(k))/ck)[Δk1-1Δk2-1⋯Δkp-1]T is the gradient estimation of IT2RFNS-A by SPSA approach. In addition, the faster convergence can be obtained by using the following optimal time-varying learning step length: (3.40)aIk*=1|gI(k)|2.Proof. Herein, we omitted it due to that the proof of Theorem3.4 is similar to the proof of Theorem 3.3. Only the express of the estimated gradient function g(k) is replaced by gI(k).Remark 3.5. As above description, the SPSA algorithm has the stochastic property in the estimation of gradient functionsg(k) and gI(k) due to the random values ck and Δk. Therefore, the stability conditions (3.28) and (3.39) are important for training. In addition, the time-varying optimal learning step lengths ak* and aIk*, shown in (3.29) and (3.40), have the ability of guaranteeing the high-speed convergence. This reduces the effect of random values. Simulation results in Section 4 demonstrate the performance of our approach. ## 3.1. Fuzzy Reasoning of IT2RFNS-A The proposed IT2RFNS-A realizes the fuzzy inference into the network structure. Assume that an IT2RFNS-A system hasM rules and n inputs, the jth rule can be expressed as(3.1)Rj:ifu1jisF̃1j,…,unjisF̃nj,thenYj=Cj0+Cj1x1+Cj2xj+⋯+Cjnxn, where uij is input linguistic variable of the jth rule, F̃ij are interval type-2 antecedent fuzzy sets, Cji are consequent interval fuzzy set, xi is the network input, and Yj is the output of the jth rule. Note that the input linguistic variable uij has the system input term and the past information which is introduced in Section 3.2. As the above description of T2 FLSs, the membership grade is an interval valued set which consists of the lower and upper membership grades, that is,(3.2)μF̃ij(uij)=[μ̲F̃ij(uij)μ̅F̃ij(uij)] and the consequent part is (3.3)Cji=[cji-sjicji+sji], where cji denotes the center of Cji and sji denotes the spread of Cji. Therefore, using the productt-norm, the firing strength associated with the jth rule is(3.4)Fj(u)=[f̲j(u)f¯j(u)], where f̲j(u)=μ̲F̃1j(u1j)×⋯×μ̲F̃nj(unj) and f¯j(u)=μ̅F̃1j(u1j)×⋯×μ̅F̃nj(unj). Thus, the consequent part of the jth rule is(3.5)yjl=(cj0+∑i=1ncjiuij)-(sj0+∑i=1nsji|uij|),yjr=(cj0+∑i=1ncjiuij)+(sj0+∑i=1nsji|uij|). By applying the Extension principle [24], the output of the FLS is(3.6)YTSK(u)=[ylyr]=∫y1∈[y1ly1r]⋯∫yM∈[yMlyMr]∫f1∈[f̲1f1¯]⋯∫fM∈[f̲Mf¯M]1∑j=1Mfjyj/∑j=1Mfj. To compute YTSK(u), we need to compute its two end-points yl and yr by type-reduction operation. Karnik and Mendel developed an iterative algorithm which is known as KM algorithm to compute these two end-points [24, 25]. A type reducer combines all fired-rule output sets in some way, just like a type-2 defuzzifier combines the type-1 rule output sets, which leads to a T1 FS that is called a type-reduced set. 
## 3.4. Stability Analysis

In this section, the convergence theorem for selecting an appropriate learning step length $a_k$ is introduced. The choice of learning step length is very important for convergence: if a small value is chosen, convergence of the IT2RFNS-A is guaranteed, but it may be slow; if a large value is chosen, the system may become unstable. Hence, we employ the Lyapunov stability approach to obtain the condition for convergence and to find the optimal learning step length for the IT2RFNS-A.

Theorem 3.3. Let $a_k$ be the learning step length of the tuning parameters for the IT2RFNS-A controller. Consider the nonlinear control problem using the IT2RFNS-A (shown in Figure 3); the asymptotic convergence of the closed-loop system is guaranteed if the learning step length is chosen to satisfy

$$0<a_k<\frac{2}{|g(k)|^2},\quad\forall k,\tag{3.28}$$

where $g(W(k))=-\frac{e+\Delta e}{e}\cdot\frac{O^{(6)}(W(k)+c_k\Delta_k)-O^{(6)}(W(k))}{c_k}\left[\Delta_{k1}^{-1}\;\Delta_{k2}^{-1}\;\cdots\;\Delta_{kp}^{-1}\right]^T$ is the gradient estimate obtained by the SPSA approach. In addition, faster convergence can be obtained with the following optimal time-varying learning step length:

$$a_k^{*}=\frac{1}{|g(k)|^2}.\tag{3.29}$$

Proof. First, we define the discrete-time Lyapunov function

$$V(k)=E(k)=\tfrac{1}{2}e^2(k)=\tfrac{1}{2}\bigl(y_r(k)-y(k)\bigr)^2,\tag{3.30}$$

where $e(k)$ represents the tracking error. Then, the change of the Lyapunov function is

$$\Delta V(k)=V(k+1)-V(k)=\tfrac{1}{2}\bigl(e^2(k+1)-e^2(k)\bigr).\tag{3.31}$$

According to the Lyapunov stability theorem, if the change of the positive definite Lyapunov function satisfies $\Delta V(k)<0$ for all $k$, then asymptotic stability is guaranteed [1, 2, 27]. Hence, our objective is to select the learning step length such that $\Delta V(k)<0$ for all $k$; this implies that $V(k)$ converges to zero as $k$ approaches infinity. By [1, 2, 28], the error difference can be represented as

$$\Delta e(k)=e(k+1)-e(k)\cong\Bigl[\frac{\partial e(k)}{\partial W}\Bigr]^T\Delta W,\tag{3.32}$$

where $\Delta W$ denotes the change of $W$. From (3.22) and (3.24), we obtain

$$\Delta W\equiv-a_k e(k)\frac{\partial e(k)}{\partial W}\cong a_k e(k)g(k).\tag{3.33}$$

Thus,

$$\begin{aligned}\Delta V(k)&=\tfrac{1}{2}\bigl[e^2(k+1)-e^2(k)\bigr]=\tfrac{1}{2}\bigl[e(k+1)-e(k)\bigr]\bigl[e(k+1)+e(k)\bigr]\\&=\tfrac{1}{2}\Delta e(k)\bigl[2e(k)+\Delta e(k)\bigr]=\Delta e(k)\Bigl(e(k)+\tfrac{1}{2}\Delta e(k)\Bigr)\\&=\Bigl[\frac{\partial e(k)}{\partial W}\Bigr]^T a_k e(k)g(k)\cdot\Bigl\{e(k)+\tfrac{1}{2}\Bigl[\frac{\partial e(k)}{\partial W}\Bigr]^T a_k e(k)g(k)\Bigr\}\\&=-a_k e^2(k)|g(k)|^2\Bigl(1-\tfrac{1}{2}a_k|g(k)|^2\Bigr).\end{aligned}\tag{3.34}$$

Let $P(k)=a_k|g(k)|^2$, which is positive for all $k>0$. Thus,

$$\Delta V(k)=-e^2(k)\,P(k)\Bigl(1-\tfrac{1}{2}P(k)\Bigr).\tag{3.35}$$

Recall that the asymptotic stability of the IT2RFNS-A is guaranteed if $\Delta V(k)<0$ for all $k>0$; hence $\bigl(1-\tfrac{1}{2}P(k)\bigr)$ must be positive. Therefore, we obtain the stability condition for $a_k$:

$$0<a_k<\frac{2}{|g(k)|^2}.\tag{3.36}$$

Asymptotic stability is guaranteed if $a_k$ is chosen to satisfy (3.36). In addition, we seek a condition on $a_k$ that guarantees fast convergence. From (3.34) and (3.35), we have

$$e^2(k+1)=e^2(k)-e^2(k)P(k)\bigl(2-P(k)\bigr)=e^2(k)\bigl[1-2P(k)+P^2(k)\bigr]=e^2(k)\bigl[P(k)-1\bigr]^2.\tag{3.37}$$

The minimum of $e^2(k+1)$ is achieved when $P(k)=1$. Hence, the time-varying optimal learning step length is

$$a_k^{*}=\frac{1}{|g(k)|^2}.\tag{3.38}$$

For the system identification problem, a similar convergence theorem can be obtained.

Theorem 3.4. Let $a_{Ik}$ be the learning step length of the tuning parameters for the IT2RFNS-A. Consider the nonlinear identification problem using the series-parallel architecture with the IT2RFNS-A (shown in Figure 4), with the parameter update law of (3.27). The asymptotic convergence of the IT2RFNS-A system is guaranteed if the learning step length satisfies

$$0<a_{Ik}<\frac{2}{|g_I(k)|^2},\quad\forall k,\tag{3.39}$$

where $g_I(k)=\frac{O^{(6)}(W(k)+c_k\Delta_k)-O^{(6)}(W(k))}{c_k}\left[\Delta_{k1}^{-1}\;\Delta_{k2}^{-1}\;\cdots\;\Delta_{kp}^{-1}\right]^T$ is the gradient estimate of the IT2RFNS-A by the SPSA approach. In addition, faster convergence can be obtained with the following optimal time-varying learning step length:

$$a_{Ik}^{*}=\frac{1}{|g_I(k)|^2}.\tag{3.40}$$

Proof. The proof is omitted because it parallels that of Theorem 3.3; only the expression of the estimated gradient function $g(k)$ is replaced by $g_I(k)$.

Remark 3.5. As described above, the SPSA algorithm is stochastic in its estimation of the gradient functions $g(k)$ and $g_I(k)$, owing to the random values $c_k$ and $\Delta_k$. Therefore, the stability conditions (3.28) and (3.39) are important for training. In addition, the time-varying optimal learning step lengths $a_k^{*}$ and $a_{Ik}^{*}$ in (3.29) and (3.40) guarantee high-speed convergence, which reduces the effect of the random values. Simulation results in Section 4 demonstrate the performance of our approach.
## 4. Simulation Results

Simulation results and comparisons, including nonlinear system identification and nonlinear system control, are presented to verify the performance of the proposed approach.

### 4.1. Example 1: Nonlinear System Identification of a Chaotic System

Consider a nonlinear system described as

$$y(k)=f\bigl(y(k-1),\ldots,y(k-n);\,u(k-1),\ldots,u(k-m)\bigr),\tag{4.1}$$

where $u$ denotes the input, $y$ is the output, $m$ and $n$ are positive integers, and $f(\cdot)$ is the unknown function $f:\Re^{n+m}\to\Re$. Our purpose is to identify the nonlinear system (4.1) using the IT2RFNS-A with the proposed stable SPSA algorithm. Following [2], the chaotic system is described as

$$y(k+1)=-P\cdot y^2(k)+Q\cdot y(k-1)+1.0,\tag{4.2}$$

where $P=1.4$ and $Q=0.3$, which produces the chaotic strange attractor shown in Figure 5(a). We first choose 1000 training pairs randomly from the system over the interval [−1.5, 1.5], with the initial point $[y(1)\;y(0)]=[0.4\;0.4]$. Then, the IT2RFNS-A is used to approximate the chaotic system using the stable SPSA algorithm with the coefficients listed in Table 1. After training for 100 epochs, the phase plane of the identified chaotic system is shown in Figure 5(b). To show the superiority of the optimal learning step length, the same simulation is also run without it for comparison. In addition, we compare with other neural networks: the fuzzy neural network (FNN) [1], the recurrent fuzzy neural network (RFNN) [2], and the interval type-2 fuzzy logic system with asymmetric membership functions (IT2FNS-A) [16]. For statistical analysis, the learning process is repeated for 10 independent runs. The descending trend of the mean square error (MSE) is shown in Figure 6 (solid black line: IT2RFNS-A with the optimal learning step length $a_{Ik}^{*}$; solid red line: IT2RFNS-A without the optimal learning step length; dashed line: IT2FNS-A; dotted line: RFNN; dash-dotted line: FNN). The worst, average, and best MSE achieved by each neural network are shown in Table 2. The IT2RFNS-A with the optimal learning step length clearly achieves the best performance, and comparing the runs with and without the optimal learning step length shows a great improvement; from Figure 6, using the optimal learning step length yields better convergence. Hence, we count the number of epochs (out of 100) in which the MSE decreased and list the result in Table 3. The result evidently shows that the optimal learning step length guarantees efficient training of the IT2RFNS-A.

Table 1 Configuration coefficients for Example 1: system identification.

| Coefficient | FNN | RFNN | IT2FNS-A | IT2RFNS-A | IT2RFNS-A (optimal $a_{Ik}^{*}$) |
| --- | --- | --- | --- | --- | --- |
| a | 0.1 | 0.1 | 0.01 | 0.01 | optimal learning step length (3.40) |
| α | 0.602 | 0.602 | 0.602 | 0.602 | — |
| A | 10 | 10 | 10 | 10 | — |
| c | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
| γ | 0.101 | 0.101 | 0.101 | 0.101 | 0.101 |
| Rule number | 8 | 8 | 2 | 2 | 2 |
| Epochs | 100 | 100 | 100 | 100 | 100 |

Table 2 Comparison results in MSE for Example 1.

| MSE | FNN | RFNN | IT2FNS-A | IT2RFNS-A | IT2RFNS-A (optimal $a_{Ik}^{*}$) |
| --- | --- | --- | --- | --- | --- |
| Worst | 0.0048 | 0.0017 | 6.9579 × 10⁻⁵ | 1.9464 × 10⁻⁵ | 5.0267 × 10⁻⁶ |
| Average | 0.0028 | 6.6921 × 10⁻⁴ | 3.4154 × 10⁻⁵ | 4.8760 × 10⁻⁶ | 1.3319 × 10⁻⁶ |
| Best | 0.0018 | 2.2731 × 10⁻⁴ | 1.1265 × 10⁻⁵ | 6.0900 × 10⁻⁸ | 1.0091 × 10⁻²⁷ |

Table 3 Comparison of using the optimal learning step length (number of epochs, out of 100, in which the MSE decreases; 10 independent runs).

| Number of epochs | IT2RFNS-A | IT2RFNS-A with optimal $a_{Ik}^{*}$ |
| --- | --- | --- |
| Worst | 38 | 68 |
| Average | 36 | 89 |
| Best | 55 | 98 |

Figure 5 Phase plane plot of (a) the chaotic system and (b) the result of the IT2RFNS-A with the optimal learning step length.

Figure 6 Comparison results in MSE of Example 1.
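For concreteness, a minimal sketch of how training data for Example 1 can be produced from (4.2) follows. The paper samples 1000 pairs randomly over [−1.5, 1.5]; here, as a simplifying assumption, the map is simply iterated from the stated initial point.

```python
import numpy as np

def chaotic_series(n, y0=0.4, y1=0.4, P=1.4, Q=0.3):
    """Iterate the map (4.2): y(k+1) = -P*y(k)^2 + Q*y(k-1) + 1.0."""
    y = [y0, y1]
    for k in range(1, n + 1):
        y.append(-P * y[k] ** 2 + Q * y[k - 1] + 1.0)
    return np.array(y)

ys = chaotic_series(1000)
X = np.stack([ys[1:-1], ys[:-2]], axis=1)   # inputs (y(k), y(k-1))
t = ys[2:]                                  # targets y(k+1)
```

Plotting `ys[1:]` against `ys[:-1]` reproduces the strange attractor of Figure 5(a).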
### 4.2. Example 2: Nonlinear System Control of Chua's Chaotic Circuit

Consider an $n$th-order nonlinear dynamic system in the companion (controllability canonical) form

$$x^{(n)}=f(\mathbf{x})+g(\mathbf{x})u+d,\quad y=x,\tag{4.3}$$

where $u$ and $y$ are the control input and output of the nonlinear system, $f(\cdot)$ and $g(\cdot)$ are unknown nonlinear continuous functions, and $d$ is the external disturbance or system uncertainty. Our purpose is to develop an IT2RFNS-A controller that generates the proper control signal such that the system output $y$ follows a given reference trajectory $y_r$. In this paper, we consider the typical Chua's chaotic circuit, which consists of one inductor, two capacitors, one linear resistor, and one piecewise-linear resistor [29, 30]. It has been shown that this circuit exhibits very rich nonlinear dynamics such as chaos and bifurcations. The dynamic equations of Chua's circuit are

$$\frac{dv_{C1}}{dt}=\frac{1}{C_1}\Bigl(\frac{v_{C2}-v_{C1}}{R}-\gamma\Bigr),\quad\frac{dv_{C2}}{dt}=\frac{1}{C_2}\Bigl(\frac{v_{C1}-v_{C2}}{R}-i_L\Bigr),\quad\frac{di_L}{dt}=\frac{1}{L}\bigl(-v_{C1}-R_O\,i_L\bigr),\tag{4.4}$$

where the voltages $v_{C1}$, $v_{C2}$ and the current $i_L$ are the state variables, $R_O$ is a constant, and $\gamma$ denotes the nonlinear resistor, a function of the voltage across the two terminals of $C_1$. $\gamma$ is defined as a cubic function:

$$\gamma=a\,v_{C1}+c\,(v_{C1})^3\quad(a>0,\;c<0).\tag{4.5}$$

According to [30], (4.4) can be transformed into the canonical form

$$\begin{bmatrix}\dot x_1\\\dot x_2\\\dot x_3\end{bmatrix}=\begin{bmatrix}0&1&0\\0&0&1\\0&0&0\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}+\begin{bmatrix}0\\0\\1\end{bmatrix}\bigl(f+g\cdot u+d\bigr),\quad y=\begin{bmatrix}1&0&0\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix},\tag{4.6}$$

where $f=\frac{14}{1805}x_1-\frac{168}{9025}x_2+\frac{1}{38}x_3-\frac{2}{45}\bigl(\frac{28}{361}x_1+\frac{7}{95}x_2+x_3\bigr)^3$, $g=1$, and $d$ is the external disturbance, assumed to be a square wave with amplitude ±0.5 and period $2\pi$. Herein, the objective is to track the reference trajectory $y_r=1.5\sin(t)$. We choose the initial state $x(0)=[0\;0\;1]^T$ and a sampling time of 0.01 second. For comparison, the same simulations are run using the IT2FNS-A and the case without the optimal learning step length; the coefficients used are listed in Table 4. The simulated trajectories are shown in Figure 7: Figure 7(a) shows the controlled phase plane trajectory, and Figures 7(b)–7(d) show the state trajectories $x_1$, $x_2$, and $x_3$, respectively (solid line: actual system output; dashed line: reference trajectory). Figure 8 depicts the control effort of the IT2RFNS-A controller. We can observe that the proposed method is effective for controlling a nonlinear dynamic system. The comparison results over 10 independent runs are shown in Table 5, from which it is easy to see that the IT2RFNS-A performs better than the others; this shows that the IT2RFNS-A is able to cope with dynamic problems. Observing the performance of the case with the optimal learning step length in Table 5, we again demonstrate that adopting the optimal learning step length guarantees efficient training of the IT2RFNS-A.

Table 4 The selected coefficients for Example 2.

| Coefficient | FNN | RFNN | IT2FNS-A | IT2RFNS-A | IT2RFNS-A (optimal $a_k^{*}$) |
| --- | --- | --- | --- | --- | --- |
| a | 0.01 | 0.01 | 0.01 | 0.01 | optimal learning step length (3.29) |
| α | 0.602 | 0.602 | 0.602 | 0.602 | — |
| A | 200 | 200 | 200 | 200 | — |
| c | 1 | 1 | 1 | 1 | 1 |
| γ | 0.101 | 0.101 | 0.101 | 0.101 | 0.101 |
| Rule number | 12 | 12 | 4 | 4 | 4 |
| Time (sec.) | 20 | 20 | 20 | 20 | 20 |

Table 5 Comparison results of control performance for Example 2.

| MSE | FNN | RFNN | IT2FNS-A | IT2RFNS-A | IT2RFNS-A (optimal $a_k^{*}$) |
| --- | --- | --- | --- | --- | --- |
| Worst | 2.6046 | 2.5321 | 0.3906 | 0.7264 | 0.0682 |
| Average | 1.2580 | 1.2005 | 0.3146 | 0.2768 | 0.0476 |
| Best | 0.7599 | 0.6532 | 0.2605 | 0.0713 | 0.0307 |

Figure 7 Simulation results of Example 2: (a) phase plane trajectory of the IT2RFNS-A, (b) state $x_1$ and reference trajectory $y_r$, (c) state $x_2$ and reference trajectory $\dot y_r$, and (d) state $x_3$ and reference trajectory $\ddot y_r$.

Figure 8 Control effort of the IT2RFNS-A controller for Chua's chaotic circuit.
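To make the simulation setup concrete, here is a minimal sketch integrating the canonical form (4.6). The forward-Euler scheme at the stated 0.01 s sampling time and the `controller` placeholder (standing in for the trained IT2RFNS-A controller) are assumptions for illustration only.

```python
import numpy as np

def f_canonical(x):
    """Nonlinearity f of the canonical Chua form (4.6)."""
    x1, x2, x3 = x
    s = (28/361)*x1 + (7/95)*x2 + x3
    return (14/1805)*x1 - (168/9025)*x2 + (1/38)*x3 - (2/45)*s**3

def simulate(controller, T=20.0, dt=0.01):
    """Forward-Euler integration of (4.6) with g = 1 and the stated disturbance."""
    x = np.array([0.0, 0.0, 1.0])            # x(0) = [0 0 1]^T
    ys, refs = [], []
    for k in range(int(T / dt)):
        t = k * dt
        yr = 1.5 * np.sin(t)                 # reference trajectory
        d = 0.5 * np.sign(np.sin(t))         # square wave, amplitude ±0.5, period 2π
        u = controller(yr - x[0], x)         # placeholder for the IT2RFNS-A controller
        xdot = np.array([x[1], x[2], f_canonical(x) + u + d])
        x = x + dt * xdot
        ys.append(x[0]); refs.append(yr)
    return np.array(ys), np.array(refs)
```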
## 5. Conclusion

In this paper, we have proposed an interval type-2 recurrent fuzzy neural system with triangular asymmetric membership functions (IT2RFNS-A) and a training scheme based on the proposed stable SPSA algorithm. We adopt the Lyapunov theorem to derive the range of learning step lengths for which SPSA guarantees the stability of the closed-loop system in nonlinear control and the convergence of the IT2RFNS-A in system identification. In addition, we obtain the optimal learning step length that ensures efficient training of the IT2RFNS-A. The feasibility and effectiveness of the proposed method have been demonstrated by two illustrative examples, chaotic system identification and control of Chua's chaotic circuit, both of which show the following advantages of the proposed approach: (a) the IT2RFNS-A achieves the same or better performance with fewer rules; (b) the IT2RFNS-A captures the dynamic response of the system, that is, it is capable of coping with dynamic problems; (c) with the proposed stable SPSA algorithm, gradient information is unnecessary.
In other words, a great deal of computational effort is saved; (d) for control problems, the proposed method avoids approximating the system sensitivity, which can introduce inaccuracy; and (e) adopting the optimal learning step length improves the training performance of the IT2RFNS-A.

---
*Source: 102436-2011-06-21.xml*
# Mathematical Modeling of Multiple Quality Characteristics of a Laser Microdrilling Process Used in Al7075/SiCp Metal Matrix Composite Using Genetic Programming

**Authors:** Mohammed Yunus; Mohammad S. Alsoufi
**Journal:** Modelling and Simulation in Engineering (2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1024365

---

## Abstract

Conventional machining of metal matrix composites (MMCs) is difficult on account of their excellent characteristics compared with those of their source materials. Modern laser machining technology is a suitable noncontact method for machining advanced engineering materials owing to advantages such as higher productivity, ease of adaptation to automation, minimal heat affected zone (HAZ), green manufacturing, decreased processing costs, improved quality, reduced wastage, elimination of finishing operations, and so on. Applications include hole drilling in aircraft engine components such as combustion chambers, nozzle guide vanes, and turbine blades made of MMCs, which must meet the quality standards that determine their suitability for service use. This paper presents a mathematical model, derived with evolutionary computation methods using multivariate regression fitting, for the prediction of multiple quality characteristics (circularity, taper, spatter, and HAZ) in neodymium:yttrium aluminum garnet (Nd:YAG) laser drilling of aluminum matrix/silicon carbide particulate (Al/SiCp) MMCs using genetic programming. Laser drilling input factors such as laser power, pulse frequency, gas pressure, and pulse width are utilized. From a training dataset, different genetic models for the multiple quality characteristics were obtained with great accuracy during simulated evolution, providing more accurate predictions than empirical correlations.

---

## Body

## 1. Introduction

Metal matrix composites (MMCs) are substances that blend a tough metallic matrix with a hard ceramic reinforcement, possessing excellent features such as a high strength-to-weight ratio, high modulus, and wear and corrosion resistance [1]. MMCs are broadly used in the aerospace, automotive, electronics, and metal industries. MMCs comprise a metal as the base material (matrix) and hard ceramic particles such as B4C, SiC, and Al2O3 as reinforcement (long fibers, short whiskers, or particulates of irregular or spherical shape). The properties of MMCs are determined by the matrix, the reinforcement, and the interface between them [2]. They are difficult to machine owing to the presence of hard ceramic particles [3]. Most of the research on the machining of Al/SiCp MMCs has focused on turning and milling, whereas drilling has received less attention.

### 1.1. Laser Drilling

Laser drilling is growing rapidly as an alternative process for meeting the major demands of the aerospace, automobile, metal, and electrical industries, especially microhole drilling of components such as watches, turbine blades, fuselages, printed circuit boards, and so on [3]. Pulsed Nd:YAG laser microhole drilling has gained popularity in recent years as an indispensable tool for microhole drilling of components in technologically advanced industries.
Laser microhole drilling is successfully applied to both conductive and nonconductive materials, removing material by evaporation; the amount of molten material removed depends on the penetration of the laser energy delivered by a sequence of laser pulses at the same spot [4]. In the production industry, laser drilling offers several advantages: a high rate of machining; a noncontact method, hence no tool damage or tool wear; increased product quality; and less wastage. Highly reflective materials require a low amount of laser power at the short wavelength of Nd:YAG (compared with CO2 lasers) [5]. While the process possesses several advantages and is widely adopted by advanced industries, it also produces some defects such as taper, noncircular holes, HAZ, recast layer, and so on [6].

### 1.2. Genetic Programming

Genetic programming (GP) is a highly effective tool for predicting the behaviour of various processes and for building empirical models. GP follows the process of evolution in nature, Darwin's theory of "survival of the fittest", to find the best solution to an assigned problem. GP is the generalized form of the genetic algorithm (GA) and has been extensively studied [7–9]. In GP, the model is represented by terminals and functions. A well-known application of GP is symbolic regression, which determines a mathematical expression for a given set of variables and functions. The function is generated from Boolean operators (AND, OR, NOT, or AND NOT), nonlinear operators (sin, cos, tan, exp, tanh, and log), and basic mathematical operators (+, −, /, and ×). The fitness function is calculated as the error between the actual value and the value predicted by the symbolic expression. In GP, individuals are randomly initialized, and the population is evolved toward optimal solutions through operations such as reproduction, crossover, and mutation. The reproduction process produces children as input to the next generation by replicating a fraction of the parents selected from the current generation; individuals with the highest fitness values in the population are selected as parents and used for reproduction. The crossover operation produces children by exchanging parts of the selected parents and is divided into two types, subtree and node crossover; subtree crossover has shown a more significant effect than node crossover. A variety of methods are available for developing relationships between inputs and outputs so that outputs can be evaluated under varying input conditions without experimental work. Such forecasting equations are derived using artificial neural networks or statistical regression methods such as linear regression, response surface methodology, ANOVA, etc., with limited accuracy. To obtain higher accuracy, about 99.3 to 99.8% for the derived mathematical model, an effective method is genetic programming, an application of machine learning (artificial intelligence) using built-in evolutionary algorithms. The use of the GP approach with experimental data to develop mathematical models relating the laser microhole drilling inputs and outputs is presented in this work.
These mathematical models can be used to study the production of high-quality microdrilled holes in Al7075/SiCp MMCs with the pulsed Nd:YAG laser by minimizing the microhole drilling defects, such as the degree of hole taper, spatter, and heat affected zone width, and by maximizing hole circularity [10]. The drilling input parameters involved are pulse power (v0), pulse frequency (v1), assist gas pressure (v2), and pulse width (v3) [11].
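To ground the GP ingredients just described (random initialization, a function set of basic operators, and an error-based fitness), here is a minimal, self-contained Python sketch of symbolic-regression building blocks. It is an illustrative toy, not the Discipulus implementation used in this work; the nested-tuple tree encoding and the 30% terminal probability are assumptions made here.

```python
import random, operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul,
       '/': lambda a, b: a / b if abs(b) > 1e-9 else 1.0}   # protected division
TERMINALS = ['v0', 'v1', 'v2', 'v3']                         # drilling inputs

def random_tree(depth=3):
    """Randomly initialized expression tree (nested tuples)."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS + [round(random.uniform(-10, 10), 2)])
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, env):
    """Evaluate a tree for one input setting env = {'v0': ..., ..., 'v3': ...}."""
    if isinstance(tree, tuple):
        op, a, b = tree
        return OPS[op](evaluate(a, env), evaluate(b, env))
    return env.get(tree, tree)   # terminal: input variable or numeric constant

def fitness(tree, data):
    """Error-based fitness: sum of squared deviations from the measured output."""
    return sum((evaluate(tree, env) - y) ** 2 for env, y in data)
```

A full run would then repeatedly select high-fitness trees and apply reproduction, subtree crossover, and mutation, as outlined above.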
## 2. Materials and Methods

In the present work, an MMC consisting of aluminum alloy 7075 as the base metal, reinforced with silicon carbide particulates of 40–50 μm size at a 10% volume fraction, was produced using the stir casting technique. Microholes were drilled into the MMC plates using a pulsed Nd:YAG laser beam system (Model: JK300D), and the tests were carried out at its maximum power capacity of 16 kW. Various input parameters, namely laser input power (v0), pulse frequency (v1), assist gas pressure (v2), and pulse width (v3), were selected at different levels. The experimental results for the various levels of input factors in laser microhole drilling of a 2 mm thick MMC (Al7075/10%SiCp) plate are shown in Tables 2–5. For smaller diameter holes, laser microdrilling is preferred, especially when the materials are very hard, extra thin, or made of glass or composites. The quality of these holes mainly depends on the heat affected zone (HAZ) of the hole walls, the taper formed when the hole is enlarged, the circularity that maintains the uniform dimension of the circular hole, and the spatter occurring at the ends of the hole during resolidification of material [12, 13]. Choosing appropriate levels of the input parameters can improve the quality of the drilled holes. The quality of the holes was determined with an OLYMPUS STM6 optical measuring microscope on cut-sectioned hole samples by measuring the circularity, spatter, HAZ, and taper characteristics; in each experimental run, the laser drilled at a spot size of 180 μm [10]. Various studies have investigated the effects of the input control parameters on the defects developed by the laser microdrilling process [14, 15]. Owing to internal focusing issues of the laser drilling process, hole taper and noncircularity affect the quality of the holes [16]. Normally, spatter accumulates because of incomplete expulsion of the removed MMC at the drilling zone, which resolidifies and adheres around the hole circumference. Hence, it is advisable to produce high-quality circular microdrilled holes with the minimum amount of taper, spatter, and HAZ width.
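The paper does not spell out the formulas behind these four measurements, so the following sketch uses commonly assumed definitions (taper from the entry/exit diameters over the plate thickness; circularity as the min/max diameter ratio). Both the formulas and the sample numbers are illustrative assumptions only, not values or definitions from this study.

```python
import math

def taper_deg(d_entry_mm, d_exit_mm, thickness_mm=2.0):
    """Hole taper angle in degrees from entry/exit diameters (assumed common definition)."""
    return math.degrees(math.atan((d_entry_mm - d_exit_mm) / (2.0 * thickness_mm)))

def circularity(d_min_mm, d_max_mm):
    """Circularity as the min/max diameter ratio (assumed definition; 1.0 = perfect circle)."""
    return d_min_mm / d_max_mm

# Hypothetical measured values for one hole (not taken from the paper's tables):
print(taper_deg(0.21, 0.18), circularity(0.175, 0.185))
```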
The above-said qualities are determined by the levels of the input parameters, which requires a mathematical model to study the process conditions capable of producing the desired product quality. Moreover, this model should be derived in such a way that all the quality characteristics can be evaluated quickly and simultaneously [11].

## 3. Genetic Programming Methodology

The various stages of GP are shown in the flowchart of Figure 1. For the analysis of the multiple characteristics of Nd:YAG laser drilling of aluminum matrix/silicon carbide particulate MMCs, namely circularity, taper, spatter, and HAZ, experimental data were accumulated following the stages in Figure 1. The accumulated data were randomized using the Discipulus™ software and provided to the software in three groups, viz., training, validation, and applied [17]. The three datasets are generated from the experimental results provided. A larger number of experiments, around 100 or more, is required for the best solution. Test runs were conducted to determine the parameter settings that generate an optimal solution in the minimum possible time. Initially, the tests were carried out at the default parameter settings (population size, crossover rate, DSS subset size, and so on), which were later varied to find the optimum values [18]. The parameters involved in achieving the final mathematical model are tabulated in Table 1, which shows the settings required to achieve a final model satisfying the stated conditions for the quantities involved [19].

Figure 1 Stages involved in genetic programming.

Table 1 Parameter settings for genetic programming.

| Parameter | Value assigned |
| --- | --- |
| Population size (P) | 500 |
| Number of generations | 1000 |
| Maximum depth of tree | 6 |
| Maximum generation | 50 |
| Functional set | Multiply, plus, minus, divide |
| Terminal set | (v1, v2, v3, v4, v5, v6, −10, 10) |
| Number of runs | 110 |
| Mutation rate | 0.10 |
| Crossover rate | 75% nonhomologous, 25% homologous |
| Reproduction rate | 0.05 |
| Fitness, r² | The square root of the sum of the squares of the absolute differences (errors) between the program's output and the observed data |
| Termination | An individual emerges whose sum of absolute errors is less than specified: (a) the required number of runs is completed, or (b) the required correlation coefficient is obtained |
| Terminal set | T = {P, random-constants} |

### 3.1. Regression and Fitness Measurement

Regression analysis is a stochastic method in which symbolic regression finds both a working model of the output (target) function in terms of its inputs and the fixed coefficients, or at least an approximation (with the error fit measured linearly or by squares). The fitness measurement indicates how closely the output value predicted by GP agrees with the experimental value.

### 3.2. Correlation Coefficient, r/R

The linear correlation coefficient measures the strength and direction of the linear relationship between two variables (output M and input N), where −1 < r < +1 [8]. A value of r close to +1 indicates a strong positive linear correlation, a perfect fit in the limit: as M increases, N also increases. A value of r close to −1 indicates a strong negative linear correlation: as M increases, N decreases. For r = 0 there is no, or only a weak, linear correlation, and a value approaching zero represents a random, nonlinear relationship between M and N. The square of the correlation coefficient gives the coefficient of determination, r², the proportion of the variance of the output that is predictable from the inputs; it helps determine how confident one can be in making predictions from a defined model. r², defined as the ratio of the explained variation to the total variation and lying in the range 0 < r² < 1, signifies the strength of the linear correlation between M and N, that is, the percentage of the data closest to the line of best fit [20]. If r = 0.997, then r² ≈ 0.994, which means that 99.4% of the total variation in N can be explained by the linear relationship between M and N, while the remaining 0.6% of the variation in N remains unexplained [21].
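A quick numeric illustration of r and r², using made-up example data rather than measurements from this study:

```python
import numpy as np

# Hypothetical paired values of M (input) and N (output), for illustration only.
M = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
N = np.array([1.1, 1.9, 3.2, 3.9, 5.1])

r = np.corrcoef(M, N)[0, 1]   # linear correlation coefficient, -1 < r < +1
r2 = r ** 2                   # coefficient of determination
print(f"r = {r:.3f}, r^2 = {r2:.3f}")  # here r ≈ 0.996 and r^2 ≈ 0.993,
                                       # close to the r = 0.997 example in the text
```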
### 3.3. Factors Involved in GP Modeling

The various parameters involved in GP modeling are tabulated in Table 1, which shows the settings required to achieve a final mathematical model satisfying the above conditions for the quantities involved.

## 4. Results and Discussion

The selection of precise instructions from the set F and of the available terminal genes from the set f(0) plays a vital role in GP modeling, and the evolutionary process builds a mathematical model (i.e., an organism) that is as fit as possible for the prediction of results.
The model consists of instruction and function genes that behave like computer programs differing in appearance and dimension [20, 22]. Previously, extensive work on developing mathematical models was carried out using linear regression and second- or higher-order equations with response surface methods, ANOVA, Box designs, artificial neural networks, and fuzzy logic; the models developed by these methods show high error relative to the experimental values, up to three-digit magnitudes [24]. These analyses require output values of large magnitude, rather than small or decimal values relative to the inputs, to achieve accuracy. Such methods are suitable only for a small number of inputs; otherwise, erroneous empirical relations are generated. These non-GP methods may also use limited data, dictated by design matrices and orthogonal arrays, with the equation developed from the optimal input values [24].

Using the Grey–Taguchi method, the overall performance characteristic in Nd:YAG laser microdrilling of alumina was evaluated via the grey relational grade (GRG) to find the optimal parameter set yielding the highest GRG. The optimal GRG was 0.9172, which indicates the effectiveness of the proposed approach, and the confirmation experiment gave a highest GRG of 0.8989. The overall quality feature improved by 2.03% at the optimal condition, with the HAZ width improving by 8.78% while the hole taper worsened by 2.14%. This shows that, in multilevel optimization, the individual performance characteristics cannot all reach their optima simultaneously; a compromise between them is needed to reach the overall optimum. Both performance characteristics improve at the optimum level, with a GRG gain of 0.1521 (19.88%); the taper (from 0.0491 to 0.0476 rad) and the HAZ width (from 0.2180 to 0.1683 mm) decreased at the same time. From these results, optimum correlations can reach an accuracy level of about 85% when the outputs are limited to two. As the number of output quality characteristics, input parameters, and experiments increases, it becomes very difficult to arrive at an optimal set and the corresponding correlations by regression analysis. Therefore, GP-based solutions maintain an accuracy level beyond statistical analysis irrespective of the number of outputs and inputs [25].

The model used the experimental measurements divided into three independent datasets: training, validation, and applied. The independent input variables are pulse power (v0), pulse frequency (v1), assist gas pressure (v2), and pulse width (v3); circularity, spatter, heat affected zone, and taper are the dependent output variables. Various models for the outputs are developed by GP using the training dataset [23].
The best mathematical models obtained from the GP simulation are given by equations (1)–(4), reproduced here as extracted; the intermediate variables (F, G, H, I, K, L, M, N, O, P, Q, and V) are defined in the appendices:

(1) circularity = 2GF2−v0v0−1.745v0+0.904515
(2) spatter = 1+N2K+L−4L−1.9/K+L+24L−1.9/K+L0.5−0.552−1.1524
(3) taper = VP−N/N−M+L/M+V/M−N3+OV/M−N3/M−N3−VF+G+QF+H2Q2−10.25Q2+2H2Q2+4/3Q−3−2F−2G−2F+H2Q2−10.25Q2+2H2Q2+HQ+5−2v3+0.73/3Q−3−2F−2G−2F+H2Q2−10.25Q2+2H2Q2+HQ+5−H2Q2−1/M−N3P−N/N−M+L/M+V/M−N3+OV/M−N3+1.05
(4) heat affected zone = 1.22L/I−0.682v1v3−0.079−1.22FG−0.5356Gv0+1.22L/I−0.682v1

Appendices A, B, C, and D give further details of the derivation of the circularity, spatter, taper, and heat affected zone models, respectively.

A comparison of the experimental outputs and those predicted with the GP mathematical models is shown in Tables 2–5 for the circularity, HAZ, taper, and spatter of pulsed Nd:YAG laser microhole drilling of the MMC components. The errors in the predicted outputs are very small, with a percentage error of less than ±1%, which shows that the results obtained from the GP mathematical models are highly acceptable.

Table 2 Comparison between experimental and predicted values of circularity at R² = 99.68.

| No. | PP (W) (v0) | PF (Hz) (v1) | AGP (kg/cm²) (v2) | PW (ms) (v3) | Circularity (mm), exp. | Circularity (mm), GP | Error % |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 250 | 210 | 12 | 0.4 | 0.908 | 0.908066 | −6.6E−05 |
| 2 | 210 | 210 | 8 | 0.2 | 0.9032 | 0.903074 | 0.000126 |
| 3 | 240 | 220 | 12 | 0.4 | 0.911 | 0.911254 | −0.00025 |
| 4 | 210 | 210 | 8 | 0.6 | 0.953 | 0.953124 | −0.00012 |
| 5 | 250 | 220 | 8 | 0.6 | 0.9425 | 0.941842 | 0.000658 |
| 6 | 210 | 250 | 12 | 0.6 | 0.978 | 0.978011 | −1.1E−05 |
| 7 | 210 | 230 | 10 | 0.4 | 0.948 | 0.945836 | 0.002164 |
| 8 | 230 | 210 | 10 | 0.6 | 0.957 | 0.958292 | −0.00129 |
| 9 | 210 | 230 | 10 | 0.3 | 0.923 | 0.922506 | 0.000494 |
| 10 | 240 | 210 | 11 | 0.3 | 0.892 | 0.89308 | −0.00108 |
| 11 | 220 | 210 | 9 | 0.3 | 0.926 | 0.926227 | −0.00023 |
| 12 | 230 | 230 | 12 | 0.3 | 0.92 | 0.920843 | −0.00084 |
| 13 | 240 | 220 | 12 | 0.3 | 0.9205 | 0.920919 | −0.00042 |
| 14 | 220 | 250 | 8 | 0.3 | 0.896 | 0.895994 | 6.11E−06 |
| 15 | 230 | 210 | 10 | 0.5 | 0.947 | 0.94698 | 1.96E−05 |
| 16 | 240 | 230 | 8 | 0.4 | 0.899 | 0.898971 | 2.94E−05 |
| 17 | 240 | 210 | 11 | 0.2 | 0.892 | 0.893078 | −0.00108 |
| 18 | 220 | 240 | 12 | 0.6 | 0.974 | 0.973441 | 0.000559 |
| 19 | 240 | 250 | 10 | 0.6 | 0.954 | 0.955555 | −0.00155 |
| 20 | 250 | 250 | 11 | 0.4 | 0.92 | 0.919789 | 0.000211 |
| 21 | 250 | 240 | 10 | 0.2 | 0.894 | 0.893536 | 0.000464 |
| 22 | 230 | 250 | 9 | 0.5 | 0.953 | 0.949875 | 0.003125 |
| 23 | 210 | 250 | 12 | 0.5 | 0.958 | 0.963079 | −0.00508 |
| 24 | 250 | 240 | 10 | 0.3 | 0.892 | 0.893537 | −0.00154 |
| 25 | 250 | 220 | 8 | 0.5 | 0.938 | 0.933247 | 0.004753 |
| 26 | 250 | 250 | 11 | 0.3 | 0.895 | 0.903726 | −0.00873 |
| 27 | 230 | 230 | 12 | 0.2 | 0.893 | 0.892581 | 0.000419 |
| 28 | 230 | 220 | 11 | 0.2 | 0.89 | 0.892581 | −0.00258 |
| 29 | 240 | 240 | 9 | 0.6 | 0.95 | 0.949358 | 0.000642 |
| 30 | 220 | 240 | 12 | 0.2 | 0.893 | 0.892039 | 0.000961 |
| 31 | 250 | 230 | 9 | 0.2 | 0.896 | 0.896958 | −0.00096 |
| 32 | 220 | 220 | 10 | 0.5 | 0.951 | 0.950144 | 0.000856 |
| 33 | 240 | 240 | 9 | 0.5 | 0.95 | 0.943263 | 0.006737 |
| 34 | 210 | 220 | 9 | 0.2 | 0.9 | 0.896232 | 0.003768 |
| 35 | 220 | 250 | 8 | 0.2 | 0.902 | 0.896907 | 0.005093 |
| 36 | 250 | 210 | 12 | 0.5 | 0.948 | 0.94813 | −0.00013 |
| 37 | 220 | 220 | 10 | 0.4 | 0.936 | 0.923214 | 0.012786 |
| 38 | 230 | 240 | 8 | 0.3 | 0.901 | 0.90405 | −0.00305 |
| 39 | 250 | 230 | 9 | 0.6 | 0.9475 | 0.947726 | −0.00023 |
| 40 | 230 | 250 | 9 | 0.4 | 0.911 | 0.928516 | −0.01752 |
| 41 | 240 | 250 | 10 | 0.2 | 0.895 | 0.895507 | −0.00051 |
| 42 | 220 | 210 | 9 | 0.4 | 0.91 | 0.90617 | 0.00383 |
| 43 | 240 | 230 | 8 | 0.5 | 0.94 | 0.936347 | 0.003653 |
| 44 | 210 | 240 | 11 | 0.5 | 0.951 | 0.959169 | −0.00817 |
| 45 | 220 | 230 | 11 | 0.5 | 0.95 | 0.954643 | −0.00464 |
| 46 | 220 | 230 | 11 | 0.6 | 0.97 | 0.967844 | 0.002156 |
| 47 | 230 | 240 | 8 | 0.4 | 0.901 | 0.906892 | −0.00589 |
| 48 | 230 | 220 | 11 | 0.6 | 0.966 | 0.964493 | 0.001507 |
| 49 | 210 | 240 | 11 | 0.4 | 0.93 | 0.943397 | −0.0134 |
| 50 | 210 | 220 | 9 | 0.3 | 0.922 | 0.918336 | 0.003664 |

PP: pulse power (W); PF: pulse frequency (Hz); PW: pulse width (ms); AGP: assist gas pressure (kg/cm²).

Table 3 Comparison between experimental and predicted values of HAZ at R² = 99.88.
| No. | PP (W) (v0) | PF (Hz) (v1) | AGP (kg/cm²) (v2) | PW (ms) (v3) | HAZ (mm), exp. | HAZ (mm), GP | Error % |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 240 | 230 | 8 | 0.4 | 0.04 | 4.19E−02 | −0.00188 |
| 2 | 230 | 210 | 10 | 0.5 | 0.112 | 0.107232 | 0.004768 |
| 3 | 230 | 230 | 12 | 0.3 | 0.068 | 7.13E−02 | −0.00328 |
| 4 | 210 | 230 | 10 | 0.3 | 0.068 | 7.17E−02 | −0.00368 |
| 5 | 220 | 240 | 12 | 0.2 | 0.081 | 8.02E−02 | 0.000789 |
| 6 | 240 | 220 | 12 | 0.3 | 0.078 | 7.82E−02 | −0.00023 |
| 7 | 220 | 210 | 9 | 0.4 | 0.088 | 8.75E−02 | 0.000453 |
| 8 | 250 | 220 | 8 | 0.6 | 0.104 | 0.103178 | 0.000822 |
| 9 | 210 | 210 | 8 | 0.2 | 0.065 | 6.80E−02 | −0.00305 |
| 10 | 250 | 210 | 12 | 0.4 | 0.1 | 9.97E−02 | 0.000311 |
| 11 | 210 | 230 | 10 | 0.4 | 0.069 | 7.04E−02 | −0.00139 |
| 12 | 240 | 220 | 12 | 0.4 | 0.073 | 7.20E−02 | 0.00103 |
| 13 | 230 | 220 | 11 | 0.6 | 0.085 | 8.46E−02 | 0.000417 |
| 14 | 220 | 250 | 8 | 0.3 | 0.102 | 0.101978 | 2.21E−05 |
| 15 | 220 | 230 | 11 | 0.6 | 0.089 | 8.67E−02 | 0.002272 |
| 16 | 220 | 220 | 10 | 0.5 | 0.094 | 9.50E−02 | −0.00098 |
| 17 | 240 | 240 | 9 | 0.6 | 0.105 | 0.102995 | 0.002005 |
| 18 | 240 | 210 | 11 | 0.2 | 0.085 | 8.12E−02 | 0.003847 |
| 19 | 220 | 250 | 8 | 0.2 | 0.0747 | 7.28E−02 | 0.001893 |
| 20 | 230 | 250 | 9 | 0.5 | 0.1 | 0.103908 | −0.00391 |
| 21 | 210 | 220 | 9 | 0.3 | 0.082 | 8.05E−02 | 0.001512 |
| 22 | 210 | 250 | 12 | 0.5 | 0.0909 | 9.14E−02 | −0.00051 |
| 23 | 210 | 250 | 12 | 0.6 | 0.0921 | 9.28E−02 | −0.00071 |
| 24 | 220 | 210 | 9 | 0.3 | 0.101 | 0.101721 | −0.00072 |
| 25 | 240 | 240 | 9 | 0.5 | 0.1 | 0.10225 | −0.00225 |
| 26 | 230 | 250 | 9 | 0.4 | 0.094 | 9.11E−02 | 0.002885 |
| 27 | 240 | 250 | 10 | 0.6 | 0.096 | 9.56E−02 | 0.000406 |
| 28 | 250 | 210 | 12 | 0.5 | 0.097 | 9.83E−02 | −0.00131 |
| 29 | 230 | 230 | 12 | 0.2 | 0.076 | 7.74E−02 | −0.0014 |
| 30 | 220 | 230 | 11 | 0.5 | 0.092 | 9.06E−02 | 0.001379 |
| 31 | 210 | 240 | 11 | 0.4 | 0.072 | 7.25E−02 | −0.00049 |
| 32 | 230 | 210 | 10 | 0.6 | 0.095 | 9.25E−02 | 0.002548 |
| 33 | 250 | 250 | 11 | 0.3 | 0.078 | 7.72E−02 | 0.000826 |
| 34 | 250 | 220 | 8 | 0.5 | 0.105 | 0.104177 | 0.000823 |
| 35 | 210 | 220 | 9 | 0.2 | 0.084 | 7.68E−02 | 0.007168 |
| 36 | 220 | 240 | 12 | 0.6 | 0.095 | 9.20E−02 | 0.003025 |
| 37 | 230 | 240 | 8 | 0.4 | 0.055 | 4.67E−02 | 0.008281 |
| 38 | 210 | 210 | 8 | 0.6 | 0.1 | 9.79E−02 | 0.002054 |
| 39 | 250 | 230 | 9 | 0.2 | 0.07 | 7.45E−02 | −0.00448 |
| 40 | 240 | 250 | 10 | 0.2 | 0.095 | 8.29E−02 | 0.012098 |
| 41 | 240 | 210 | 11 | 0.3 | 0.08 | 7.90E−02 | 0.001025 |
| 42 | 230 | 240 | 8 | 0.3 | 0.09 | 8.87E−02 | 0.001309 |
| 43 | 210 | 240 | 11 | 0.5 | 0.084 | 9.04E−02 | −0.00638 |
| 44 | 250 | 240 | 10 | 0.3 | 0.065 | 7.74E−02 | −0.01241 |
| 45 | 250 | 230 | 9 | 0.6 | 0.095 | 9.90E−02 | −0.00402 |
| 46 | 250 | 250 | 11 | 0.4 | 0.08 | 0.081854 | −0.00185 |
| 47 | 230 | 220 | 11 | 0.2 | 0.077 | 0.0783 | −0.0013 |
| 48 | 220 | 220 | 10 | 0.4 | 0.075 | 0.07292 | 0.00208 |
| 49 | 250 | 240 | 10 | 0.2 | 0.091 | 8.82E−02 | 0.002776 |
| 50 | 240 | 230 | 8 | 0.5 | 0.09 | 0.090104 | −0.0001 |

Table 4 Comparison between experimental and predicted values of taper at R² = 99.43.
| No. | PP (W) (v0) | PF (Hz) (v1) | AGP (kg/cm²) (v2) | PW (ms) (v3) | Taper (deg), exp. | Taper (deg), GP | Error % |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 230 | 250 | 9 | 0.4 | 3.5 | 3.48E+00 | 0.020539 |
| 2 | 230 | 240 | 8 | 0.4 | 3.702 | 3.68349 | 0.01851 |
| 3 | 210 | 220 | 9 | 0.3 | 3.815 | 3.816499 | −0.0015 |
| 4 | 230 | 210 | 10 | 0.5 | 3.75 | 3.72E+00 | 0.034738 |
| 5 | 240 | 250 | 10 | 0.2 | 4.135 | 4.12E+00 | 0.017045 |
| 6 | 210 | 240 | 11 | 0.4 | 3.45 | 3.43E+00 | 0.023249 |
| 7 | 230 | 230 | 12 | 0.3 | 3.925 | 3.86E+00 | 0.064339 |
| 8 | 220 | 240 | 12 | 0.6 | 3.5 | 3.487621 | 0.012379 |
| 9 | 220 | 240 | 12 | 0.2 | 4.12 | 4.13E+00 | −0.00979 |
| 10 | 240 | 220 | 12 | 0.3 | 3.8 | 3.81E+00 | −0.00709 |
| 11 | 230 | 240 | 8 | 0.3 | 3.8 | 3.79E+00 | 0.010771 |
| 12 | 220 | 250 | 8 | 0.3 | 3.962 | 3.96E+00 | 0.006659 |
| 13 | 230 | 220 | 11 | 0.2 | 4.16 | 4.16E+00 | −0.00037 |
| 14 | 240 | 220 | 12 | 0.4 | 3.775 | 3.757363 | 0.017637 |
| 15 | 250 | 230 | 9 | 0.6 | 3.8 | 3.770205 | 0.029795 |
| 16 | 210 | 210 | 8 | 0.6 | 4.2 | 4.200192 | −0.00019 |
| 17 | 230 | 220 | 11 | 0.6 | 3.75 | 3.718309 | 0.031691 |
| 18 | 220 | 250 | 8 | 0.2 | 4.05 | 4.10E+00 | −0.04831 |
| 19 | 250 | 210 | 12 | 0.4 | 4.1 | 3.96E+00 | 0.138649 |
| 20 | 250 | 220 | 8 | 0.5 | 3.5 | 3.522696 | −0.0227 |
| 21 | 240 | 240 | 9 | 0.5 | 3.4 | 3.41E+00 | −0.01021 |
| 22 | 210 | 210 | 8 | 0.2 | 4.1 | 4.10E+00 | 0.00162 |
| 23 | 240 | 230 | 8 | 0.5 | 3.505 | 3.523799 | −0.0188 |
| 24 | 210 | 230 | 10 | 0.4 | 3.58 | 3.633331 | −0.05333 |
| 25 | 220 | 210 | 9 | 0.3 | 3.6 | 3.774172 | −0.17417 |
| 26 | 250 | 250 | 11 | 0.3 | 3.8 | 3.846722 | −0.04672 |
| 27 | 210 | 240 | 11 | 0.5 | 3.307 | 3.441274 | −0.13427 |
| 28 | 250 | 240 | 10 | 0.2 | 4.1 | 4.105488 | −0.00549 |
| 29 | 220 | 230 | 11 | 0.5 | 3.3 | 3.441037 | −0.14104 |
| 30 | 240 | 250 | 10 | 0.6 | 3.8 | 3.81E+00 | −0.00654 |
| 31 | 230 | 210 | 10 | 0.6 | 3.305 | 3.638603 | −0.3336 |
| 32 | 230 | 250 | 9 | 0.5 | 3.38 | 3.41E+00 | −0.03093 |
| 33 | 230 | 230 | 12 | 0.2 | 4.05 | 4.110147 | −0.06015 |
| 34 | 210 | 220 | 9 | 0.2 | 4 | 4.125376 | −0.12538 |
| 35 | 250 | 230 | 9 | 0.2 | 4.15 | 4.123817 | 0.026183 |
| 36 | 240 | 240 | 9 | 0.6 | 3.75 | 3.748872 | 0.001128 |
| 37 | 240 | 210 | 11 | 0.3 | 3.9 | 3.86E+00 | 0.043294 |
| 38 | 250 | 210 | 12 | 0.5 | 3.65 | 3.501807 | 0.148193 |
| 39 | 210 | 250 | 12 | 0.6 | 3.25 | 3.47E+00 | −0.21634 |
| 40 | 210 | 250 | 12 | 0.5 | 3.45 | 3.49592 | −0.04592 |
| 41 | 220 | 220 | 10 | 0.5 | 3.6 | 3.572536 | 0.027464 |
| 42 | 240 | 210 | 11 | 0.2 | 3.95 | 4.11E+00 | −0.16276 |
| 43 | 250 | 220 | 8 | 0.6 | 3.6 | 4.114297 | −0.5143 |
| 44 | 220 | 220 | 10 | 0.4 | 3.8 | 3.694675 | 0.105325 |
| 45 | 250 | 250 | 11 | 0.4 | 3.9 | 3.655343 | 0.244657 |
| 46 | 240 | 230 | 8 | 0.4 | 3.6 | 3.615975 | −0.01597 |
| 47 | 220 | 230 | 11 | 0.6 | 3.4 | 3.677536 | −0.27754 |
| 48 | 220 | 210 | 9 | 0.4 | 3.5 | 3.48E+00 | 0.023193 |
| 49 | 250 | 240 | 10 | 0.3 | 3.8 | 3.80E+00 | 0.004566 |
| 50 | 230 | 250 | 9 | 0.4 | 3.5 | 3.48E+00 | 0.020539 |

Table 5 Comparison between experimental and predicted values of spatter at R² = 99.3.
| No. | PP (W) (v0) | PF (Hz) (v1) | AGP (kg/cm²) (v2) | PW (ms) (v3) | Spatter (mm), exp. | Spatter (mm), GP | Error % |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 220 | 220 | 10 | 0.4 | 0.055 | 5.49E−02 | 0.00013 |
| 2 | 230 | 220 | 11 | 0.6 | 0.044 | 4.42E−02 | −0.00018 |
| 3 | 210 | 250 | 12 | 0.6 | 0.044 | 4.34E−02 | 0.000648 |
| 4 | 240 | 220 | 12 | 0.3 | 0.043 | 4.32E−02 | −0.00022 |
| 5 | 230 | 240 | 8 | 0.3 | 0.04 | 4.36E−02 | −0.00365 |
| 6 | 210 | 230 | 10 | 0.3 | 0.042 | 4.21E−02 | −7.3E−05 |
| 7 | 250 | 240 | 10 | 0.2 | 0.065 | 6.50E−02 | 3.18E−05 |
| 8 | 230 | 230 | 12 | 0.2 | 0.043 | 4.31E−02 | −7.8E−05 |
| 9 | 240 | 240 | 9 | 0.6 | 0.072 | 7.16E−02 | 0.000406 |
| 10 | 230 | 220 | 11 | 0.2 | 0.045 | 4.50E−02 | −1.6E−05 |
| 11 | 250 | 230 | 9 | 0.2 | 0.068 | 6.82E−02 | −0.00024 |
| 12 | 220 | 240 | 12 | 0.6 | 0.052 | 5.23E−02 | −0.00028 |
| 13 | 210 | 250 | 12 | 0.5 | 0.043 | 4.29E−02 | 6.36E−05 |
| 14 | 220 | 210 | 9 | 0.4 | 0.059 | 5.94E−02 | −0.00045 |
| 15 | 210 | 220 | 9 | 0.2 | 0.0445 | 0.044101 | 0.000399 |
| 16 | 220 | 240 | 12 | 0.2 | 0.05 | 5.02E−02 | −0.00019 |
| 17 | 230 | 210 | 10 | 0.5 | 0.072 | 7.03E−02 | 0.001661 |
| 18 | 230 | 240 | 8 | 0.4 | 0.071 | 6.97E−02 | 0.001265 |
| 19 | 240 | 210 | 11 | 0.2 | 0.056 | 5.54E−02 | 0.000566 |
| 20 | 230 | 250 | 9 | 0.5 | 0.074 | 7.25E−02 | 0.001524 |
| 21 | 240 | 230 | 8 | 0.4 | 0.075 | 0.064522 | 0.010478 |
| 22 | 220 | 250 | 8 | 0.3 | 0.074 | 0.071546 | 0.002454 |
| 23 | 250 | 210 | 12 | 0.5 | 0.046 | 5.18E−02 | −0.00581 |
| 24 | 210 | 240 | 11 | 0.4 | 0.043 | 4.20E−02 | 0.000993 |
| 25 | 230 | 210 | 10 | 0.6 | 0.049 | 5.21E−02 | −0.00307 |
| 26 | 250 | 250 | 11 | 0.3 | 0.065 | 5.50E−02 | 0.010003 |
| 27 | 210 | 230 | 10 | 0.4 | 0.04 | 4.28E−02 | −0.00277 |
| 28 | 210 | 210 | 8 | 0.6 | 0.043 | 0.042277 | 0.000723 |
| 29 | 210 | 240 | 11 | 0.5 | 0.045 | 0.044648 | 0.000352 |
| 30 | 220 | 230 | 11 | 0.6 | 0.057 | 5.40E−02 | 0.003045 |
| 31 | 240 | 220 | 12 | 0.4 | 0.042 | 4.29E−02 | −0.00093 |
| 32 | 240 | 250 | 10 | 0.2 | 0.066 | 6.64E−02 | −0.00042 |
| 33 | 230 | 230 | 12 | 0.3 | 0.041 | 4.27E−02 | −0.00171 |
| 34 | 230 | 250 | 9 | 0.4 | 0.069 | 6.84E−02 | 0.000556 |

Furthermore, Figure 2 shows the regression fit for the microdrilling process parameters of the MMCs, and Figure 3 shows the experimental-predicted relationship of circularity, HAZ, taper, and spatter, with normally distributed behaviour. The models generated by genetic programming perform better in statistical terms on the historical dataset (training, validation, and applied data) and exhibit a better predictive capacity on the experimental dataset. The analysis of the mathematical expressions of the GP models suggests specific laser output quality characteristics for the experimental system under various control factors that can be associated with the performance of a laser-microdrilled hole in MMCs. The models generated by genetic programming represent the experimental data without requiring detailed knowledge of the phenomenon. In addition, their study allows a deeper insight into the relevant factors describing the quality phenomenon; for instance, changes in any of the observed factors in the quality of the produced microhole can be associated with changes in the performance of hole production in the MMC.

Figure 2 Regression fit for the percentage of microdrilling process parameters of the MMCs: (a) experimental versus predicted circularity, (b) experimental versus predicted HAZ, (c) experimental versus predicted taper, and (d) experimental versus predicted spatter.

Figure 3 Experimental-predicted relationship of (a) circularity, (b) HAZ, (c) taper, and (d) spatter.
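As a sanity check on how the tables' error column and the quoted R² values can be reproduced, here is a small sketch using the first three circularity rows of Table 2; the R² formula is the standard coefficient-of-determination definition, which is assumed here to match the one used by the software.

```python
import numpy as np

# First three circularity rows of Table 2 (experimental vs. GP-predicted).
exp = np.array([0.908, 0.9032, 0.911])
gp  = np.array([0.908066, 0.903074, 0.911254])

err = exp - gp                       # reproduces the tables' error column
ss_res = np.sum(err ** 2)
ss_tot = np.sum((exp - exp.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot           # coefficient of determination of the fit
print(err, r2)                       # errors of order 1e-4, r2 close to 1
```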
## 5. Conclusions

In the present work, new models of the circularity, spatter, heat affected zone, and taper in drilling of MMCs under different laser operating parameters were generated using the Discipulus GP software and C programming. Using the GP-based mathematical models, the quality characteristics of laser-drilled holes in MMCs for various laser input parameters can be determined quickly by substituting the laser input conditions, so the actual results can be predicted without conducting any experiments. The comparison between the new GP-based models and the experimental results indicates that the models are accurate to within about ±0.006 to 0.009. Therefore, the new models can be considered an alternative method for estimating these quality characteristics when experimental measurements or correlations are not available. The correctness of the solutions achieved by GP depends on the correlated evolutionary parameters, the number of experimental results, and their level of accuracy; to improve the structure of the model during evolution, more information was supplied through experimental measurements. The proposed mathematical models have thus been verified to be reliable to within 99.3 to 99.8% in forecasting the experimental results, and in the testing stage the GP models reproduced the experimental results with essentially full reliability. The GP approach has proved to be a highly capable and advantageous tool for recognizing correlations in data when no suitable theoretical or other methods are available.

---
*Source: 1024365-2019-01-02.xml*
--- ## Abstract The conventional method for machining metal matrix composites (MMCs) is difficult on account of their excellent characteristics compared with those of their source materials. Modern laser machining technology is a suitable noncontact method for machining operations of advanced engineering materials due to its novel advantages such as higher productivity, ease of adaptation to automation, minimum heat affected zone (HAZ), green manufacturing, decreased processing costs, improved quality, reduced wastage, removal of finishing operations, and so on. Their application includes hole drilling in an aircraft engine components such as combustion chambers, nozzle guide vanes, and turbine blades made up of MMCs which meet quality standards that determine their suitability for service use. This paper presents a derived mathematical model based on evolutionary computation methods using multivariate regression fitting for the prediction of multiple characteristics (circularity, taper, spatter, and HAZ) of neodymium: yttrium aluminum garnet laser drilling of aluminum matrix/silicon carbide particulate (Al/SiCp) MMCs using genetic programming. Laser drilling input factors such as laser power, pulse frequency, gas pressure, and pulse width are utilized. From a training dataset, different genetic models for multiple quality characteristics were obtained with great accuracy during simulated evolution to provide a more accurate prediction compared to empirical correlations. --- ## Body ## 1. Introduction Metal matrix composites (MMCs) are substances which blend a tough metallic matrix with a hard ceramic reinforcement possessing excellent features such as high strength to wear ratio, high modulus, and wear and corrosion resistance [1]. MMCs are broadly used in the fields of the aerospace, automotive, electronics, and metallic industries. MMCs comprise a metal as a base material (matrix) and hard ceramic particles such as B4C, SiC, and Al2O3 as reinforcement (long fibers, short whiskers, or particulates in irregular or spherical shapes). The properties of MMCs are judged by matrix, reinforcement, and interface between them [2]. They are a material which is difficult to machine due to the presence of hard ceramic particles [3]. Most of the research in the machining of Al/SiCp MMCs has focused on turning and milling, whereas drilling has been given less attention. ### 1.1. Laser Drilling The laser drilling system is growing exponentially to suit the alternative program for achieving the major demands of aerospace, automobile, metallic, and electric industrial potential, especially microhole drilling in different components such as watches and turbine blades, fuselages, printed circuit boards, and so on [3]. Pulsed Nd:YAG laser microhole drilling has gained popularity in recent years to be used as an indispensable tool for microhole drilling of components for technologically advanced industries. Laser microhole drilling processes are successfully employed to both conductive and nonconductive materials to remove by evaporation, and the amount of molten material removed depends on the penetration of laser energy generated from a sequence of laser pulses at the same place [4]. Laser drilling in the production industry has been associated with the various advantages of a high rate of machining, noncontact method; hence, there is no damage to tool or tool wear, increased product quality, and less wastage. 
Highly reflective materials require less laser power at the short wavelength of Nd:YAG lasers (compared with CO2 lasers) [5]. While laser drilling possesses several advantages and is widely adopted by advanced industries, it also produces some defects such as taper, noncircular holes, HAZ, recast layer, and so on [6]. ### 1.2. Genetic Programming Genetic programming (GP) is an effective tool for predicting the behaviour of various processes and for building empirical models. GP mimics the process of evolution in nature, Darwin's theory of "survival of the fittest", to find the best solution to an assigned problem. GP is known as the generalized form of the genetic algorithm (GA) and has been extensively studied [7–9]. In GP, a model is represented by terminals and functions. A well-known application of GP is symbolic regression, which determines a mathematical expression for a given set of variables and functions. Functions are generated from Boolean operators (AND, OR, NOT, and AND NOT), nonlinear operators (sin, cos, tan, exp, tanh, and log), and basic mathematical operators (+, −, /, and ×). The fitness function is calculated as the error between the actual value and the value predicted by the symbolic expressions. In GP, individuals are randomly initialized, and the population progresses toward optimal solutions through operations such as reproduction, crossover, and mutation. Reproduction passes children to the next generation by replicating a fraction of the parents selected from the current generation; individuals with the highest fitness values in the population are selected as parents. Crossover produces children by exchanging parts of the selected parents; it comes in two types (subtree and node crossover), of which subtree crossover has shown the more significant effect. A variety of methods are available for developing relationships between inputs and outputs so that outputs can be evaluated under varying input conditions without experimental work. Such predictive equations can be derived using artificial neural networks or statistical regression methods such as linear regression, response surface methodology, and ANOVA, but with limited accuracy. To obtain higher accuracy, about 99.3 to 99.8% for the derived mathematical model, the effective method available is genetic programming, an application of machine learning or artificial intelligence using inbuilt algorithms. The use of the GP approach with experimental data to develop a mathematical model relating laser microhole drilling input and output parameters is presented in this work. These mathematical models can be used to study the production of high-quality microdrilled holes by the pulsed Nd:YAG laser in Al7075/SiCp MMCs, minimizing microhole drilling defects such as hole taper, spatter, and heat affected zone width while maximizing hole circularity [10]. The drilling input parameters involved are pulse power (v0), pulse frequency (v1), assist gas pressure (v2), and pulse width (v3) [11].
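To make the loop just described concrete, the following is a minimal sketch of tree-based symbolic regression in Python. It is illustrative only: none of its names, operators, or parameter values are taken from the Discipulus software used later in this paper, and the terminal and function sets are reduced to the four laser inputs and the four basic arithmetic operators.

```python
import random

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b,
       '/': lambda a, b: a / b if abs(b) > 1e-9 else 1.0}  # protected division
TERMINALS = ['v0', 'v1', 'v2', 'v3']   # laser inputs; random constants added below

def random_tree(depth=3):
    """Grow a random expression tree of at most the given depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS + [round(random.uniform(-10, 10), 3)])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, row):
    """Evaluate a tree on one data row (a dict of input values)."""
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](evaluate(left, row), evaluate(right, row))
    return row[tree] if isinstance(tree, str) else tree

def fitness(tree, data):
    """Sum of squared errors between program output and observed data."""
    err = 0.0
    for row, y in data:
        d = evaluate(tree, row) - y
        err += d * d
    return err if err == err else float('inf')   # guard against NaN

def crossover(a, b):
    """Subtree crossover: graft a random branch of b into a."""
    if not isinstance(a, tuple) or not isinstance(b, tuple):
        return b
    op, left, right = a
    donor = random.choice([b[1], b[2]])
    return (op, donor, right) if random.random() < 0.5 else (op, left, donor)

def evolve(data, pop_size=500, generations=50, mutation_rate=0.10):
    """Evolve a population by elitist reproduction, crossover, and mutation."""
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=lambda t: fitness(t, data))[:pop_size // 10]
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            child = crossover(p1, p2)
            if random.random() < mutation_rate:
                child = crossover(child, random_tree())  # mutate via random subtree
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda t: fitness(t, data))
```

Each row of `data` pairs a dict of the inputs with the observed output, for example `({'v0': 250, 'v1': 210, 'v2': 12, 'v3': 0.4}, 0.908)` from the first row of Table 2.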
## 2. Materials and Methods In the present work, an MMC consisting of aluminum alloy 7075 as the base metal reinforced with silicon carbide particulates of 40–50 μm size at a 10% volume fraction was produced using the stir casting technique. The microholes were drilled into MMC plates using a pulsed Nd:YAG laser beam system (Model: JK300D), and the tests were carried out at its maximum power capacity of 16 kW. Various input parameters, namely, laser input power (v0), pulse frequency (v1), assist gas pressure (v2), and pulse width (v3), were selected at different levels. The experimental results for the various levels of input factors in laser microhole drilling of a 2 mm thick MMC (Al7075/10% SiCp) plate are shown in Tables 2–5. For smaller diameter holes, laser microdrilling is preferred, especially when the materials are very hard, extra thin, or made of glass and composites. The quality of these holes mainly depends on the heat affected zone (HAZ) of the hole walls, the taper formed where the hole widens, the circularity that maintains the uniform dimension of the circular hole, and the spatter occurring at the ends of the hole during resolidification of material [12, 13]. Different levels of the input parameters can improve the quality of the drilled holes. Hole quality was determined with an OLYMPUS STM6 optical measuring microscope on cut-sectioned hole samples by measuring the circularity, spatter, HAZ, and taper characteristics, and in each experimental run the laser was drilled at a spot size of 180 μm [10]. Various studies have investigated the effects of the input control parameters on the defects developed in the laser microdrilling process [14, 15]. Owing to the inherent focusing characteristics of the laser drilling process, hole taper and noncircularity affect the quality of holes [16]. Normally, spatter accumulates because of incomplete expulsion of the removed MMC from the drilling zone; it resolidifies and adheres around the whole circumference. Hence, it is desirable to produce high-quality circular microdrilled holes with the minimum amount of taper, spatter, and HAZ width. These quality characteristics are determined by the levels of the input parameters, which calls for a mathematical model to identify process conditions capable of producing the desired product quality. Moreover, this model should be derived in such a way that all the measured quality characteristics can be evaluated quickly and simultaneously [11].
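The four hole quality measures can be expressed through simple geometry of the sectioned hole. The formulas below are common textbook definitions, assumed here for illustration, since the paper does not state the exact measurement equations used with the microscope; HAZ and spatter are direct width measurements and need no formula.

```python
import math

def hole_quality(d_entry_mm, d_exit_mm, d_min_mm, d_max_mm, thickness_mm):
    """Common geometric definitions (assumed, not stated in the paper):
    circularity as the min/max diameter ratio, taper as the half-angle
    between entry and exit diameters across the plate thickness."""
    circularity = d_min_mm / d_max_mm                 # 1.0 = perfectly round
    taper_deg = math.degrees(
        math.atan((d_entry_mm - d_exit_mm) / (2.0 * thickness_mm)))
    return circularity, taper_deg

# Example: a hole through the 2 mm plate with 0.20 mm entry, 0.17 mm exit
print(hole_quality(0.20, 0.17, 0.17, 0.20, 2.0))
```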
## 3. Genetic Programming Methodology The stages of GP are organized in the flowchart of Figure 1. For the analysis of multiple characteristics of Nd:YAG laser drilling of Al/SiCp MMCs, namely, circularity, taper, spatter, and HAZ, experimental data were accumulated as per the stages involved in Figure 1. The accumulated data were randomized using the Discipulus™ software and provided to the software in three groups, viz., training, validation, and applied [17]. The three datasets are generated from the experimental results provided; more than about 100 experimental runs are required for the best solution. Test runs were conducted to determine the parameter settings that generate an optimal solution in the minimum possible time. Initially, the tests were carried out at the default parameter settings, such as population size, crossover rate, DSS subset size, and so on, which were later varied to find optimum values [18]. The parameters involved in achieving the final mathematical model are tabulated in Table 1; together they define the sequence of settings required to reach a model satisfying the above quality conditions [19].

Figure 1 Stages involved in genetic programming.

Table 1 Parameter setting for genetic programming.

| Parameter | Value assigned |
| --- | --- |
| Population size (P) | 500 |
| Number of generations | 1000 |
| Maximum depth of tree | 6 |
| Maximum generation | 50 |
| Functional set | Multiply, plus, minus, divide |
| Terminal set | (v1, v2, v3, v4, v5, v6, −10, 10) |
| Number of runs | 110 |
| Mutation rate | 0.10 |
| Crossover rate | 75% nonhomologous, 25% homologous |
| Reproduction rate | 0.05 |
| Fitness, r2 | The square root of the sum of the squares of the absolute differences (errors) between the program's output and the observed data |
| Termination | An individual emerges whose sum of absolute errors is less than specified: (a) the required number of runs is completed or (b) the required correlation coefficient is obtained |
| Terminal set | T = {P, random-constants} |

### 3.1. Regression and Fitness Measurement Regression analysis is a stochastic method; symbolic regression finds both the working model of the output (or target) function and its inputs (or fixed coefficients), or at least an approximation (with the error measured by a linear or squared fit). The fitness measurement indicates how closely the output value predicted by GP agrees with the experimental value.
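As a concrete reading of the fitness entry in Table 1, the following few lines compute that measure for a list of program outputs; a minimal sketch, with the sample numbers taken from the first rows of Table 2.

```python
import math

def table1_fitness(program_output, observed):
    """Fitness per Table 1: the square root of the sum of the squares of
    the differences between the program's output and the observed data."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(program_output, observed)))

# First two circularity rows of Table 2: GP output vs. experimental value
print(table1_fitness([0.908066, 0.903074], [0.908, 0.9032]))  # smaller = fitter
```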
### 3.2. Correlation Coefficient, r/R The linear correlation coefficient measures the strength and direction of the linear relationship between two variables (output M and input N), where r lies in the range −1 < r < +1 [8]. If r is close to +1, M and N have a strong positive linear correlation (a perfect fit in the limit): as M increases, N also increases. If r is close to −1, M and N have a strong negative linear correlation: as M increases, N decreases. For r = 0 there is no or only a weak linear correlation, and a value approaching zero represents a random, nonlinear relationship between M and N. The square of the correlation coefficient gives the coefficient of determination, r2, the proportion of the variance of the output that is predictable from the inputs; it indicates how certain one can be in making predictions from a defined model. r2, defined as the ratio of the explained variation to the total variation and lying in the range 0 < r2 < 1, signifies the strength of the linear correlation between P and Q, that is, the percentage of the data closest to the line of best fit [20]. If r = 0.997, then r2 = 0.994, which means that 99.4% of the total variation in Q can be explained by the linear relationship between P and Q, while the remaining 0.6% of the variation in Q remains unexplained [21]. ### 3.3. Factors Involved in GP Modeling The various parameters involved in GP modeling are tabulated in Table 1; they define the sequence of settings required to reach the final mathematical model satisfying the above conditions. ## 4. Results and Discussion The selection of precise instructions from the function set F and of available terminal genes from the set f(0) plays a vital role in GP modeling, and the evolutionary process builds a mathematical model (i.e., an organism) that is as fit as possible for the prediction of results. The model consists of both instruction and function genes, behaving much like computer programs that differ in appearance and dimensions [20, 22].
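The R2 values quoted with Tables 2–5 (e.g., R2 = 99.68 for circularity) can be reproduced from the paired experimental and predicted columns; a short sketch of that calculation, assuming numpy:

```python
import numpy as np

def r_and_r2(experimental, predicted):
    """Pearson r between experimental and predicted values, and r^2 = r * r."""
    r = np.corrcoef(experimental, predicted)[0, 1]
    return r, r * r

# First five circularity rows of Table 2 (experimental vs. GP prediction)
exp = np.array([0.908, 0.9032, 0.911, 0.953, 0.9425])
gp = np.array([0.908066, 0.903074, 0.911254, 0.953124, 0.941842])
print(r_and_r2(exp, gp))  # r close to +1, as discussed in Section 3.2
```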
Previously, extensive work on developing mathematical models was carried out using linear regression and second- and higher-order equations via response surface methods, ANOVA, Box designs, artificial neural networks, and fuzzy logic; the models developed by these methods show errors that are very high compared to the experimental values, up to three-digit numbers [24]. To achieve accuracy, these analyses require output values that are large rather than small decimal numbers relative to the input values. These methods are suitable for a small number of inputs; otherwise, erroneous empirical relations would be generated. The above non-GP methods may use limited data according to the design matrix and orthogonal array tables, and the equation is developed using the optimal input values [24]. Using the Grey–Taguchi method, the overall performance characteristic in Nd:YAG laser microdrilling of alumina was calculated using the grey relational grade (GRG) to find the optimal parameter set giving the highest GRG. The optimal value of the GRG was 0.9172, which indicates the effectiveness of the proposed approach, and the highest GRG for the confirmation experiment was 0.8989. The overall quality feature improved by 2.03% at the optimal condition, with the HAZ width improving by 8.78% and the hole taper worsening by 2.14%. This shows that, in multilevel optimization, the performance characteristics cannot reach their optimum values simultaneously; instead, a compromise between the various performance characteristics is needed to reach the overall optimum. In another setting, both performance characteristics improved at the optimum level, with a GRG of 0.1521 (19.88%): the taper (from 0.0491 to 0.0476 rad) and the HAZ width (from 0.2180 to 0.1683 mm) decreased at the same time. From these results, we can see that such optimization can provide correlations at an accuracy level of about 85% if the outputs are limited to two. As the number of output quality characteristics, input parameters, and experiments increases, it becomes very difficult to arrive at an optimal set and at the corresponding correlations obtained by regression analysis. Therefore, GP-based solutions maintain an accuracy level beyond statistical analysis irrespective of the number of outputs and inputs [25]. The model used experimental measurements converted into three independent datasets: training, validation, and applied. The independent input variables used are pulse power (v0), pulse frequency (v1), assist gas pressure (v2), and pulse width (v3), with circularity, spatter, heat affected zone, and taper as the dependent output variables. The models for the outputs were developed by GP using the training dataset [23]. The best mathematical models obtained from the GP simulation are given by equations (1)–(4):

(1) circularity = 2GF2 − v0v0 − 1.745v0 + 0.904515,

(2) spatter = 1 + N2K + L − 4L − 1.9/K + L + 24L − 1.9/K + L0.5 − 0.552 − 1.1524,

(3) taper = VP − N/N − M + L/M + V/M − N3 + OV/M − N3/M − N3 − VF + G + QF + H2Q2 − 10.25Q2 + 2H2Q2 + 4/3Q − 3 − 2F − 2G − 2F + H2Q2 − 10.25Q2 + 2H2Q2 + HQ + 5 − 2v3 + 0.73/3Q − 3 − 2F − 2G − 2F + H2Q2 − 10.25Q2 + 2H2Q2 + HQ + 5 − H2Q2 − 1/M − N3P − N/N − M + L/M + V/M − N3 + OV/M − N3 + 1.05,

(4) heat affected zone = 1.22L/I − 0.682v1v3 − 0.079 − 1.22FG − 0.5356Gv0 + 1.22L/I − 0.682v1.

Appendixes A, B, C, and D show further details of the derivation of the circularity, spatter, taper, and heat affected zone models, respectively. A comparison of the experimental outputs and the outputs predicted by the mathematical models obtained from GP is shown in Tables 2–5 for the circularity, taper, spatter, and HAZ of pulsed Nd:YAG laser microhole drilling of MMC components.
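A small helper for producing the comparisons tabulated below; note that the column headed "Error %" in Tables 2–5 actually lists the raw difference between the experimental and predicted values (e.g., 0.908 − 0.908066 = −6.6E−05 in the first row of Table 2), not a percentage.

```python
def compare(experimental, predicted):
    """Rows of (experimental, GP prediction, difference), matching the
    layout of Tables 2-5; the tabulated 'Error %' is this raw difference."""
    return [(e, p, e - p) for e, p in zip(experimental, predicted)]

# First three circularity rows of Table 2
for e, p, d in compare([0.908, 0.9032, 0.911],
                       [0.908066, 0.903074, 0.911254]):
    print(f"{e:.4f}  {p:.6f}  {d:+.6f}")
```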
Errors in determining predicted outputs are very few, and the percentage of error is less than ±1% which shows that results obtained from a GP mathematical model are highly acceptable.Table 2 Comparison between experimental and predicted values of circularity atR2 = 99.68. No. PP (W) (v0) PF (Hz) (v1) AGP (kg/cm2) (v2) PW (ms) (v3) Circularity (mm) Error % Experimental GP 1 250 210 12 0.4 0.908 0.908066 −6.6E − 05 2 210 210 8 0.2 0.9032 0.903074 0.000126 3 240 220 12 0.4 0.911 0.911254 −0.00025 4 210 210 8 0.6 0.953 0.953124 −0.00012 5 250 220 8 0.6 0.9425 0.941842 0.000658 6 210 250 12 0.6 0.978 0.978011 −1.1E − 05 7 210 230 10 0.4 0.948 0.945836 0.002164 8 230 210 10 0.6 0.957 0.958292 −0.00129 9 210 230 10 0.3 0.923 0.922506 0.000494 10 240 210 11 0.3 0.892 0.89308 −0.00108 11 220 210 9 0.3 0.926 0.926227 −0.00023 12 230 230 12 0.3 0.92 0.920843 −0.00084 13 240 220 12 0.3 0.9205 0.920919 −0.00042 14 220 250 8 0.3 0.896 0.895994 6.11E − 06 15 230 210 10 0.5 0.947 0.94698 1.96E − 05 16 240 230 8 0.4 0.899 0.898971 2.94E − 05 17 240 210 11 0.2 0.892 0.893078 −0.00108 18 220 240 12 0.6 0.974 0.973441 0.000559 19 240 250 10 0.6 0.954 0.955555 −0.00155 20 250 250 11 0.4 0.92 0.919789 0.000211 21 250 240 10 0.2 0.894 0.893536 0.000464 22 230 250 9 0.5 0.953 0.949875 0.003125 23 210 250 12 0.5 0.958 0.963079 −0.00508 24 250 240 10 0.3 0.892 0.893537 −0.00154 25 250 220 8 0.5 0.938 0.933247 0.004753 26 250 250 11 0.3 0.895 0.903726 −0.00873 27 230 230 12 0.2 0.893 0.892581 0.000419 28 230 220 11 0.2 0.89 0.892581 −0.00258 29 240 240 9 0.6 0.95 0.949358 0.000642 30 220 240 12 0.2 0.893 0.892039 0.000961 31 250 230 9 0.2 0.896 0.896958 −0.00096 32 220 220 10 0.5 0.951 0.950144 0.000856 33 240 240 9 0.5 0.95 0.943263 0.006737 34 210 220 9 0.2 0.9 0.896232 0.003768 35 220 250 8 0.2 0.902 0.896907 0.005093 36 250 210 12 0.5 0.948 0.94813 −0.00013 37 220 220 10 0.4 0.936 0.923214 0.012786 38 230 240 8 0.3 0.901 0.90405 −0.00305 39 250 230 9 0.6 0.9475 0.947726 −0.00023 40 230 250 9 0.4 0.911 0.928516 −0.01752 41 240 250 10 0.2 0.895 0.895507 −0.00051 42 220 210 9 0.4 0.91 0.90617 0.00383 43 240 230 8 0.5 0.94 0.936347 0.003653 44 210 240 11 0.5 0.951 0.959169 −0.00817 45 220 230 11 0.5 0.95 0.954643 −0.00464 46 220 230 11 0.6 0.97 0.967844 0.002156 47 230 240 8 0.4 0.901 0.906892 −0.00589 48 230 220 11 0.6 0.966 0.964493 0.001507 49 210 240 11 0.4 0.93 0.943397 −0.0134 50 210 220 9 0.3 0.922 0.918336 0.003664 PP: pulse power (W), PF: pulse frequency (Hz), PW: pulse width (ms), AGP: assist gas pressure (Kg/cm3).Table 3 Comparison between experimental and predicted values of HAZ atR2 = 99.88. No. 
PP (W) (v0) PF (Hz) (v1) AGP (kg/cm2) (v2) PW (ms) (v3) HAZ (mm) Error % Experimental GP 1 240 230 8 0.4 0.04 4.19E − 02 −0.00188 2 230 210 10 0.5 0.112 0.107232 0.004768 3 230 230 12 0.3 0.068 7.13E − 02 −0.00328 4 210 230 10 0.3 0.068 7.17E − 02 −0.00368 5 220 240 12 0.2 0.081 8.02E − 02 0.000789 6 240 220 12 0.3 0.078 7.82E − 02 −0.00023 7 220 210 9 0.4 0.088 8.75E − 02 0.000453 8 250 220 8 0.6 0.104 0.103178 0.000822 9 210 210 8 0.2 0.065 6.80E − 02 −0.00305 10 250 210 12 0.4 0.1 9.97E − 02 0.000311 11 210 230 10 0.4 0.069 7.04E − 02 −0.00139 12 240 220 12 0.4 0.073 7.20E − 02 0.00103 13 230 220 11 0.6 0.085 8.46E − 02 0.000417 14 220 250 8 0.3 0.102 0.101978 2.21E − 05 15 220 230 11 0.6 0.089 8.67E − 02 0.002272 16 220 220 10 0.5 0.094 9.50E − 02 −0.00098 17 240 240 9 0.6 0.105 0.102995 0.002005 18 240 210 11 0.2 0.085 8.12E − 02 0.003847 19 220 250 8 0.2 0.0747 7.28E − 02 0.001893 20 230 250 9 0.5 0.1 0.103908 −0.00391 21 210 220 9 0.3 0.082 8.05E − 02 0.001512 22 210 250 12 0.5 0.0909 9.14E − 02 −0.00051 23 210 250 12 0.6 0.0921 9.28E − 02 −0.00071 24 220 210 9 0.3 0.101 0.101721 −0.00072 25 240 240 9 0.5 0.1 0.10225 −0.00225 26 230 250 9 0.4 0.094 9.11E − 02 0.002885 27 240 250 10 0.6 0.096 9.56E − 02 0.000406 28 250 210 12 0.5 0.097 9.83E − 02 −0.00131 29 230 230 12 0.2 0.076 7.74E − 02 −0.0014 30 220 230 11 0.5 0.092 9.06E − 02 0.001379 31 210 240 11 0.4 0.072 7.25E − 02 −0.00049 32 230 210 10 0.6 0.095 9.25E − 02 0.002548 33 250 250 11 0.3 0.078 7.72E − 02 0.000826 34 250 220 8 0.5 0.105 0.104177 0.000823 35 210 220 9 0.2 0.084 7.68E − 02 0.007168 36 220 240 12 0.6 0.095 9.20E − 02 0.003025 37 230 240 8 0.4 0.055 4.67E − 02 0.008281 38 210 210 8 0.6 0.1 9.79E − 02 0.002054 39 250 230 9 0.2 0.07 7.45E − 02 −0.00448 40 240 250 10 0.2 0.095 8.29E − 02 0.012098 41 240 210 11 0.3 0.08 7.90E − 02 0.001025 42 230 240 8 0.3 0.09 8.87E − 02 0.001309 43 210 240 11 0.5 0.084 9.04E − 02 −0.00638 44 250 240 10 0.3 0.065 7.74E − 02 −0.01241 45 250 230 9 0.6 0.095 9.90E − 02 −0.00402 46 250 250 11 0.4 0.08 0.081854 −0.00185 47 230 220 11 0.2 0.077 0.0783 −0.0013 48 220 220 10 0.4 0.075 0.07292 0.00208 49 250 240 10 0.2 0.091 8.82E − 02 0.002776 50 240 230 8 0.5 0.09 0.090104 −0.0001Table 4 Comparison between experimental and predicted values of taper atR2 = 99.43. No. 
PP (W) (v0) PF (Hz) (v1) AGP (kg/cm2) (v2) PW (ms) (v3) Taper (deg) Error % Experimental GP 1 230 250 9 0.4 3.5 3.48E + 00 0.020539 2 230 240 8 0.4 3.702 3.68349 0.01851 3 210 220 9 0.3 3.815 3.816499 −0.0015 4 230 210 10 0.5 3.75 3.72E + 00 0.034738 5 240 250 10 0.2 4.135 4.12E + 00 0.017045 6 210 240 11 0.4 3.45 3.43E + 00 0.023249 7 230 230 12 0.3 3.925 3.86E + 00 0.064339 8 220 240 12 0.6 3.5 3.487621 0.012379 9 220 240 12 0.2 4.12 4.13E + 00 −0.00979 10 240 220 12 0.3 3.8 3.81E + 00 −0.00709 11 230 240 8 0.3 3.8 3.79E + 00 0.010771 12 220 250 8 0.3 3.962 3.96E + 00 0.006659 13 230 220 11 0.2 4.16 4.16E + 00 −0.00037 14 240 220 12 0.4 3.775 3.757363 0.017637 15 250 230 9 0.6 3.8 3.770205 0.029795 16 210 210 8 0.6 4.2 4.200192 −0.00019 17 230 220 11 0.6 3.75 3.718309 0.031691 18 220 250 8 0.2 4.05 4.10E + 00 −0.04831 19 250 210 12 0.4 4.1 3.96E + 00 0.138649 20 250 220 8 0.5 3.5 3.522696 −0.0227 21 240 240 9 0.5 3.4 3.41E + 00 −0.01021 22 210 210 8 0.2 4.1 4.10E + 00 0.00162 23 240 230 8 0.5 3.505 3.523799 −0.0188 24 210 230 10 0.4 3.58 3.633331 −0.05333 25 220 210 9 0.3 3.6 3.774172 −0.17417 26 250 250 11 0.3 3.8 3.846722 −0.04672 27 210 240 11 0.5 3.307 3.441274 −0.13427 28 250 240 10 0.2 4.1 4.105488 −0.00549 29 220 230 11 0.5 3.3 3.441037 −0.14104 30 240 250 10 0.6 3.8 3.81E + 00 −0.00654 31 230 210 10 0.6 3.305 3.638603 −0.3336 32 230 250 9 0.5 3.38 3.41E + 00 −0.03093 33 230 230 12 0.2 4.05 4.110147 −0.06015 34 210 220 9 0.2 4 4.125376 −0.12538 35 250 230 9 0.2 4.15 4.123817 0.026183 36 240 240 9 0.6 3.75 3.748872 0.001128 37 240 210 11 0.3 3.9 3.86E + 00 0.043294 38 250 210 12 0.5 3.65 3.501807 0.148193 39 210 250 12 0.6 3.25 3.47E + 00 −0.21634 40 210 250 12 0.5 3.45 3.49592 −0.04592 41 220 220 10 0.5 3.6 3.572536 0.027464 42 240 210 11 0.2 3.95 4.11E + 00 −0.16276 43 250 220 8 0.6 3.6 4.114297 −0.5143 44 220 220 10 0.4 3.8 3.694675 0.105325 45 250 250 11 0.4 3.9 3.655343 0.244657 46 240 230 8 0.4 3.6 3.615975 −0.01597 47 220 230 11 0.6 3.4 3.677536 −0.27754 48 220 210 9 0.4 3.5 3.48E + 00 0.023193 49 250 240 10 0.3 3.8 3.80E + 00 0.004566 50 230 250 9 0.4 3.5 3.48E + 00 0.020539Table 5 Comparison between experimental and predicted values of spatter atR2 = 99.3. No. 
PP (W) (v0) PF (Hz) (v1) AGP (kg/cm2) (v2) PW (ms) (v3) Spatter (mm) Error % Experimental GP 1 220 220 10 0.4 0.055 5.49E − 02 0.00013 2 230 220 11 0.6 0.044 4.42E − 02 −0.00018 3 210 250 12 0.6 0.044 4.34E − 02 0.000648 4 240 220 12 0.3 0.043 4.32E − 02 −0.00022 5 230 240 8 0.3 0.04 4.36E − 02 −0.00365 6 210 230 10 0.3 0.042 4.21E − 02 −7.3E − 05 7 250 240 10 0.2 0.065 6.50E − 02 3.18E − 05 8 230 230 12 0.2 0.043 4.31E − 02 −7.8E − 05 9 240 240 9 0.6 0.072 7.16E − 02 0.000406 10 230 220 11 0.2 0.045 4.50E − 02 −1.6E − 05 11 250 230 9 0.2 0.068 6.82E − 02 −0.00024 12 220 240 12 0.6 0.052 5.23E − 02 −0.00028 13 210 250 12 0.5 0.043 4.29E − 02 6.36E − 05 14 220 210 9 0.4 0.059 5.94E − 02 −0.00045 15 210 220 9 0.2 0.0445 0.044101 0.000399 16 220 240 12 0.2 0.05 5.02E − 02 −0.00019 17 230 210 10 0.5 0.072 7.03E − 02 0.001661 18 230 240 8 0.4 0.071 6.97E − 02 0.001265 19 240 210 11 0.2 0.056 5.54E − 02 0.000566 20 230 250 9 0.5 0.074 7.25E − 02 0.001524 21 240 230 8 0.4 0.075 0.064522 0.010478 22 220 250 8 0.3 0.074 0.071546 0.002454 23 250 210 12 0.5 0.046 5.18E − 02 −0.00581 24 210 240 11 0.4 0.043 4.20E − 02 0.000993 25 230 210 10 0.6 0.049 5.21E − 02 −0.00307 26 250 250 11 0.3 0.065 5.50E − 02 0.010003 27 210 230 10 0.4 0.04 4.28E − 02 −0.00277 28 210 210 8 0.6 0.043 0.042277 0.000723 29 210 240 11 0.5 0.045 0.044648 0.000352 30 220 230 11 0.6 0.057 5.40E − 02 0.003045 31 240 220 12 0.4 0.042 4.29E − 02 −0.00093 32 240 250 10 0.2 0.066 6.64E − 02 −0.00042 33 230 230 12 0.3 0.041 4.27E − 02 −0.00171 34 230 250 9 0.4 0.069 6.84E − 02 0.000556

Furthermore, Figure 2 shows the regression fit for the percentage of microdrilling process parameters of the MMCs. Figure 3 shows the experimental-predicted relationship of circularity, HAZ, taper, and spatter with normal distribution behaviour. The models generated by genetic programming perform better in statistical terms on the historical dataset (training, validation, and applied data) and exhibit a better predictive capacity on the experimental dataset. Analysis of the mathematical expressions of the GP model suggests the specific laser output quality characteristics of the experimental system under the various control factors, which can be associated with the performance of a laser microdrilled hole in MMCs. The models generated by genetic programming allow the experimental data to be represented without detailed knowledge of the underlying phenomenon. In addition, their study gives deeper insight into the factors relevant to describing the quality phenomenon; for instance, changes in any of the factors are reflected in the observed quality of the microhole produced and can thus be associated with changes in the performance of hole production in the MMC.

Figure 2 Regression fit for the percentage of microdrilling process parameters of the MMCs. (a) Experimental-predicted value of circularity. (b) Experimental-predicted value of HAZ. (c) Experimental-predicted value of taper. (d) Experimental-predicted value of spatter.

Figure 3 Experimental-predicted relationship of (a) circularity, (b) HAZ, (c) taper, and (d) spatter.

## 5. Conclusions In the present work, new models of the circularity, spatter, heat affected zone, and taper properties of MMC drilling at different laser operating parameters are generated using the Discipulus GP software and C programming.
Using the GP-based mathematical models, the quality of the laser-drilled hole properties of MMCs under various laser input parameters is determined quickly by substituting the laser input conditions, and the actual results can be predicted without conducting any experiments. The comparison between the new GP-based models and the experimental results indicated that the new models agree with experiment to within about ±0.006 to ±0.009. Therefore, the new models can be considered an alternative method for estimating these properties when experimental measurements or correlations are not available. The correctness of the solutions achieved by GP depends on the chosen evolutionary parameters, the number of experimental results, and their level of accuracy. To improve the structure of the model during evolution, more information was supplied through experimental measurements. The proposed mathematical models were thereby verified to forecast experimental results with a reliability of 99.3 to 99.8%. In the testing stage, the GP model gave the same results as found in the experiments, with essentially 100% reliability. The GP approach has thus proved to be a highly capable and advantageous tool for recognizing correlations in data when no suitable theoretical or other methods are possible or available. --- *Source: 1024365-2019-01-02.xml*
# Bound on the Minimum Eigenvalue of H-Matrices Involving Hadamard Products **Authors:** Kun Du; Guiding Gu; Guo Liu **Journal:** Algebra (2013) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2013/102438 --- ## Abstract We present a new lower bound on the minimum eigenvalue of H-matrices involving Hadamard products τ(A1(α1)∘⋯∘Am(αm)), and we show that our lower bound is larger than the lower bound ∏k=1m[τ(Ak)]αk. Three examples verify our result. --- ## Body ## 1. Introduction In [1], it is shown by Theorem 5.7.15 that if the Ak are n×n H-matrices for all k∈{1,…,m}, and αk≥0 satisfy ∑k=1mαk≥1, then (1)τ(A1(α1)∘⋯∘Am(αm))≥∏k=1m‍[τ(Ak)]αk, where A(α) is defined entrywise and any scalar definition of aα such that |aα|=|a|α is allowed. This theorem provided a beautiful result about the minimum eigenvalue of H-matrices involving Hadamard products, but sometimes the inequality can be very weak. For example, (2)A=(100-21-3001-12001),B=(100-191000-81-171),C=(10001000100) are H-matrices, and (3)τ(A(2)∘B(2)∘C(2))=10000≫τ(A)2τ(B)2τ(C)2=1; see the details in Section 3. A lot of work has been done on the minimum eigenvalue of M-matrices and H-matrices involving Hadamard products; see the results in [1–6]. In this paper, we present a new lower bound by including diagonal entries, and we prove that our bound is larger than the bound in (1). We now introduce some notation; see [1]. The Hadamard product of A=[aij]m×n and B=[bij]m×n is defined by A∘B≡[aijbij]m×n. We define |aα|=|a|α and 00=1. Let A=[aij]n×n and denote aij by A(i,j) for every i,j∈{1,…,n}. We denote A(r)=[aijr], where r≥0, and denote by Zn the class of all n×n real matrices all of whose off-diagonal entries are nonpositive. Let A∈Zn; then the minimum eigenvalue of A is defined by τ(A)=min{Re(λ):λ∈σ(A)}, and τ(A) is an eigenvalue of A. For two real matrices A,B∈Zn, the Fan product of A and B, denoted by A⋆B=C=[cij], is defined by (4)cij={-aijbij,ifi≠j,aiibii,ifi=j, and then A⋆B∈Zn. The comparison matrix M(A)=[mij] of a given matrix A=[aij]∈ℂn×n is defined by (5)mij={-|aij|,ifi≠j,|aii|,ifi=j. A matrix A is an H-matrix if its comparison matrix M(A) is an M-matrix. For A∈ℂn×n, we define τ(A)≡τ(M(A)). Last, for A∈Zn, we introduce a new definition A[r]=[dij], where (6)dij={-|aij|r,ifi≠j,|aii|r,ifi=j. ## 2. Main Results To prove the main theorem, we need several lemmas. Lemma 1. Let xj=(xj(1),…,xj(n))T≥0 for all j∈{1,…,m}. If pj>0 and ∑j=1m(1/pj)=r≥1, then (7)∑i=1n‍∏j=1m‍xj(i)≤∏j=1m‍{∑i=1n‍[xj(i)]pj}1/pj. Proof. When r=1, the inequality follows from the Hölder inequality and induction. When r>1, since ∑j=1m(1/rpj)=1, the inequality follows similarly. If A∈Zn, then A=αI-B for some B≥0 and some α∈ℝ. Let σ(B)={ρ(B),λ2,…,λn}. Then σ(A)={α-ρ(B),α-λ2,…,α-λn}. If there were a λi such that α-ρ(B)=Re(α-ρ(B))>Re(α-λi)=α-Re(λi), then ρ(B)<Re(λi), which is a contradiction. So τ(A)=α-ρ(B). If x≥0 is the right Perron eigenvector of B, then Ax=(αI-B)x=αx-Bx=αx-ρ(B)x=(α-ρ(B))x=τ(A)x. If y≥0 is the left Perron eigenvector of B, then yTA=yT(αI-B)=αyT-yTB=αyT-ρ(B)yT=(α-ρ(B))yT=τ(A)yT. So, similar to the Perron-Frobenius theorem, we have the following: if A∈Zn is irreducible, then there exist positive vectors u and v such that Au=τ(A)u and vTA=τ(A)vT, u and v being called the right and left Perron eigenvectors of A, respectively. Lemma 2. If A∈Zn is irreducible and Az≥kz for a nonnegative nonzero vector z, then k≤τ(A). Proof. A=αI-P, where P≥0 is irreducible and α∈ℝ.
By the Perron-Frobenius theorem, P has a positive left Perron vector y, that is, yTP=ρ(P)yT. Note that Az-kz=(α-k)z-Pz≥0. Hence, yT(Az-kz)=(α-k)yTz-ρ(P)yTz=(α-k-ρ(P))yTz≥0. Since yTz>0, we have k≤α-ρ(P)=τ(A). Lemma 3. Let Ak∈Zn for all k∈{1,…,m}. If pk>0 and ∑k=1m(1/pk)≥1, then (8)τ(A1⋆⋯⋆Am)≥min1≤i≤n{∏k=1m‍Ak(i,i)-∏k=1m‍[|Ak(i,i)|pk-τ(Ak[pk])]1/pk}. Proof. It is quite evident that the conclusion holds with equality for n=1. For n≥2, we have two cases. Case 1. A1⋆⋯⋆Am∈Zn is irreducible. Then the Ak∈Zn are irreducible for all k∈{1,…,m}. Thus, the Ak[pk]∈Zn are irreducible for all k∈{1,…,m}. Let uk(pk)=(uk(1)pk,…,uk(n)pk)T>0 be the right Perron eigenvectors of Ak[pk] for all k∈{1,…,m}. If uk(1)=uk=(uk(1),…,uk(n))T>0 for all k∈{1,…,m}, then for every i∈{1,…,n}, we have (9)Ak[pk]uk(pk)=τ(Ak[pk])uk(pk),|Ak(i,i)|pkuk(i)pk-∑j≠i‍|Ak(i,j)|pkuk(j)pk=τ(Ak[pk])uk(i)pk,∑j≠i‍|Ak(i,j)|pkuk(j)pk=[|Ak(i,i)|pk-τ(Ak[pk])]uk(i)pk≥0. Let C=A1⋆⋯⋆Am∈Zn and let z=u1∘⋯∘um=(z(1),…,z(n))T>0. Then z(i)=∏k=1muk(i) for all i∈{1,…,n}. Then, for all i∈{1,…,n}, we have (10)(Cz)i=∏k=1m‍Ak(i,i)z(i)-∑j≠i‍(∏k=1m‍|Ak(i,j)|)z(j)=∏k=1m‍Ak(i,i)z(i)-∑j≠i‍∏k=1m‍(|Ak(i,j)|uk(j))≥∏k=1m‍Ak(i,i)z(i)-∏k=1m‍{∑j≠i‍|Ak(i,j)|pkuk(j)pk}1/pk=∏k=1m‍Ak(i,i)z(i)-∏k=1m‍{[|Ak(i,i)|pk-τ(Ak[pk])]uk(i)pk}1/pk={∏k=1m‍Ak(i,i)-∏k=1m‍[|Ak(i,i)|pk-τ(Ak[pk])]1/pk}z(i). The "≥" holds by Lemma 1. By Lemma 2, we have (11)τ(A1⋆⋯⋆Am)≥min1≤i≤n{∏k=1m‍Ak(i,i)-∏k=1m‍[|Ak(i,i)|pk-τ(Ak[pk])]1/pk}. Case 2. A1⋆⋯⋆Am∈Zn is reducible. We denote by T the n×n permutation matrix (tij) with (12)t12=t23=⋯=tn-1,n=tn1=1 and the remaining tij zero; then all the Ak-εT∈Zn are irreducible for any chosen positive real number ε. Now we substitute Ak-εT for Ak in the previous case, and then, letting ε→0, the result follows by continuity. Lemma 4 (see [7]). Let xi≥yi≥0 for all i∈{1,…,n} and pi>0 satisfying ∑i=1n(1/pi)≥1. Then (13)∏i=1n‍xi-∏i=1n‍yi≥∏i=1n‍(xipi-yipi)1/pi. The following is our main theorem. Theorem 5. Let αk≥0 and ∑k=1mαk≥1. If the Ak are n×n H-matrices for all k∈{1,…,m}, then (14)τ(A1(α1)∘⋯∘Am(αm))≥min1≤i≤n{∏k=1m‍|Ak(i,i)|αk-∏k=1m‍[|Ak(i,i)|-τ(Ak)]αk}≥∏k=1m‍[τ(Ak)]αk. Proof. Without loss of generality, we can assume that αk>0 for all k∈{1,2,…,m}. Let pk=1/αk>0; then ∑k=1m(1/pk)=∑k=1mαk≥1. Now we have (15)τ(A1(α1)∘⋯∘Am(αm))=τ(M(A1(α1)∘⋯∘Am(αm)))=τ(M(A1(α1))⋆⋯⋆M(Am(αm)))≥min1≤i≤n{∏k=1m‍|Ak(i,i)|αk-∏k=1m‍[|Ak(i,i)|-τ(M(Ak(αk))[pk])]1/pk}=min1≤i≤n{∏k=1m‍|Ak(i,i)|αk-∏k=1m‍[|Ak(i,i)|-τ(Ak)]αk}≥min1≤i≤n∏k=1m‍[|Ak(i,i)|-(|Ak(i,i)|-τ(Ak))]1/pk=∏k=1m‍[τ(Ak)]αk. The first "≥" holds by Lemma 3. Because each Ak is an H-matrix, we have 0≤τ(Ak)≤|Ak(i,i)|, which means that |Ak(i,i)|≥|Ak(i,i)|-τ(Ak)≥0. So the second "≥" holds by |Ak(i,i)|αk≥[|Ak(i,i)|-τ(Ak)]αk≥0 and Lemma 4. ## 3. Examples In this section, we present three examples to illustrate our improved bound. Example 6. We take the matrices A, B, and C from Section 1. It is easy to get τ(A)=τ(B)=τ(C)=1, so the lower bound in (1) is τ(A)2τ(B)2τ(C)2=1; on the other hand, τ(A(2)∘B(2)∘C(2))=10000, which is much larger, that is, (16)τ(A(2)∘B(2)∘C(2))=10000≫τ(A)2τ(B)2τ(C)2=1. However, our lower bound in Theorem 5 is (17)min1≤i≤3{aii2bii2cii2-[aii-τ(A)]2[bii-τ(B)]2[cii-τ(C)]2}=min1≤i≤3{1002×12×12-(100-1)2×(1-1)2×(1-1)2}=10000, which is the exact value of τ(A(2)∘B(2)∘C(2)). Example 7. Let (18)A=(11-5-8-214-7-1-39),B=(9-8-7-213-4-1-615).
It is easy to see that A and B are H-matrices and that τ(A)=3.9152, τ(B)=4.1345. Thus, we have (19)τ(A(2)∘B(2))=9799>τ(A)2τ(B)2=262.0319, but our bound is (20)min1≤i≤3{aii2bii2-[aii-τ(A)]2[bii-τ(B)]2}=8612.8, which is close to the exact value of 9799. Example 8. Take the H-matrices (21)A=(15-12-4-314-2-4011),B=(27-14-6-420-8-10-523). It is easy to see that (22)τ(A(2)∘B(1))=2775.5>τ(A)2τ(B)1=244.0, but our bound is (23)min1≤i≤3{aii2bii1-[aii-τ(A)]2[bii-τ(B)]1}=2340.3. Also, our bound is much closer to the exact value than the lower bound in (1).
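The bounds in Example 7 can be checked numerically; below is a minimal sketch assuming numpy, with τ computed as the minimum real eigenvalue of the comparison matrix, following the definitions of Section 1.

```python
import numpy as np

def tau(X):
    """tau(X) = minimum real eigenvalue of the comparison matrix M(X):
    |x_ii| on the diagonal, -|x_ij| off the diagonal."""
    M = -np.abs(X) + 2.0 * np.diag(np.abs(np.diag(X)))
    return np.linalg.eigvals(M).real.min()

A = np.array([[11., -5., -8.], [-2., 14., -7.], [-1., -3., 9.]])
B = np.array([[9., -8., -7.], [-2., 13., -4.], [-1., -6., 15.]])

exact = tau(A**2 * B**2)              # tau(A^(2) o B^(2)); paper reports 9799
old_bound = tau(A)**2 * tau(B)**2     # bound (1); paper reports 262.0319
new_bound = min(A[i, i]**2 * B[i, i]**2
                - (A[i, i] - tau(A))**2 * (B[i, i] - tau(B))**2
                for i in range(3))    # Theorem 5 bound; paper reports 8612.8
print(exact, old_bound, new_bound)
```

--- *Source: 102438-2013-06-19.xml*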
# Route-Based Signal Preemption Control of Emergency Vehicle **Authors:** Haibo Mu; Yubo Song; Linzhong Liu **Journal:** Journal of Control Science and Engineering (2018) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2018/1024382 --- ## Abstract This paper focuses on the signal preemption control of emergency vehicles (EV). A route-based signal preemption control method is proposed to reduce the time delay of an EV at intersections. According to the time at which the EV is detected and the current phase of each intersection on the travelling route of the EV, calculation methods for the earliest start time and the latest start time of the green light at each intersection are given. Consequently, the effective time range of the green light at each intersection is determined in theory. A multiobjective programming model is presented whose objectives are the minimal residence time of the EV at all intersections and the maximal number of general social vehicles passing through. Finally, a simulation calculation is carried out. The calculation results indicate that, by adopting the route-based signal preemption method, the delay of the EV is reduced and the number of social vehicles passing through the whole system is increased. The route-based signal preemption control method can thus reduce the time delay of the EV and improve the evacuation efficiency of the system. --- ## Body ## 1. Introduction Once an unexpected event has happened, an evacuation or a rescue must be carried out to help people in hazardous situations, and emergency vehicles (EV) play an important role in this process. To reduce the negative impact of unexpected events, EVs are used to transfer people from dangerous areas to emergency shelters or medical assistance organizations as rapidly as possible. As is well known, the quality of the emergency service relies on the travel time that the EV spends on the evacuation route [1, 2]. Therefore, an EV has a greater priority than normal social vehicles, and a signal preemption policy should be adopted to ensure its rapid transit. However, under the condition of traffic jams, or if there is no emergency lane, the queuing vehicles may prevent the EV from passing through the intersection. Even if there is an emergency lane, to avoid conflicts the EV may still have to wait if the green light is being given to the opposing direction. To ensure that EVs can pass through intersections safely and rapidly, several scholars have proposed signal preemption strategies. Paniati and Amoni [3] summarized the advantages of traffic signal preemption for EVs in improving response speed, ensuring safety, saving cost, and other aspects, and pointed out some of the key technologies used in traffic signal preemption systems as well as the associated responsible institutions. Mirchandani and Lucas [4] studied control strategies integrating transit signal priority and rail/emergency preemption within a dynamic programming-based real-time traffic adaptive signal control system. Song [5] proposed a multiagent-based traffic signal priority control method for multiple EVs and obtained signal priority control strategies through the coordination of phase agents and management agents. Taking the minimum delay of the EV at a single intersection as the control target, three signal priority strategies, namely, green time extension, red time shortening, and multiphase integrated control, were proposed by Yang et al. [6], and Vissim was adopted to evaluate the proposed strategies.
Ma and Cui [7] studied the signal priority problem of EVs coming from different directions that had to pass through the same intersection in a certain time period and presented a multiagent-based signal priority control system for multiple EVs. Wang et al. [8] proposed a degree-of-priority based control strategy for emergency vehicle preemption operation to decrease the impact of emergency vehicles on normal traffic, and the performance of the proposed strategy was compared with the conventional local-detection-based method under a microscopic simulation model. To reduce the response time and minimize the impact of EV operation on general traffic, Qin [9] proposed two new control strategies for EV signal preemption, where the first strategy was developed to enable the signal transition from normal operation to EV signal preemption and the second was used for the transition from EV signal preemption back to normal operation. Most of the signal preemption methods currently adopted are based on isolated intersections: taking a single intersection as the basis, signal preemption at that intersection is activated after a local detector has detected the EV, thus forming a signal priority sequence from intersection to intersection [10, 11]. This kind of signal preemption strategy, which relies on local detection and clears intersections one by one, can start only after an EV has been detected and usually leads to inevitable intersection delay. Moreover, when multiple signal preemptions are implemented during peak periods, the queues and delays of social vehicles in a given network will be significantly affected [12]. Studies have shown that implementing signal preemption from the perspective of the entire route can reduce the response time of the EV [13]. Although substantial progress has been made in signal preemption at isolated intersections, research on dynamic signal preemption based on the entire route is rarely seen. Kwon and Kim [14] developed a route-based signal preemption strategy. They used Dijkstra's algorithm to obtain an optimal path from a given origin to a desired destination so as to ensure a minimal travel cost under current traffic conditions. Their control algorithm chose a specific phase combination for each intersection on the selected route sequentially when the EV arrived. However, the main purpose of that research was to ensure the safety of pedestrians passing through the intersection, and the real-time traffic conditions of the various intersections were ignored. To reduce the travel time of EVs, Gedawy et al. [15] combined signal preemption with dynamic route planning and designed the EV route from the perspective of the network. A signal preemption strategy that could ensure the safe operation of an EV and at the same time maximize the traffic flow passing through the intersection was proposed. In order to provide a green wave for the EV, Kang et al. [16] proposed a coordinated signal control method and carried out a traffic simulation taking eight intersections as an example. All the methods mentioned above ensured the smooth running of the EV by determining in advance the signal phase of each intersection after the EV had been detected. Since the green signal stayed at the designated phase, traffic flows from the other signal phases lost the right of way and had to wait a long time at the intersection, which in turn increased the burden on the traffic system and even caused traffic jams.
Therefore, it is especially important to establish a route-based dynamic preemption strategy so as to provide an efficient and safe operating environment for the EV while minimizing the interference with social vehicles. So far, most of the developed preemption systems operate on a single-intersection basis and require local detection of the EV at each intersection to activate the signal preemption sequence. Because these strategies depend on local detection, clear intersections one by one, and cannot start the preemption procedure until an EV is detected, inherent delays at intersections are unavoidable. The basic idea of route-based signal preemption control of an EV is that, when an unexpected event occurs, the route by which the EV can reach the scene of the accident in the shortest time is calculated in consideration of dynamically changing traffic situations and recommended as the evacuation route. Once an evacuation route is selected, a specific signal preemption control strategy is applied to determine the activation time for preemption at each intersection on the emergency route and ultimately realize the signal preemption control of the EV. Evacuation route planning is an important component of emergency management that seeks to minimize the loss of life or harm to the public during unexpected events [17]. The aim of evacuation route planning is to minimize the evacuation time under certain constraints and to minimize the exposure of people from accident areas to further hazards. Nowadays, almost all intersections in urban traffic networks are signalized. In order to achieve smooth traffic under real road conditions, traffic management departments usually adopt regulation measures, including no-left-turn, no-right-turn, P-turn, and U-turn, to improve the safety and efficiency of traffic. For this reason, a road network that is ostensibly connected may not actually be connected, which adds difficulty to shortest-path algorithms and makes some classical path-finding methods inapplicable. Many scholars have intensively studied the shortest path problems of road networks with traffic restrictions [18–23]. Various techniques have been proposed to eliminate certain turning and crossing maneuvers at intersections so as to obtain evacuation routes that provide continuous traffic flow and reduce accidents [24–30]. There has also been a considerable amount of evacuation route work that considers the delay and capacity of intersections [31, 32]. The majority of the evacuation route studies in the literature can be divided into three categories: network flow methods, which can be further divided into linear programming and dynamic minimum-cost flow formulations [24, 33–38]; simulation methods [39, 40]; and heuristic methods, which mainly include the well-known Capacity Constrained Route Planner (CCRP) algorithm and its improved variants [41–47]. Research on evacuation route planning has made substantial progress, and the development of modern detection equipment and information technology ensures the feasibility of real-time traffic flow detection and of information exchange among different intersection controllers. These technologies make it possible to implement route-based signal preemption control of EVs, but a safe and efficient way to change the traffic signal timing plan, namely, the control mechanism, has not been well studied. In this paper, we study the problem of dynamic signal preemption control of an EV based on the entire route.
We limit the scope of this paper to signal preemption control of EVs under the assumption that the evacuation route has already been determined; the problem of evacuation route calculation is not discussed further. The objectives of the research reported here are to

1. propose a route-based signal preemption strategy that provides an efficient and safe operating environment for EVs and minimizes the impact on social vehicles in the traffic network as far as possible,
2. develop a multiobjective optimization model that minimizes the residence time of the EV at all intersections and maximizes the number of social vehicles passing through all intersections, and
3. design a solution algorithm for the presented optimization model and verify the efficiency of the proposed signal preemption strategy.

First, given a concrete evacuation route and taking each isolated intersection as the object, the earliest-possible start time and the latest-possible start time of the green light at each intersection on the route are derived, considering the distance between the EV detector and each intersection, the operating speed of the EV, and the number of queued vehicles at each intersection. Then, to establish a real-time signal control strategy that lets the EV pass through each intersection at operating speed or without stopping while minimizing the impact of the EV on social vehicles, a multiobjective programming model is presented and a particle swarm algorithm is designed to find the Pareto optimal solution set of this model. Finally, a simulation analysis is carried out.

## 2. Calculation of Intersection Time Parameters

After the optimal evacuation route of the EV has been given, to keep the delay of the EV as small as possible and the number of social vehicles passing through the whole system as large as possible, we need to determine the most suitable time to start the green light for the EV at each intersection on the given route. Suppose that there are $n$ intersections on the evacuation route, excluding the given origin and the desired destination, and that the evacuation direction is from west to east, as shown in Figure 1. We use Figure 1 only as an example to introduce the following concepts and the calculation of the intersection parameters; in fact, an EV may turn left or right at some intersections on the given evacuation route, and the following methods also apply to such routes. We assume that the EV is detected at time $t_0$; $q_i$, $i=1,2,\ldots,n$, denotes the number of vehicles waiting to pass through intersection $i$ in front of the EV along the evacuation direction at time $t_0$; and $q_i^N$, $q_i^S$, $q_i^W$, $q_i^E$, $i=1,2,\ldots,n$, denote the numbers of vehicles waiting to pass through intersection $i$ in the north-southward, south-northward, west-eastward, and east-westward directions at time $t_0$, respectively. Suppose that all intersections are two-phase controlled and have a fixed signal cycle, as shown in Figure 2. Each intersection $i$ corresponds to a time window sequence $W_L(i)=(w_{si}, w_{i1}, w_{i2})$, where $w_{si}$ denotes the start time of the first time window and $w_{ik}$, $k=1,2$, denotes the $k$th time window of intersection $i$.

Figure 1: Diagram of intersection.

Figure 2: Phase diagram of traffic signal.

The time parameters considered in this paper are the earliest-possible start time and the latest-possible start time of the green light at each intersection.
**Definition 1.** The earliest-possible start time is the earliest time at which the traffic light for the EV direction can be changed to green, counted from the time the EV is detected, on the premise of preserving the efficiency of the whole system.

**Definition 2.** The latest-possible start time is the latest time at which the traffic light for the EV direction must be changed to green so that the EV can pass through the intersection without reducing speed or stopping.

### 2.1. Calculation of Time Parameters of Intersection 1

We use $d_{ij}$ to denote the duration of phase $j$ at intersection $i$ and $t_0$ to denote the time at which the EV is detected. If the start time of the first time window of intersection 1, denoted by $w_{s1}$, is known and the durations of the two phases, $d_{11}$ and $d_{12}$, are also known, then the current phase of intersection 1 and the elapsed green time of this phase can be calculated accordingly.

Let $C_i$ denote the cycle length of intersection $i$ and $D_{1,j}$ the cumulative green time of intersection 1 after $j$ phases; let $a$ denote the number of complete cycles elapsed in the period $t_0-w_{s1}$ and $b$ the remaining time after those $a$ cycles. Then we have

$$D_{1,0}=0,\qquad D_{1,j}=\sum_{p=1}^{j} d_{1p},\quad j=1,2. \tag{1}$$

Let

$$a=\left\lfloor \frac{t_0-w_{s1}}{C_1} \right\rfloor,\qquad b=(t_0-w_{s1}) \bmod C_1; \tag{2}$$

then the current phase is $i$ if $D_{1,i-1}\le b < D_{1,i}$, and the elapsed green time of phase $i$ is $t_0-w_{s1}-a C_1-D_{1,i-1}$, that is, $b-D_{1,i-1}$.
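As a concrete illustration, the following minimal Python sketch implements (1)–(2); the function name and the representation of the phase plan as a list of durations are our own assumptions, not part of the original method.

```python
def current_phase_state(t0, ws, durations):
    """Locate the running phase and its elapsed green time (eqs. (1)-(2)).

    t0        -- time at which the EV is detected (s)
    ws        -- start time of the first time window of the intersection (s)
    durations -- [d_1, d_2]: green durations of the two phases (s)
    """
    C = sum(durations)              # fixed cycle length C
    b = (t0 - ws) % C               # remaining time after the complete cycles, eq. (2)
    # cumulative green times D_0 = 0, D_j = d_1 + ... + d_j, eq. (1)
    D = [0]
    for d in durations:
        D.append(D[-1] + d)
    # the current phase is i if D_{i-1} <= b < D_i
    for i in range(1, len(D)):
        if D[i - 1] <= b < D[i]:
            return i, b - D[i - 1]  # phase index and elapsed green time


# Example: two 28 s phases starting at ws = 0, EV detected at t0 = 40 s
print(current_phase_state(40, 0, [28, 28]))  # -> (2, 12): phase 2, 12 s elapsed
```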
Let $G_{\min}^{j}$, $G_{\max}^{j}$, $G_{ela}^{j}$, and $G_{real}^{j}$ denote the minimal green time, the maximal green time, the elapsed green time, and the real green time of phase $j$, respectively; $L_{EV\text{-}i}$ denotes the distance between the EV detector and intersection $i$; $v_{EV}$ and $v_{reg}$ denote the speed of the EV and the mean speed of general traffic, respectively; $t_{switch}$ denotes the time needed to switch the green indication from one phase to another; and $t_{safe}$ is the safety time interval that must be kept between the last vehicle in the queue on the EV approach and the EV so as to avoid a collision between the EV and social vehicles. This interval is an important factor in ensuring the safe operation of emergency vehicles and can be provided by an appropriate advance notice of the approaching EV together with a real-time queue control strategy. $B$ is the average effective length of each vehicle when parked, equal to the sum of the vehicle length and the gap between adjacent vehicles. Let $t_E(i)$ and $t_L(i)$, $i=1,2,\ldots,n$, denote the earliest-possible and latest-possible start times of the green phase on the EV approach; we illustrate their calculation beginning with the first intersection.

#### 2.1.1. The Calculation of $t_E(1)$

*(1) Phase 1 is the current phase of intersection 1 at time $t_0$.* If the elapsed green time $G_{ela}^{1}$ satisfies $L_{EV\text{-}1}/v_{EV}+G_{ela}^{1}\le G_{\max}^{1}$, then $t_E(1)=t_0$ and the real start time of the green signal at intersection 1 is $t_0$. Otherwise, if $L_{EV\text{-}1}/v_{EV}+G_{ela}^{1}> G_{\max}^{1}$, we must determine whether $G_{ela}^{1}\ge G_{\min}^{1}$ holds at time $t_0$. If it holds, the green indication of phase 1 is ended and starts again after the minimal green time of phase 2 has finished. Beforehand, we must check whether, after phase 1 regains the green indication, the queued vehicles at the intersection can be cleared before the EV arrives while maintaining the safety interval between the EV and social vehicles; that is, whether

$$t_{switch}+G_{\min}^{2}+t_{switch}+\frac{q_1 B}{v_{reg}}+t_{safe}\le \frac{L_{EV\text{-}1}}{v_{EV}}, \tag{3}$$

in which case $t_E(1)=t_0+2t_{switch}+G_{\min}^{2}$. If instead $G_{ela}^{1}< G_{\min}^{1}$, the green indication of phase 1 is kept for another $G_{\min}^{1}-G_{ela}^{1}$ seconds, after which the signal transformation described above is carried out, provided that

$$G_{\min}^{1}-G_{ela}^{1}+t_{switch}+G_{\min}^{2}+t_{switch}+\frac{q_1 B}{v_{reg}}+t_{safe}\le \frac{L_{EV\text{-}1}}{v_{EV}}. \tag{4}$$

It can be seen from (3) and (4) that, when $G_{ela}^{1}<G_{\min}^{1}$, if (4) is satisfied then (3) is necessarily satisfied. Let $G_{low}$ denote the allowed lower bound of green time, which is smaller than $G_{\min}^{1}$, and let $G_{short}^{j}$ denote the shortened green time of phase $j$. If (3) is satisfied while (4) is not, then $G_{low}$ is used: if $G_{ela}^{1}\ge G_{low}$, then $t_E(1)=t_0+G_{\min}^{2}+2t_{switch}$. If (3) is not satisfied either, the green time of phase 2 is shortened to $G_{short}^{2}$ so as to satisfy

$$t_{switch}+G_{short}^{2}+t_{switch}+\frac{q_1 B}{v_{reg}}+t_{safe}= \frac{L_{EV\text{-}1}}{v_{EV}}, \tag{5}$$

which gives

$$G_{short}^{2}=\frac{L_{EV\text{-}1}}{v_{EV}}-\left(2t_{switch}+\frac{q_1 B}{v_{reg}}+t_{safe}\right). \tag{6}$$

If $G_{short}^{2}<G_{low}$, the green time of phase 2 would be so short that it would do little to relieve traffic pressure and could create safety problems; therefore $t_E(1)=t_0$ in this condition. Otherwise, the green time of phase 2 equals $G_{short}^{2}$, and $t_E(1)=t_0+G_{short}^{2}+t_{switch}$. The real start time of the green signal is $t_E(1)$.

*(2) Phase 2 is the current phase of intersection 1 at time $t_0$.* First calculate $G_{ela}^{2}$. If $G_{ela}^{2}\le G_{\min}^{2}$, it should generally be guaranteed that the green duration of phase 2 is $G_{\min}^{2}$ seconds before switching to phase 1 after the fixed yellow and red durations. However, it must also hold that

$$G_{\min}^{2}-G_{ela}^{2}+t_{switch}+\frac{q_1 B}{v_{reg}}+t_{safe}\le \frac{L_{EV\text{-}1}}{v_{EV}}, \tag{7}$$

in which case $t_E(1)=t_0+G_{\min}^{2}-G_{ela}^{2}+t_{switch}$. Otherwise, we check whether

$$G_{low}-G_{ela}^{2}+t_{switch}+\frac{q_1 B}{v_{reg}}+t_{safe}\le \frac{L_{EV\text{-}1}}{v_{EV}} \tag{8}$$

is satisfied; if so, $t_E(1)=t_0+G_{low}-G_{ela}^{2}+t_{switch}$.

If $G_{ela}^{2}> G_{\min}^{2}$, we determine whether

$$t_{switch}+\frac{q_1 B}{v_{reg}}+t_{safe}\le \frac{L_{EV\text{-}1}}{v_{EV}} \tag{9}$$

is satisfied; if so, $t_E(1)=t_0+t_{switch}$.

#### 2.1.2. The Calculation of $t_L(1)$

Let $t_{arr}^{i}$ denote the time at which the EV arrives at intersection $i$ without reducing its speed; then

$$t_{arr}^{1}=t_0+\frac{L_{EV\text{-}1}}{v_{EV}}. \tag{10}$$

Let $t_{clear}^{i}$ denote the time needed to clear the queue at intersection $i$ before the EV arrives; then

$$t_{clear}^{1}=\frac{q_1 B}{v_{reg}}. \tag{11}$$

The value of $t_L(1)$ is therefore

$$t_L(1)=t_0+\frac{L_{EV\text{-}1}}{v_{EV}}-\frac{q_1 B}{v_{reg}}-t_{safe}. \tag{12}$$

If the distance between the EV detector and intersection 1 is long enough and the queued vehicles are not too many, $t_L(1)$ is certainly greater than or equal to $t_E(1)$. However, if traffic is heavy and clearing the queued vehicles takes a long time, $t_L(1)$ may be less than $t_E(1)$ when phase 2 is running. In this case, to ensure the smooth and safe passage of the EV, the green time of phase 2 has to be sacrificed: the green signal is switched to phase 1 after phase 2 has executed a short green time that is less than the minimum green time, and the value of $t_E(1)$ becomes $t_0+t_{switch}$. If $t_L(1)$ is still less than $t_E(1)$, we set $t_L(1)=t_E(1)=t_0+t_{switch}$. In this situation it is impossible for the EV to pass through the intersection without decelerating or stopping, and all we can do is minimize the waiting time of the EV at intersection 1.
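The decision logic of Sections 2.1.1 and 2.1.2 can be condensed into a short sketch. The following Python fragment covers only the phase-2 branch (eqs. (7)–(9)) and the latest start time (12); the function and parameter names are illustrative assumptions, not the authors' implementation.

```python
def latest_start(t0, L, v_ev, q, B, v_reg, t_safe):
    """Latest-possible green start t_L(1), eq. (12)."""
    return t0 + L / v_ev - q * B / v_reg - t_safe


def earliest_start_phase2(t0, L, v_ev, q, B, v_reg, t_safe,
                          g_ela2, g_min2, g_low, t_switch):
    """Earliest-possible green start t_E(1) when phase 2 is running,
    following the checks of eqs. (7)-(9)."""
    clear = q * B / v_reg + t_safe          # queue clearing + safety margin
    travel = L / v_ev                       # EV travel time to the stop line
    if g_ela2 <= g_min2:
        if g_min2 - g_ela2 + t_switch + clear <= travel:      # eq. (7)
            return t0 + (g_min2 - g_ela2) + t_switch
        if g_low - g_ela2 + t_switch + clear <= travel:       # eq. (8)
            return t0 + (g_low - g_ela2) + t_switch
    elif t_switch + clear <= travel:                          # eq. (9)
        return t0 + t_switch
    return t0 + t_switch  # sacrifice phase 2: switch immediately


# EV detected 400 m upstream at 12.5 m/s; 10 queued vehicles of 7 m each
tL = latest_start(0, 400, 12.5, 10, 7, 12.5, 2)
tE = earliest_start_phase2(0, 400, 12.5, 10, 7, 12.5, 2, 5, 15, 10, 3)
print(tE, tL)  # if tL < tE, both would be reset to t0 + t_switch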
### 2.2. Calculation of Time Parameters at Other Intersections

#### 2.2.1. The Calculation of $t_L(2)$

Let $t_{pass}$ be the time the EV needs to pass through an intersection, $t_{arr}^{2}$ the time the EV needs to reach the stop line of intersection 2 after passing through intersection 1 without stopping, $t_{start}^{i}$ the real start time of the green signal at intersection $i$ along the evacuation direction of the EV, and $Q_i$ and $D_i$ the numbers of vehicles arriving at intersection $i+1$ from intersection $i$ and departing from intersection $i$ during the period from $t_0$ to $t_{start}^{i}$, respectively. Then

$$t_{arr}^{2}=t_0+\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}. \tag{13}$$

The queued vehicles $q_2'$ at intersection 2 can be described as

$$q_2'=q_2+Q_1-D_2, \tag{14}$$

where $Q_1$ and $D_2$ are calculated for the situation in which the green light of intersection 1 starts at the latest-possible start time. Let $t_{clear}^{2}$ be the time needed to clear $q_2'$ before the EV arrives; then

$$t_{clear}^{2}=\frac{q_2' B}{v_{reg}}=\frac{(q_2+Q_1-D_2) B}{v_{reg}}. \tag{15}$$

The latest start time of the green signal at intersection 2 is then determined by

$$t_L(2)=t_{arr}^{2}-t_{clear}^{2}-t_{safe}=t_0+\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}-\frac{(q_2+Q_1-D_2) B}{v_{reg}}-t_{safe}. \tag{16}$$

#### 2.2.2. The Calculation of $t_E(2)$

*(1) Phase 1 is the current phase of intersection 2 at time $t_0$.* If the elapsed green time of phase 1 is $G_{ela}^{1}$, then $G_{ela}^{1}+L_{EV\text{-}2}/v_{EV}+t_{pass}$ is the sum of the elapsed green time of phase 1 and the time the EV spends before reaching intersection 2. The calculation of $t_E(2)$ splits into three cases:

*Case 1.*
$$G_{ela}^{1}+\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}\le G_{real}^{1}. \tag{17}$$

*Case 2.*
$$G_{real}^{1}< G_{ela}^{1}+\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}\le G_{\max}^{1}. \tag{18}$$

*Case 3.*
$$G_{ela}^{1}+\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}> G_{\max}^{1}. \tag{19}$$

In Case 1, there is no need to perform signal preemption at intersection 2, and both $t_E(2)$ and the real start time of the green signal equal $t_0$.

In Case 2, after both phase 1 and phase 2 have executed the minimum green time and the green indication has returned to phase 1, we check whether

$$G_{\min}^{1}-G_{ela}^{1}+2t_{switch}+G_{\min}^{2}+\frac{q_2' B}{v_{reg}}+t_{safe}\le \frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}, \tag{20}$$

where $q_2'$ is the queue length at intersection 2 at time $(G_{\min}^{1}-G_{ela}^{1})+2t_{switch}+G_{\min}^{2}$, calculated according to (14). In (14), $Q_1$ is now calculated for the situation in which the green light of intersection 1 starts at the earliest-possible start time, and $D_2$ is the number of vehicles passing through intersection 2 during the effective green duration of phase 1 in the period from $t_0$ to $t_E(2)$. If (20) is satisfied, then $t_E(2)=t_0+(G_{\min}^{1}-G_{ela}^{1})+2t_{switch}+G_{\min}^{2}$; otherwise $t_E(2)=t_0$.

In Case 3, the restriction of the maximum green time must be considered. Let both phase 1 and phase 2 execute the maximum green time and then let the green indication return to phase 1, checking whether $q_2'$ can be cleared before the EV arrives, namely, whether

$$G_{\max}^{1}-G_{ela}^{1}+2t_{switch}+G_{\max}^{2}+\frac{q_2' B}{v_{reg}}+t_{safe}\le \frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}. \tag{21}$$

The calculation of $q_2'$ is similar to Case 2. If (21) is satisfied, $t_E(2)=t_0+(G_{\max}^{1}-G_{ela}^{1})+2t_{switch}+G_{\max}^{2}$. Otherwise, let phase 1 and phase 2 execute the minimum and the maximum green time, respectively, and then let the green indication return to phase 1, subject to

$$G_{\min}^{1}-G_{ela}^{1}+2t_{switch}+G_{\max}^{2}+\frac{q_2' B}{v_{reg}}+t_{safe}\le \frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}, \tag{22}$$

in which case $t_E(2)=t_0+(G_{\min}^{1}-G_{ela}^{1})+2t_{switch}+G_{\max}^{2}$. If (22) is not satisfied, then both phase 1 and phase 2 execute the minimum green time before the green indication returns to phase 1.
Let the cycle length be $C_{\max}$ when both phases execute the maximum green time and $C_{\min}$ when both phases execute the minimum green time.

*(2) Phase 2 is the current phase of intersection 2 at time $t_0$.* If the elapsed green time of phase 2 is $G_{ela}^{2}$, the calculation of $t_E(2)$ splits into four cases:

*Case 1.*
$$G_{real}^{2}-G_{ela}^{2}+t_{switch}+t_{clear}^{2}+t_{safe}\le \frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}. \tag{23}$$

*Case 2.*
$$G_{\min}^{2}-G_{ela}^{2}+t_{switch}+t_{clear}^{2}+t_{safe}\le \frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}< G_{real}^{2}-G_{ela}^{2}+t_{switch}+t_{clear}^{2}+t_{safe}. \tag{24}$$

*Case 3.*
$$G_{low}-G_{ela}^{2}+t_{switch}+t_{clear}^{2}+t_{safe}\le \frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}< G_{\min}^{2}-G_{ela}^{2}+t_{switch}+t_{clear}^{2}+t_{safe}. \tag{25}$$

*Case 4.*
$$\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}< G_{low}-G_{ela}^{2}+t_{switch}+t_{clear}^{2}+t_{safe}. \tag{26}$$

Case 1 corresponds to the situation in which the time needed for phase 1 to turn green after the normal green duration of phase 2 and to clear the queue on the phase-1 approach is no more than the time the EV needs to reach intersection 2; then no signal preemption is required at intersection 2, and $t_E(2)=t_0+G_{real}^{2}-G_{ela}^{2}+t_{switch}$. Case 2 corresponds to the situation in which the time needed for phase 1 to turn green after the minimum green duration of phase 2 and to clear the queue is no more than the time the EV needs to reach intersection 2; then signal preemption is necessary and $t_E(2)=t_0+G_{\min}^{2}-G_{ela}^{2}+t_{switch}$. In Case 3, phase 1 turns green after phase 2 has run for $G_{low}$, and $t_E(2)=t_0+G_{low}-G_{ela}^{2}+t_{switch}$. In Case 4, signal preemption is necessary at intersection 2 and the green duration of phase 2 has to be sacrificed: the green signal switches to phase 1 immediately after the EV is detected, so $t_E(2)=t_0+t_{switch}$.

The time parameters of the other intersections on the route are calculated in the same way as those of intersection 2 and are not repeated here.
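To make the case analysis concrete, here is a hedged Python sketch that selects among Cases 1–4 of eqs. (23)–(26) for a downstream intersection whose cross phase (phase 2) is running at detection time; all names are illustrative assumptions.

```python
def earliest_start_downstream(t0, L, v_ev, t_pass, t_clear, t_safe,
                              g_ela2, g_real2, g_min2, g_low, t_switch):
    """Earliest green start t_E(2) when phase 2 runs at detection time,
    following the four cases of eqs. (23)-(26)."""
    budget = L / v_ev + t_pass              # time until the EV reaches intersection 2
    tail = t_switch + t_clear + t_safe      # switch + queue clearing + safety margin
    if g_real2 - g_ela2 + tail <= budget:   # Case 1: no preemption needed
        return t0 + (g_real2 - g_ela2) + t_switch
    if g_min2 - g_ela2 + tail <= budget:    # Case 2: cut phase 2 to its minimum green
        return t0 + (g_min2 - g_ela2) + t_switch
    if g_low - g_ela2 + tail <= budget:     # Case 3: cut phase 2 to the lower bound
        return t0 + (g_low - g_ela2) + t_switch
    return t0 + t_switch                    # Case 4: switch immediately


# Intersection 850 m downstream; clearing time 9 s; phase 2 green for 6 s so far
print(earliest_start_downstream(0, 850, 12.5, 2, 9, 2, 6, 25, 15, 10, 3))  # -> 22
```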
## 3. Multi-Objective Optimization Model

Let $n$ be the number of intersections on the evacuation route and $m$ the number of entrance sections on the cross roads of the evacuation route; let $t_{dep}^{i}$ be the time instant at which the EV departs from intersection $i$; let $g_{eff}^{i,j}$, $s_{i,j}^{y\text{-}z}$, and $S_{i,j}$ be the effective green time of phase $j$ at intersection $i$, the average departure rate of vehicles travelling from direction $y$ to direction $z$ on the entrance sections, and the saturation flow rate of all lanes on the entrance sections, respectively; and let $A_{thr}^{i,j}$, $A_{left}^{i,j}$, and $A_{right}^{i,j}$ be the through, left-turn, and right-turn ratios of phase $j$ at intersection $i$, respectively. The route-based multiobjective optimization model for EV signal preemption control is

$$\min f_1=\sum_{i=1}^{n}\left(t_{start}^{i}+t_{clear}^{i}+t_{safe}-t_{arr}^{i}\right)\delta_i, \tag{27}$$

$$\max f_2=\sum_{i=1}^{n} g_{eff}^{i,2}\left[\left(s_{i,2}^{N\text{-}S}+s_{i,2}^{S\text{-}N}\right)A_{thr}^{i,2}+s_{i,2}^{N\text{-}S}A_{left}^{i,2}+s_{i,2}^{S\text{-}N}A_{right}^{i,2}\right], \tag{28}$$

subject to

$$t_E(i)\le t_{start}^{i}\le t_L(i),\quad i=1,2,\ldots,n, \tag{29}$$

$$t_{arr}^{i}=\frac{L_{EV\text{-}i}}{v_{EV}},\quad i=1,2,\ldots,n, \tag{30}$$

$$t_{clear}^{i}=\frac{q_i' B}{v_{reg}},\quad i=2,\ldots,n, \tag{31}$$

$$q_i'=q_i+\min\left(q_{i-1}'A_{thr}^{i-1,1},\;g_{eff}^{i-1,1}S_{i-1,j}A_{thr}^{i-1,1}\right)+g_{eff}^{i-1,2}\left(s_{i,2}^{N\text{-}S}A_{left}^{i,2}+s_{i,2}^{S\text{-}N}A_{right}^{i,2}\right)-\sum_{e=1}^{l}\min\left(q_e'',\;g_{eff}^{i,1}A_{thr}^{i,1}\right), \tag{32}$$

$$\delta_i=\begin{cases}0, & t_{start}^{i}+t_{clear}^{i}+t_{safe}\le t_{arr}^{i},\\ 1, & \text{otherwise},\end{cases}\quad i=2,\ldots,n, \tag{33}$$

$$t_{safe}\ge 2, \tag{34}$$

$$t_E(i)\ge t_0,\quad i=1,2,\ldots,n, \tag{35}$$

$$t_L(i)\ge t_0,\quad i=1,2,\ldots,n, \tag{36}$$

where (27) and (28) are the objective functions, representing, respectively, minimization of the residence time of the EV at all intersections and maximization of the number of social vehicles passing through all intersections on the evacuation route. The number of social vehicles includes both the vehicles entering the system from the cross roads of the evacuation route and passing through intersections and the vehicles passing through intersections along the evacuation direction of the EV. Equation (29) ensures that the real start time of the green signal at intersection $i$ lies between $t_E(i)$ and $t_L(i)$; (30) represents the arrival time of the EV at intersection $i$; (31) represents the time needed to clear the queued vehicles before the EV arrives; (32) gives the calculation method for the queued vehicles at intersection $i$ at the moment the green signal starts, where $q_e''$, $e=1,2,\ldots,l$, is the number of queued vehicles of phase 1 at the start of the green signal in each signal cycle of intersection $i$ before $t_{start}^{i}$ and $l$ is the number of elapsed signal cycles; (33) defines the 0-1 variable of intersection $i$: the waiting time of the EV at intersection $i$ is 0 if, when the green signal is on at intersection $i$, the queued vehicles have been cleared and the safety interval is respected, namely, $t_{start}^{i}+t_{clear}^{i}+t_{safe}\le t_{arr}^{i}$; otherwise the EV has to wait in line, and the waiting time equals $t_{start}^{i}+t_{clear}^{i}+t_{safe}-t_{arr}^{i}$; (34) gives the lower limit of $t_{safe}$; and (35) and (36) express the relationship between $t_E(i)$, $t_L(i)$, and $t_0$. The variable $g_{eff}^{i,2}$ takes different values depending on $t_0$ and $t_{start}^{i}$: $g_{eff}^{1,2}$ and $g_{eff}^{2,2}$ can be calculated from (3)–(9) and from (10)–(16) and (20)–(22), respectively, and $g_{eff}^{i,2}$, $i=3,4,\ldots,n$, is calculated analogously to $g_{eff}^{2,2}$.
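As a sanity check on the first objective, the following Python sketch scores a candidate vector of green start times under (27) and (33); the queue propagation of (32) is stubbed out with precomputed clearing times, and all names and the example clearing/arrival values are our own assumptions.

```python
def f1_residence_time(t_start, t_clear, t_arr, t_safe):
    """EV residence time over all intersections, eqs. (27) and (33).

    t_start, t_clear, t_arr -- per-intersection lists of green start,
    queue clearing, and EV arrival times for one candidate solution.
    """
    total = 0.0
    for ts, tc, ta in zip(t_start, t_clear, t_arr):
        wait = ts + tc + t_safe - ta
        if wait > 0:          # delta_i = 1: the EV has to wait in line
            total += wait     # delta_i = 0 otherwise, contributing nothing
    return total


# Candidate from Table 1, row (1): start times 10, 33, 53, 81 s
print(f1_residence_time([10, 33, 53, 81],
                        [5.6, 3.9, 4.7, 6.2],   # assumed clearing times (s)
                        [32, 70, 110, 150],      # assumed arrival times (s)
                        2.0))                    # -> 0.0, consistent with f1 = 0
```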
## 4. Design of Solving Algorithm

At present, evolutionary algorithms are the main approach to solving multiobjective programming problems. Owing to its fast convergence and easy implementation, particle swarm optimization (PSO) has been successfully applied to many optimization problems [48–52]. In this paper, PSO is adopted to solve the given model.

### 4.1. Encoding and Particle Swarm Generation

Generate a particle swarm $P$ with $N_1$ particles at random; each particle is represented as $X=(x_1,x_2,\ldots,x_n)$, where $x_i$, $i=1,2,\ldots,n$, denotes $t_{start}^{i}$ of intersection $i$ and $x_i\in[t_E(i),t_L(i)]$. If $\mathrm{rand}()$ is a random number between 0 and 1, then $x_i$, $i=1,2,\ldots,n$, is produced as

$$x_i=\mathrm{int}\left(t_E(i)+\left(t_L(i)-t_E(i)\right)\times \mathrm{rand}()\right). \tag{37}$$

### 4.2. Density of Solutions

Since the density of nondominated solutions is inversely proportional to their diversity [48], to obtain a uniformly distributed Pareto frontier the diversity of the solutions should be guaranteed; that is, sparse solutions should be retained as far as possible. For this reason, a solution density is introduced. Suppose there are $p$ objectives and $S$ solutions in the dominant population $E$. If the solutions in $E$ are sorted in ascending order of their fitness values and $f_j(i)$, $j=1,2,\ldots,p$, $i=1,2,\ldots,S$, denotes the $i$th solution when sorting by the $j$th objective, then the solution density is defined as

$$\sum_{j=1}^{p}\left(f_j(i+1)-f_j(i-1)\right)+\sum_{j=1}^{p}\left(f_j(i)-f_j(i-1)\right)=\sum_{j=1}^{p}f_j(i+1)+\sum_{j=1}^{p}f_j(i)-2\sum_{j=1}^{p}f_j(i-1). \tag{38}$$

### 4.3. Updating of Location and Velocity

In the standard multiobjective PSO algorithm, location updating requires the inertia factor $w$ and the learning factors $c_1$ and $c_2$. A larger $w$ suits wide-ranging exploration of the solution space, whereas a smaller $w$ suits detailed search around a target; the learning factors balance the cognitive abilities of individuals and of the group. To achieve a better balance between exploration and exploitation, some scholars have proposed letting the inertia and learning factors change dynamically [49]. In this paper they are varied as

$$w(k)=(w_f-w_s)\frac{Max-k}{Max}+w_s,\quad c_1(k)=(c_{1f}-c_{1s})\frac{Max-k}{Max}+c_{1s},\quad c_2(k)=(c_{2f}-c_{2s})\frac{Max-k}{Max}+c_{2s}, \tag{39}$$

where $k$, $Max$, $w_s$, and $w_f$ denote the current iteration, the maximum number of iterations, and the initial and final values of the inertia weight, respectively; $c_{1s}$ and $c_{2s}$ are the initial values of $c_1$ and $c_2$, and $c_{1f}$ and $c_{2f}$ their final values.

### 4.4. Mutation

To prevent the algorithm from falling into a local optimum, a mutation operator is introduced. Let $O$ be the mutation probability; if $k<Max\times O$, then $Max\times O-k$ particles are selected from $P$ and mutated according to

$$x_i=x_i+\mathrm{rand}()\times \mathrm{int}\left(\frac{t_L(i)-t_E(i)}{2}-\frac{x_i}{2}\right). \tag{40}$$

This not only diversifies the particles but also keeps the mutated particles feasible.
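The following self-contained Python sketch assembles (37), (39), and (40) into a minimal loop skeleton; the velocity update, dominance sorting, and Pareto archive maintenance of the full algorithm are deliberately omitted, and all names and the example time windows are our own assumptions.

```python
import random


def init_swarm(n1, tE, tL):
    """Eq. (37): one integer start time per intersection, within [tE(i), tL(i)]."""
    return [[int(e + (l - e) * random.random()) for e, l in zip(tE, tL)]
            for _ in range(n1)]


def schedule(k, k_max, final, initial):
    """Eq. (39): linear schedule from `initial` at k = 0 down to `final` at k = k_max."""
    return (initial - final) * (k_max - k) / k_max + final


def mutate(x, tE, tL):
    """Eq. (40): mutation intended to keep particles within their windows."""
    return [xi + random.random() * int((l - e) / 2 - xi / 2)
            for xi, e, l in zip(x, tE, tL)]


tE, tL = [0, 20, 45, 70], [24, 40, 60, 90]     # assumed time windows (s)
P = init_swarm(100, tE, tL)                    # N_1 = 100
for k in range(1000):                          # Max = 1000
    w = schedule(k, 1000, 0.4, 0.9)            # ws = 0.4, wf = 0.9 as in Section 5
    c1 = schedule(k, 1000, 1.0, 2.0)           # c1s = 2, c1f = 1
    c2 = schedule(k, 1000, 2.0, 1.0)           # c2s = 1, c2f = 2
    if k < 1000 * 0.1:                         # mutation probability O = 0.1
        i = random.randrange(len(P))
        P[i] = mutate(P[i], tE, tL)
    # w, c1, c2 would feed the omitted velocity/position update here
```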
## 5. Simulation Analysis

Assume that the evacuation route is composed of 4 intersections, as shown in Figure 3. The evacuation direction of the EV is from west to east; it passes straight through intersections 1 and 2, turns left at intersection 3, and turns right at intersection 4. The parameters are $L_{EV\text{-}1}=400\,\mathrm{m}$, $L_{EV\text{-}2}=850\,\mathrm{m}$, $L_{EV\text{-}3}=1350\,\mathrm{m}$, $L_{EV\text{-}4}=1850\,\mathrm{m}$; $v_{EV}=45\,\mathrm{km/h}$, $v_{reg}=45\,\mathrm{km/h}$; $d_{i1}=d_{i2}=28\,\mathrm{s}$, $C_i=56\,\mathrm{s}$, $i=1,2,3,4$; $C_{\max}=120\,\mathrm{s}$, $C_{\min}=30\,\mathrm{s}$; $w_{s1}=0\,\mathrm{s}$, $w_{s2}=30\,\mathrm{s}$, $w_{s3}=60\,\mathrm{s}$, $w_{s4}=90\,\mathrm{s}$; $t_{switch}=3\,\mathrm{s}$, $t_{pass}=2\,\mathrm{s}$, $B=7\,\mathrm{m}$; $G_{\min}^{1}=G_{\min}^{2}=15\,\mathrm{s}$, $G_{\max}^{1}=G_{\max}^{2}=60\,\mathrm{s}$, $G_{real}^{1}=G_{real}^{2}=25\,\mathrm{s}$, $G_{low}=10\,\mathrm{s}$, $G_{short}^{j}=10\,\mathrm{s}$.

Figure 3: Intersections on the evacuation route.

In the south-northward and west-eastward directions, the through, right-turn, and left-turn ratios are 0.7, 0.2, and 0.1, respectively; in the north-southward direction they are 0.6, 0.3, and 0.1. In the particle swarm algorithm, $N_1=100$, $Max=1000$, and $O=0.1$; the size of the external dominance population is 100, with $w_s=0.4$, $w_f=0.9$, $c_{1f}=1$, $c_{2f}=2$, $c_{1s}=2$, $c_{2s}=1$.

Assume that the numbers of vehicles waiting to pass through the intersections at time $t_0$ are $q_1=10$, $q_2=5$, $q_3=6$, and $q_4=8$, respectively, with $q_i^N=q_i^S=10$, $i=1,2$, $q_3^S=q_3^E=10$, and $q_4^N=q_4^W=10$. At each intersection, the arrival rate from south to north is 1080–1440 pcu/h and the average departure rate is 1440 pcu/h; the arrival rate from east to west is 1440–1800 pcu/h, and the saturation flow rate is 1800 pcu/h. Let $t_0$ be 9 o'clock and 10 o'clock, respectively. The Pareto solution sets obtained from the simulation, implemented in C#, are shown in Tables 1 and 2.
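Before turning to the tables, note that the whole scenario can be captured in a small configuration structure for reproduction; this Python sketch simply collects the published parameters, and the field names are ours.

```python
# Simulation scenario of Section 5, collected into one structure.
# Field names are illustrative; values are taken from the text above.
scenario = {
    "detector_to_intersection_m": [400, 850, 1350, 1850],  # L_EV-i
    "v_ev_kmh": 45, "v_reg_kmh": 45,
    "phase_durations_s": [28, 28],       # d_i1, d_i2 -> cycle C_i = 56 s
    "window_starts_s": [0, 30, 60, 90],  # w_s1 ... w_s4
    "t_switch_s": 3, "t_pass_s": 2, "vehicle_length_m": 7,
    "g_min_s": 15, "g_max_s": 60, "g_real_s": 25, "g_low_s": 10,
    "queues_ahead": [10, 5, 6, 8],       # q_1 ... q_4 at t_0
    "pso": {"N1": 100, "Max": 1000, "O": 0.1,
            "ws": 0.4, "wf": 0.9, "c1s": 2, "c1f": 1, "c2s": 1, "c2f": 2},
}
```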
The entry for each intersection indicates how many seconds after 9 o'clock or 10 o'clock that intersection starts its green signal; $f_1$ and $f_2$ are the residence time of the EV at all intersections and the number of vehicles passing through the system, respectively.

Table 1: The Pareto solution set of the EV when $t_0$ is 9 o'clock (columns 2–5: real start time of the green signal, in seconds).

| Number | Intersection 1 | Intersection 2 | Intersection 3 | Intersection 4 | $f_1$ (s) | $f_2$ (pcu) |
|---|---|---|---|---|---|---|
| (1) | 10 | 33 | 53 | 81 | 0 | 112.1 |
| (2) | 10 | 34 | 54 | 81 | 2.3 | 113.6 |
| (3) | 10 | 34 | 55 | 82 | 2.4 | 114.8 |
| (4) | 11 | 34 | 55 | 83 | 3.9 | 116.6 |
| (5) | 12 | 34 | 56 | 83 | 4.7 | 118.7 |
| (6) | 13 | 34 | 58 | 84 | 4.8 | 120.4 |

Table 2: The Pareto solution set of the EV when $t_0$ is 10 o'clock (columns 2–5: real start time of the green signal, in seconds).

| Number | Intersection 1 | Intersection 2 | Intersection 3 | Intersection 4 | $f_1$ (s) | $f_2$ (pcu) |
|---|---|---|---|---|---|---|
| (1) | 10 | 31 | 56 | 82 | 0 | 119.7 |
| (2) | 11 | 33 | 58 | 83 | 2.9 | 119.9 |
| (3) | 11 | 34 | 57 | 83 | 3.2 | 122.4 |
| (4) | 12 | 34 | 58 | 83 | 4.9 | 122.7 |
| (5) | 13 | 34 | 57 | 83 | 5.5 | 123.5 |

Assume now that the number of vehicles waiting to pass through each intersection at time $t_0$ is $q_i=20$, $i=1,2,3,4$, with $q_i^N=q_i^S=10$, $i=1,2$, $q_3^S=q_3^E=10$, and $q_4^N=q_4^W=10$. The traffic flows from north to south and from west to east are the key traffic flows. At each intersection, the arrival rate from north to south is 1400–1600 pcu/h and the average departure rate is 1440 pcu/h; the arrival rate from west to east is 1800–1900 pcu/h and the saturation flow rate is 1800 pcu/h; the arrival rates in the other directions remain unchanged. With $t_0$ at 9 o'clock and 10 o'clock, respectively, the Pareto solution sets are shown in Tables 3 and 4.

Table 3: The Pareto solution set for an increased arrival rate when $t_0$ is 9 o'clock (columns 2–5: real start time of the green signal, in seconds).

| Number | Intersection 1 | Intersection 2 | Intersection 3 | Intersection 4 | $f_1$ (s) | $f_2$ (pcu) |
|---|---|---|---|---|---|---|
| (1) | 2 | 46 | 73 | 108 | 29.4 | 165.7 |
| (2) | 2 | 45 | 75 | 108 | 29.5 | 167.1 |
| (3) | 0 | 45 | 76 | 114 | 33.2 | 171.2 |
| (4) | 1 | 43 | 76 | 111 | 36.7 | 175.4 |
| (5) | 1 | 44 | 80 | 116 | 37.9 | 183.5 |
| (6) | 4 | 44 | 80 | 111 | 38.4 | 184.4 |
| (7) | 7 | 43 | 82 | 123 | 38.4 | 186.9 |
| (8) | 4 | 43 | 80 | 117 | 39.3 | 189.1 |
| (9) | 7 | 43 | 84 | 119 | 39.9 | 193.6 |

Table 4: The Pareto solution set for an increased arrival rate when $t_0$ is 10 o'clock (columns 2–5: real start time of the green signal, in seconds).

| Number | Intersection 1 | Intersection 2 | Intersection 3 | Intersection 4 | $f_1$ (s) | $f_2$ (pcu) |
|---|---|---|---|---|---|---|
| (1) | 11 | 43 | 79 | 110 | 27.2 | 169.9 |
| (2) | 10 | 45 | 80 | 108 | 29.4 | 174.7 |
| (3) | 4 | 33 | 76 | 106 | 29.7 | 180.2 |
| (4) | 2 | 34 | 73 | 111 | 30.1 | 183.8 |
| (5) | 6 | 46 | 79 | 116 | 31.8 | 185.7 |
| (6) | 4 | 35 | 71 | 107 | 36.6 | 187.0 |
| (7) | 12 | 43 | 77 | 108 | 38.3 | 189.2 |

To verify the control effect of the proposed signal preemption method under much larger traffic volumes, for example, during the morning or evening peak period, we assume that $t_0$ is 8 o'clock, that is, in the morning peak, with $q_i=20$, $i=1,2,3,4$, $q_i^N=q_i^S=15$, $i=1,2$, $q_3^S=q_3^E=15$, and $q_4^N=q_4^W=15$. At each intersection, the arrival rates from north to south and from west to east are 1800–1900 pcu/h and 2200–2400 pcu/h, respectively; the saturation flow rate is 1800 pcu/h, and the arrival rates in the other directions remain unchanged. The Pareto optimal set is shown in Figure 4.

Figure 4: The Pareto optimal set.

The simulation results in Tables 1 and 2 show that, across the Pareto solution set, the two objectives rise and fall together: as the residence time of the EV at the intersections decreases, the number of vehicles passing through the system decreases as well, which implies that reducing the EV's residence time comes at the expense of system throughput.
The simulation results in Tables 3 and 4 further show that, if at the moment the EV is detected there are many queued vehicles at each intersection and the volume of the key traffic flows exceeds what the system can discharge, the control method can still let the EV pass through the intersections at a relatively high speed while allowing more social vehicles to pass through the system. The results in Figure 4 likewise indicate that the signal preemption control method remains effective when the arrival rates are much larger, as in the morning peak period. Simulation results under different traffic states show that the signal preemption control method presented in this paper can reduce the delay of the EV at intersections and lessen the impact of the EV on social vehicles.

When multiple EVs arrive at the same time, the proposed signal priority control method is also applicable. If the arrival interval between adjacent EVs is relatively long, the situation should be treated as two separate signal priority processes: after the leading EV has passed through the intersection, the signal must first transition from priority operation back to normal operation, and the method described above is then applied to the following EV.

## 6. Conclusion

To overcome the disadvantages of intersection-by-intersection signal preemption control, this paper studies a route-based dynamic signal preemption control method for EVs. First, taking a single intersection as the research object, the earliest-possible and latest-possible start times of the green light are determined for each intersection along the evacuation direction on the selected route. Then, considering both the delay of the EV at intersections and its effect on social vehicles, a multiobjective programming model is established and a solution algorithm is designed so that the EV can pass through the intersections without reducing speed or stopping as far as possible while the impact on social vehicles is minimized. Simulation calculations, in which the EV is detected at different times and under different traffic states, are carried out, and the corresponding Pareto sets are given. The results show that the method yields more reasonable green start times at each intersection, reduces the delay of the EV at intersections, and improves the efficiency of evacuation. Determining how the signal should be returned to normal operation once preemption has finished, according to the signal timing, the traffic flow characteristics, and other parameters, namely, the signal transition strategy, is the next focus of our study.

---
*Source: 1024382-2018-04-01.xml*
1024382-2018-04-01_1024382-2018-04-01.md
64,254
Route-Based Signal Preemption Control of Emergency Vehicle
Haibo Mu; Yubo Song; Linzhong Liu
Journal of Control Science and Engineering (2018)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2018/1024382
1024382-2018-04-01.xml
--- ## Abstract This paper focuses on the signal preemption control ofemergency vehicles(EV). A signal preemption control method based on route is proposed to reduce time delay of EV at intersections. According to the time at which EV is detected and the current phase of each intersection on the travelling route of EV, the calculation methods of the earliest start time and the latest start time of green light at each intersection are given. Consequently, the effective time range of green light at each intersection is determined in theory. A multiobjective programming model, whose objectives are the minimal residence time of EV at all intersections and the maximal passing numbers of general society vehicles, is presented. Finally, a simulation calculation is carried out. Calculation results indicate that, by adopting the signal preemption method based on route, the delay of EV is reduced and the number of society vehicles passing through the whole system is increased. The signal preemption control method of EV based on route can reduce the time delay of EV and improve the evacuation efficiency of the system. --- ## Body ## 1. Introduction Once an unexpected event has happened, an evacuation or a rescue must be carried out to help people in hazardous situations, and emergency vehicles (EV) play an important role in this process. To reduce the negative impact of unexpected events, EV is used to transferr people from dangerous areas to emergency shelters or medical assistance organizations as rapidly as possible. As is well-known to us all, the quality of the emergency service relies on the travel time that EV spends in the evacuation route [1, 2]. Therefore, EV has a greater priority than normal social vehicles, and signal preemption policy should be adopted to ensure the rapid transit of EV. However, under the condition of traffic jams or if there is no emergency lane, the queuing vehicles may prevent EV from passing through the intersection. Even if there is an emergency lane, to avoid an urgent spectacle, EV also has to wait if green traffic light for the opposite direction is going on.To ensure that EV can pass through intersections safely and rapidly, some scholars proposed the signal preemption strategies. Paniati and Amoni [3] summarized the advantages of traffic signal preemption for EV in improving response speed, ensuring safety, saving cost, and other aspects and pointed out some of the key technologies used in traffic signal preemption system as well as some associated responsibility institutions. Mirchandani and Lucas [4] studied control strategies by integrating transit signal priority and rail/emergency preemption within a dynamic programming-based real-time traffic adaptive signal control system. Song [5] proposed traffic signal priority control method for multiple EVs based on multiagent and obtained signal priority control strategies through the coordination of phase agent and management agent. Taking the minimum delay of EV at single intersection as control target, three signal priority strategies, namely green time extension, red time shortening, and multiphase integrated control were proposed by Yang et al. [6], and Vissim was adopted to evaluate the proposed strategies. Ma and Cui [7] studied the signal priority problem of EV that was coming from different directions and had to pass the same intersection in the certain time period and presented a signal priority control system for multiple EV based on multi-agent. Wang et al. 
[8] proposed a degree-of-priority based control strategy for emergency vehicle preemption operation to decrease the impacts of emergency vehicles on normal traffic, and the performance of the proposed strategy was compared with the conventional local-detection-based method under a microscopic simulation model. To reduce the response time and minimize the impact of EV operation on general traffic, Qin [9] proposed two new control strategies for EV signal preemption, where the first strategy enables the signal transition from normal operation to EV signal preemption and the second handles the transition from EV signal preemption back to normal operation. Most signal preemption methods currently adopted are based on isolated intersections: taking a single intersection as the basis, signal preemption at that intersection is activated after a local detector has detected the EV, thus forming a signal priority sequence from intersection to intersection [10, 11]. This kind of strategy, which relies on local detection and clears intersections one by one, can start only after an EV has been detected and usually leads to unavoidable intersection delay. Moreover, when multiple signal preemptions are implemented during the peak period, the queues and delays of social vehicles in a given network are significantly affected [12]. Studies have shown that implementing signal preemption from the perspective of an entire route can reduce the response time of EV [13]. Although substantial progress has been made in signal preemption at isolated intersections, research on dynamic signal preemption based on an entire route is rarely seen. Kwon and Kim [14] developed a route-based signal preemption strategy. They used Dijkstra's algorithm to obtain an optimal path from a given origin to a desired destination so as to ensure a minimal travel cost under current traffic conditions. The control algorithm chose a specific phase combination for each intersection on the selected route sequentially when the EV arrived. However, the main purpose of that research was to ensure the safety of pedestrians passing through the intersection, and real-time traffic conditions at the various intersections were ignored. To reduce the travel time of EV, Gedawy et al. [15] combined signal preemption with dynamic route planning and designed the EV route from a network perspective. A signal preemption strategy that could ensure the safe operation of an EV and simultaneously maximize the traffic flow passing through the intersection was proposed. In order to provide a green wave for EV, Kang et al. [16] proposed a coordinated signal control method and carried out a traffic simulation taking eight intersections as an example. All the methods mentioned above ensured the smooth running of EV by determining in advance the signal phase of each intersection after the EV had been detected. Since the green signal stayed at the designated phase, traffic flows from other signal phases lost the right of way and had to wait a long time at the intersection, which in turn increased the burden on the traffic system and could even cause traffic jams. Therefore, it is especially important to establish a route-based dynamic preemption strategy that provides an efficient and safe operating environment for EV while minimizing the interference with social vehicles. So far, most of the developed preemption systems operate on a single-intersection basis and require local detection of the EV at each intersection to activate the signal preemption sequence.
Because these strategies depend on local detection and clear intersections one by one, and the signal preemption procedure cannot start until an EV is detected, inherent delays at intersections are unavoidable. The basic idea of route-based signal preemption control of EV is that, when unexpected events occur and in consideration of dynamically changing traffic situations, the route that allows the EV to reach the scene of the accident in the shortest time is calculated and recommended as the evacuation route. Once an evacuation route is selected, a particular signal preemption control strategy is applied to determine the activation time for preemption at each intersection on the emergency route and ultimately realize the signal preemption control of the EV. Evacuation route planning is an important component of emergency management that seeks to minimize the loss of life or harm to the public during unexpected events [17]. The aim of evacuation route planning is to minimize evacuation time under certain constraints and minimize the exposure of people in accident areas to hazards. Nowadays, almost all intersections in urban traffic networks are signalized. In order to keep traffic smooth under real road conditions, traffic management departments usually adopt regulation measures, including no-left-turn, no-right-turn, P-turn, and U-turn, to improve the safety and efficiency of traffic. For this reason, a road network that is ostensibly connected may not be connected in practice, which adds difficulty to shortest-path algorithms and makes some classical path-finding methods inapplicable. Many scholars have intensively studied the shortest path problems of road networks with traffic restrictions [18–23]. Various techniques have been proposed to eliminate certain turning and crossing maneuvers at intersections so as to obtain evacuation routes that can provide continuous traffic flow and reduce accidents [24–30]. There has also been a considerable amount of evacuation route work that considers the delay and capacity of intersections [31, 32]. The majority of the evacuation route works in the literature can be divided into three categories, namely, network flow methods, which can be further divided into linear programming and dynamic minimum cost flow problems [24, 33–38]; simulation methods [39, 40]; and heuristic methods, which mainly include the well-known Capacity Constrained Route Planner (CCRP) algorithm and its improved variants [41–47]. Research on evacuation route planning has made substantial progress, and the development of modern detection equipment and information technology ensures the feasibility of real-time traffic flow detection and information exchange among different intersection controllers. These technologies make it possible to implement route-based signal preemption control of EV, but a safe and efficient manner of changing the traffic signal timing plan, namely, the control mechanism, has not been well studied. In this paper, we study the problem of dynamic signal preemption control of EV based on an entire route. We limit the scope of this paper to signal preemption control of EVs under the assumption that the evacuation route has been determined; the problem of evacuation route calculation will not be discussed further.
The objectives of the research reported here were to (1) propose a route-based signal preemption strategy that provides an efficient and safe operating environment for EV and minimizes the impact on social vehicles in the traffic network as far as possible, (2) develop a multiobjective optimization model to minimize the residence time of EV at all intersections and maximize the number of social vehicles passing through all intersections, and (3) design a solving algorithm for the presented optimization model and verify the efficiency of the proposed signal preemption strategy. First, on the premise of having selected a concrete evacuation route, taking the isolated intersection as the object and considering the distance between the EV detector and each intersection, the operating speed of the EV, and the number of queued vehicles at each intersection, the earliest-possible start time and the latest-possible start time of the green light at each intersection on the route are given. Then, in order to establish a real-time signal control strategy that ensures the EV can pass through each intersection at operating speed or without stopping while minimizing the impact of the EV on social vehicles, a multiobjective programming model is presented and a particle swarm algorithm is designed to find the Pareto optimal solution set of this model. Finally, a simulation analysis is carried out.

## 2. Calculation of Intersection Time Parameters

After the optimal evacuation route of the EV has been given, to ensure that the delay of the EV is as small as possible and the number of social vehicles passing through the whole system is as large as possible, we need to determine the most suitable time to start the green light for the EV at each intersection on the given route. Suppose that there are $n$ intersections on the evacuation route apart from the given origin and the desired destination, and the evacuation direction is from west to east as shown in Figure 1. What needs to be stressed here is that we take Figure 1 only as an example to introduce the following concepts and calculation methods of intersection parameters. In fact, an EV may turn left or right at some of the intersections on the given evacuation route, and the following methods are also applicable to such routes. We assume that the EV is detected at time $t_0$; $q_i$, $i=1,2,\dots,n$, denotes the vehicles waiting to pass through intersection $i$ in front of the EV along the evacuation direction at time $t_0$; and $q_i^N$, $q_i^S$, $q_i^W$, $q_i^E$, $i=1,2,\dots,n$, denote the vehicles waiting to pass through intersection $i$ in the north-southward, south-northward, west-eastward, and east-westward directions at time $t_0$, respectively. Suppose that all the intersections are two-phase controlled and have a fixed signal cycle, as shown in Figure 2. Each intersection $i$ corresponds to a time window sequence $W_L(i)=\{ws_i, w_{i1}, w_{i2}\}$, where $ws_i$ denotes the start time of the first time window and $w_{ik}$, $k=1,2$, denotes the $k$th time window of intersection $i$. Figure 1: Diagram of intersection. Figure 2: Phase diagram of traffic signal. The time parameters discussed in this paper are the earliest-possible start time and the latest-possible start time of the green light at each intersection. Definition 1. The earliest-possible start time is defined as the earliest time at which the traffic light for the EV direction can be changed to green, counted from the time the EV is detected and on the premise of ensuring the efficiency of the whole system. Definition 2.
The latest-possible start time is defined as the latest time at which the traffic light for the EV direction must be changed to green so as to ensure that the EV can pass through the intersection without reducing speed or stopping.

### 2.1. Calculation of Time Parameters of Intersection 1

We employ $d_{ij}$ and $t_0$ to denote the duration of phase $j$ at intersection $i$ and the time at which the EV is detected. If the start time of the first time window of intersection 1, denoted by $ws_1$, is determined and the durations of the two phases, denoted by $d_{11}$ and $d_{12}$, are also determined, then the current phase of intersection 1 and the elapsed green time of this phase can be calculated correspondingly. Provided that $C_i$ denotes the cycle length of intersection $i$ and $D_{1,j}$ denotes the elapsed green time of intersection 1 after $j$ phases, the number of elapsed whole cycles in the period $t_0-ws_1$ is denoted as $a$, and the remaining time after $a$ cycles in the period $t_0-ws_1$ is denoted as $b$. Then we have

(1) $D_{1,0}=0,\quad D_{1,j}=\sum_{p=1}^{j}d_{1p},\ j=1,2.$

Let

(2) $a=\left\lfloor\frac{t_0-ws_1}{C_i}\right\rfloor,\quad b=(t_0-ws_1)\bmod C_i;$

then the current phase is $j$ if it satisfies $D_{1,j-1}\le b<D_{1,j}$, and the elapsed green time of phase $j$ is $t_0-ws_1-a\times C_i-D_{1,j-1}$. Let $G_{min}^{j}$, $G_{max}^{j}$, $G_{ela}^{j}$, and $G_{real}^{j}$ denote the minimal green time, the maximal green time, the elapsed green time, and the real green time of phase $j$, respectively; $L_{EV\text{-}i}$ denotes the distance between the EV detector and intersection $i$; $v_{EV}$ and $v_{reg}$ denote the speed of the EV and the mean speed of general traffic, respectively; $t_{switch}$ denotes the time needed to switch the green indication from one phase to another; and $t_{safe}$ is the safety time interval that must be kept between the last vehicle in the queue on the EV approach and the EV so as to avoid a collision between the EV and social vehicles. It is an important factor in ensuring the safe operation of emergency vehicles and can be provided by an appropriate advance notice of the approach of the EV and a real-time traffic queue control strategy. $B$ is the average effective length of each vehicle when parked, which is equal to the sum of the length of the vehicle itself and the distance between adjacent vehicles. Let $t_E(i)$ and $t_L(i)$, $i=1,2,\dots,n$, denote the earliest-possible and latest-possible start times of the green phase on the EV approach at intersection $i$; we illustrate their calculation beginning from the first intersection.
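The phase lookup in (1)-(2) is mechanical enough to state compactly. The following Python sketch is illustrative only (the paper's own simulation was written in C#, and the function and variable names here are ours); it recovers the current phase and its elapsed green time from the detection time and the fixed two-phase timing.

```python
# Sketch of the phase lookup in (1)-(2): given the detection time t0, the start
# ws1 of the first time window, and the fixed phase durations d = [d11, d12],
# recover the running phase and its elapsed green time.
def current_phase(t0, ws1, d):
    C = sum(d)                      # cycle length C_1
    a = int((t0 - ws1) // C)        # number of elapsed whole cycles, eq. (2)
    b = (t0 - ws1) % C              # time elapsed in the current cycle, eq. (2)
    D = [0.0]                       # cumulative phase durations D_{1,j}, eq. (1)
    for dur in d:
        D.append(D[-1] + dur)
    for j in range(1, len(D)):
        if D[j - 1] <= b < D[j]:    # current phase j: D_{1,j-1} <= b < D_{1,j}
            return j, b - D[j - 1]  # phase index and its elapsed green time

phase, g_ela = current_phase(t0=100.0, ws1=0.0, d=[28.0, 28.0])  # phase 2, 16 s elapsed
```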
#### 2.1.1. The Calculation of $t_E(1)$

(1) Phase 1 is the current phase of intersection 1 at time $t_0$. If the elapsed green time is $G_{ela}^{1}$ and satisfies $L_{EV\text{-}1}/v_{EV}+G_{ela}^{1}\le G_{max}^{1}$, then $t_E(1)=t_0$ and the real start time of the green signal at intersection 1 is $t_0$. Otherwise, if $L_{EV\text{-}1}/v_{EV}+G_{ela}^{1}>G_{max}^{1}$, we have to determine whether $G_{ela}^{1}\ge G_{min}^{1}$ is met at time $t_0$. If it is met, the green indication of phase 1 is ended and starts again after the minimal green time of phase 2 has finished. Before this, we have to check whether, after phase 1 regains the green indication, the queuing vehicles at the intersection can be cleared before the EV arrives while maintaining a certain safety interval between the EV and social vehicles. If we have

(3) $t_{switch}+G_{min}^{2}+t_{switch}+\frac{q_1\times B}{v_{reg}}+t_{safe}\le\frac{L_{EV\text{-}1}}{v_{EV}},$

then $t_E(1)=t_0+2t_{switch}+G_{min}^{2}$. If $G_{ela}^{1}<G_{min}^{1}$ is met, then the green indication of phase 1 will remain for $G_{min}^{1}-G_{ela}^{1}$ seconds. After $G_{min}^{1}-G_{ela}^{1}$ seconds, the signal transformation mentioned above will be carried out on the basis of satisfying

(4) $(G_{min}^{1}-G_{ela}^{1})+t_{switch}+G_{min}^{2}+t_{switch}+\frac{q_1\times B}{v_{reg}}+t_{safe}\le\frac{L_{EV\text{-}1}}{v_{EV}}.$

It can be seen from (3) and (4) that, when $G_{ela}^{1}<G_{min}^{1}$, if (4) is satisfied then (3) is sure to be satisfied. Let $G_{low}$ denote the allowed lower bound of green light time, which is smaller than $G_{min}^{1}$, and let $G_{short}^{j}$ denote the shortened green light time of phase $j$. If (3) is satisfied while (4) is not, then $G_{low}$ is given; if $G_{ela}^{1}\ge G_{low}$, then $t_E(1)=t_0+G_{min}^{2}+2t_{switch}$. If (3) is also not satisfied, then the green light time of phase 2 will be shortened to $G_{short}^{2}$ to satisfy the following equation:

(5) $t_{switch}+G_{short}^{2}+t_{switch}+\frac{q_1\times B}{v_{reg}}+t_{safe}=\frac{L_{EV\text{-}1}}{v_{EV}}.$

Thus we have

(6) $G_{short}^{2}=\frac{L_{EV\text{-}1}}{v_{EV}}-\left(2t_{switch}+\frac{q_1\times B}{v_{reg}}+t_{safe}\right).$

If $G_{short}^{2}<G_{low}$, the green light time of phase 2 is very short, has little effect on alleviating traffic pressure, and can lead to safety problems. Therefore, $t_E(1)=t_0$ under this condition. Otherwise, the green light time of phase 2 is equal to $G_{short}^{2}$, and $t_E(1)$ is equal to $t_0+G_{short}^{2}+t_{switch}$. The real start time of the green signal is $t_E(1)$.

(2) Phase 2 is the current phase of intersection 1 at time $t_0$. Calculate $G_{ela}^{2}$ first. If $G_{ela}^{2}\le G_{min}^{2}$, it should generally be guaranteed that the green duration of phase 2 is $G_{min}^{2}$ seconds before changing to phase 1 after the fixed yellow and red durations. However, it should be guaranteed that

(7) $(G_{min}^{2}-G_{ela}^{2})+t_{switch}+\frac{q_1\times B}{v_{reg}}+t_{safe}\le\frac{L_{EV\text{-}1}}{v_{EV}}.$

Now $t_E(1)$ is equal to $t_0+G_{min}^{2}-G_{ela}^{2}+t_{switch}$. Otherwise, we have to check whether the following inequality is satisfied:

(8) $(G_{low}-G_{ela}^{2})+t_{switch}+\frac{q_1\times B}{v_{reg}}+t_{safe}\le\frac{L_{EV\text{-}1}}{v_{EV}}.$

If it is satisfied, then $t_E(1)$ is equal to $t_0+G_{low}-G_{ela}^{2}+t_{switch}$. If $G_{ela}^{2}>G_{min}^{2}$, then determine whether the following inequality is satisfied:

(9) $t_{switch}+\frac{q_1\times B}{v_{reg}}+t_{safe}\le\frac{L_{EV\text{-}1}}{v_{EV}}.$

If it is satisfied, then $t_E(1)$ is equal to $t_0+t_{switch}$.

#### 2.1.2. The Calculation of $t_L(1)$

Assume that $t_{arr}^{i}$ denotes the time at which the EV arrives at intersection $i$ without reducing its speed; then we have

(10) $t_{arr}^{1}=t_0+\frac{L_{EV\text{-}1}}{v_{EV}}.$

Let $t_{clear}^{i}$ denote the time needed to clear the queues at intersection $i$ before the EV arrives; then we have

(11) $t_{clear}^{1}=\frac{q_1\times B}{v_{reg}}.$

The value of $t_L(1)$ can be described by the following equation:

(12) $t_L(1)=t_0+\frac{L_{EV\text{-}1}}{v_{EV}}-\frac{q_1\times B}{v_{reg}}-t_{safe}.$

If the distance between the EV detector and intersection 1 is far enough and the queued vehicles are not too many, $t_L(1)$ is surely greater than or equal to $t_E(1)$. However, if the traffic is rather heavy and clearing the queued vehicles takes a long time, $t_L(1)$ may be less than $t_E(1)$ during phase 2. Under this condition, to ensure the smooth and safe passage of the EV, the green time of phase 2 has to be sacrificed; namely, the green signal is converted to phase 1 after phase 2 has executed a short green time that is less than the minimum green time. Now the value of $t_E(1)$ is $t_0+t_{switch}$. If $t_L(1)$ is still less than $t_E(1)$, then let $t_L(1)$ and $t_E(1)$ have the same value, namely, $t_0+t_{switch}$. In this case it is impossible to meet the requirement that the EV pass through the intersection without deceleration or stopping, and all that we can do is minimize the waiting time of the EV at intersection 1.
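As a concrete illustration of Section 2.1, the sketch below computes $t_L(1)$ from (12) and traces the main phase-1 branches of $t_E(1)$ (equations (3) and (5)-(6)); the minimum-green sub-branch (4) and the phase-2 cases (7)-(9) extend the same pattern. This is a hedged reading of the text with our own names, not the authors' implementation.

```python
# t_L(1) per eq. (12): latest green start such that the queue clears
# t_safe seconds before the EV arrives at full speed.
def t_L1(t0, L_ev1, v_ev, q1, B, v_reg, t_safe):
    return t0 + L_ev1 / v_ev - q1 * B / v_reg - t_safe

# t_E(1) when phase 1 (the EV approach) is currently green: hold the green
# if the maximum green allows it; otherwise serve phase 2 its minimum green
# when the clearance condition (3) holds, or a shortened green per (5)-(6).
def t_E1_phase1(t0, g_ela1, g_max1, g_min2, g_low, t_switch,
                L_ev1, v_ev, q1, B, v_reg, t_safe):
    travel = L_ev1 / v_ev
    clear = q1 * B / v_reg + t_safe
    if travel + g_ela1 <= g_max1:
        return t0                                # green can simply be held
    if 2 * t_switch + g_min2 + clear <= travel:  # eq. (3)
        return t0 + 2 * t_switch + g_min2
    g_short2 = travel - (2 * t_switch + clear)   # eq. (6)
    if g_short2 < g_low:
        return t0        # shortened phase 2 would be too short to be useful
    return t0 + g_short2 + t_switch              # shortened phase 2 is served
```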
### 2.2. Calculation of Time Parameters at Other Intersections

#### 2.2.1. The Calculation of $t_L(2)$

Let $t_{pass}$ be the time the EV needs to pass through an intersection; $t_{arr}^{2}$ the time the EV needs to arrive at the stop line of intersection 2 after passing through intersection 1 without stopping; $t_{start}^{i}$ the real start time of the green signal at intersection $i$ along the evacuation direction of the EV; and $Q_i$ and $D_i$ the numbers of vehicles arriving at intersection $i+1$ from intersection $i$ and departing from intersection $i$ in the period from $t_0$ to $t_{start}^{i}$, respectively. Then we have

(13) $t_{arr}^{2}=t_0+\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}.$

And the queuing vehicles $q_2'$ at intersection 2 can be described as

(14) $q_2'=q_2+Q_1-D_2,$

where $Q_i$ and $D_i$ are calculated for the situation in which the green light of intersection 1 starts at the latest-possible start time. Let $t_{clear}^{2}$ be the time needed to clear $q_2'$ before the EV arrives; then we have

(15) $t_{clear}^{2}=\frac{q_2'\times B}{v_{reg}}=\frac{(q_2+Q_1-D_2)\times B}{v_{reg}}.$

We have the following equation to determine the latest start time of the green signal at intersection 2:

(16) $t_L(2)=t_{arr}^{2}-t_{clear}^{2}-t_{safe}=t_0+\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}-\frac{(q_2+Q_1-D_2)\times B}{v_{reg}}-t_{safe}.$

#### 2.2.2. The Calculation of $t_E(2)$

(1) Phase 1 is the current phase of intersection 2 at time $t_0$. If the elapsed green time of phase 1 is $G_{ela}^{1}$, then $G_{ela}^{1}+L_{EV\text{-}2}/v_{EV}+t_{pass}$ is the sum of the elapsed green time of phase 1 and the time the EV spends before reaching intersection 2. We can discuss the calculation of $t_E(2)$ in the following three cases.

Case 1. (17) $G_{ela}^{1}+\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}\le G_{real}^{1}.$

Case 2. (18) $G_{real}^{1}<G_{ela}^{1}+\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}\le G_{max}^{1}.$

Case 3. (19) $G_{ela}^{1}+\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}>G_{max}^{1}.$

In Case 1, there is no need to perform signal preemption at intersection 2, and both $t_E(2)$ and the real start time of the green signal are equal to $t_0$. In Case 2, after both phase 1 and phase 2 have executed the minimum green time and the green indication has gone back to phase 1, we should check whether the following inequality is satisfied:

(20) $(G_{min}^{1}-G_{ela}^{1})+2t_{switch}+G_{min}^{2}+\frac{q_2'\times B}{v_{reg}}+t_{safe}\le\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass},$

where $q_2'$ is the queue length at intersection 2 at time $(G_{min}^{1}-G_{ela}^{1})+2t_{switch}+G_{min}^{2}$ and is calculated according to (14). In (14), $Q_1$ is calculated for the situation in which the green light of intersection 1 starts at the earliest-possible start time; $D_2$ is the number of vehicles passing through intersection 2 during the effective green duration of phase 1 in the period from $t_0$ to $t_E(2)$. If (20) is satisfied, then $t_E(2)$ is equal to $t_0+(G_{min}^{1}-G_{ela}^{1})+2t_{switch}+G_{min}^{2}$; otherwise, $t_E(2)$ is equal to $t_0$. In Case 3, we need to consider the restriction of the maximum green time. Let both phase 1 and phase 2 execute the maximum green time, and then let the green indication go back to phase 1. But we should check whether $q_2'$ can be cleared before the EV arrives, namely, whether the following inequality is satisfied:

(21) $(G_{max}^{1}-G_{ela}^{1})+2t_{switch}+G_{max}^{2}+\frac{q_2'\times B}{v_{reg}}+t_{safe}\le\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}.$

The calculation of $q_2'$ is similar to Case 2. If (21) is satisfied, $t_E(2)$ is equal to $t_0+(G_{max}^{1}-G_{ela}^{1})+2t_{switch}+G_{max}^{2}$. Otherwise, let phase 1 and phase 2 execute the minimum green time and the maximum green time, respectively, and then let the green indication go back to phase 1, provided the following inequality is satisfied:

(22) $(G_{min}^{1}-G_{ela}^{1})+2t_{switch}+G_{max}^{2}+\frac{q_2'\times B}{v_{reg}}+t_{safe}\le\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}.$

Now $t_E(2)$ is equal to $t_0+(G_{min}^{1}-G_{ela}^{1})+2t_{switch}+G_{max}^{2}$. If (22) is not satisfied, then both phase 1 and phase 2 will execute the minimum green time, and then the green indication will go back to phase 1. Let the cycle length be $C_{max}$ when both phases execute the maximum green time and $C_{min}$ when both phases execute the minimum green time.

(2) Phase 2 is the current phase of intersection 2 at time $t_0$. If the elapsed green time of phase 2 is $G_{ela}^{2}$, we can discuss the calculation of $t_E(2)$ in the following four cases.
Case 1. (23) $(G_{real}^{2}-G_{ela}^{2})+t_{switch}+t_{clear}^{2}+t_{safe}\le\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}.$

Case 2. (24) $(G_{min}^{2}-G_{ela}^{2})+t_{switch}+t_{clear}^{2}+t_{safe}\le\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}<(G_{real}^{2}-G_{ela}^{2})+t_{switch}+t_{clear}^{2}+t_{safe}.$

Case 3. (25) $(G_{low}-G_{ela}^{2})+t_{switch}+t_{clear}^{2}+t_{safe}\le\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}<(G_{min}^{2}-G_{ela}^{2})+t_{switch}+t_{clear}^{2}+t_{safe}.$

Case 4. (26) $\frac{L_{EV\text{-}2}}{v_{EV}}+t_{pass}<(G_{low}-G_{ela}^{2})+t_{switch}+t_{clear}^{2}+t_{safe}.$

Case 1 corresponds to the situation in which the time needed for phase 1 to turn green after the normal green duration of phase 2 and to clear the queue of phase 1 is no more than the time the EV takes to arrive at intersection 2; then there is no need to perform signal preemption at intersection 2, and $t_E(2)$ is equal to $t_0+G_{real}^{2}-G_{ela}^{2}+t_{switch}$. Case 2 corresponds to the situation in which the time needed for phase 1 to turn green after the minimum green duration of phase 2 and to clear the queue of phase 1 is no more than the time the EV takes to arrive at intersection 2; then signal preemption is necessary and $t_E(2)$ is equal to $t_0+G_{min}^{2}-G_{ela}^{2}+t_{switch}$. In Case 3, phase 1 turns green after phase 2 has run for $G_{low}$, and $t_E(2)$ is equal to $t_0+G_{low}-G_{ela}^{2}+t_{switch}$. In Case 4, signal preemption is necessary at intersection 2, and the green duration of phase 2 has to be sacrificed; namely, the green signal has to turn to phase 1 immediately after the EV is detected, and $t_E(2)$ is equal to $t_0+G_{ela}^{2}+t_{switch}$. The time parameters of the other intersections on the route are obtained similarly to those of intersection 2 and are not introduced again here.
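The downstream computation chains the intersections together: vehicles released from intersection 1 become arrivals at intersection 2. Below is a minimal sketch of (13)-(16), assuming the counts $Q_1$ and $D_2$ are supplied externally (as the text specifies, they are evaluated for a green starting at the latest-possible time); names are ours.

```python
# Queue propagation and latest green start at intersection 2, eqs. (13)-(16).
def t_L2(t0, L_ev2, v_ev, t_pass, q2, Q1, D2, B, v_reg, t_safe):
    t_arr2 = t0 + L_ev2 / v_ev + t_pass   # undelayed arrival time, eq. (13)
    q2_prime = q2 + Q1 - D2               # residual queue, eq. (14)
    t_clear2 = q2_prime * B / v_reg       # time to clear the queue, eq. (15)
    return t_arr2 - t_clear2 - t_safe     # latest green start, eq. (16)
```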
## 3. Multi-Objective Optimization Model

Let $n$ be the number of intersections on the evacuation route and $m$ the number of entrance sections on the cross roads of the evacuation route; let $t_{dep}^{i}$ be the time instant at which the EV departs from intersection $i$; let $g_{eff}^{i,j}$, $s_{i,j}^{y\text{-}z}$, and $S_{i,j}$ be the effective green time of phase $j$ at intersection $i$, the average departure rate of vehicles from direction $y$ to direction $z$ on the entrance sections, and the saturation flow rate of all lanes on the entrance sections, respectively; and let $A_{thr}^{i,j}$, $A_{left}^{i,j}$, and $A_{right}^{i,j}$ be the proportions of straight, left-turn, and right-turn traffic of phase $j$ at intersection $i$, respectively. The multiobjective optimization model for route-based EV signal preemption control is

(27) $\min f_1=\sum_{i=1}^{n}\bigl(t_{start}^{i}+t_{clear}^{i}+t_{safe}-t_{arr}^{i}\bigr)\times\delta_i,$

(28) $\max f_2=\sum_{i=1}^{n} g_{eff}^{i,2}\times\bigl[\bigl(s_{i,2}^{N\text{-}S}+s_{i,2}^{S\text{-}N}\bigr)\times A_{thr}^{i,2}+s_{i,2}^{N\text{-}S}A_{left}^{i,2}+s_{i,2}^{S\text{-}N}A_{right}^{i,2}\bigr],$

subject to

(29) $t_E(i)\le t_{start}^{i}\le t_L(i),\quad i=1,2,\dots,n,$

(30) $t_{arr}^{i}=\frac{L_{EV\text{-}i}}{v_{EV}},\quad i=1,2,\dots,n,$

(31) $t_{clear}^{i}=\frac{q_i'\times B}{v_{reg}},\quad i=2,\dots,n,$

(32) $q_i'=q_i+\min\bigl(q_{i-1}'\times A_{thr}^{i-1,1},\,g_{eff}^{i-1,1}\times S_{i-1,j}\times A_{thr}^{i-1,1}\bigr)+g_{eff}^{i-1,2}\bigl(s_{i,2}^{N\text{-}S}A_{left}^{i,2}+s_{i,2}^{S\text{-}N}A_{right}^{i,2}\bigr)-\sum_{e=1}^{l}\min\bigl(q_e'',\,g_{eff}^{i,1}\times A_{thr}^{i,1}\bigr),$

(33) $\delta_i=\begin{cases}0, & t_{start}^{i}+t_{clear}^{i}+t_{safe}\le t_{arr}^{i},\\ 1, & \text{otherwise,}\end{cases}\quad i=2,\dots,n,$

(34) $t_{safe}\ge 2,$

(35) $t_E(i)\ge t_0,\quad i=1,2,\dots,n,$

(36) $t_L(i)\ge t_0,\quad i=1,2,\dots,n,$

where (27) and (28) are the objective functions and represent minimizing the residence time of the EV at all intersections and maximizing the number of social vehicles passing through all intersections of the evacuation route, respectively.
The number of social vehicles includes the vehicles entering the system from the cross roads of the evacuation route and passing through the intersections, as well as the vehicles passing through the intersections in the direction of the evacuation route of the EV. Equation (29) ensures that the real start time of the green signal at intersection $i$ is between $t_E(i)$ and $t_L(i)$; (30) represents the arrival time of the EV at intersection $i$; (31) represents the time needed to clear the queued vehicles before the EV arrives; (32) denotes the calculation method for the queued vehicles at intersection $i$ at the moment the green signal starts, where $q_e''$, $e=1,2,\dots,l$, represents the number of queued vehicles of phase 1 at the moment the green signal starts in each signal cycle of intersection $i$ before $t_{start}^{i}$, and $l$ denotes the number of elapsed signal cycles; (33) defines the 0-1 variable of intersection $i$, which means that the waiting time of the EV at intersection $i$ is 0 if the green signal is on at intersection $i$, the queued vehicles have been cleared, and the security interval is satisfied, namely, $t_{start}^{i}+t_{clear}^{i}+t_{safe}\le t_{arr}^{i}$; otherwise, the EV has to wait in line, and the waiting time is equal to $t_{start}^{i}+t_{clear}^{i}+t_{safe}-t_{arr}^{i}$; (34) gives the lower limit of the value of $t_{safe}$; and (35) and (36) represent the relationship between $t_E(i)$, $t_L(i)$, and $t_0$. The variable $g_{eff}^{i,2}$ takes different values according to different $t_0$ and $t_{start}^{i}$. The values of $g_{eff}^{1,2}$ and $g_{eff}^{2,2}$ can be calculated by (3)–(9) and by (10)–(16) and (20)–(22), respectively, and the calculation method of $g_{eff}^{i,2}$, $i=3,4,\dots,n$, is similar to that of $g_{eff}^{2,2}$.
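To make the model concrete, the sketch below evaluates both objectives for one candidate start-time vector. It is a simplification, not the paper's code: $f_2$ collapses the per-movement terms of (28) into a single discharge rate per intersection, and all queue and timing inputs are assumed precomputed.

```python
# Evaluate f1 (eq. (27), with delta_i from eq. (33)) and a collapsed form of
# f2 (eq. (28)) for one candidate vector of green start times t_start.
def evaluate(t_start, t_clear, t_arr, t_safe, g_eff2, discharge_rate):
    f1 = 0.0
    for ts, tc, ta in zip(t_start, t_clear, t_arr):
        wait = ts + tc + t_safe - ta
        f1 += wait if wait > 0 else 0.0   # delta_i = 1 only when the EV must wait
    # vehicles released on the cross phase: effective green x discharge rate
    f2 = sum(g * r for g, r in zip(g_eff2, discharge_rate))
    return f1, f2
```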
## 4. Design of Solving Algorithm

At present, evolutionary algorithms are the main approach to solving multiobjective programming problems. Owing to its fast convergence and ease of implementation, the particle swarm optimization (PSO) algorithm has been successfully applied to many optimization problems [48–52]. In this paper, PSO is adopted to solve the given model.

### 4.1. Encoding and Particle Swarm Generation

Generate a particle swarm $P$ with $N_1$ particles randomly; each particle can be represented as $X=(x_1,x_2,\dots,x_n)$, where $x_i$, $i=1,2,\dots,n$, denotes $t_{start}^{i}$ of intersection $i$ and $x_i\in[t_E(i),t_L(i)]$. If $\operatorname{rand}()$ represents a random number between 0 and 1, then $x_i$, $i=1,2,\dots,n$, is produced as follows:

(37) $x_i=\operatorname{int}\bigl(t_E(i)+(t_L(i)-t_E(i))\times\operatorname{rand}()\bigr).$

### 4.2. Density of Solutions

Since the density of nondominated solutions is inversely proportional to their diversity [48], to obtain a uniformly distributed Pareto frontier the diversity of solutions should be guaranteed; that is, sparse solutions should be retained as far as possible. For this reason, the solution density is introduced in this paper. Suppose that there are $p$ objectives and $S$ solutions in the dominant population $E$. If the solutions in $E$ are sorted in ascending order according to their fitness values and $f_j(i)$, $j=1,2,\dots,p$; $i=1,2,\dots,S$, represents the $i$th solution when sorting by the $j$th objective, then the solution density is defined as follows:

(38) $\sum_{j=1}^{p}\bigl(f_j(i+1)-f_j(i-1)\bigr)+\sum_{j=1}^{p}\bigl(f_j(i)-f_j(i-1)\bigr)=\sum_{j=1}^{p}f_j(i+1)+\sum_{j=1}^{p}f_j(i)-2\sum_{j=1}^{p}f_j(i-1).$

### 4.3. Updating of Location and Velocity

In the standard multiobjective PSO algorithm, location updating requires the inertia factor $w$ and the learning factors $c_1$ and $c_2$. Since a larger $w$ is suitable for wide-range exploration of the solution space while a smaller $w$ is suitable for a detailed search near the target, the learning factors are adopted to balance the cognitive abilities of individuals and groups. To achieve better search and balance, some scholars have put forward the idea that the inertia factor and the learning factors can change dynamically [49]. In this paper, they are changed as follows:

(39) $w(k)=(w_f-w_s)\times\frac{Max-k}{Max}+w_s,\quad c_1(k)=(c_{1f}-c_{1s})\times\frac{Max-k}{Max}+c_{1s},\quad c_2(k)=(c_{2f}-c_{2s})\times\frac{Max-k}{Max}+c_{2s},$

where $k$, $Max$, $w_s$, and $w_f$ represent the current iteration, the maximum number of iterations, the initial value of the inertia weight, and the final value of the inertia weight, respectively; $c_{1s}$ and $c_{2s}$ represent the initial values of $c_1$ and $c_2$, and $c_{1f}$ and $c_{2f}$ their final values.

### 4.4. Mutation

In order to prevent the algorithm from falling into a local optimum, a mutation operator is introduced. Let $O$ be the mutation probability; if $k<Max\times O$, then $Max\times O-k$ particles are selected from $P$ and mutated according to the following equation:

(40) $x_i=x_i+\operatorname{rand}()\times\operatorname{int}\left(\frac{t_L(i)-t_E(i)}{2}-\frac{x_i}{2}\right).$

This not only ensures the diversification of particles but also ensures the feasibility of particles after mutation.
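The PSO-specific pieces of Section 4 reduce to three small routines: random initialization inside $[t_E(i), t_L(i)]$ (eq. (37)), the linear schedules for the inertia and learning factors (eq. (39)), and the mutation step. The grouping in (40) is ambiguous in the source, so the mutation below follows our literal reading; everything here is a plain sketch with our own names, not the authors' implementation.

```python
import random

# Particle initialization inside the feasible window, eq. (37).
def init_particle(tE, tL):
    return [int(e + (l - e) * random.random()) for e, l in zip(tE, tL)]

# Linear ramp of eq. (39): value w_f at k = 0 decaying to w_s at k = Max.
# The same schedule is applied to w, c1, and c2 with their own endpoints.
def schedule(k, Max, w_s, w_f):
    return (w_f - w_s) * (Max - k) / Max + w_s

# Mutation step, one reading of eq. (40); keeps particles near the window.
def mutate(x, tE, tL):
    return [xi + random.random() * int((l - e) / 2 - xi / 2)
            for xi, e, l in zip(x, tE, tL)]
```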
## 5. Simulation Analysis

Assume that the evacuation route is composed of 4 intersections, as shown in Figure 3. The evacuation direction of the EV is from west to east. It passes straight through intersection 1 and intersection 2, turns left at intersection 3, and turns right at intersection 4, with $L_{EV\text{-}1}=400$ m, $L_{EV\text{-}2}=850$ m, $L_{EV\text{-}3}=1350$ m, $L_{EV\text{-}4}=1850$ m, $v_{EV}=45$ km/h, $v_{reg}=45$ km/h; $d_{i1}=d_{i2}=28$ s, $C_i=56$ s, $i=1,2,3,4$, $C_{max}=120$ s, $C_{min}=30$ s; $ws_1=0$ s, $ws_2=30$ s, $ws_3=60$ s, $ws_4=90$ s; $t_{switch}=3$ s, $t_{pass}=2$ s, $B=7$ m; $G_{min}^{1}=G_{min}^{2}=15$ s, $G_{max}^{1}=G_{max}^{2}=60$ s, $G_{real}^{1}=G_{real}^{2}=25$ s, $G_{low}=10$ s, $G_{short}^{j}=10$ s. Figure 3: Intersections on evacuation route. In the south-northward and west-eastward directions, the ratios of straight, right-turn, and left-turn traffic are 0.7, 0.2, and 0.1, respectively, and they are 0.6, 0.3, and 0.1, respectively, in the north-southward direction. In the particle swarm algorithm, $N_1=100$, $Max=1000$, and $O=0.1$. The size of the external dominance population is 100, and $w_s=0.4$, $w_f=0.9$, $c_{1f}=1$, $c_{2f}=2$, $c_{1s}=2$, $c_{2s}=1$. Assume that the numbers of vehicles waiting to pass through each intersection at time $t_0$ are $q_1=10$, $q_2=5$, $q_3=6$, and $q_4=8$, respectively, and $q_i^N=q_i^S=10$, $i=1,2$, $q_3^S=q_3^E=10$, and $q_4^N=q_4^W=10$. At each intersection, the arrival rate from south to north is 1080–1440 pcu/h and the average departure rate is 1440 pcu/h; the arrival rate from east to west is 1440–1800 pcu/h, and the saturation flow rate is 1800 pcu/h. Assume that $t_0$ is 9 o'clock and 10 o'clock, respectively. The Pareto solution sets obtained from the simulation, implemented in C#, are shown in Tables 1 and 2. The data corresponding to each intersection indicate how many seconds after 9 o'clock or 10 o'clock each intersection starts the green signal; $f_1$ and $f_2$ are the residence time of the EV at all the intersections and the number of vehicles passing through the system, respectively.

Table 1: The Pareto solution set of EV when $t_0$ is 9 o'clock. The intersection columns give the real start time of the green signal (s).

| Number | Intersection 1 | Intersection 2 | Intersection 3 | Intersection 4 | $f_1$ (s) | $f_2$ (pcu) |
|---|---|---|---|---|---|---|
| (1) | 10 | 33 | 53 | 81 | 0 | 112.1 |
| (2) | 10 | 34 | 54 | 81 | 2.3 | 113.6 |
| (3) | 10 | 34 | 55 | 82 | 2.4 | 114.8 |
| (4) | 11 | 34 | 55 | 83 | 3.9 | 116.6 |
| (5) | 12 | 34 | 56 | 83 | 4.7 | 118.7 |
| (6) | 13 | 34 | 58 | 84 | 4.8 | 120.4 |

Table 2: The Pareto solution set of EV when $t_0$ is 10 o'clock. The intersection columns give the real start time of the green signal (s).

| Number | Intersection 1 | Intersection 2 | Intersection 3 | Intersection 4 | $f_1$ (s) | $f_2$ (pcu) |
|---|---|---|---|---|---|---|
| (1) | 10 | 31 | 56 | 82 | 0 | 119.7 |
| (2) | 11 | 33 | 58 | 83 | 2.9 | 119.9 |
| (3) | 11 | 34 | 57 | 83 | 3.2 | 122.4 |
| (4) | 12 | 34 | 58 | 83 | 4.9 | 122.7 |
| (5) | 13 | 34 | 57 | 83 | 5.5 | 123.5 |
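For reference, the first scenario's inputs can be collected into a single configuration. The sketch below (names ours, purely illustrative) transcribes the values given above and notes the one practical wrinkle: speeds are stated in km/h while distances are in meters, so a unit conversion is needed before feeding routines like those shown earlier.

```python
# Scenario 1 inputs from the text (distances in m, times in s, speeds in km/h).
scenario_1 = {
    "L_ev": [400, 850, 1350, 1850],   # detector-to-intersection distances
    "v_ev_kmh": 45, "v_reg_kmh": 45,
    "d": [28, 28], "C": 56,           # phase durations and cycle length
    "ws": [0, 30, 60, 90],            # first-time-window starts ws_i
    "t_switch": 3, "t_pass": 2, "B": 7,
    "G_min": 15, "G_max": 60, "G_real": 25, "G_low": 10,
    "q": [10, 5, 6, 8],               # initial queues on the EV approach
}
v_ev = scenario_1["v_ev_kmh"] / 3.6   # convert to m/s before use
```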
Assume instead that the number of vehicles waiting to pass through each intersection at time $t_0$ is $q_i=20$, $i=1,2,3,4$, and $q_i^N=q_i^S=10$, $i=1,2$, $q_3^S=q_3^E=10$, $q_4^N=q_4^W=10$. Traffic flows from north to south and from west to east are the key traffic flows. At each intersection, the arrival rate from north to south is 1400–1600 pcu/h and the average departure rate is 1440 pcu/h; the arrival rate from west to east is 1800–1900 pcu/h and the saturation flow rate is 1800 pcu/h; the arrival rates in the other directions remain unchanged. If $t_0$ is 9 o'clock and 10 o'clock, respectively, the Pareto solution sets are shown in Tables 3 and 4.

Table 3: The Pareto solution set for an increasing arrival rate when $t_0$ is 9 o'clock. The intersection columns give the real start time of the green signal (s).

| Number | Intersection 1 | Intersection 2 | Intersection 3 | Intersection 4 | $f_1$ (s) | $f_2$ (pcu) |
|---|---|---|---|---|---|---|
| (1) | 2 | 46 | 73 | 108 | 29.4 | 165.7 |
| (2) | 2 | 45 | 75 | 108 | 29.5 | 167.1 |
| (3) | 0 | 45 | 76 | 114 | 33.2 | 171.2 |
| (4) | 1 | 43 | 76 | 111 | 36.7 | 175.4 |
| (5) | 1 | 44 | 80 | 116 | 37.9 | 183.5 |
| (6) | 4 | 44 | 80 | 111 | 38.4 | 184.4 |
| (7) | 7 | 43 | 82 | 123 | 38.4 | 186.9 |
| (8) | 4 | 43 | 80 | 117 | 39.3 | 189.1 |
| (9) | 7 | 43 | 84 | 119 | 39.9 | 193.6 |

Table 4: The Pareto solution set for an increasing arrival rate when $t_0$ is 10 o'clock. The intersection columns give the real start time of the green signal (s).

| Number | Intersection 1 | Intersection 2 | Intersection 3 | Intersection 4 | $f_1$ (s) | $f_2$ (pcu) |
|---|---|---|---|---|---|---|
| (1) | 11 | 43 | 79 | 110 | 27.2 | 169.9 |
| (2) | 10 | 45 | 80 | 108 | 29.4 | 174.7 |
| (3) | 4 | 33 | 76 | 106 | 29.7 | 180.2 |
| (4) | 2 | 34 | 73 | 111 | 30.1 | 183.8 |
| (5) | 6 | 46 | 79 | 116 | 31.8 | 185.7 |
| (6) | 4 | 35 | 71 | 107 | 36.6 | 187.0 |
| (7) | 12 | 43 | 77 | 108 | 38.3 | 189.2 |

To verify the control effect of the proposed signal preemption control method under much larger traffic volumes, for example, in the morning or evening peak period, we assume that $t_0$ is 8 o'clock, namely, in the morning peak period, and $q_i=20$, $i=1,2,3,4$, $q_i^N=q_i^S=15$, $i=1,2$, $q_3^S=q_3^E=15$, $q_4^N=q_4^W=15$. At each intersection, the arrival rates from north to south and from west to east are 1800–1900 pcu/h and 2200–2400 pcu/h, respectively. The saturation flow rate is 1800 pcu/h, and the arrival rates in the other directions remain unchanged. The Pareto optimal set is shown in Figure 4. Figure 4: The Pareto optimal set. We can see from the simulation results in Tables 1 and 2 that, across the solutions in the Pareto solution set, the two optimization objectives vary together: as one objective decreases, the other decreases simultaneously, which implies that the reduction in the residence time of the EV at intersections comes at the expense of a reduction in the number of vehicles passing through the system. It can also be seen from the results in Tables 3 and 4 that, at the moment the EV is detected, if there are many queued vehicles at the intersections and the volume of the key traffic flows exceeds the amount of traffic that the system can release, the control method can still guarantee that the EV passes through the intersections at a faster speed while more social vehicles pass through the system. The results shown in Figure 4 further indicate that the signal preemption control method remains effective when the arrival rates are much larger, as in the morning peak period. Simulation results under different traffic states show that the signal preemption control method presented in this paper can reduce the delay of EV at intersections and lessen the impact of EV on social vehicles. When multiple EVs arrive at the same time, the signal priority control method proposed in this paper is also applicable. If the arrival interval between adjacent EVs is relatively long, the situation should be treated as two signal priority processes: after the front EV has passed through the intersection, the signal must first transition from the priority signal back to the normal signal, and then the method above is adopted to control the following EV.

## 6. Conclusion

In order to overcome the disadvantages of intersection-by-intersection signal preemption control, we study a route-based dynamic signal preemption control method for EV in this paper. Firstly, a single intersection is taken as the research object to determine the earliest-possible start time and the latest-possible start time of each intersection in the evacuation direction of the EV on the selected route.
## 6. Conclusion

In order to overcome the disadvantages of intersection-by-intersection signal preemption control, this paper studies a route-based dynamic signal preemption control method for EVs. Firstly, a single intersection is taken as the research object to determine the earliest-possible and latest-possible start times of the green signal at each intersection along the EV's evacuation direction on the selected route. Then, considering both the delay of the EV at intersections and the effect of the EV on social vehicles, a multiobjective programming model is established and a solving algorithm is designed, so that the EV can pass through the intersections without reducing speed and, as far as possible, without stopping, while the impact on social vehicles is minimized. Simulation calculations covering detection of the EV at different times and under different traffic states are carried out, and the corresponding Pareto sets are given. The results show that this method yields more reasonable green-light start times at each intersection, reduces the delay of the EV at intersections, and improves the efficiency of evacuation. Determining how the signal should be converted back to the normal signal once preemption has finished, according to the signal timing, the traffic flow characteristics, and other parameters (that is, the signal transition strategy), is the next focus of study.

--- *Source: 1024382-2018-04-01.xml*
2018
# Subsurface and Petrophysical Studies of Shaly-Sand Reservoir Targets in Apete Field, Niger Delta

**Authors:** P. A. Alao; A. I. Ata; C. E. Nwoke
**Journal:** ISRN Geophysics (2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/102450

---

## Abstract

Conventional departures from Archie conditions for petrophysical attribute delineation include shaliness, fresh formation waters, thin-bed reservoirs, and combinations of these cases. If these departures are unrecognized, water saturation can be overestimated, and this can result in loss of opportunity. Wireline logs of four (4) wells from Apete field were studied to delineate petrophysical attributes of shaly-sand reservoirs in the field. Shale volume and porosities were calculated, water saturations were determined by the dual water model, and net pay was estimated using field-specific pay criteria. Ten sand units within the Agbada Formation penetrated by the wells were delineated and correlated, and their continuity was observed across the studied wells. The reservoirs had high volume of shale (Vcl), high hydrocarbon saturation, low water saturation, and good effective porosity, ranging over 12.50–46.90%, 54.00–98.39%, 1.61–46.0%, and 10.40–26.80%, respectively. The pay zones are relatively inhomogeneous reservoirs, as revealed by the Buckles plot, except in Apete 05. The direction of deposition of the sands was thus inferred to be east-west. Empirical relationships apply with variable levels of accuracy, as observed in the porosity-depth, water saturation-depth, and water saturation-porosity trends. Core data are recommended for better characterization of these reservoirs.

---

## Body

## 1. Introduction

Shales can cause complications for the petrophysicist because they are generally conductive and may therefore mask the high-resistivity characteristic of hydrocarbons [1]. Several factors are to be considered when delineating petrophysical attributes for shaly-sand reservoirs, because clay minerals add conductivity to the formation, especially at low water saturations. Clay minerals attract water that is adsorbed onto their surface, as well as cations (e.g., sodium) that are themselves surrounded by hydration water. This gives rise to an excess conductivity compared with rock in which clay crystals are not present and whose pore space might otherwise be filled with hydrocarbon. Using Archie's equation in shaly sands results in very high water saturation values and may lead to potentially hydrocarbon bearing zones being missed. Moreover, in clean sands the irreducible water volume is a function of the surface area of the sand grains and therefore the grain size, but for shaly sands the addition of silt and clay usually decreases effective porosity due to poorer sorting and increases the irreducible water volume with the finer grain size [2]. Archie's equation was developed for clean rocks and does not account for the extra conductivity caused by the clay present in shaly sands. Therefore, Archie's equation does not provide accurate water saturation in shaly sands; in fact, water saturations obtained from Archie's equation tend to overestimate the water in shaly sands. Several models have been proposed by many researchers for shaly-sand analysis, such as the Juhasz model, dual water model, Indonesian model, Waxman and Smits model, and so forth.
## 2. Synopsis of the Geology

The stratigraphic sequence of the Niger Delta comprises three broad lithostratigraphic units, namely, (1) a continental shallow massive sand sequence, the Benin Formation, (2) a coastal marine sequence of alternating sands and shales, the Agbada Formation, and (3) a basal marine shale unit, the Akata Formation (Figure 2). Outcrops of these units are exposed at various localities (Figure 1). The Akata Formation consists of clays and shales with minor sand intercalations. The sediments were deposited in prodelta environments, and the sand percentage here is generally less than 30%. Petroleum in the Niger Delta is produced from the unconsolidated sands of the Agbada Formation. Characteristics of the reservoirs in the Agbada Formation are controlled by depositional environment and by depth of burial. The Agbada Formation consists of alternating sands and shales representing sediments of the transitional environment, comprising the lower delta plain (mangrove swamps, floodplain, and marsh) and the coastal barrier and fluviomarine realms. The sand percentage within the Agbada Formation varies from 30 to 70%, which results from the large number of depositional offlap cycles [3]. A complete cycle generally consists of thin fossiliferous transgressive marine sand, followed by an offlap sequence which commences with marine shale and continues with laminated fluviomarine sediments, followed by barrier and/or fluviatile sediments terminated by another transgression [4, 5].

Figure 1 Map of Southern Nigeria showing outcrops of Cretaceous and Tertiary formations and type localities of subsurface stratigraphic units. After Short and Stauble [6].

Figure 2 Stratigraphic column of the Niger Delta. After Shannon et al. [7] and Doust et al. [8].

The Benin Formation is characterized by a high sand percentage (70–100%) and forms the top layer of the Niger Delta depositional sequence. The massive sands were deposited in a continental environment comprising the fluvial realms (braided and meandering systems) of the upper delta plain. The Niger Delta time-stratigraphy is based on biochronological interpretations of fossil spores, foraminifera, and calcareous nannoplankton. The current delta-wide stratigraphic framework is largely based on palynological zonations labeled with Shell's alphanumeric codes (e.g., P630, P780, and P860). This allows correlation across all facies types from continental (Benin) to open marine (Akata). There have been concerted efforts, within the work scope of the stratigraphic committee of the Niger Delta (STRATCOM), to produce a generally acceptable delta-wide biostratigraphic framework [9], but not much more has been accomplished after several data gathering exercises by the committee. The sediments of the Niger Delta span a period of 54.6 million years, during which, worldwide, some thirty-nine eustatic sea level rises have been recognized [10]. Correlation with the chart of Galloway [11] confirms the presence of nineteen such named marine flooding surfaces in the Niger Delta, eight of which are locally developed. Adesida et al. [10] defined eleven lithological mega sequences marked at the base by regionally mappable transgressive shales (shale markers) that are traceable across depobelt boundary faults and proposed these as the genetic sequences that can be used as the basis for the lithostratigraphy of the Niger Delta.

## 3. Methodology

Composite wireline log data from four wells were interpreted.
The basic analysis procedure involves the following steps, each of which is described in the following sections.

### 3.1. Import and Well Log Data

The well data were imported into the interpretation software and well log correlation (Figure 3) was carried out, after which the petrophysical attributes were delineated. Well correlation helped in determining the direction of thickening of the sands being mapped and the lateral continuity of these reservoirs.

Figure 3 Well correlation of all reservoir sands.

### 3.2. Zoning and Point Selection

Zoning is of vital importance in the interpretation of well logs. The logs were split into potential reservoir zones and nonreservoir zones. Hydrocarbon bearing intervals were identified and differentiated based largely on the readings from the shallow and deep reading resistivity tools. However, hydrocarbon typing (oil and gas differentiation) was based on a density-neutron log overlay.

### 3.3. Compute Shale Volume from the Gamma Ray

The shale volume was derived from the gamma ray log, first by determining the gamma ray index $I_{GR}$ [12]:

(1) $I_{GR} = \dfrac{GR_{\log} - GR_{\min}}{GR_{\max} - GR_{\min}}$,

where $I_{GR}$ is the gamma ray index, $GR_{\log}$ the gamma ray reading of the formation, $GR_{\min}$ the minimum gamma ray reading (sand baseline), and $GR_{\max}$ the maximum gamma ray reading (shale baseline). For the purpose of this research work, Larionov's [13] volume-of-shale formula for Tertiary rocks was used:

(2) $V_{sh} = 0.083\,\bigl(2^{3.7 I_{GR}} - 1\bigr)$,

where $V_{sh}$ is the volume of shale and $I_{GR}$ the gamma ray index. (A worked sketch of these computations follows this section.)

### 3.4. Compute Total Porosity and Shale-Corrected (Effective) Porosity

Total and effective porosity were estimated from the density, neutron, and sonic logs.

### 3.5. Compute Water Saturation

Water saturation was estimated using Archie's water saturation formula and Schlumberger's dual water model.

### 3.6. Estimate Net Pay

Net pay was calculated using field-specific net pay cutoffs: water saturation < 50%, porosity > 10%, and volume of clay < 50%.

### 3.7. Use of Crossplots

Pickett, Buckles, and neutron-density crossplots were generated to understand reservoir properties. The Pickett plot, a resistivity-porosity crossplot, was used to determine saturation values alongside the Archie parameters a and m. Porosity is calculated from the neutron porosity and density porosity logs and is plotted against the resistivity data obtained from the deep resistivity log: porosity on the y-axis with a logarithmic scale ranging from 0.1% to 100%, and resistivity on the x-axis with a logarithmic scale ranging from 1 to 100 ohm-m. In order to properly characterize the reservoir sands delineated and correlated across the studied wells, a Buckles plot, a plot of Sw versus Φ, was generated to show whether or not the sands are at irreducible water saturation. Porosity is plotted on the y-axis with a scale ranging from 0 to 40% (shown in decimals), while water saturation is plotted on the x-axis with a scale ranging from 0 to 100% (shown in decimals). The scale for the bulk volume water lines (grey lines) ranges from 0.01 to 0.25 and is shown as a secondary y-axis. The implicit assumption of the Buckles plot approach is that the product of irreducible water saturation and porosity is constant. Empirical relationships were also established for porosity-depth, water saturation-depth, water saturation-porosity, and permeability-depth to check the trends.
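As an illustration of the shale-volume and saturation arithmetic described above, the following minimal Python sketch implements the gamma ray index of (1), Larionov's Tertiary formula of (2), and Archie's water saturation; the numerical inputs are hypothetical log readings, and the dual water model, which adds a clay-bound-water correction on top of this, is not reproduced here.

```python
# Minimal sketch of the shaly-sand calculations described above.
# Input values are hypothetical; the dual water model is omitted.

def gamma_ray_index(gr_log, gr_min, gr_max):
    """Eq. (1): linear gamma ray index between sand and shale baselines."""
    return (gr_log - gr_min) / (gr_max - gr_min)

def vsh_larionov_tertiary(igr):
    """Eq. (2): Larionov's volume-of-shale formula for Tertiary rocks."""
    return 0.083 * (2 ** (3.7 * igr) - 1)

def sw_archie(rw, rt, phi, a=1.0, m=2.0, n=2.0):
    """Archie water saturation: Sw = (a*Rw / (phi**m * Rt)) ** (1/n).
    a and m are taken from the Pickett plot (approx. 1 and 2 in this field)."""
    return (a * rw / (phi ** m * rt)) ** (1.0 / n)

# Hypothetical log readings:
igr = gamma_ray_index(gr_log=75.0, gr_min=30.0, gr_max=120.0)   # 0.50
vsh = vsh_larionov_tertiary(igr)                                 # ~0.22
sw = sw_archie(rw=0.2, rt=10.0, phi=0.25)                        # ~0.57
print(f"IGR={igr:.2f}, Vsh={vsh:.2f}, Sw={sw:.2f}")
```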
Results from the petrophysical analysis revealed reservoir Sand F to be the most viable reservoir, with net thickness as high as 169.50 ft (Table 7). All ten reservoirs exhibited good petrophysical attributes with high porosity and hydrocarbon saturation. The sands are shaly sands with a high volume of shale, which leads to overestimated water saturation; the dual water model was therefore used for estimating water saturation, for a better appraisal of these shaly reservoirs.

Table 1 Average total porosity for all reservoir sands. Blank spaces (—) mean the zone is not a reservoir.

| Reservoir | Apete 05 | Apete 06 | Apete 15 | Apete 16 |
|---|---|---|---|---|
| A | — | 0.254 | — | 0.250 |
| B | 0.223 | 0.225 | — | 0.224 |
| C | 0.280 | 0.234 | — | — |
| D | — | 0.196 | 0.174 | 0.122 |
| E | 0.214 | 0.230 | — | 0.104 |
| F | 0.272 | 0.248 | 0.239 | 0.154 |
| G | — | 0.232 | — | 0.178 |
| H | 0.210 | 0.177 | — | 0.129 |
| I | — | 0.204 | — | 0.141 |
| J | — | 0.126 | 0.172 | — |

Table 2 Average effective porosity for all reservoir sands. Blank spaces (—) mean the zone is not a reservoir.

| Reservoir | Apete 05 | Apete 06 | Apete 15 | Apete 16 |
|---|---|---|---|---|
| A | — | 0.247 | — | 0.218 |
| B | 0.219 | 0.218 | — | 0.203 |
| C | 0.259 | 0.214 | — | — |
| D | — | 0.167 | 0.158 | 0.111 |
| E | 0.201 | 0.207 | — | 0.101 |
| F | 0.250 | 0.221 | 0.193 | 0.145 |
| G | — | 0.211 | — | 0.169 |
| H | 0.205 | 0.160 | — | 0.113 |
| I | — | 0.200 | — | 0.137 |
| J | — | 0.123 | 0.167 | — |

Table 3 Average net-to-gross ratio for all reservoir sands. Blank spaces (—) mean the zone is not a reservoir.

| Reservoir | Apete 05 | Apete 06 | Apete 15 | Apete 16 |
|---|---|---|---|---|
| A | — | 0.793 | — | 0.826 |
| B | 0.857 | 0.771 | — | 0.851 |
| C | 0.515 | 0.933 | — | — |
| D | — | 0.712 | 0.622 | 0.449 |
| E | 0.654 | 0.615 | — | 0.125 |
| F | 0.797 | 0.931 | 0.886 | 0.785 |
| G | — | 0.981 | — | 0.692 |
| H | 0.389 | 0.278 | — | 0.289 |
| I | — | 0.853 | — | 0.654 |
| J | — | 0.328 | 0.279 | — |

Table 4 Average volume of shale for all reservoir sands. Blank spaces (—) mean the zone is not a reservoir.

| Reservoir | Apete 05 | Apete 06 | Apete 15 | Apete 16 |
|---|---|---|---|---|
| A | — | 0.217 | — | 0.308 |
| B | 0.292 | 0.339 | — | 0.243 |
| C | 0.162 | 0.122 | — | — |
| D | — | 0.363 | 0.290 | 0.350 |
| E | 0.415 | 0.346 | — | 0.274 |
| F | 0.245 | 0.192 | 0.288 | 0.312 |
| G | — | 0.162 | — | 0.276 |
| H | 0.327 | 0.360 | — | 0.288 |
| I | — | 0.258 | — | 0.279 |
| J | — | 0.469 | 0.357 | — |

Table 5 Average water saturation from the dual water model for all reservoir sands. Blank spaces (—) mean the zone is not a reservoir.

| Reservoir | Apete 05 | Apete 06 | Apete 15 | Apete 16 |
|---|---|---|---|---|
| A | — | 0.333 | — | 0.238 |
| B | 0.071 | 0.298 | — | 0.106 |
| C | 0.200 | 0.436 | — | — |
| D | — | 0.118 | 0.169 | 0.231 |
| E | 0.223 | 0.002 | — | 0.046 |
| F | 0.353 | 0.071 | 0.164 | 0.246 |
| G | — | 0.460 | — | 0.176 |
| H | 0.107 | 0.023 | — | 0.079 |
| I | — | 0.017 | — | 0.087 |
| J | — | 0.011 | 0.016 | — |

Table 6 Average water saturation from Archie's model for all reservoir sands.

| Reservoir | Apete 05 | Apete 06 | Apete 15 | Apete 16 |
|---|---|---|---|---|
| A | — | 0.377 | — | 0.324 |
| B | 0.164 | 0.406 | — | 0.189 |
| C | 0.221 | 0.449 | — | — |
| D | — | 0.241 | 0.232 | 0.384 |
| E | 0.355 | 0.115 | — | 0.140 |
| F | 0.417 | 0.113 | 0.257 | 0.339 |
| G | — | 0.492 | — | 0.244 |
| H | 0.192 | 0.151 | — | 0.157 |
| I | — | 0.072 | — | 0.177 |
| J | — | 0.229 | 0.135 | — |

Table 7 Net thickness (ft) for all reservoir sands.

| Reservoir | Apete 05 | Apete 06 | Apete 15 | Apete 16 |
|---|---|---|---|---|
| A | — | 78.50 | — | 57.00 |
| B | 12.00 | 45.50 | — | 31.50 |
| C | 34.00 | 42.00 | — | — |
| D | — | 89.00 | 102.00 | 62.00 |
| E | 78.50 | 40.00 | — | 13.00 |
| F | 169.50 | 95.00 | 156.00 | 67.50 |
| G | — | 126.50 | — | 45.00 |
| H | 27.50 | 22.00 | — | 11.00 |
| I | — | 81.00 | — | 70.00 |
| J | — | 20.00 | 12.00 | — |
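As a bridge between the pay criteria of Section 3.6 and the tabulated attributes, the following minimal Python sketch applies the quoted cutoffs (Sw < 50%, porosity > 10%, Vcl < 50%) to flag net pay; the interval data are invented for illustration and are not the Apete logs.

```python
# Minimal net-pay sketch: apply the field-specific cutoffs quoted above.
# The interval data below are hypothetical, not the Apete well logs.

CUTOFFS = {"sw_max": 0.50, "phi_min": 0.10, "vcl_max": 0.50}

def is_pay(sw, phi, vcl, c=CUTOFFS):
    return sw < c["sw_max"] and phi > c["phi_min"] and vcl < c["vcl_max"]

# (top_ft, base_ft, Sw, phi, Vcl) for hypothetical logged intervals:
intervals = [
    (7000, 7040, 0.25, 0.22, 0.30),   # pay
    (7040, 7060, 0.65, 0.18, 0.35),   # wet: fails the Sw cutoff
    (7060, 7090, 0.30, 0.08, 0.28),   # tight: fails the porosity cutoff
    (7090, 7120, 0.28, 0.20, 0.55),   # shaly: fails the Vcl cutoff
]
net_pay = sum(base - top for top, base, sw, phi, vcl in intervals
              if is_pay(sw, phi, vcl))
print(f"net pay: {net_pay} ft")       # 40 ft: only the first interval qualifies
```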
### 4.1. Crossplot

Correlation analysis was performed to determine whether the petrophysical attributes (water saturation and porosity) are interdependent. Generally, effective porosity decreases with depth (Figure 4) with a high correlation coefficient, except in Apete 15, where porosity increases with depth. The observed reduction in porosity with depth is likely due to compaction under overburden pressure. Water saturation generally increases with depth (Figure 5), except in Apete 16. This implies that reservoirs in Apete field occur at shallow depths; hence reservoirs become unavailable deeper in the subsurface. Efforts to delineate trends between water saturation and effective porosity were only marginally successful. From the crossplot results (Figure 6), effective porosity reduces with increasing water saturation in Apete 05 and 06. In Apete 15 and 16, by contrast, the correlation coefficient was extremely low, suggesting that no relationship exists between the two petrophysical attributes. Buckles plots for the reservoirs in the wells were generated (Figure 7); they reveal only Apete 06 to be at irreducible water saturation, as its data points align consistently along a bulk volume of water (BVW) trend line. The reservoir zones in this well are considered homogeneous; therefore, hydrocarbon production from Apete 06 should be water free [14], that is, the reservoirs would have a low water cut.

Figure 4 Porosity-depth plot for Apete 05, 06, 15, and 16.

Figure 5 Water saturation-depth plot for Apete 05, 06, 15, and 16.

Figure 6 Effective porosity-water saturation trend for Apete 05, 06, 15, and 16.

Figure 7 Buckles plot for Apete 05, 06, and 15.

The Pickett plot (Figure 8) reveals the reservoirs to be somewhat shaly, as the saturation exponent is less than 2 in the best porosity type. This is further confirmed by the neutron-density crossplot (Figure 9). The Pickett plot was used to determine the Archie parameters, the tortuosity factor (a) and the cementation exponent (m), which are approximately 1 and 2, respectively. The neutron-density crossplot reveals mostly laminated clay in the reservoirs, which should be taken into consideration during well planning. The shale morphology generally changes from laminated to dispersed, which affects the saturation mixing function; hence the need for a saturation model designed specifically for shaly sands (Schlumberger's dual water model) rather than the conventional Archie water saturation model. Results from the two water saturation models show a wide disparity that would not have been noticed had only the conventional Archie model been used, which could have led to bypassing some reservoirs and undervaluing the reserves in this field.

Figure 8 Pickett plot of Sand C in Apete 05.

Figure 9 Neutron-density crossplot for Apete 05 and 06.
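Since the Buckles analysis above rests on the assumption that BVW = Sw × Φ is roughly constant at irreducible conditions, that constancy check can be sketched in a few lines of Python; the (Sw, Φ) pairs below are hypothetical and not the field's log data.

```python
# Minimal sketch of a Buckles-style check: at irreducible water saturation,
# the bulk volume of water BVW = Sw * phi should be roughly constant.
# The (Sw, phi) pairs below are hypothetical, not Apete log data.
import statistics

def bvw(sw, phi):
    return sw * phi

zones = [(0.30, 0.20), (0.24, 0.25), (0.27, 0.22), (0.60, 0.21)]
values = [bvw(sw, phi) for sw, phi in zones]
spread = statistics.pstdev(values) / statistics.mean(values)  # coefficient of variation

print([round(v, 3) for v in values])      # [0.06, 0.06, 0.059, 0.126]
print(f"coefficient of variation: {spread:.2f}")
# A low spread suggests the zone is at irreducible saturation (low expected
# water cut); a large spread, caused here by the last pair, suggests it is not.
```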
## 5. Conclusion

In the study of well logs from the Apete field, Niger Delta, Apete 06 was observed to be the most economic well drilled in this field; apart from penetrating the most reservoirs, it also has the highest total net thickness, 639.50 ft (Table 7). Generally, water saturation increases with depth and porosity reduces with depth as a result of compaction. Water saturation-porosity trends cannot be emphatically established except in Apete 05 and 06, where porosity reduces with increasing water saturation. The reservoirs are shaly sands, with the shales mostly occurring as laminated clays that could impede flow during production and thereby cause reservoir compartmentalization. Shale morphology changes from laminated to dispersed, thereby affecting saturation mixing functions.
## Glossary

- Buckles plot: A plot of water saturation (Sw) against porosity (Φ), generated to show whether or not the sands are at irreducible water saturation (Φ on the y-axis and Sw on the x-axis)
- Bulk volume of water (BVW): Percentage of the total rock volume occupied by water
- Core data: Data derived from analysis of core (rock) samples
- Eustatic sea level: Sea level change which occurs on a global scale
- Fluviatile: Pertaining or relating to rivers; found in or near rivers
- Fossiliferous: Containing fossils
- Laminated: Composed of layers bonded together
- Marine flooding surface: A surface of deposition at the time the shoreline is at its highest landward position
- Morphology: The description of the shape of geologic features
- Offlap: The arrangement of strata deposited on the sea floor during the progressive withdrawal of the sea from land
- Pickett plot: A plot of Archie's saturation parameters against water resistivity, used to estimate the water saturation of a reservoir
- Transgression: Progressive movement of the sea towards land
- Transitional environment: An environment situated between the continental and marine realms
- Water cut: The amount of water produced with oil.

--- *Source: 102450-2013-12-10.xml*
2013
# Listeria monocytogenes Meningitis in an Immunosuppressed Patient with Autoimmune Hepatitis and IgG4 Subclass Deficiency

**Authors:** Shahin Gaini
**Journal:** Case Reports in Infectious Diseases (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/102451

---

## Abstract

A 51-year-old Caucasian woman with Listeria monocytogenes meningitis was treated and discharged after an uncomplicated course. Her medical history included immunosuppressive treatment with prednisolone and azathioprine for autoimmune hepatitis. A diagnostic work-up after the meningitis episode revealed that she had low levels of the IgG4 subclass. To our knowledge, this is the first case report describing a possible association between autoimmune hepatitis and the occurrence of Listeria monocytogenes meningitis, describing a possible association between Listeria monocytogenes meningitis and deficiency of the IgG4 subclass, and finally describing a possible association between Listeria monocytogenes meningitis and immunosuppressive therapy with prednisolone and azathioprine.

---

## Body

## 1. Introduction

Listeria monocytogenes is a pathogen associated with meningitis and/or sepsis [1, 2]. It is well known that cases of invasive infection with Listeria monocytogenes are often associated with different states of immunosuppression and pregnancy, and with severe comorbidity like cancer [3–6]. Our case report points towards three possible associations with invasive listeria infection: autoimmune hepatitis, treatment with prednisolone and azathioprine, and finally low levels of IgG4 subclass immunoglobulins.

## 2. Case Presentation

A 51-year-old Caucasian female, treated in the Hepatology Out-Patient Clinic with oral prednisolone 5 mg daily and oral azathioprine 100 mg daily for autoimmune hepatitis since 2006, was admitted to the Medical Department at the National Hospital, Faroe Islands, in 2013, after only 12 hours of evolving headache, vomiting, and fever. At admission, the Glasgow Coma Scale (GCS) was 15 points, temperature 38.9 degrees Celsius, blood pressure 126/73, heart rate 85, and oxygen saturation 93%. No neck stiffness and no neurological deficits were found. Blood chemistry at the time of admission showed a C-reactive protein (CRP) that was only mildly elevated at 13 mg/L. A CT of the brain without contrast and a chest X-ray were performed, with no abnormalities. Blood cultures were drawn before administration of antibiotics. Because the patient had been coughing for 3 weeks prior to the admission, intravenous benzylpenicillin 2 MIU four times daily was started at the time of admission, on suspicion of lower respiratory tract infection. Within a few hours after admission, the patient became somnolent with a falling GCS and fever rising to over 40 degrees Celsius. A spinal tap was performed, and a turbid cerebrospinal fluid (CSF) was sampled with a pleocytosis of 1333 × 10⁶ cells/L, 78% neutrophils, CSF-protein 2.0 g/L, and CSF-glucose 1.9 mmol/L (blood glucose 6.0 mmol/L). On suspicion of meningitis and/or encephalitis, the patient was switched to intravenous benzylpenicillin 3 MIU six times daily, combined with intravenous ceftriaxone 4 g daily and intravenous aciclovir 750 mg three times daily. The patient was not given dexamethasone, because she had received one dose of intravenous benzylpenicillin 2 MIU prior to the spinal tap. Gram stain of the CSF was negative.
CSF was sent for polymerase chain reaction (PCR) examinations for Herpes simplex virus, Varicella-zoster virus, Epstein-Barr virus, Cytomegalovirus, Mycoplasma pneumoniae, and Enterovirus, as well as specific PCR for Streptococcus pneumoniae and for Neisseria meningitidis. All these PCR examinations were negative. Blood cultures from the time of admission showed no growth. On the second day of admission, the Laboratory of Clinical Microbiology at our hospital identified Gram positive rods from the CSF cultures. On suspicion of possible infection with Listeria monocytogenes, the antibiotic therapy was supplemented with intravenous gentamicin 240 mg daily. A CT of the brain with contrast and an MRI of the brain with contrast ruled out the presence of brain abscesses. During the next two days, she improved clinically with rising GCS. On the fourth day of admission, the Laboratory of Clinical Microbiology confirmed the identification of Listeria monocytogenes in the CSF. The Listeria monocytogenes strain was, as expected, sensitive to benzylpenicillin, ampicillin, and gentamicin. Treatment with ceftriaxone and aciclovir was terminated. She improved very quickly, and already on the fourth day she was up walking on the ward with a GCS of 15 points. The Listeria monocytogenes meningitis was treated for three weeks with high-dose benzylpenicillin, supplemented for the first week with gentamicin. The azathioprine treatment was halted during the admission, and after a few days the oral prednisolone treatment was changed to intravenous hydrocortisone succinate 100 mg twice daily. Her liver enzymes flared up after two weeks of admission, and she was switched back to oral prednisolone, starting at a dose of 25 mg daily, while her azathioprine treatment remained paused. After three weeks of antibiotic treatment for Listeria monocytogenes, she was released from hospital. Seven months after the hospital stay, she was restarted on her usual oral azathioprine 100 mg daily combined with oral prednisolone 15 mg daily. A diagnostic work-up for immunodeficiencies was done and revealed a low IgG4 subclass level of 0.005–0.017 g/L (reference range 0.052–1.25 g/L). The IgG subclasses have been analyzed four times since her episode of Listeria monocytogenes meningitis (Table 1). Until now she has been clinically stable, and she is followed up on a regular basis in both the Hepatology and the Infectious Diseases/Immune Defect Out-Patient Clinics at the National Hospital, Faroe Islands. She is still treated with azathioprine and prednisolone in the same dosages. She has not had any serious infections since her Listeria monocytogenes meningitis and is living well.

Table 1 Levels of immunoglobulins and subclasses.

| Immunoglobulin | Reference values (g/L) | January 2013 | February 2013 | September 2013 | December 2014 |
|---|---|---|---|---|---|
| IgG (total) | 6.4–13.5 | 9.8 | 11.0 | 12.3 | 12.1 |
| IgA (total) | 0.7–3.12 | 1.95 | 2.63 | 2.90 | 2.58 |
| IgM (total) | 0.56–3.52 | 2.80 | 4.22 | 5.11 | 3.73 |
| IgG1 | 2.8–8 | 6.9 | 7.9 | 8.1 | 8.4 |
| IgG2 | 1.2–5.7 | 2.18 | 2.52 | 2.92 | 2.89 |
| IgG3 | 0.24–1.25 | 1.780 | 1.710 | 2.030 | 1.720 |
| IgG4 | 0.052–1.25 | 0.006ᵃ | 0.017ᵇ | 0.005 | 0.015 |

ᵃSample from the third day of admission. ᵇSample from 4 weeks after admission day.

## 3. Discussion

Listeria monocytogenes meningitis and/or sepsis is associated with the extremes of age, immunosuppression, comorbidity, and pregnancy [1–7]. It is a very rare disease, with an estimated incidence of 0.05–0.2 cases/100,000 population/year [8, 9].
The mortality rate is approximately 17–25% [1, 6, 10]. Ampicillin, amoxicillin, and benzylpenicillin are considered the treatment of choice for Listeria monocytogenes meningitis [1, 2, 6, 7]. A large recent multinational retrospective cohort study examined clinical features, diagnosis, treatment, and prognosis in neuroinfections with Listeria monocytogenes [6]. In this study, addition of aminoglycoside therapy did not affect the prognosis of neuroinfection with Listeria monocytogenes [6]. Delay in relevant antimicrobial treatment covering Listeria monocytogenes and the presence of seizures were associated with increased mortality in this study [6].

Autoimmune hepatitis is an autoimmune disease involving the liver [11]. The disease is often well controlled with prednisolone alone or in combination with azathioprine [11]. The incidence of autoimmune hepatitis has been reported to be 1.68 per 100,000 population per year in Denmark [12]. The patient in this case report was diagnosed with autoimmune hepatitis in 2006. At the time of admission with Listeria monocytogenes meningitis, she had been treated with prednisolone 5 mg daily and azathioprine 100 mg daily for several years, and she was clinically stable regarding her autoimmune hepatitis. To the best of our knowledge, no case has been published describing Listeria monocytogenes meningitis in a patient with autoimmune hepatitis, and this is the first report with this combination of diseases. There is no convincing evidence in the literature associating autoimmune hepatitis with an increased risk of serious infections independently of the immunosuppressive drugs used to treat it. However, such a direct effect of autoimmune hepatitis itself would be difficult to demonstrate, because most patients are treated with either prednisolone alone or prednisolone and azathioprine in combination. In other autoimmune diseases, such as systemic lupus erythematosus, there are data indicating an increased risk of infection independent of immunosuppressive treatment [13].

An increased risk of severe invasive infections and opportunistic infections is well documented in association with treatment with immunosuppressive drugs like prednisolone, azathioprine, tumour necrosis factor inhibitors (TNF-in), and other potent immunosuppressive drugs [14, 15]. TNF-in have increasingly been reported to be associated with various severe and opportunistic infections in patients treated with these potent immunosuppressive drugs [16]. More than 15 cases of Listeria monocytogenes infection after treatment with TNF-in have been reported [17]. In many of those cases, patients were also treated with other immunosuppressive drugs like prednisolone and azathioprine [17]. One report described the occurrence of Listeria monocytogenes sepsis in a patient with ulcerative colitis treated with a combination of prednisolone and azathioprine, as in our patient [18]. Two reports described the occurrence of Listeria monocytogenes in patients with ulcerative colitis treated with azathioprine monotherapy [19–21]. Only one previous case of Listeria monocytogenes infection in a patient with autoimmune hepatitis has been published [22]: spontaneous bacterial peritonitis with Listeria monocytogenes in a patient with liver cirrhosis, ascites, and autoimmune hepatitis treated with azathioprine [20, 21].
Our patient was treated with both prednisolone and azathioprine, so it is impossible to conclude which of those two immunosuppressive drugs had the main role in facilitating the development of Listeria monocytogenes meningitis in our patient. A synergistic negative role of prednisolone and azathioprine in our patient cannot be excluded. The role of her autoimmune hepatitis independently of the immunosuppressive drugs cannot be estimated. An immunological study in mice, examining how exposure either to cyclophosphamide, vinblastine, and azathioprine or to methotrexate impairs the immune response to infection with Listeria monocytogenes, indicated a substantial long-term negative effect of azathioprine on the immune response to this infection [23].

In a diagnostic and immunological work-up looking for other causes of this unusual infection in our patient with autoimmune hepatitis, we observed relatively low levels of the IgG4 subclass in the screening of her immunoglobulin and IgG subclass levels. A screening program for immune deficiencies in patients with neuroinfections has been running since 2011 at our department: all patients with life-threatening neuroinfections are screened with measurements of immunoglobulins, including IgG subclasses, and flow cytometric analyses of lymphocyte subsets. That was the reason for measuring immunoglobulins in the patient presented in this report. The significance of low levels of IgG subclasses is still incompletely understood [24]. The most common IgG subclass deficiency is low levels of IgG2 combined with low levels of IgA [25]. IgG4 deficiencies are not common, and the role of low levels of IgG4 is not yet understood. Reports have been published showing that IgG4 deficiency can be an isolated deficiency but can also occur together with low levels of IgG2, IgA, and/or IgG1 [24, 25]. Deficiencies of IgG4 have been connected to different infections and autoimmune disorders; however, data are scarce [26]. To our knowledge, this is the first report documenting the presence of IgG4 deficiency together with the combination of autoimmune hepatitis and Listeria monocytogenes meningitis. However, the role of this IgG4 deficiency remains unclear. Several studies indicate that treatment with prednisolone and azathioprine probably does not influence the levels of IgG subclasses [27, 28]. This could indicate that the low IgG4 levels in our patient were independent of the immunosuppressive treatment with prednisolone and azathioprine, and it could therefore be speculated that our patient had an IgG4 deficiency immune defect that could have played a role in the occurrence of Listeria monocytogenes meningitis. It is well known that levels of immunoglobulins, including IgG subclasses, can fluctuate significantly in relation to active infection, surgery, and other causes [25]. Therefore, measurements of IgG subclasses have to be repeated before a conclusion can be drawn. The patient described in this report also had slightly elevated IgM and IgG3 levels, which can be explained by the autoimmune hepatitis, in which elevated levels of IgG and its subclasses are seen [11].

In conclusion, we present the first case in the literature with the combination of Listeria monocytogenes meningitis and autoimmune hepatitis.
Our report points towards three possible associations with the development of Listeria monocytogenes meningitis: her autoimmune hepatitis in itself; immunosuppressive treatment with prednisolone and azathioprine; and finally her IgG4 subclass deficiency.

---

*Source: 102451-2015-10-19.xml*
# Space Object Detection in Video Satellite Images Using Motion Information

**Authors:** Xueyang Zhang; Junhua Xiang; Yulin Zhang
**Journal:** International Journal of Aerospace Engineering (2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1024529

---

## Abstract

Compared to ground-based observation, space-based observation is an effective approach to catalog and monitor the increasing number of space objects. In this paper, space object detection in video satellite images with a star image background is studied. A new detection algorithm using motion information is proposed, which includes not only the known satellite attitude motion information but also the unknown object motion information. The effect of satellite attitude motion on an image is analyzed quantitatively; it can be decomposed into translation and rotation. Considering the continuity of object motion and brightness change, variable thresholding based on local image properties and on the detection results of the previous frame is used to segment each single-frame image. The algorithm then uses the correlation of object motion across multiple frames together with the satellite attitude motion information to detect the object. Experimental results with video images from the Tiantuo-2 satellite show that this algorithm provides a good way to detect space objects.

---

## Body

## 1. Introduction

On September 8, 2014, Tiantuo-2 (TT-2), independently designed by the National University of Defense Technology, was successfully launched into orbit; it is the first Chinese interactive Earth observation microsatellite using a video imaging system. Its mass is 67 kg, and its orbital altitude is 490 km. Experiments based on interactive control strategies with a human in the loop were carried out to realize continuous tracking and monitoring of moving objects.

Video satellites are now widely utilized by domestic and foreign researchers, and several have been launched into orbit, such as SkySat-1 and 2 by Skybox Imaging, the TUBSAT series by the Technical University of Berlin [1], and the video satellite by Chang Guang Satellite Technology. They can obtain video images with different on-orbit performances.

The increasing number of space objects has a growing impact on human space activities, and cataloging and monitoring them has become a hot issue in the field of the space environment [2–5]. Compared to ground-based observation, space-based observation is not restricted by weather or geographical location and avoids the disturbance of the atmosphere to the object's signal, which is a unique advantage [2, 3, 6]. Using a video satellite to observe space objects is therefore an effective approach.

Object detection and tracking in satellite video images is an important part of space-based observation. For the general problem of object detection and tracking in video, optical flow [7], block matching [8], template detection [9, 10], and other methods have been proposed. However, most of these methods are based on gray level and require sufficient texture information [11]. In fact, small object detection in optical images with a star background mainly faces the following difficulties: (1) Because the object occupies only one or a few pixels in the image, the shape of the object is not available.
(2) Owing to background stars and to the noise introduced by the space environment and the detecting equipment, the object is almost submerged among complex bright background spots, which greatly increases the difficulty of object detection.

Aiming at these difficulties, many scholars have proposed various algorithms, mainly including track before detect (TBD) and detect before track (DBT). Multistage hypothesis testing (MHT) [12] and dynamic programming-based algorithms [13–15] can be classified as TBD, which is effective when the image signal-to-noise ratio is very low, but they suffer from high computational complexity and hard thresholding [16]. In practice, DBT is usually adopted for object detection in star images [17]. Some DBT algorithms are discussed below.

An object trace acquisition algorithm for sequence star images with a moving background was put forward by [18] to detect the discontinuous trace of a small space object; it used the centroid of a group of the brightest stars in the sequence images to match images and filtered out background stars. A space object detection algorithm based on the principle of star map recognition was put forward by [19], which used a triangle algorithm to accomplish image registration and detected the space object in the registered image series. Based on the analysis of a star image model, Han and Liu [20] proposed an image registration method via extracting feature points and then matching triangles. Zhang et al. [21] used a triangle algorithm to match background stars in image series, classified stars and potential targets, and then utilized the three-frame nearest-neighbor correlation method to detect targets coarsely, with false targets filtered out by a multiframe back-and-forth searching method. An algorithm based on iterative optimization distance classification was proposed by [22] to detect small visible optical space objects; it used the classification method to filter out background stars and utilized the continuity of object motion to achieve trajectory association.

The core idea of the DBT algorithms above is to match the sequence images based on the relative invariance of star positions and to filter out background stars. They therefore also involve a large amount of computation and have poor real-time performance. A real-time detection algorithm based on FPGA and DSP for small space objects was presented by [23], but the instability and motion of the observation platform were not considered. In addition, the image sequences used to validate the algorithms in the literature above come from ground-based photoelectric telescopes in [15, 17, 18, 21–23] or were generated by simulation in [19, 20]; the characteristics of image sequences from space-based observation have not been considered.

This paper studies space object detection in video satellite images with a star image background, and a new detection algorithm based on motion information is proposed, which includes not only the known satellite attitude motion information but also the unknown object motion information. Experimental results on video images from TT-2 demonstrate the effectiveness of the algorithm.

## 2. Characteristic Analysis of Satellite Video Images

A video satellite image of space contains the deep space background, stars, objects, and noise introduced by imaging devices and cosmic rays.
The mathematical model is given by [24] as follows:

$$f(x,y,t)=f_B(x,y,t)+f_s(x,y,t)+f_T(x,y,t)+n(x,y,t), \qquad (1)$$

where $f_B(x,y,t)$ is the gray level of the deep space background, $f_s(x,y,t)$ is the gray level of stars, $f_T(x,y,t)$ is the gray level of objects, and $n(x,y,t)$ is the gray level of noise; $(x,y)$ is the pixel coordinate in the image and $t$ is the time of the video.

In the video image, stars and weak small objects occupy only one or a few pixels. It is difficult to distinguish the object from background stars by morphological characteristics or photometric features. Besides, attitude motion of the object leads to changing brightness, even to losing the object in several frames. Therefore, it is almost impossible to detect the object in a single-frame image, and it is necessary to use the continuity of object motion across multiframe images.

Figure 1 shows a local image from a TT-2 video, where Figure 1(a) is a star and Figure 1(b) is an object (debris). It is impossible to distinguish them by morphological characteristics or photometric features.

Figure 1: Images of a star and an object.

## 3. Object Detection Using Motion Information

When the attitude motion of a video satellite is stabilized, background stars move extremely slowly in the video and can be considered static over several frames. At the same time, noise is random (dead pixels appear in a fixed position) and only objects move continuously. This is the most important distinction among the motion characteristics of their images. Because of platform jitter, objects cannot be detected by a simple frame difference method. A new object detection algorithm using motion information (NODAMI) is proposed. The motion information includes the known satellite attitude motion information: when the attitude changes, the video image varies accordingly, and this case can be transformed into the case of attitude stabilization by compensating the image for the attitude motion. The complexity of the detection algorithm is greatly reduced because no image registration is needed.

The procedure of NODAMI is shown in Figure 2.

Figure 2: Procedure of NODAMI.

Firstly, the effect of satellite attitude motion on the image is analyzed and the attitude motion compensation formula is derived. Then, each step of NODAMI is described in detail.

### 3.1. Compensation of Satellite Attitude Motion

The inertial frame is defined as $O_i$-$X_iY_iZ_i$, originating at the center of the Earth, with the $O_iZ_i$-axis aligned with the Earth's North Pole and the $O_iX_i$-axis aligned with the vernal equinox. The satellite body frame is defined as $O_b$-$X_bY_bZ_b$, originating at the mass center of the satellite, with the three axes aligned with the principal axes of the satellite. The pixel frame is defined as $I$-$xy$, originating at the upper left corner of the image plane and using the pixel as the coordinate unit; $(x,y)$ represent the column and row numbers, respectively, of the pixel in the image. The image frame is defined as $O$-$X_pY_p$, originating at the intersection of the optical axis and the image plane, with the $OX_p$-axis and the $OY_p$-axis aligned with $Ix$ and $Iy$, respectively. The camera frame is defined as $O_c$-$X_cY_cZ_c$, originating at the optical center of the camera, with the $O_cZ_c$-axis aligned with the optical axis, the $O_cX_c$-axis aligned with $-OX_p$, and the $O_cY_c$-axis aligned with $-OY_p$. All the 3D frames defined above are right-handed.

In the inertial frame, let the coordinate of the object be $(x_i,y_i,z_i)$ and the coordinate of the satellite be $(x_{i0},y_{i0},z_{i0})$.
Assuming that the 3-2-1 Euler angles of the satellite body frame with respect to the inertial frame are $(\psi,\theta,\varphi)$, the coordinate of the object in the satellite body frame is given as follows:

$$\begin{pmatrix} x_b \\ y_b \\ z_b \end{pmatrix} = R_{321}(\psi,\theta,\varphi) \begin{pmatrix} x_i-x_{i0} \\ y_i-y_{i0} \\ z_i-z_{i0} \end{pmatrix} = R_X(\varphi)\,R_Y(\theta)\,R_Z(\psi) \begin{pmatrix} x_i-x_{i0} \\ y_i-y_{i0} \\ z_i-z_{i0} \end{pmatrix}, \qquad (2)$$

where $R_{321}$ is the attitude matrix of the satellite body frame with respect to the inertial frame and $R_X(\theta)$, $R_Y(\theta)$, and $R_Z(\theta)$ are the coordinate transformation matrices for a rotation by the angle $\theta$ around the $x$-axis, $y$-axis, and $z$-axis, respectively:

$$R_X(\theta)=\begin{pmatrix}1&0&0\\0&\cos\theta&\sin\theta\\0&-\sin\theta&\cos\theta\end{pmatrix},\quad R_Y(\theta)=\begin{pmatrix}\cos\theta&0&-\sin\theta\\0&1&0\\\sin\theta&0&\cos\theta\end{pmatrix},\quad R_Z(\theta)=\begin{pmatrix}\cos\theta&\sin\theta&0\\-\sin\theta&\cos\theta&0\\0&0&1\end{pmatrix}. \qquad (3)$$

Assuming that the camera is fixed on the satellite and the 3-2-1 Euler angles of the camera frame with respect to the satellite body frame are $(\alpha_0,\beta_0,\gamma_0)$, which are constant and determined by the design of the satellite structure, the attitude matrix of the camera frame with respect to the satellite body frame, $R_0$, can be derived. Let the coordinate of $O_c$ in the satellite body frame be $S_0=(x_0,y_0,z_0)^T$; then the coordinate of the object in the camera frame is given as follows:

$$\begin{pmatrix}x_c\\y_c\\z_c\end{pmatrix}=R_0\begin{pmatrix}x_b-x_0\\y_b-y_0\\z_b-z_0\end{pmatrix}. \qquad (4)$$

The video satellite is designed to make $(\alpha_0,\beta_0,\gamma_0)$ as close to zero as possible, and in practice they are small quantities, while $(x_0,y_0,z_0)$ is much smaller than $(x_b,y_b,z_b)$. Without loss of generality, let $R_0=I$ and $S_0=0$, that is, let the camera frame and the satellite body frame coincide.

As shown in Figure 3, the coordinate of the object in the image frame is given as follows:

$$\begin{pmatrix}x_p\\y_p\end{pmatrix}=\frac{f}{z_c}\begin{pmatrix}x_c\\y_c\end{pmatrix}, \qquad (5)$$

where $f$ is the focal length of the camera.

Figure 3: Diagram of frames.

If the field of view of the camera is $\mathrm{FOV}$, then $x_p,y_p<\tan(\mathrm{FOV}/2)$.

The coordinate of the object in the pixel frame is given as follows:

$$\begin{pmatrix}m\\n\end{pmatrix}=\begin{pmatrix}x_p/d_x\\y_p/d_y\end{pmatrix}+\begin{pmatrix}m_0\\n_0\end{pmatrix}, \qquad (6)$$

where $d_x\times d_y$ is the size of each pixel and $(m_0,n_0)$ is the coordinate of the image center.
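As a concrete illustration of the projection chain in (2), (5), and (6), the following is a minimal numpy sketch (ours, not code from the paper) that maps an inertial-frame object position to pixel coordinates under the simplifications $R_0=I$ and $S_0=0$; all function and variable names are our own assumptions.

```python
import numpy as np

def rot_x(t):
    # R_X in eq. (3)
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def rot_y(t):
    # R_Y in eq. (3)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def rot_z(t):
    # R_Z in eq. (3)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def project_to_pixel(obj_i, sat_i, psi, theta, phi, f, dx, dy, m0, n0):
    """Inertial coordinates -> pixel coordinates via eqs. (2), (5), (6),
    assuming the camera frame coincides with the body frame (R0 = I, S0 = 0)."""
    # Eq. (2): 3-2-1 rotation into the satellite body (= camera) frame
    rel = np.asarray(obj_i, float) - np.asarray(sat_i, float)
    xc, yc, zc = rot_x(phi) @ rot_y(theta) @ rot_z(psi) @ rel
    # Eq. (5): perspective projection onto the image plane
    xp, yp = f * xc / zc, f * yc / zc
    # Eq. (6): image-plane coordinates to pixel indices
    return xp / dx + m0, yp / dy + n0
```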
The video satellite can achieve tracking imaging of the object based on interactive attitude adjustment with a human in the loop. Attitude adjustment is applied to the attitude of the satellite body frame with respect to the inertial frame. Assuming that the 3-2-1 Euler angles of the attitude adjustment are $(\Delta\psi,\Delta\theta,\Delta\varphi)$, let the coordinates of a point $P$ in the camera frame, the image frame, and the pixel frame be $(x_{c1},y_{c1},z_{c1})$, $(x_{p1},y_{p1})$, and $(m_1,n_1)$, respectively, before the attitude adjustment and $(x_{c2},y_{c2},z_{c2})$, $(x_{p2},y_{p2})$, and $(m_2,n_2)$, respectively, after the attitude adjustment.

Ignoring the orbit motion of the satellite during the attitude adjustment, which is reasonable when the adjustment is nearly instantaneous, gives

$$\begin{pmatrix}x_{c2}\\y_{c2}\\z_{c2}\end{pmatrix}=R_{321}(\Delta\psi,\Delta\theta,\Delta\varphi)\begin{pmatrix}x_{c1}\\y_{c1}\\z_{c1}\end{pmatrix}=R_X(\Delta\varphi)\,R_Y(\Delta\theta)\,R_Z(\Delta\psi)\begin{pmatrix}x_{c1}\\y_{c1}\\z_{c1}\end{pmatrix}. \qquad (7)$$

Denote $(x_c,y_c,z_c)^T$, $(x_{c1},y_{c1},z_{c1})^T$, $(x_{c2},y_{c2},z_{c2})^T$ by $\vec{x}$, $\vec{x}_1$, $\vec{x}_2$ and the transformation functions determined by $R_X(\Delta\varphi)$, $R_Y(\Delta\theta)$, $R_Z(\Delta\psi)$ by $T_{X,\Delta\varphi}$, $T_{Y,\Delta\theta}$, $T_{Z,\Delta\psi}$, that is,

$$T_{X,\Delta\varphi}\vec{x}=R_X(\Delta\varphi)\vec{x},\quad T_{Y,\Delta\theta}\vec{x}=R_Y(\Delta\theta)\vec{x},\quad T_{Z,\Delta\psi}\vec{x}=R_Z(\Delta\psi)\vec{x}. \qquad (8)$$

Then

$$\vec{x}_2=T_{X,\Delta\varphi}T_{Y,\Delta\theta}T_{Z,\Delta\psi}\vec{x}_1. \qquad (9)$$

Denote $(x_p,y_p)^T$, $(x_{p1},y_{p1})^T$, $(x_{p2},y_{p2})^T$ by $\vec{y}$, $\vec{y}_1$, $\vec{y}_2$ and the mapping from the camera frame to the image frame by $A$, that is,

$$\vec{y}=A\vec{x},\quad \vec{y}_1=A\vec{x}_1,\quad \vec{y}_2=A\vec{x}_2. \qquad (10)$$

Denote the transformation functions of the image frame determined by $T_{X,\Delta\varphi}$, $T_{Y,\Delta\theta}$, $T_{Z,\Delta\psi}$ by $r_{X,\Delta\varphi}$, $r_{Y,\Delta\theta}$, $r_{Z,\Delta\psi}$, that is,

$$AT_{X,\Delta\varphi}\vec{x}=r_{X,\Delta\varphi}A\vec{x}=r_{X,\Delta\varphi}\vec{y},\quad AT_{Y,\Delta\theta}\vec{x}=r_{Y,\Delta\theta}A\vec{x}=r_{Y,\Delta\theta}\vec{y},\quad AT_{Z,\Delta\psi}\vec{x}=r_{Z,\Delta\psi}A\vec{x}=r_{Z,\Delta\psi}\vec{y}. \qquad (11)$$

Then

$$\vec{y}_2=A\vec{x}_2=AT_{X,\Delta\varphi}T_{Y,\Delta\theta}T_{Z,\Delta\psi}\vec{x}_1=r_{X,\Delta\varphi}\bigl(A\circ T_{Y,\Delta\theta}T_{Z,\Delta\psi}\bigr)\vec{x}_1=r_{X,\Delta\varphi}r_{Y,\Delta\theta}\bigl(A\circ T_{Z,\Delta\psi}\bigr)\vec{x}_1=r_{X,\Delta\varphi}r_{Y,\Delta\theta}r_{Z,\Delta\psi}\vec{y}_1. \qquad (12)$$

Equation (12) shows that the transformation of the image frame caused by attitude adjustment can be decomposed into a composition of the transformation functions determined by the rotations around the satellite body frame axes.

Obviously,

$$r_{X,\Delta\varphi}\vec{y}=\begin{pmatrix}\dfrac{x_p}{-\sin\Delta\varphi\,y_p+\cos\Delta\varphi}\\[2ex]\dfrac{\cos\Delta\varphi\,y_p+\sin\Delta\varphi}{-\sin\Delta\varphi\,y_p+\cos\Delta\varphi}\end{pmatrix},\quad r_{Y,\Delta\theta}\vec{y}=\begin{pmatrix}\dfrac{\cos\Delta\theta\,x_p-\sin\Delta\theta}{\sin\Delta\theta\,x_p+\cos\Delta\theta}\\[2ex]\dfrac{y_p}{\sin\Delta\theta\,x_p+\cos\Delta\theta}\end{pmatrix},\quad r_{Z,\Delta\psi}\vec{y}=\begin{pmatrix}\cos\Delta\psi&\sin\Delta\psi\\-\sin\Delta\psi&\cos\Delta\psi\end{pmatrix}\begin{pmatrix}x_p\\y_p\end{pmatrix}. \qquad (13)$$

When $\Delta\psi$, $\Delta\theta$, $\Delta\varphi$ are small quantities, $r_{X,\Delta\varphi}$ and $r_{Y,\Delta\theta}$ can be approximated as follows:

$$r_{X,\Delta\varphi}\vec{y}=\begin{pmatrix}x_p\\y_p+\Delta\varphi\end{pmatrix},\quad r_{Y,\Delta\theta}\vec{y}=\begin{pmatrix}x_p-\Delta\theta\\y_p\end{pmatrix}. \qquad (14)$$

The conclusion can thus be drawn that the effect of satellite attitude motion on an image decomposes into a translation and a rotation of the image: small roll and pitch rotations lead to image translation, whereas yaw induces image rotation.

In particular, when $d_x=d_y=d$, the attitude motion compensation formula can be given as follows:

$$\begin{pmatrix}m_2\\n_2\end{pmatrix}=\begin{pmatrix}\cos\Delta\psi&\sin\Delta\psi\\-\sin\Delta\psi&\cos\Delta\psi\end{pmatrix}\begin{pmatrix}m_1-m_0\\n_1-n_0\end{pmatrix}+\frac{1}{d}\begin{pmatrix}-\Delta\theta\\\Delta\varphi\end{pmatrix}+\begin{pmatrix}m_0\\n_0\end{pmatrix}. \qquad (15)$$

In practice, onboard sensors such as fiber optic gyroscopes give the angular rate of the attitude motion, $\boldsymbol{\omega}=(\omega_x,\omega_y,\omega_z)^T$. Supposing that the time interval per frame is $\Delta t$, when $\Delta t$ is a small quantity the angular rate within $\Delta t$ can be regarded as constant. Let

$$\vartheta=\lVert\boldsymbol{\omega}\rVert\,\Delta t,\qquad \vec{e}=\frac{\boldsymbol{\omega}\,\Delta t}{\vartheta}=(e_1,e_2,e_3)^T. \qquad (16)$$

Then the attitude adjustment matrix of the current frame moment with respect to the previous frame moment is given as follows:

$$A=\begin{pmatrix}c+(1-c)e_1^2&(1-c)e_1e_2+se_3&(1-c)e_1e_3-se_2\\(1-c)e_2e_1-se_3&c+(1-c)e_2^2&(1-c)e_2e_3+se_1\\(1-c)e_3e_1+se_2&(1-c)e_3e_2-se_1&c+(1-c)e_3^2\end{pmatrix}=\bigl(a_{ij}\bigr)_{3\times 3}, \qquad (17)$$

where

$$c=\cos\vartheta,\qquad s=\sin\vartheta. \qquad (18)$$

The 3-2-1 Euler angles $(\Delta\psi,\Delta\theta,\Delta\varphi)$ can then be solved for as follows:

$$\Delta\psi=\operatorname{atan2}(a_{12},a_{11}),\qquad \Delta\theta=\arcsin(-a_{13}),\qquad \Delta\varphi=\operatorname{atan2}(a_{23},a_{33}). \qquad (19)$$

Generally, attitude adjustment does not rotate the image, that is, $\Delta\psi$ can be approximated as 0, and $\Delta\theta$, $\Delta\varphi$ can be approximated as follows:

$$\Delta\theta=\omega_y\Delta t,\qquad \Delta\varphi=\omega_x\Delta t, \qquad (20)$$

since they are small quantities.

Substituting $\Delta\psi=0$ and (20) into (15) gives

$$\begin{pmatrix}m_2\\n_2\end{pmatrix}=\begin{pmatrix}m_1\\n_1\end{pmatrix}+\frac{1}{d}\begin{pmatrix}-\omega_y\Delta t\\\omega_x\Delta t\end{pmatrix}. \qquad (21)$$
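To make the compensation step concrete, here is a small sketch (our own illustration, not code from the paper) of eqs. (15) and (21): predicting where a static scene point moves in the pixel frame given the gyro rates. All names are assumptions.

```python
import numpy as np

def compensate(m1, n1, dpsi, dtheta, dphi, d, m0, n0):
    """General attitude motion compensation, eq. (15).
    (m1, n1): pixel position in the previous frame; d: pixel size;
    (m0, n0): image center; angles in radians."""
    c, s = np.cos(dpsi), np.sin(dpsi)
    u, v = m1 - m0, n1 - n0
    m2 = c * u + s * v - dtheta / d + m0
    n2 = -s * u + c * v + dphi / d + n0
    return m2, n2

def compensate_small_angle(m1, n1, omega, dt, d):
    """Simplified compensation for small per-frame rotations, eq. (21),
    with gyro rates omega = (wx, wy, wz) in rad/s and frame interval dt."""
    wx, wy, _ = omega
    return m1 - wy * dt / d, n1 + wx * dt / d
```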
### 3.2. Image Denoising

Noise in the video image mainly includes space radiation noise, space background noise, and CCD dark current noise. A bilateral filter can be used for denoising; it is a simple, noniterative scheme for edge-preserving smoothing [25]. The weights of the filter have two components: the first is the same spatial weighting used by the Gaussian filter, and the second takes into account the difference in intensity between the neighboring pixels and the evaluated one. The diameter of the filter is set to 5, and the weights are given as follows:

$$w_{ij}=\frac{1}{K}\,e^{-\left[(x_i-x_c)^2+(y_i-y_c)^2\right]/2\sigma_s^2}\,e^{-\left[f(x_i,y_i)-f(x_c,y_c)\right]^2/2\sigma_r^2}, \qquad (22)$$

where $K$ is the normalization constant, $(x_c,y_c)$ is the center of the filter, $f(x,y)$ is the gray level at $(x,y)$, $\sigma_s$ is set to 10, and $\sigma_r$ is set to 75.

### 3.3. Single-Frame Binary Segmentation

In order to distinguish stars and objects from the background, it is necessary to remove the background in each frame image, but stray light leads to an uneven gray level distribution of the background. Figure 4(a) is a single frame of the video image, and Figure 4(b) is its gray level histogram. The single-peak shape of the gray histogram shows that the classical global threshold method, traditionally used to segment star images [15, 18, 20–22, 24], cannot be used to segment the video image.

Figure 4: Single-frame image and its gray level histogram.

Whether it is a star or an object, its gray level is greater than that of the pixels in its neighborhood. We therefore consider variable thresholding based on local image properties to segment the image. Calculate the standard deviation $\sigma_{xy}$ and the mean value $m_{xy}$ for the neighborhood of every point $(x,y)$ in the image; these are descriptors of the local contrast and the average gray level. The variable threshold based on local contrast and average gray level is then given as follows:

$$T_{xy}=a\sigma_{xy}+bm_{xy}, \qquad (23)$$

where $a$ and $b$ are constants greater than 0; $b$ weights the contribution of the local average gray level to the threshold and can be set to 1, while $a$ weights the contribution of the local contrast and is the main parameter to set according to the object characteristics.

On the other hand, the brightness of a moving space object sometimes changes with its changing attitude. If $a$ were set to a constant, the object would be lost in some frames. Considering the continuity of object motion, if $(x,y)$ is detected as a probable object coordinate in the current frame, the probability of detecting the object in the 5 × 5 window around $(x,y)$ in the next frame is much greater. Combined with the continuity of the brightness change, if no probable object is detected in the 5 × 5 window around $(x,y)$ in the next frame, $a$ can be reduced by a factor $\rho$ ($\rho<1$), that is, to $a\rho$; $a\rho$ will be stored as $a_{uv}$ if $(u,v)$ is detected as a probable object coordinate in the 5 × 5 window around $(x,y)$ in the next frame. So the variable threshold based on local image properties and on the detection results of the previous frame is given as follows:

$$T_{xy}=a_{xy}\sigma_{xy}+bm_{xy}. \qquad (24)$$

The difference between (23) and (24) is the variable coefficient $a_{xy}$ based on the detection results of the previous frame; $a_{xy}$ is initially set a little greater than 1 and is reset to the initial value when the gray level at $(x,y)$ becomes large again (greater than 150).

The image binary segmentation rule is given as follows:

$$g(x,y)=\begin{cases}1,& f(x,y)>T_{xy},\\[0.5ex]0,& f(x,y)\le T_{xy},\end{cases} \qquad (25)$$

where $f(x,y)$ is the gray level of the original image at $(x,y)$ and $g(x,y)$ is the gray level of the segmented image at $(x,y)$.
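The following sketch (ours, not from the paper) shows one way to implement the denoising of Section 3.2 and the variable thresholding of eq. (24) with OpenCV and numpy. The per-pixel coefficient map and its update rule with the factor $\rho$ are only outlined in a comment, and all names are our own assumptions.

```python
import cv2
import numpy as np

def denoise(frame):
    # Bilateral filter with the parameters of Section 3.2:
    # diameter 5, sigma_r (range) = 75, sigma_s (spatial) = 10
    return cv2.bilateralFilter(frame, 5, 75, 10)

def segment(frame, a_map, b=1.0, win=7):
    """Binary segmentation with the variable threshold of eq. (24):
    T_xy = a_xy * sigma_xy + b * m_xy over a win x win neighborhood."""
    f = frame.astype(np.float32)
    mean = cv2.boxFilter(f, -1, (win, win))              # local mean m_xy
    sq_mean = cv2.boxFilter(f * f, -1, (win, win))
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0))  # local std sigma_xy
    return (f > a_map * std + b * mean).astype(np.uint8)

# a_map starts a little above 1 everywhere (1.1 in Section 4) and, per
# Section 3.3, is reduced by the factor rho (0.8 in Section 4) around a
# previously detected object coordinate when no probable object reappears
# in its 5 x 5 window; it is reset once the local gray level exceeds 150.
```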
### 3.4. Coordinate Extraction

In an ideal optical system, a point object on the CCD focal plane occupies one pixel, but under practical imaging conditions, circular aperture diffraction causes the object to spread over multiple pixels on the focal plane. The coordinate of the object in the pixel frame is therefore determined by the position of the center of the gray level distribution. The simple gray-weighted centroid algorithm is used to calculate the coordinates, with a positioning accuracy of up to 0.1~0.3 pixels [26], as follows:

$$(x_S,y_S)=\frac{\sum_{(x,y)\in S} f(x,y)\cdot(x,y)}{\sum_{(x,y)\in S} f(x,y)}, \qquad (26)$$

where $S$ is an object area after segmentation, $f(x,y)$ is the gray level of the original image at $(x,y)$, and $(x_S,y_S)$ is the coordinate of $S$. The coordinates will also be used in the single-frame binary segmentation.

### 3.5. Trajectory Association

After processing a frame image, some probable object coordinates are obtained. They are associated with existing trajectories, or new trajectories are generated, by the nearest neighborhood filter. The radius of the neighborhood is determined by the object characteristics: it needs to be greater than the distance the object image moves in one frame and must tolerate losing the object for several frames.

The Kalman filter is an efficient recursive filter that estimates the internal state of a linear dynamic system from a series of noisy measurements. A Kalman filter is used to predict the probable object coordinate, which determines where the nearest neighborhood filter is applied in the current frame.

Assuming that the state vector of the object at the $k$th frame is $\mathbf{x}_k=(x_k,y_k,v_{xk},v_{yk})^T$, that is, its coordinate and velocity in the pixel frame, the system equations are

$$\mathbf{x}_k=F\mathbf{x}_{k-1}+Bu_k+w=\begin{pmatrix}1&0&\Delta t&0\\0&1&0&\Delta t\\0&0&1&0\\0&0&0&1\end{pmatrix}\mathbf{x}_{k-1}+\begin{pmatrix}0&-\Delta t/d\\\Delta t/d&0\\0&-1/d\\1/d&0\end{pmatrix}\begin{pmatrix}\omega_{xk}\\\omega_{yk}\end{pmatrix}+w,\qquad z_k=H\mathbf{x}_k+v=\begin{pmatrix}1&0&0&0\\0&1&0&0\end{pmatrix}\mathbf{x}_k+v, \qquad (27)$$

where $F$ is the state transition matrix, $B$ is the control-input matrix, $u_k$ is the control vector, $H$ is the measurement matrix, $z_k$ is the measurement vector, that is, the coordinate associated with $(x_{k-1},y_{k-1})$ obtained by the nearest neighborhood filter at the $k$th frame, $w$ is the process noise, assumed to be zero-mean Gaussian white noise with covariance $Q$, denoted $w\sim N(0,Q)$, and $v$ is the measurement noise, assumed to be zero-mean Gaussian white noise with covariance $R$, denoted $v\sim N(0,R)$.

In fact, $Bu_k$ uses the result of (21) and realizes the compensation of attitude motion during attitude adjustment; in the case of attitude stabilization, $u_k=0$. Equation (27) unifies these two situations through the control vector. The standard prediction and update steps of the Kalman filter are omitted here.

When a trajectory has 20 points, we need to judge whether it belongs to an object, using the velocity in the state vector: if the mean velocity over these points is greater than a given threshold, the trajectory is an object; otherwise, it is not. This amounts to judging whether the point is moving. The threshold mainly serves to remove image motion caused by the instability of the satellite platform and other noise, and it is usually set to 2.

While the satellite attitude is being adjusted, no new trajectories are generated, in order to reduce false objects. Moreover, in practice, adjusting the attitude means that something already in the current frame is of interest, so there is no need to detect new objects.

Thus, objects are detected based on the continuity of motion over multiple frames and the trajectories are updated. The computational complexity is greatly reduced because no matching of stars across frames is needed, and the compensation of attitude motion allows objects to be detected well even during attitude adjustment.
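Below is a minimal sketch (our own, with assumed names) of the filter matrices in eq. (27), the standard predict/update steps, and the moving-object test of Section 3.5; the noise covariances use the values reported in Section 4, and "mean velocity" is interpreted as the mean speed over the trajectory points.

```python
import numpy as np

def kalman_matrices(dt, d):
    """State [x, y, vx, vy] in the pixel frame, control u = (wx, wy), eq. (27)."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    B = np.array([[0, -dt / d],
                  [dt / d, 0],
                  [0, -1 / d],
                  [1 / d, 0]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    Q = 1e-4 * np.eye(4)  # process noise covariance (value used in Section 4)
    R = 0.2 * np.eye(2)   # measurement noise covariance (value used in Section 4)
    return F, B, H, Q, R

def predict(x, P, F, B, u, Q):
    # Kalman prediction step; u = 0 under attitude stabilization
    return F @ x + B @ u, F @ P @ F.T + Q

def update(x, P, z, H, R):
    # Kalman update step with the associated measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

def is_object(velocities, thresh=2.0):
    # A 20-point trajectory is declared an object if its mean speed
    # exceeds the threshold (usually 2, Section 3.5)
    return np.mean(np.linalg.norm(np.asarray(velocities), axis=1)) > thresh
```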
## 4. Experimental Results

NODAMI is verified using video images from TT-2. TT-2 carries 4 space video sensors, and the video used in this paper is from the high-resolution camera, whose focal length is 1000 mm, FOV is 2°30′, and $d_x=d_y=8.33\,\mu\mathrm{m}$.
The video has 25 frames per second, and the resolution is 960 × 576.

Firstly, 20 consecutive frame images taken under attitude stabilization are processed. In (24), $a_{xy}$ is initially set to 1.1 and $b$ is set to 1. The reduction factor $\rho$ is set to 0.8. The neighborhood in image segmentation is set to 7 × 7. For example, 10 object areas are obtained after segmenting the 20th frame (as shown in Figure 5). These areas include objects, stars, and noise, which cannot be distinguished in a single frame.

Figure 5: Result of segmenting the 20th frame image.

In (27), $Q$ is set to $10^{-4}I_4$ based on empirical data and $R$ is set to $0.2I_2$, where $I_n$ is the $n\times n$ identity matrix. The radius of the neighborhood in trajectory association is set to 5. NODAMI detects one object in the 20 frame images, as shown in Figure 6. The full 960 × 576 image is too large to display, so Figure 6 shows the local region of interest, with a white box identifying the object. Figure 6 shows that the brightness of the object varies considerably; NODAMI nevertheless detects it well.

Figure 6: Detection results of 20 frame images.

Moreover, for these 20 frames, if $a_{xy}$ in (24) is set to a constant, the object is lost in 4 frames, whereas (24) with the variable coefficient detects the object in all frames.

Another, longer video of 40 s taken under attitude stabilization is processed, which has 1000 frame images. NODAMI detects one object in the 1000 frame images, as shown in Figure 7, derived by overlaying the 1000 frames. Again, the full 960 × 576 image is too large, so Figure 7(b) shows the local region of interest, with a white box identifying the object every 50 frames.

Figure 7: Detection results of 1000 frame images.

Figure 7 shows that the brightness of the object varies considerably, and the naked eye tends to miss the object in quite a few frames. In fact, if $a_{xy}$ in (24) is set to a constant, the object is lost in 421 frames, whereas (24) with the variable coefficient detects the object in 947 frames; the probability of detection improves from 42.1% to 94.7%.

Even (24) with the variable coefficient could not detect the object in all frames, but NODAMI can associate the object with the existing trajectory correctly after losing it for several frames.

Then, a 30 s video, during which the satellite attitude was adjusted, is processed. Overlaying its 750 frame images gives Figure 8. The trajectory of the object is in the white box of Figure 8; the brightness of the object again varies considerably. The two trajectories on the right are star trajectories caused by the satellite attitude adjustment. The object image was originally moving towards the lower left, but it appears to move towards the upper left owing to the attitude adjustment; meanwhile, the star images moved upwards, producing the two trajectories on the right. NODAMI detected the space object trajectory well and abandoned the star trajectories. As before, the number of points detected in the trajectory is less than 750, because the brightness of the object varies too much and the object is lost in several frames.
But NODAMI can associate the object with the existing trajectory correctly after losing the object in several frames.Figure 8 Overlay of 750 frame images.Using the known satellite attitude motion information and (21) to compensate attitude adjustment can give the trajectory of the object as shown in Figure 9, from which it can be seen that the trajectory is recovered well and the effect of attitude adjustment on the image is removed.Figure 9 Trajectory of the object.Experimental results show that the algorithm has good performance for moving space point objects even with changing brightness. ## 5. Conclusion In this paper, a new space object detection algorithm in a video satellite image using motion information is proposed. The effect of satellite attitude motion on an image is analyzed quantitatively, which can be decomposed into translation and rotation. Attitude motion compensation formula is derived. Considering the continuity of object motion and brightness change, variable thresholding based on local image properties and detection of previous frame is used to segment a single-frame image. Then, the algorithm uses the correlation of object motion in multiframe and satellite attitude motion information to detect the object. Computation complexity is greatly reduced without image matching of stars in different frames. And compensation of attitude motion results in detecting objects well even during attitude adjusting. Using the algorithm to process the video image from Tiantuo-2 detects the object and gives its trajectory, which shows that the algorithm has good performance for moving space point objects even with changing brightness. --- *Source: 1024529-2017-10-17.xml*
---

# Space Object Detection in Video Satellite Images Using Motion Information

**Authors:** Xueyang Zhang; Junhua Xiang; Yulin Zhang
**Journal:** International Journal of Aerospace Engineering (2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1024529
---

## Abstract

Compared to ground-based observation, space-based observation is an effective approach to cataloging and monitoring the growing population of space objects. In this paper, space object detection in video satellite images with a star image background is studied. A new detection algorithm using motion information is proposed, which includes not only the known satellite attitude motion information but also the unknown object motion information. The effect of satellite attitude motion on the image is analyzed quantitatively and shown to decompose into a translation and a rotation. Considering the continuity of object motion and brightness change, variable thresholding based on local image properties and on the detection result of the previous frame is used to segment each single frame. The algorithm then uses the correlation of object motion across multiple frames together with the satellite attitude motion information to detect the object. Experimental results with a video image from the Tiantuo-2 satellite show that the algorithm provides an effective approach to space object detection.

---

## Body

## 1. Introduction

On September 8, 2014, Tiantuo-2 (TT-2), designed independently by the National University of Defense Technology, was successfully launched into orbit. It is the first Chinese interactive Earth observation microsatellite using a video imaging system, with a mass of 67 kg and an orbital altitude of 490 km. Experiments based on interactive control strategies with a human in the loop were carried out to realize continuous tracking and monitoring of moving objects.

Video satellites are now widely used by domestic and foreign researchers, and several have been launched into orbit, such as SkySat-1 and SkySat-2 by Skybox Imaging, the TUBSAT series by the Technical University of Berlin [1], and the video satellite by Chang Guang Satellite Technology. They can obtain video images with different on-orbit performances.

The increasing number of space objects has a growing impact on human space activities, and cataloging and monitoring them has become a central issue in the field of space environment [2–5]. Compared to ground-based observation, space-based observation is not restricted by weather or geographical location and avoids atmospheric disturbance of the object's signal, which is a unique advantage [2, 3, 6]. Using a video satellite to observe space objects is therefore an effective approach.

Object detection and tracking in satellite video images is an important part of space-based observation. For the general problem of object detection and tracking in video, optical flow [7], block matching [8], template detection [9, 10], and related methods have been proposed. Most of these methods are gray-level based and require substantial texture information [11]. In fact, small object detection in optical images with a star background faces two main difficulties: (1) because the object occupies only one or a few pixels in the image, its shape is not available; (2) owing to background stars and the noise introduced by the space environment and the detecting equipment, the object is almost submerged in complex bright background spots, which greatly increases the difficulty of detection.

To address these difficulties, many algorithms have been proposed, mainly divided into track-before-detect (TBD) and detect-before-track (DBT). Multistage hypothesis testing (MHT) [12] and dynamic programming-based algorithms [13–15] can be classified as TBD, which is effective when the image signal-to-noise ratio is very low.
However, they suffer from high computational complexity and hard thresholds [16]. In practice, DBT is usually adopted for object detection in star images [17]. Some DBT algorithms are discussed below.

An object trace acquisition algorithm for sequence star images with a moving background was put forward in [18] to detect the discontinuous trace of a small space object; it used the centroids of a group of the brightest stars in sequential images to match the images and filter out background stars. A space object detection algorithm based on the principle of star map recognition was proposed in [19], which used a triangle algorithm to accomplish image registration and detected the space object in the registered image series. Based on the analysis of a star image model, Han and Liu [20] proposed an image registration method that extracts feature points and then matches triangles. Zhang et al. [21] used a triangle algorithm to match background stars in an image series, classified stars and potential targets, and then applied a nearest-neighbor correlation over three frames to detect targets coarsely; false targets were filtered with a multiframe back-and-forth search. An algorithm based on iterative optimization distance classification was proposed in [22] to detect small visible optical space objects; it used classification to filter background stars and exploited the continuity of object motion to achieve trajectory association.

The core idea of the DBT algorithms above is to match the sequence images based on the relative invariance of star positions and to filter out background stars; consequently, they also involve a large amount of computation and have poor real-time performance. A real-time detection algorithm for small space objects based on FPGA and DSP was presented in [23], but the instability and motion of the observation platform were not considered. In addition, the image sequences used to validate the algorithms in the literature above come from ground-based photoelectric telescopes in [15, 17, 18, 21–23] or were generated by simulation in [19, 20]; the characteristics of image sequences from space-based observation have not been considered.

This paper studies space object detection in video satellite images with a star image background, and a new detection algorithm based on motion information is proposed, which includes not only the known satellite attitude motion information but also the unknown object motion information. Experimental results on a video image from TT-2 demonstrate the effectiveness of the algorithm.

## 2. Characteristic Analysis of Satellite Video Images

A video satellite image of space contains the deep-space background, stars, objects, and noise introduced by the imaging devices and cosmic rays. The mathematical model is given by [24] as

$$f(x,y,t) = f_B(x,y,t) + f_s(x,y,t) + f_T(x,y,t) + n(x,y,t), \tag{1}$$

where $f_B(x,y,t)$ is the gray level of the deep-space background, $f_s(x,y,t)$ is the gray level of stars, $f_T(x,y,t)$ is the gray level of objects, $n(x,y,t)$ is the gray level of noise, $(x,y)$ is the pixel coordinate in the image, and $t$ is the time of the video.

In the video image, stars and weak small objects occupy only one or a few pixels. It is difficult to distinguish an object from background stars by morphological characteristics or photometric features. Moreover, the attitude motion of the object leads to changing brightness, and the object may even be lost in several frames. Therefore, it is almost impossible to detect the object in a single-frame image, and it is necessary to use the continuity of object motion across multiframe images.

Figure 1 is a local image of a video from TT-2, where Figure 1(a) shows a star and Figure 1(b) shows an object (debris). They cannot be distinguished by morphological characteristics or photometric features.

Figure 1: Images of a star (a) and an object (b).
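Where a concrete test input is useful, a frame obeying the additive model of Eq. (1) can be synthesized. The following minimal Python sketch is illustrative only and is not from the paper: it renders point sources as single bright pixels (real point sources spread over several pixels through diffraction; cf. Section 3.4), and all gray levels are arbitrary assumptions.

```python
import numpy as np

def synthetic_frame(shape=(576, 960), n_stars=30, obj_xy=(480, 288),
                    obj_peak=120.0, noise_sigma=2.0, seed=0):
    """Toy frame following Eq. (1): f = f_B + f_s + f_T + n."""
    rng = np.random.default_rng(seed)
    f = np.full(shape, 10.0)                            # deep-space background f_B
    rows = rng.integers(0, shape[0], n_stars)
    cols = rng.integers(0, shape[1], n_stars)
    f[rows, cols] += rng.uniform(50.0, 200.0, n_stars)  # stars f_s (static)
    x, y = obj_xy                                       # object position (col, row)
    f[y, x] += obj_peak                                 # moving object f_T
    f += rng.normal(0.0, noise_sigma, shape)            # sensor/cosmic-ray noise n
    return np.clip(f, 0, 255).astype(np.uint8)
```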
## 3. Object Detection Using Motion Information

When the attitude motion of a video satellite is stabilized, background stars move extremely slowly in the video and can be considered static over several frames. Noise, in turn, is random (dead pixels appear at fixed positions), and only objects move continuously. This is the most important distinction among the motion characteristics of their images. Because of platform jitter, however, objects cannot be detected by a simple frame-differencing method. A new object detection algorithm using motion information (NODAMI) is therefore proposed. The motion information includes the known satellite attitude motion: when the attitude changes, the video image varies accordingly, and this case can be transformed into the case of attitude stabilization by compensating the attitude motion in the image. The complexity of the detection algorithm is greatly reduced because no image registration is required.

The procedure of NODAMI is shown in Figure 2.

Figure 2: Procedure of NODAMI.

First, the effect of satellite attitude motion on the image is analyzed and the attitude motion compensation formula is derived (a code sketch of the overall per-frame loop follows below). Then, each step of NODAMI is described in detail.
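As an overview of the control flow in Figure 2, here is a minimal per-frame driver in Python. It is a sketch under stated assumptions, not the authors' implementation: `segment_frame` and `centroids` are sketched after Sections 3.3 and 3.4 below, while `associate` (nearest-neighborhood association with Kalman prediction, Section 3.5) is left abstract; only the bilateral filter call (OpenCV) uses the parameters actually given in Section 3.2.

```python
import cv2  # OpenCV, used here only for the bilateral filter of Section 3.2

def nodami_step(frame, trajectories, a_map, omega, dt, adjusting, radius=5):
    """One per-frame NODAMI iteration (illustrative driver for Figure 2).

    frame        : 2-D uint8 gray-level image of the current video frame
    trajectories : list of tracked trajectories with Kalman states
    a_map        : per-pixel threshold coefficients a_xy of Eq. (24)
    omega        : measured body angular rate (wx, wy, wz) in rad/s
    adjusting    : True while the satellite attitude is being adjusted
    """
    # Section 3.2: edge-preserving denoising (diameter 5, sigma_r = 75, sigma_s = 10).
    img = cv2.bilateralFilter(frame, 5, 75, 10)
    # Section 3.3: variable-threshold binary segmentation, Eqs. (24)-(25).
    mask = segment_frame(img, a_map)
    # Section 3.4: gray-weighted centroids of the segmented areas, Eq. (26).
    coords = centroids(img, mask)
    # Section 3.5: Kalman prediction (attitude-compensated via the control
    # term) plus nearest-neighborhood association; new trajectories are
    # opened only while the attitude is stable.
    return associate(trajectories, coords, omega, dt,
                     radius=radius, allow_new=not adjusting)
```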
### 3.1. Compensation of Satellite Attitude Motion

The inertial frame is defined as $O_i\text{-}X_iY_iZ_i$, originating at the center of the Earth, with the $O_iZ_i$-axis aligned with the Earth's North Pole and the $O_iX_i$-axis aligned with the vernal equinox. The satellite body frame is defined as $O_b\text{-}X_bY_bZ_b$, originating at the mass center of the satellite, with its three axes aligned with the principal axes of the satellite. The pixel frame is defined as $I\text{-}xy$, originating at the upper left corner of the image plane and using the pixel as the coordinate unit; $(x,y)$ are the column and row numbers, respectively, of a pixel in the image. The image frame is defined as $O\text{-}X_pY_p$, originating at the intersection of the optical axis and the image plane, with the $OX_p$-axis and the $OY_p$-axis aligned with $Ix$ and $Iy$, respectively. The camera frame is defined as $O_c\text{-}X_cY_cZ_c$, originating at the optical center of the camera, with the $O_cZ_c$-axis aligned with the optical axis, the $O_cX_c$-axis aligned with $-OX_p$, and the $O_cY_c$-axis aligned with $-OY_p$. All 3D frames defined above are right-handed.

In the inertial frame, let the coordinate of the object be $(x_i, y_i, z_i)$ and the coordinate of the satellite be $(x_{i0}, y_{i0}, z_{i0})$. Assuming that the 3-2-1 Euler angles of the satellite body frame with respect to the inertial frame are $(\psi, \theta, \varphi)$, the coordinate of the object in the satellite body frame is

$$\begin{bmatrix} x_b \\ y_b \\ z_b \end{bmatrix} = R_{321}(\psi,\theta,\varphi) \begin{bmatrix} x_i - x_{i0} \\ y_i - y_{i0} \\ z_i - z_{i0} \end{bmatrix} = R_X(\varphi) R_Y(\theta) R_Z(\psi) \begin{bmatrix} x_i - x_{i0} \\ y_i - y_{i0} \\ z_i - z_{i0} \end{bmatrix}, \tag{2}$$

where $R_{321}$ is the attitude matrix of the satellite body frame with respect to the inertial frame, and $R_X(\theta)$, $R_Y(\theta)$, and $R_Z(\theta)$ are the coordinate transformation matrices for a rotation by the angle $\theta$ around the $x$-axis, $y$-axis, and $z$-axis, respectively:

$$R_X(\theta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{bmatrix},\quad
R_Y(\theta) = \begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix},\quad
R_Z(\theta) = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}. \tag{3}$$

Assume the camera is fixed on the satellite and the 3-2-1 Euler angles of the camera frame with respect to the satellite body frame are $(\alpha_0, \beta_0, \gamma_0)$, constants determined by the design of the satellite structure; the attitude matrix $R_0$ of the camera frame with respect to the satellite body frame can then be derived. Let the coordinate of $O_c$ in the satellite body frame be $S_0 = (x_0, y_0, z_0)^T$; then the coordinate of the object in the camera frame is

$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R_0 \begin{bmatrix} x_b - x_0 \\ y_b - y_0 \\ z_b - z_0 \end{bmatrix}. \tag{4}$$

The design of the video satellite aims to make $(\alpha_0, \beta_0, \gamma_0)$ zero, and in practice they are small quantities; likewise, $(x_0, y_0, z_0)$ is much smaller than $(x_b, y_b, z_b)$. Without loss of generality, let $R_0 = I$ and $S_0 = 0$, that is, let the camera frame and the satellite body frame coincide.

As shown in Figure 3, the coordinate of the object in the image frame is

$$\begin{bmatrix} x_p \\ y_p \end{bmatrix} = \begin{bmatrix} x_c/z_c \\ y_c/z_c \end{bmatrix} f, \tag{5}$$

where $f$ is the focal length of the camera.

Figure 3: Diagram of frames.

If the field of view of the camera is FOV, then $|x_p|, |y_p| < f\tan(\mathrm{FOV}/2)$.

The coordinate of the object in the pixel frame is

$$\begin{bmatrix} m \\ n \end{bmatrix} = \begin{bmatrix} x_p/d_x \\ y_p/d_y \end{bmatrix} + \begin{bmatrix} m_0 \\ n_0 \end{bmatrix}, \tag{6}$$

where $(d_x, d_y)$ is the size of each pixel and $(m_0, n_0)$ is the coordinate of the image center. A code sketch of this projection chain follows below.
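To make the projection chain of Eqs. (2)-(6) concrete, the following minimal Python sketch maps an inertial-frame point to pixel coordinates under the simplifications adopted above ($R_0 = I$, $S_0 = 0$); the function names are ours, not the paper's, and consistent length units are assumed for $f$ and $(d_x, d_y)$.

```python
import numpy as np

def rot_x(a):  # R_X of Eq. (3)
    return np.array([[1, 0, 0],
                     [0, np.cos(a), np.sin(a)],
                     [0, -np.sin(a), np.cos(a)]])

def rot_y(a):  # R_Y of Eq. (3)
    return np.array([[np.cos(a), 0, -np.sin(a)],
                     [0, 1, 0],
                     [np.sin(a), 0, np.cos(a)]])

def rot_z(a):  # R_Z of Eq. (3)
    return np.array([[np.cos(a), np.sin(a), 0],
                     [-np.sin(a), np.cos(a), 0],
                     [0, 0, 1]])

def inertial_to_pixel(p_i, sat_i, euler, f, dx, dy, m0, n0):
    """Project an inertial-frame point to pixel coordinates via Eqs. (2)-(6),
    with the camera frame coinciding with the body frame (R0 = I, S0 = 0).
    euler = (psi, theta, phi) are the 3-2-1 attitude angles of the body frame."""
    psi, theta, phi = euler
    # Eq. (2): inertial -> body (== camera) frame.
    p_c = rot_x(phi) @ rot_y(theta) @ rot_z(psi) @ (np.asarray(p_i, float) -
                                                   np.asarray(sat_i, float))
    xc, yc, zc = p_c
    xp, yp = f * xc / zc, f * yc / zc      # Eq. (5): camera -> image frame
    return xp / dx + m0, yp / dy + n0      # Eq. (6): image -> pixel frame
```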
The video satellite achieves tracking imaging of the object through interactive attitude adjustment with a human in the loop. Attitude adjustment is applied to the attitude of the satellite body frame with respect to the inertial frame. Assuming that the 3-2-1 Euler angles of the attitude adjustment are $(\Delta\psi, \Delta\theta, \Delta\varphi)$, let the coordinates of a point $P$ in the camera frame, the image frame, and the pixel frame be $(x_{c1}, y_{c1}, z_{c1})$, $(x_{p1}, y_{p1})$, and $(m_1, n_1)$ before the attitude adjustment and $(x_{c2}, y_{c2}, z_{c2})$, $(x_{p2}, y_{p2})$, and $(m_2, n_2)$ after it.

Ignoring the orbital motion of the satellite during the attitude adjustment, which is reasonable when the adjustment is nearly instantaneous, gives

$$\begin{bmatrix} x_{c2} \\ y_{c2} \\ z_{c2} \end{bmatrix} = R_{321}(\Delta\psi, \Delta\theta, \Delta\varphi) \begin{bmatrix} x_{c1} \\ y_{c1} \\ z_{c1} \end{bmatrix} = R_X(\Delta\varphi) R_Y(\Delta\theta) R_Z(\Delta\psi) \begin{bmatrix} x_{c1} \\ y_{c1} \\ z_{c1} \end{bmatrix}. \tag{7}$$

Denote $(x_c, y_c, z_c)^T$, $(x_{c1}, y_{c1}, z_{c1})^T$, $(x_{c2}, y_{c2}, z_{c2})^T$ by $\mathbf{x}$, $\mathbf{x}_1$, $\mathbf{x}_2$ and the transformation functions determined by $R_X(\Delta\varphi)$, $R_Y(\Delta\theta)$, $R_Z(\Delta\psi)$ by $T_{X,\Delta\varphi}$, $T_{Y,\Delta\theta}$, $T_{Z,\Delta\psi}$, that is,

$$T_{X,\Delta\varphi}(\mathbf{x}) = R_X(\Delta\varphi)\,\mathbf{x}, \quad T_{Y,\Delta\theta}(\mathbf{x}) = R_Y(\Delta\theta)\,\mathbf{x}, \quad T_{Z,\Delta\psi}(\mathbf{x}) = R_Z(\Delta\psi)\,\mathbf{x}. \tag{8}$$

Then

$$\mathbf{x}_2 = T_{X,\Delta\varphi}\big(T_{Y,\Delta\theta}\big(T_{Z,\Delta\psi}(\mathbf{x}_1)\big)\big). \tag{9}$$

Denote $(x_p, y_p)^T$, $(x_{p1}, y_{p1})^T$, $(x_{p2}, y_{p2})^T$ by $\mathbf{y}$, $\mathbf{y}_1$, $\mathbf{y}_2$ and the mapping from the camera frame to the image frame by $A$, that is,

$$\mathbf{y} = A(\mathbf{x}), \quad \mathbf{y}_1 = A(\mathbf{x}_1), \quad \mathbf{y}_2 = A(\mathbf{x}_2). \tag{10}$$

Denote the transformation functions of the image frame induced by $T_{X,\Delta\varphi}$, $T_{Y,\Delta\theta}$, $T_{Z,\Delta\psi}$ by $r_{X,\Delta\varphi}$, $r_{Y,\Delta\theta}$, $r_{Z,\Delta\psi}$, that is,

$$A\big(T_{X,\Delta\varphi}(\mathbf{x})\big) = r_{X,\Delta\varphi}\big(A(\mathbf{x})\big) = r_{X,\Delta\varphi}(\mathbf{y}), \quad \text{and likewise for } Y \text{ and } Z. \tag{11}$$

Then

$$\mathbf{y}_2 = A(\mathbf{x}_2) = A\big(T_{X,\Delta\varphi} T_{Y,\Delta\theta} T_{Z,\Delta\psi}(\mathbf{x}_1)\big) = r_{X,\Delta\varphi}\big(A \circ T_{Y,\Delta\theta} T_{Z,\Delta\psi}(\mathbf{x}_1)\big) = r_{X,\Delta\varphi}\, r_{Y,\Delta\theta}\big(A \circ T_{Z,\Delta\psi}(\mathbf{x}_1)\big) = r_{X,\Delta\varphi}\, r_{Y,\Delta\theta}\, r_{Z,\Delta\psi}(\mathbf{y}_1). \tag{12}$$

Equation (12) shows that the transformation of the image frame caused by the attitude adjustment can be decomposed into a composition of transformation functions, each determined by a rotation around one satellite body frame axis.

With the image coordinates normalized by the focal length, straightforward computation gives

$$r_{X,\Delta\varphi}(\mathbf{y}) = \begin{bmatrix} \dfrac{x_p}{-\sin\Delta\varphi\, y_p + \cos\Delta\varphi} \\[2ex] \dfrac{\cos\Delta\varphi\, y_p + \sin\Delta\varphi}{-\sin\Delta\varphi\, y_p + \cos\Delta\varphi} \end{bmatrix}, \quad
r_{Y,\Delta\theta}(\mathbf{y}) = \begin{bmatrix} \dfrac{\cos\Delta\theta\, x_p - \sin\Delta\theta}{\sin\Delta\theta\, x_p + \cos\Delta\theta} \\[2ex] \dfrac{y_p}{\sin\Delta\theta\, x_p + \cos\Delta\theta} \end{bmatrix}, \quad
r_{Z,\Delta\psi}(\mathbf{y}) = \begin{bmatrix} \cos\Delta\psi & \sin\Delta\psi \\ -\sin\Delta\psi & \cos\Delta\psi \end{bmatrix} \begin{bmatrix} x_p \\ y_p \end{bmatrix}. \tag{13}$$

When $(\Delta\psi, \Delta\theta, \Delta\varphi)$ are small quantities, $r_{X,\Delta\varphi}$ and $r_{Y,\Delta\theta}$ can be approximated as

$$r_{X,\Delta\varphi}(\mathbf{y}) \approx \begin{bmatrix} x_p \\ y_p + \Delta\varphi \end{bmatrix}, \quad r_{Y,\Delta\theta}(\mathbf{y}) \approx \begin{bmatrix} x_p - \Delta\theta \\ y_p \end{bmatrix}. \tag{14}$$

The conclusion follows that the effect of satellite attitude motion on the image decomposes into a translation and a rotation: small roll and pitch rotations translate the image, whereas yaw rotates it.

In particular, when $d_x = d_y = d$, the attitude motion compensation formula is

$$\begin{bmatrix} m_2 \\ n_2 \end{bmatrix} = \begin{bmatrix} \cos\Delta\psi & \sin\Delta\psi \\ -\sin\Delta\psi & \cos\Delta\psi \end{bmatrix} \begin{bmatrix} m_1 - m_0 \\ n_1 - n_0 \end{bmatrix} + \begin{bmatrix} -\Delta\theta \\ \Delta\varphi \end{bmatrix} \frac{1}{d} + \begin{bmatrix} m_0 \\ n_0 \end{bmatrix}. \tag{15}$$

In practice, onboard sensors such as fiber-optic gyroscopes give the angular rate of attitude motion, $\boldsymbol{\omega} = (\omega_x, \omega_y, \omega_z)^T$. Let the time interval per frame be $\Delta t$; when $\Delta t$ is small, the angular rate within $\Delta t$ can be regarded as constant. Let

$$\vartheta = \lVert\boldsymbol{\omega}\rVert\,\Delta t, \quad \mathbf{e} = \frac{\boldsymbol{\omega}\,\Delta t}{\vartheta} = (e_1, e_2, e_3)^T. \tag{16}$$

Then the attitude adjustment matrix of the current frame moment with respect to the previous frame moment is

$$A = \begin{bmatrix} c + (1-c)e_1^2 & (1-c)e_1 e_2 + s e_3 & (1-c)e_1 e_3 - s e_2 \\ (1-c)e_2 e_1 - s e_3 & c + (1-c)e_2^2 & (1-c)e_2 e_3 + s e_1 \\ (1-c)e_3 e_1 + s e_2 & (1-c)e_3 e_2 - s e_1 & c + (1-c)e_3^2 \end{bmatrix} = (a_{ij})_{3\times 3}, \tag{17}$$

where

$$c = \cos\vartheta, \quad s = \sin\vartheta. \tag{18}$$

The 3-2-1 Euler angles $(\Delta\psi, \Delta\theta, \Delta\varphi)$ can then be solved, consistently with the convention of (2), as

$$\Delta\psi = \operatorname{atan2}(a_{12}, a_{11}), \quad \Delta\theta = \arcsin(-a_{13}), \quad \Delta\varphi = \operatorname{atan2}(a_{23}, a_{33}). \tag{19}$$

Generally, attitude adjustment does not rotate the image, that is, $\Delta\psi$ can be approximated as 0, and $\Delta\theta$ and $\Delta\varphi$ can be approximated as

$$\Delta\theta = \omega_y \Delta t, \quad \Delta\varphi = \omega_x \Delta t, \tag{20}$$

since they are small quantities.

Substituting $\Delta\psi = 0$ and (20) into (15) gives

$$\begin{bmatrix} m_2 \\ n_2 \end{bmatrix} = \begin{bmatrix} m_1 \\ n_1 \end{bmatrix} + \begin{bmatrix} -\omega_y \Delta t \\ \omega_x \Delta t \end{bmatrix} \frac{1}{d}. \tag{21}$$
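A minimal Python sketch of the gyro-to-image compensation just derived: `attitude_euler` follows the exact axis-angle route of Eqs. (16)-(19), and `compensate_attitude` applies the small-angle pixel shift of Eq. (21). Function names and the array layout are our assumptions.

```python
import numpy as np

def attitude_euler(omega, dt):
    """3-2-1 Euler angles accumulated over dt, via Eqs. (16)-(19)."""
    w = np.asarray(omega, float)
    theta = np.linalg.norm(w) * dt                  # Eq. (16): rotation angle
    if theta < 1e-12:
        return 0.0, 0.0, 0.0
    e1, e2, e3 = w * dt / theta                     # Eq. (16): rotation axis
    c, s = np.cos(theta), np.sin(theta)             # Eq. (18)
    A = np.array([                                  # Eq. (17): axis-angle matrix
        [c + (1 - c) * e1 * e1, (1 - c) * e1 * e2 + s * e3, (1 - c) * e1 * e3 - s * e2],
        [(1 - c) * e2 * e1 - s * e3, c + (1 - c) * e2 * e2, (1 - c) * e2 * e3 + s * e1],
        [(1 - c) * e3 * e1 + s * e2, (1 - c) * e3 * e2 - s * e1, c + (1 - c) * e3 * e3],
    ])
    dpsi = np.arctan2(A[0, 1], A[0, 0])             # Eq. (19): yaw
    dtheta = np.arcsin(-A[0, 2])                    # Eq. (19): pitch
    dphi = np.arctan2(A[1, 2], A[2, 2])             # Eq. (19): roll
    return dpsi, dtheta, dphi

def compensate_attitude(pts, omega, dt, d):
    """Map (N, 2) pixel coordinates (m, n) from the previous frame to their
    expected positions in the current frame under the measured body rate,
    using the small-angle result of Eq. (21); yaw is neglected, as in Eq. (20)."""
    wx, wy, _ = omega
    shift = np.array([-wy * dt / d, wx * dt / d])
    return np.asarray(pts, float) + shift
```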
### 3.2. Image Denoising

Noise in the video image mainly includes space radiation noise, space background noise, and CCD dark-current noise. A bilateral filter can be used for denoising; it is a simple, noniterative scheme for edge-preserving smoothing [25]. The filter weights have two components: the first is the same weighting used by the Gaussian filter, and the second takes into account the difference in intensity between the neighboring pixels and the evaluated one. The diameter of the filter is set to 5, and the weights are given by

$$w_{ij} = \frac{1}{K}\, e^{-\left[(x_i - x_c)^2 + (y_i - y_c)^2\right]/2\sigma_s^2}\, e^{-\left[f(x_i, y_i) - f(x_c, y_c)\right]^2/2\sigma_r^2}, \tag{22}$$

where $K$ is the normalization constant, $(x_c, y_c)$ is the center of the filter, $f(x,y)$ is the gray level at $(x,y)$, $\sigma_s$ is set to 10, and $\sigma_r$ is set to 75.

### 3.3. Single-Frame Binary Segmentation

To distinguish stars and objects from the background, the background must be removed in each frame, but stray light leads to an uneven gray-level distribution of the background. Figure 4(a) is a single frame of the video image, and Figure 4(b) is its gray-level histogram. The single-peak shape of the histogram shows that the classical global threshold method, traditionally used to segment star images [15, 18, 20–22, 24], cannot be used to segment the video image.

Figure 4: A single frame of the video image (a) and its gray-level histogram (b).

Whether a point belongs to a star or an object, its gray level is greater than that of the pixels in its neighborhood. We therefore use variable thresholding based on local image properties to segment the image. Calculate the standard deviation $\sigma_{xy}$ and the mean value $m_{xy}$ over the neighborhood of every point $(x,y)$ in the image; these are descriptors of the local contrast and the average gray level. The variable thresholding based on local contrast and average gray level is then

$$T_{xy} = a\sigma_{xy} + b m_{xy}, \tag{23}$$

where $a$ and $b$ are constants greater than 0: $b$ weights the contribution of the local average gray level to the threshold and can be set to 1, while $a$ weights the contribution of the local contrast and is the main parameter to tune according to the object characteristics.

On the other hand, the brightness of a moving space object sometimes changes with its changing attitude. If $a$ were constant, the object would be lost in some frames. By the continuity of object motion, if $(x,y)$ is detected as a probable object coordinate in the current frame, the probability of detecting the object within the 5 × 5 window around $(x,y)$ in the next frame is much greater. Combining this with the continuity of brightness change: if no probable object is detected in the 5 × 5 window around $(x,y)$ in the next frame, $a$ can be reduced by a factor $\rho$ ($\rho < 1$); the reduced value $a\rho$ is stored as $a_{uv}$ if $(u,v)$ is then detected as the probable object coordinate in that window. The variable thresholding based on local image properties and on the detection result of the previous frame is thus

$$T_{xy} = a_{xy}\sigma_{xy} + b m_{xy}. \tag{24}$$

The difference between (23) and (24) is the variable coefficient $a_{xy}$ based on the detection result of the previous frame. $a_{xy}$ is initialized to a value slightly greater than 1 and is reset to the initial value when the gray level at $(x,y)$ becomes large again (greater than 150).

The image binary segmentation rule is

$$g(x,y) = \begin{cases} 1, & f(x,y) > T_{xy}, \\ 0, & f(x,y) \le T_{xy}, \end{cases} \tag{25}$$

where $f(x,y)$ is the gray level of the original image at $(x,y)$ and $g(x,y)$ is the gray level of the segmented image at $(x,y)$.

### 3.4. Coordinate Extraction

In an ideal optical system, a point object on the CCD focal plane occupies one pixel, but under practical imaging conditions, circular aperture diffraction spreads the object over multiple pixels on the focal plane. The coordinate of the object in the pixel frame is therefore determined by the position of the center of the gray level. The simple gray-weighted centroid algorithm is used to calculate the coordinates, with a positioning accuracy of up to 0.1–0.3 pixels [26]:

$$(x_S, y_S) = \frac{\sum_{(x,y)\in S} f(x,y)\,(x,y)}{\sum_{(x,y)\in S} f(x,y)}, \tag{26}$$

where $S$ is an object area after segmentation, $f(x,y)$ is the gray level of the original image at $(x,y)$, and $(x_S, y_S)$ is the coordinate of $S$. These coordinates feed back into the single-frame binary segmentation of the next frame (through the coefficient $a_{xy}$ of Section 3.3). A sketch of the segmentation and centroid-extraction steps follows below.
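A minimal Python sketch of the single-frame steps of Sections 3.3 and 3.4, assuming SciPy for the local statistics and for connected-component labeling. The per-pixel coefficients `a_map` correspond to $a_{xy}$ in Eq. (24); their decay-and-reset bookkeeping across frames (factor $\rho$, reset above gray level 150) is left to the caller.

```python
import numpy as np
from scipy import ndimage

def segment_frame(img, a_map, b=1.0, win=7):
    """Variable thresholding of Eqs. (24)-(25) over a win x win neighborhood."""
    f = img.astype(float)
    mean = ndimage.uniform_filter(f, win)                  # local mean  m_xy
    sq_mean = ndimage.uniform_filter(f * f, win)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))  # local std   sigma_xy
    thresh = a_map * std + b * mean                        # Eq. (24)
    return f > thresh                                      # Eq. (25), binary mask

def centroids(img, mask):
    """Gray-weighted centroid of Eq. (26) for every connected object area S."""
    labels, n = ndimage.label(mask)
    coords = []
    for k in range(1, n + 1):
        rows, cols = np.nonzero(labels == k)
        w = img[rows, cols].astype(float)
        coords.append((np.sum(w * cols) / np.sum(w),       # x_S (column)
                       np.sum(w * rows) / np.sum(w)))      # y_S (row)
    return coords
```

With the parameters of Section 4, `a_map` would be initialized to 1.1 everywhere, with `b = 1` and `win = 7`.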
### 3.5. Trajectory Association

After a frame image is processed, some probable object coordinates are obtained. They are associated with existing trajectories, or new trajectories are generated, using a nearest-neighborhood filter. The radius of the neighborhood is determined by the object characteristics: it must be greater than the distance the object image moves in one frame, and it must tolerate losing the object for several frames.

The Kalman filter is an efficient recursive filter that estimates the internal state of a linear dynamic system from a series of noisy measurements. A Kalman filter is used to predict the probable object coordinate, which determines where the nearest-neighborhood filter is applied in the current frame.

Let the state vector of the object at the $k$th frame be $\mathbf{x}_k = (x_k, y_k, v_{xk}, v_{yk})^T$, that is, its coordinate and velocity in the pixel frame. The system equations are

$$\mathbf{x}_k = F\mathbf{x}_{k-1} + B\mathbf{u}_k + \mathbf{w} = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \mathbf{x}_{k-1} + \begin{bmatrix} 0 & -\Delta t/d \\ \Delta t/d & 0 \\ 0 & -1/d \\ 1/d & 0 \end{bmatrix} \begin{bmatrix} \omega_{xk} \\ \omega_{yk} \end{bmatrix} + \mathbf{w}, \tag{27}$$

$$\mathbf{z}_k = H\mathbf{x}_k + \mathbf{v} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \mathbf{x}_k + \mathbf{v},$$

where $F$ is the state transition matrix, $B$ is the control-input matrix, $\mathbf{u}_k$ is the control vector, $H$ is the measurement matrix, $\mathbf{z}_k$ is the measurement vector (the coordinate associated with $(x_{k-1}, y_{k-1})$, obtained by the nearest-neighborhood filter at the $k$th frame), $\mathbf{w}$ is the process noise, assumed to be zero-mean Gaussian white noise with covariance $Q$, denoted $\mathbf{w} \sim N(0, Q)$, and $\mathbf{v}$ is the measurement noise, assumed to be zero-mean Gaussian white noise with covariance $R$, denoted $\mathbf{v} \sim N(0, R)$.

In fact, $B\mathbf{u}_k$ uses the result of (21) and compensates the attitude motion while the attitude is being adjusted; in the case of attitude stabilization, $\mathbf{u}_k = 0$. Equation (27) thus unifies the two situations through the control vector. The details of the Kalman predict and update steps are standard and are omitted here; a code sketch follows below.

When a trajectory has accumulated 20 points, we must judge whether it belongs to an object, using the velocity in the state vector: if the mean speed over these points is greater than a given threshold, the trajectory is an object; otherwise it is not. This amounts to judging whether the point is moving. The threshold mainly serves to remove apparent image motion caused by the instability of the satellite platform and other noise, and it is usually set to 2.

While the satellite attitude is being adjusted, no new trajectories are generated, in order to reduce false objects. In practice, moreover, adjusting the attitude means that something already in the current frame is of interest, so there is no need to detect new objects.

Thus, objects are detected based on the continuity of motion over multiple frames, and trajectories are updated. The computational complexity is greatly reduced because no image matching of stars across frames is required, and the compensation of attitude motion allows objects to be detected well even during attitude adjustment.
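A minimal Python sketch of one predict/update cycle of the tracker in Eq. (27). The attitude compensation enters through the control term $B\mathbf{u}_k$; during attitude stabilization, pass `omega = (0.0, 0.0)`. The default noise levels follow the experimental settings of Section 4 ($Q = 10^{-4} I_4$, $R = 0.2 I_2$); the function packaging is our assumption.

```python
import numpy as np

def kalman_step(x, P, z, omega, dt, d, q=1e-4, r=0.2):
    """One predict/update cycle of Eq. (27).

    x     : state (m, n, vx, vy); P : 4 x 4 state covariance
    z     : measurement (m, n) from the nearest-neighborhood filter
    omega : control input (wx, wy) from the gyro; (0, 0) when stabilized
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    B = np.array([[0, -dt / d],
                  [dt / d, 0],
                  [0, -1 / d],
                  [1 / d, 0]], float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)
    Q, R = q * np.eye(4), r * np.eye(2)
    # Predict, compensating attitude motion through the control term.
    x = F @ x + B @ np.asarray(omega, float)
    P = F @ P @ F.T + Q
    # Update with the associated measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.asarray(z, float) - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

The trajectory test described above then reduces to checking whether the mean of $\sqrt{v_x^2 + v_y^2}$ over the first 20 points of a trajectory exceeds the threshold (usually 2).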
## 4. Experimental Results

NODAMI is verified using a video image from TT-2. TT-2 carries 4 space video sensors; the video image used in this paper is from the high-resolution camera, whose focal length is 1000 mm, whose FOV is 2°30′, and for which $d_x = d_y = 8.33\,\mu\text{m}$.
The video runs at 25 frames per second with a resolution of 960 × 576.

First, 20 consecutive frames taken during attitude stabilization are processed. In (24), $a_{xy}$ is initialized to 1.1 and $b$ is set to 1; the reduction factor $\rho$ is set to 0.8, and the neighborhood used in image segmentation is 7 × 7. For example, 10 object areas are obtained after segmenting the 20th frame (as shown in Figure 5). These areas include objects, stars, and noise, which cannot be distinguished within a single frame.

Figure 5: Result of segmenting the 20th frame image.

In (27), $Q$ is set to $10^{-4} I_4$ based on empirical data and $R$ is set to $0.2 I_2$, where $I_n$ is the $n \times n$ identity matrix. The neighborhood radius in trajectory association is set to 5. NODAMI detects one object in the 20 frames, as shown in Figure 6. Since the full 960 × 576 image is too large, Figure 6 shows only the local image, with a white box marking the object. Figure 6 shows that the brightness of the object varies considerably; NODAMI nevertheless detects the object well.

Figure 6: Detection results over 20 frames.

Moreover, for these 20 frames, if $a_{xy}$ in (24) is held constant, the object is lost in 4 frames, whereas (24) with the variable coefficient detects the object in all frames.

A longer 40-s video taken during attitude stabilization, comprising 1000 frames, is processed next. NODAMI detects one object in the 1000 frames, as shown in Figure 7, which is derived by overlaying the 1000 frames. Again, since the full image is too large, Figure 7(b) shows the local image, with a white box marking the object every 50 frames.

Figure 7: Detection results over 1000 frames (panel (b): local image).

Figure 7 shows that the brightness of the object varies considerably, and the naked eye tends to miss the object in quite a few frames. Indeed, if $a_{xy}$ in (24) is held constant, the object is lost in 421 frames, whereas (24) with the variable coefficient detects the object in 947 frames: the probability of detection improves from 42.1% to 94.7%. Even with the variable coefficient, (24) cannot detect the object in every frame, but NODAMI correctly reassociates the object with its existing trajectory after losing it for several frames.

Then a 30-s video, during which the satellite attitude was adjusted, is processed. Overlaying its 750 frames gives Figure 8; the trajectory of the object is inside the white box. The brightness of the object again varies considerably. The two trajectories on the right belong to stars and are caused by the satellite attitude adjustment: the object image originally moved towards the lower left but appears to move towards the upper left because of the adjustment, while the star images moved upwards, producing the two right-hand trajectories. NODAMI detected the space object trajectory well and discarded the star trajectories. As before, the number of points detected in the trajectory is less than 750, because the brightness of the object varies too much and the object is lost in several frames.
Nevertheless, NODAMI correctly reassociates the object with its existing trajectory after losing it for several frames.

Figure 8: Overlay of 750 frame images.

Using the known satellite attitude motion information and (21) to compensate the attitude adjustment yields the trajectory of the object shown in Figure 9, from which it can be seen that the trajectory is recovered well and the effect of the attitude adjustment on the image is removed.

Figure 9: Trajectory of the object.

The experimental results show that the algorithm performs well for moving space point objects, even with changing brightness.

## 5. Conclusion

In this paper, a new algorithm for detecting space objects in video satellite images using motion information is proposed. The effect of satellite attitude motion on the image is analyzed quantitatively and shown to decompose into a translation and a rotation, and an attitude motion compensation formula is derived. Considering the continuity of object motion and brightness change, variable thresholding based on local image properties and on the detection result of the previous frame is used to segment each single frame. The algorithm then uses the correlation of object motion across multiple frames together with the satellite attitude motion information to detect the object. The computational complexity is greatly reduced because no image matching of stars across frames is required, and the compensation of attitude motion allows objects to be detected well even during attitude adjustment. Applying the algorithm to the video image from Tiantuo-2 detects the object and recovers its trajectory, which shows that the algorithm performs well for moving space point objects even with changing brightness.

---

*Source: 1024529-2017-10-17.xml*
# Morphostructural MRI Abnormalities Related to Neuropsychiatric Disorders Associated to Multiple Sclerosis

**Authors:** Simona Bonavita; Gioacchino Tedeschi; Antonio Gallo
**Journal:** Multiple Sclerosis International (2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/102454

---

## Abstract

Multiple Sclerosis-associated neuropsychiatric disorders include major depression (MD), obsessive-compulsive disorder (OCD), bipolar affective disorder, euphoria, pseudobulbar affect, psychosis, and personality change. Magnetic Resonance Imaging (MRI) studies have focused mainly on identifying morphostructural correlates of MD; only a few anecdotal cases of OCD associated with MS (OCD-MS), euphoria, pseudobulbar affect, psychosis, and personality change, and one research article on MRI abnormalities in OCD-MS, have been published. Therefore, in the present review we report mainly on neuroimaging abnormalities found in MS patients with MD and OCD. Altogether, the studies on MD associated with MS suggest that, in this disease, depression is linked to damage involving mainly frontotemporal regions, either through discrete lesions (with those visible on T1-weighted images playing a more significant role) or through subtle normal-appearing white matter abnormalities. Hippocampal atrophy, as well, seems to be involved in MS-related depression. It is conceivable that grey matter pathology (i.e., global and regional atrophy, cortical lesions), which occurs early in the course of the disease, may involve several areas, including the dorsolateral prefrontal cortex, the orbitofrontal cortex, and the anterior cingulate cortex, whose disruption is currently thought to explain late-life depression. Further MRI studies are necessary to better elucidate OCD pathogenesis in MS.

---

## Body

## 1. Introduction

Neuropsychiatric disorders associated with Multiple Sclerosis (MS) may be divided into two categories: mood disorders (including behavioural disturbances) [1] and cognitive impairment. The high preponderance of psychiatric symptoms in patients with MS has led to the suggestion that this disease should be routinely included in the differential diagnosis of patients being seen for psychiatric complaints [2–4]. Diaz-Olavarrieta et al. [5], administering the Neuropsychiatric Inventory [6] to 44 patients with MS who were neither under steroid treatment nor in relapse, found that 95% of the patients were experiencing neuropsychiatric symptoms; in particular, dysphoria or depressive symptoms were most common (79%), followed by agitation (40%), anxiety (37%), irritability (35%), apathy (20%), euphoria (13%), disinhibition (13%), hallucinations (10%), and delusions (7%).

In this review we focus on MS-associated mood and behavioural disorders and, in particular, on their neuroimaging aspects. MS-associated neuropsychiatric disorders include major depression (MD), obsessive-compulsive disorder (OCD-MS), bipolar affective disorder, euphoria, pseudobulbar affect, psychosis, and personality change.

So far, neuroimaging studies in MS have focused mainly on identifying the morphostructural changes typical of the disease and on recognizing predictors of disability and/or cognitive impairment. A more limited number of articles report on the neuroimaging of the psychiatric aspects of MS, the majority of them concerning the morphostructural correlates of MD associated with MS.
To date, MRI studies include only a few anecdotal cases of OCD-MS, euphoria, pseudobulbar affect, psychosis, and personality change, and one research article on OCD in MS patients. In the present review we therefore report mainly on neuroimaging abnormalities found in MS patients with MD and OCD.

## 2. Depressive Disorders

The most common psychiatric syndrome in MS, as in the general population, is MD, which is defined by the fourth edition of the Diagnostic and Statistical Manual of the American Psychiatric Association as five or more of the following symptoms over a minimum two-week period:

(i) depressed mood for most of the day,
(ii) markedly diminished interest or pleasure in all activities,
(iii) significant weight loss, or weight gain (5% of body weight in a month),
(iv) insomnia or hypersomnia nearly every day,
(v) psychomotor retardation or agitation (observable by others),
(vi) fatigue or loss of energy nearly every day,
(vii) feelings of worthlessness or excessive inappropriate guilt,
(viii) diminished ability to think or concentrate,
(ix) recurrent thoughts of death.

A point prevalence of 15% to 30% and a lifetime prevalence of 40%–60% of MD have been reported in MS patients; this rate of depression is 3 to 10 times that of the general population [7]. The factors responsible for mood disturbances in MS are not well established: a psychological reaction to a progressively disabling and unpredictable disease may be a relevant contributor, but reactive mechanisms alone are probably not sufficient to account for the high prevalence and wide spectrum of depression. Currently, the physiopathology of MS-related depression is thought to be multifactorial, and neuroinflammatory, neuroendocrine, and neurotrophic mechanisms have been advocated [8]. The association between affective illness and MS is well described, and to clarify whether mood dysregulation follows familial patterns of transmission similar to those found in patients with primary affective illness, Joffe et al. [9] assessed the prevalence of psychiatric diagnoses in the relatives of patients with MS, using the family history approach to diagnosis [10]. They found that the prevalence of psychiatric disorders, particularly affective illness, in the first-degree relatives of patients with MS is consistent with that reported for the general population, suggesting that affective disorders in this disease may be an intrinsic part of the neurological disorder rather than an independently acquired psychiatric disorder. Driven by these considerations, there have been many attempts to identify the anatomical correlates explaining MD in MS patients.

Neuroimaging studies in patients with MS have revealed associations between brain abnormalities and depression; computed tomography studies reported that patients with brain lesions were more depressed than patients with only spinal cord involvement [11]. Other studies investigated the correlation between lesion location and the occurrence of depression. Pujol et al. [12] studied 45 patients by MRI and assessed depression with the Beck Depression Inventory (BDI). Axial spin-echo sequences were used to quantify lesions observed in three major anatomic divisions (basal, medial, and lateral) of the frontotemporal white matter (WM) in each hemisphere.
These regions were chosen because they involve different cerebral connections but include the main frontotemporal association bundles (considered to be involved in MD pathogenesis): the uncinate fasciculus (basal), the cingulum (medial), and the arcuate fasciculus (lateral). The authors found that the BDI scores correlated significantly with the lesion load (LL) of the arcuate fasciculus region (which contains neocortical-neocortical connections) of the left "verbal" hemisphere; in particular, lesions in this area accounted for 17% of the variance in depression scores. No significant correlation was found between depression and lesions in the corresponding regions of the right hemisphere (dominant in regulating emotional behaviour), suggesting that misregulation of emotional behaviour and depressed mood are two different phenomena produced by different mechanisms.

Subsequently, Feinstein et al. [13] evaluated MS subjects with and without depression by MRI; automatic tissue segmentation (grey matter (GM), WM, and cerebrospinal fluid (CSF)) and regional brain masking were applied to the MRI data, and regional and total LL were also calculated. They found that depressed MS patients had more hyperintense lesions in the left inferior medial frontal regions and greater atrophy of the left anterior temporal regions. They postulated that a combination of inflammation, demyelination, and atrophy within medial inferior frontal areas would disconnect neural connectivity, with an associated perturbation of several neurotransmitters and modulators known to be involved in frontal subcortical circuits, the monoaminergic ones being the most intimately tied to disorders of mood. Alterations of afferent and efferent connections from frontal subcortical areas to temporal lobe limbic areas may play a significant role in mood regulation as well.

A qualitative "visual" assessment of T1 and T2 LL and brain atrophy performed by Bakshi et al. [14] showed that the only MRI abnormalities that significantly predicted the presence of depression, before and after adjusting for EDSS, were T1 LL in superior frontal, superior parietal, and temporal regions, while the severity of depression was predicted by T1 lesions in superior frontal, superior parietal, and temporal regions, enlargement of the lateral and third ventricles, and frontal atrophy. The authors speculated that WM lesions in MS patients, especially those in frontal and parietal areas, lead to depression by disconnecting brain cortical areas that regulate mood; in particular, frontal WM disease in MS could affect serotoninergic pathways in frontal-limbic circuits. Moreover, since T1 hypointensities appear to represent more specific, chronic, destructive, and irreversible tissue changes such as hypocellularity, complete demyelination, and axonal loss, the association between depression and T1 lesions indicates that chronic irreversible damage to critical pathways is more likely to cause mood dysfunction. On the other hand, these pathways can function sufficiently well in the presence of less severe insults (edema, partial demyelination, and inflammation), as reflected by T2 LL.
Furthermore, the brain atrophy measures associated with depression suggest that patients with depression have a more severe neuropathologic subset of MS, with a tendency towards more tissue destruction, and thus may be more susceptible to dysfunction of mood-regulating pathways.

An MRI study of 95 consecutive MS patients [15], in which 19% of the patients met the criteria for MD, reported that the presence of MD and the severity of depression were correlated with right frontal LL and with right temporal brain atrophy; furthermore, T1 lesions in the superior parietal and superior frontal regions predicted depression in MS patients. Of note, these 95 MS patients were also tested for anxiety with the Hamilton Anxiety Rating Scale (HARS), and the HARS scores did not correlate significantly with any of the MRI measures of regional and total LL and brain atrophy. The same authors performed a two-year follow-up study [16] to determine whether changes in total or regional LL and in brain atrophy in specific regions of the brain may contribute to the development of depression in patients with MS. The evolution of brain atrophy was significantly more conspicuous in the left frontal lobe of depressed patients as compared to nondepressed patients. The correlation analysis, considering the 2-year changes in the quantitative MRI measures of regional and total LL and brain atrophy and the 2-year changes in the Hamilton Depression Rating Scale score, showed a significant correlation between the latter and right temporal brain atrophy; moreover, changes in right temporal brain atrophy were significantly and independently related to the severity of depressive symptoms. The authors suggested that in MS the involvement of the temporal lobes (and the subsequent damage to the main projection areas to the limbic system) may play a role in the aetiology of depressive mood disorders.

More recently, a diffusion tensor imaging (DTI) study [17] investigated normal-appearing brain tissue in an attempt to shed further light on the pathogenesis of depression in patients with MS. T1 and PD/T2 LL were evaluated; tissue segmentation (normal-appearing white (NAWM) and grey matter (NAGM), CSF), regional atrophy, and DTI analyses were performed as well. Depressed patients had a smaller NAWM volume in the left superior frontal region and a greater T1 LL in the right medial inferior frontal region. Significantly higher mean diffusivity was found in the depressed group in the NAGM of the left anterior temporal lobe region, and reduced fractional anisotropy (FA) was present in the NAWM of the left anterior temporal lobe of the depressed group. In addition, higher mean diffusivity was found in hyperintense lesions in the right inferior frontal region of the depressed group. These findings provide important data on structural brain changes beyond those captured by lesion and atrophy measurements and suggest that when inflammation and demyelination disrupt cellular organization and the linearity of fibre pathways in specific brain regions, even in the absence of discernible lesions, clinical depression may result. It was therefore concluded that the more destructive aspects of brain change, that is, the black holes of T1-weighted images and the reduction in NAWM volume, are closely related to mood disorders in MS patients.

Since a smaller volume of the hippocampus had been found in psychiatric patients with MD, Gold et al.
[18] investigated patients with MS and MD to explore whether subregional volumes of the hippocampus may be associated with the high frequency of depressive symptoms in MS. In this sophisticated study, the authors performed structural segmentation of the hippocampus, identifying four regions: cornu ammonis 1 (CA1), CA2-CA3 and dentate gyrus (CA23DG), subiculum, and entorhinal cortex; global atrophy and lesion quantification were evaluated as well. In MS patients with depressive symptoms, smaller CA23DG volumes were found (as a distinctive pattern of regional hippocampal atrophy), and the correlation analysis revealed an inverse correlation between BDI scores and CA23DG volumes. The authors concluded that some forms of depression in MS may be caused by mechanisms similar to those hypothesized for MD in psychiatric patients. Moreover, since a neuroendocrine-limbic aetiology of depression has recently been hypothesized and, at the same time, there is accumulating evidence that hypothalamic-pituitary-adrenal (HPA) axis activity is increased in MS patients [19], Gold et al. [18] tested whether specific subregional volumes may be linked to alterations in diurnal cortisol secretion. They found that depressed MS patients, as compared to nondepressed patients, had cortisol hypersecretion (elevated evening levels and unchanged morning levels), indicating that diurnal cortisol flattening, due to elevated evening cortisol (i.e., a failure to decrease cortisol responses throughout the day), was associated with depression in MS rather than with MS itself. Furthermore, there was an inverse correlation between CA23DG volumes, BDI scores, and cortisol levels. The authors suggested that even subtle hyperactivity of the HPA axis may produce smaller volumes in the CA23DG region and thereby lead to depressive symptoms in MS. The same authors, using surface mapping techniques, performed volumetric and shape analyses of the hippocampus to characterize the neuroanatomical correlates of depression in MS [20]. Two groups of MS patients, with low and high depression scores, respectively, were studied. Right hippocampal volumes were smaller in the high-depression than in the low-depression group, but there were no significant differences in left hippocampal volumes. Since only female patients were included in this study, and one recent study on familial depression with a sample including only female patients showed reduced volumes of the right but not the left hippocampus [21], the authors interpreted their finding as an indication of sex-specific associations between lateralized hippocampal atrophy and depression.

It should be noted that all the above-mentioned studies are not free from limitations: the majority did not take into account possible confounders such as fatigue, cognitive impairment, and the number of previous steroid cycles; only some included a healthy control group, and an appropriate protocol design also including psychiatric depressed patients was never adopted. Furthermore, in these studies depression was evaluated with depression scales validated mainly for psychiatric patients and not for MS patients, in whom physical symptoms may confound the assessment of depression.

Altogether, these studies suggest that depression in MS is linked to damage involving mainly frontotemporal regions, either through discrete lesions (with those visible on T1-weighted images playing a more significant role) or through subtle NAWM abnormalities.
Hippocampal atrophy, as well, seems to be involved in MS-related depression.

The frontal lobe, and in particular the prefrontal cortex, is involved in information processing. The prefrontal cortex supports planning, organization, inhibition, empathy, and motivation, functions that depend on three distinct cortical networks: (i) the dorsolateral prefrontal cortex, (ii) the orbitofrontal cortex, and (iii) the anterior cingulate cortex [22, 23]. Axons project from these cortical areas, through the WM, to subcortical structures, and MS plaques, involving mainly the WM (where the fibre tracts travel extremely close together), may disrupt these circuits, deeply impacting their function. The dorsolateral prefrontal cortex is involved in executive functioning (i.e., planning, organization, and attention), and its damage is responsible for perseveration, difficulty in shifting and in screening out environmental distractions, and impairments in constructional skills and sequential motor tasks. The orbitofrontal cortex controls socially appropriate behaviour and empathy; thus MS lesions may result in impulsivity, lability, personality changes, and lack of humanistic sensitivity. The anterior cingulate cortex is thought to have at least two further subdivisions: an affective and a cognitive one [24]. The affective portion has connections with the limbic and paralimbic regions, including the orbitofrontal cortex. The cognitive subdivision connects with the parietal cortex, the spinal cord, and the dorsolateral prefrontal cortex. These connections highlight the linkage and interdependence of the frontal circuits. Disruption of the aforementioned brain circuitry is thought to explain late-life depression. Indeed, consistent with this hypothesis, Taylor et al. [25], using statistical parametric mapping, identified frontal WM lesions in elderly depressed patients and found an association between depression and lesions of the WM tracts extending from inferior frontal regions toward the basal ganglia. Alexopoulos and coauthors [26, 27] have defined the "depression-executive dysfunction syndrome" in the elderly, which is thought to be a distinct depressive disorder marked by executive dysfunction resulting from lesions in the basal ganglia and left frontal regions.

Thus, it can be hypothesized that MS patients, generally young adults, whose WM plaques disrupt these cortical networks, are at higher risk of developing depression than healthy peers.

Furthermore, new MRI acquisition and image analysis techniques currently used for research purposes, such as DTI, tract-based spatial statistics (TBSS), magnetization transfer imaging, double inversion recovery sequences, voxel-based (VBM) and surface-based morphometry, lesion probability maps, SIENAX, and resting-state functional MRI, by allowing a more extensive study of both WM and GM outside visible MS plaques, will probably help in better understanding the specific roles of GM and WM in the pathogenesis of affective disorders associated with MS. In particular, surface-based morphometry may be suitable for mapping regional cortical thickness in the brain networks presumably involved in MS-related depression (see Figure 1, personal data).

Figure 1: Left hemisphere, lateral surface. Cortical thickness group analysis between patients with MS-related depression (MS-dep) and nondepressed MS patients. Areas showing significant cortical thinning in patients with MS-dep (P < 0.01) are coloured in blue.
Area 1 is localized in the orbitofrontal cortex; areas 2, 3, 4, and 5 in the prefrontal cortex; areas 10 and 11 in the parietal cortex. ## 3. Obsessive-Compulsive Disorder (OCD) The lifetime prevalence of OCD in MS is 8.6%, compared with 2.5% in the general population [28]. In primary OCD, neuroimaging studies have revealed structural and/or functional abnormalities in specific brain structures, particularly in the fronto-striato-thalamic circuitry [29–31], which points to an organic aetiology of this psychiatric disease. In one reported MS case, OCD developed after the diagnosis of MS, concurrently with the appearance of a right parietal WM plaque [32]; this single case raised the possibility that parietal lobe WM microstructural abnormalities play a role in mediating obsessions and compulsions through disruptions of the functional connectivity between cortical-cortical and/or cortical-subcortical brain regions implicated in the pathophysiology of OCD. Recently, Tinelli et al. [33] used MRI to evaluate the relationship between GM and WM tissue damage and OCD in patients with MS. The authors studied 16 MS patients with and 15 without OCD. Image analysis included LL, VBM, FA calculation, and TBSS. The only significant differences were found in the VBM analysis, which revealed a set of clusters of reduced GM volume in patients with OCD in the right inferior frontal gyrus, inferior temporal gyrus, and middle temporal gyrus. The absence of any significant difference between the two groups in the TBSS analysis seems to be in contrast with the results of recent DTI studies in primary OCD, which have reported abnormalities in the corpus callosum and subcortical frontal WM associated with either increased [34] or decreased [35] structural connectivity. Differences in patients' characteristics and/or in methods of MRI acquisition and data analysis could probably account for this discrepancy, leaving room for further studies to better elucidate OCD pathogenesis in MS. In conclusion, despite the great advances achieved so far, we anticipate that the application of modern advanced MRI technologies and a better definition of patients' features will bring us significantly closer to an intimate knowledge of the pathophysiology and functional basis of neuropsychiatric disorders in MS. --- *Source: 102454-2013-04-16.xml*
# Cadmium and Lead Removal from Aqueous Solution Using Magnetite Nanoparticles Biofabricated from Portulaca oleracea Leaf Extract **Authors:** Payam B. Hassan; Rezan O. Rasheed; Kiomars Zargoosh **Journal:** Journal of Nanomaterials (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1024554 --- ## Abstract Magnetic nanoparticles of iron oxide (Fe3O4 NPs) were prepared using a biosynthetic method to investigate their potential use as an adsorbent for the removal of Pb(II) and Cd(II) from aqueous solution. The present study used, for the first time, magnetite nanoparticles prepared from leaf extract of Portulaca oleracea for the removal of Pb(II) and Cd(II) ions. The prepared Fe3O4 NPs (PO-Fe3O4MNPs) were characterized using X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM), energy-dispersive X-ray spectroscopy (EDX), transmission electron microscopy (TEM), and Fourier transform infrared (FTIR) spectroscopy. Batch adsorption experiments were performed to study the effects of various parameters, such as contact time, pH, temperature, initial metal concentration, and adsorbent dose. The optimum pH for Cd(II) and Pb(II) adsorption was 6. The removal of heavy metals was found to increase with adsorbent dosage and contact time and to decrease with increasing initial concentration. The Langmuir, Freundlich, Khan, and Toth isotherms were used as adsorption isotherm models. The adsorption data fitted well with the Freundlich isotherm model (correlation coefficient R2 > 0.94). The maximum adsorption capacities (Qmax) at equilibrium were 177.48 mg/g and 108.2267 mg/g for Cd(II) and Pb(II), respectively. The kinetic analysis showed that the overall adsorption process was well described by the pseudo-second-order kinetic model. The calculated thermodynamic parameters (∆G°, ∆H°, and ∆S°) showed that the adsorption of Cd(II) and Pb(II) ions onto PO-Fe3O4MNPs was exothermic and spontaneous. These results demonstrate that biogenically synthesized PO-Fe3O4MNPs are highly efficient adsorbents for the removal of Pb(II) and Cd(II) ions from contaminated water. --- ## Body ## 1. Introduction Water is the most important molecule on Earth and a source of sustainable life; nevertheless, millions of people face water scarcity daily [1]. Rapid population growth demands a rapid expansion of the agricultural and industrial sectors, resulting in an increased demand for water, which is necessary for the survival of all living forms on this blue planet [2]. Rapid growth in industrialization, population, and urbanization, together with the release of untreated harmful organic and inorganic effluents into freshwater bodies, has resulted in a severe increase in environmental pollution [3–5]. When water is contaminated, removing the pollutants is costly, difficult, and often impossible. Water pollution not only harms organisms but also destroys entire ecosystems [6, 7]. Heavy metals are well recognized among the various pollutants that contribute to environmental damage, owing to their persistence in the environment, toxicity, and bioaccumulative nature, all of which lead to negative consequences for human health and the ecosystem [8, 9]. Lead, chromium, nickel, cadmium, and arsenic in the environment pose a serious threat to plants, animals, and even humans owing to their bioaccumulation, nonbiodegradability, and toxicity even at trace concentrations [10].
Human exposure to even trace concentrations may result in conditions such as cardiovascular problems, depression, gastrointestinal and renal failure, neurological damage, osteoporosis, tubular and glomerular dysfunction, and various cancers [11, 12]. Cadmium and lead are two of the most toxic metals for plants, animals, and humans. They are harmful even at low concentrations because they disrupt enzyme functioning, replace essential metals in pigments, and produce reactive oxygen species [3, 13]. As a result of these serious issues, many effective methods to remove heavy metals have been developed, including ion exchange, membrane filtration, adsorption, photodegradation, coagulation-flocculation, electrodeposition, and electrooxidation [14–18]. To date, adsorption remains the most preferred method for water purification owing to its efficiency, low operational cost, and applicability in both small- and large-scale operations [19]. Various kinds of adsorption materials have been used for water remediation, including carbon-based materials, clays, biological materials, bentonite, zeolites, metal oxides, magnetic nanoparticles, agricultural residues, and mesoporous substances such as MCM-41, MCM-48, and SBA-15 [20, 21]. Magnetite nanoparticles (NPs) are strong adsorbents for removing pollutants from wastewater. Owing to their magnetic properties, they may be easily isolated from the reaction medium by applying an external magnetic field. Furthermore, magnetic separation of the nanoadsorbents provides the crucial benefit of rapid removal of toxic metals from wastewater [22]. Different methods, including coprecipitation, thermal decomposition, microemulsion, and solvothermal techniques, can be used to fabricate magnetite Fe3O4 NPs [23]. Hazardous chemicals, organometallic precursors, and harsh reaction conditions, such as high pressure or high temperature, are used in these procedures. These methods therefore have several disadvantages, including high toxicity, low nanoparticle stability, low dispersion rates, and unsuitability for scaled-up applications [24, 25]. Therefore, the fabrication of magnetite NPs by economically and environmentally sustainable processes is necessary. The application of leaf extracts for nanoparticle synthesis has attracted much attention in recent years owing to their low cost, nontoxicity, wide availability, strong metal-capping affinity, biodegradability, and nonmutagenicity [26, 27]. Alcohol, aldehyde, amine, carboxyl, ketone, hydroxyl, and sulfhydryl groups are the functional groups that intervene in the synthesis of NPs; therefore, nearly any biological substance containing these groups can be used to convert metal ions into NPs. Some molecules, such as terpenoids, flavonoids, various heterocyclics, polyphenols, reducing sugars, and ascorbate, are directly involved in the synthesis of NPs, while others, such as proteins, serve as stabilizing agents [28]. Examples of plant extracts that have been used in the biosynthesis of NPs for the remediation of various pollutants in water include Camellia sinensis, Quercus virginiana, Punica granatum, and Eucalyptus globulus [24], Moringa oleifera [29], and Calliandra haematocephala [30]. In this study, Portulaca oleracea (family: Portulacaceae; common name, purslane) leaf extract was employed to synthesize magnetite Fe3O4 NPs.
Due to the presence of various active phytochemicals such as alkaloids, phenols, flavonoids, coumarins, and terpenoids, Portulaca oleracea is commonly used in medical fields, such as drugs and medicine [31]. Only one study has so far investigated the preparation of Fe3O4 NPs using Portulaca oleracea leaf extract [32]. In this work, we report for the first time the use of magnetite Fe3O4 NPs from Portulaca oleracea leaf extract as efficient adsorbents for cadmium and lead from aqueous solutions. The Fe3O4 NPs were prepared successfully and characterized by X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM), energy-dispersive X-ray spectroscopy (EDX), transmission electron microscopy (TEM), and Fourier transform infrared (FTIR) spectroscopy. The effects of different parameters were studied, such as pH, temperature, initial metal concentration, adsorbent dose, and contact time. Furthermore, isotherm, kinetic, and thermodynamic studies were carried out to understand the mechanism of metal ion adsorption by the biofabricated Fe3O4 NPs. ## 2. Experimental Section ### 2.1. Materials The leaves of Portulaca oleracea were obtained from the local market in Sulaymaniyah, Iraq. The following chemicals were used as received without further purification: iron sulfate heptahydrate (FeSO4·7H2O) (Himedia, India), sodium hydroxide (NaOH) pellets (Merck, Germany), nitric acid (HNO3) (Sigma-Aldrich, Germany), lead nitrate (Pb(NO3)2) (Fluka), and cadmium nitrate (Cd(NO3)2) (BDH, England). Stock solutions of lead and cadmium ions with concentrations of 1000 mg/L were prepared by dissolving 1.598 g of lead nitrate and 2.744 g of cadmium nitrate in 200 mL of deionized water; 10 mL of concentrated HNO3 was added to the metal ion solutions, which were then diluted to 1000 mL with distilled water. Solutions of HCl and NaOH (0.1–1 M) were used for pH adjustment. ### 2.2. Preparation of Portulaca oleracea Leaf Extract Fresh Portulaca oleracea leaves were washed with tap water to remove dirt and surface-adherent materials and then with double-distilled water. The thoroughly washed leaves were air-dried for about an hour. The dried leaves were cut into small pieces and heated in 500 mL of deionized water to release the phenolic biomolecules, which rendered the solution yellow. After cooling, the clear yellow filtrate, named Portulaca oleracea leaf (POL) extract, was preserved at 4°C for further use [33]. ### 2.3. Synthesis of Magnetic Fe3O4 Nanoparticles A simple and eco-friendly method was used to prepare magnetite Fe3O4 nanoparticles. In a glass beaker, a freshly prepared 0.1 M FeSO4 solution (100 mL) was added to the POL extract at a volume ratio of 1 : 1 at room temperature. The pH of this mixture was increased to 11 by adding NaOH. The solution was then placed in a water bath for 60 min at 90°C. The formation of a black precipitate indicated the formation of magnetic Fe3O4 nanoparticles (PO–Fe3O4MNPs). The PO–Fe3O4MNPs were isolated using a magnet and then washed several times with deionized water and ethanol. After that, the particles were dried overnight at 80°C in an air oven [30, 34]. Figure 1 illustrates the green synthesis scheme for PO–Fe3O4MNPs. Figure 1: Biosynthesis of PO–Fe3O4MNPs. ### 2.4. Characterization An FTIR spectrometer (Thermo Scientific Nicolet iS10) was used to investigate the surface functional groups and capping agents of PO–Fe3O4MNPs. The crystalline structure of PO–Fe3O4MNPs was examined using an X-ray diffractometer (XRD) (PANalytical X'Pert Pro, Netherlands).
XRD was performed using Cu-Kα radiation (λ = 1.54 Å); the diffracted intensities of all samples were recorded in the 2θ range of 10°–80° with a step size of 0.1° and a scanning speed of 1 step/second. Field emission scanning electron microscopy (FE-SEM) (Quanta 4500, FEI) and transmission electron microscopy (TEM) (JEM-2100, JEOL, Japan) were used to examine the morphological characteristics and size of the powdered PO–Fe3O4MNPs. The purity and constituent elements of the prepared nanoparticles were confirmed by energy-dispersive X-ray spectroscopy (EDX) combined with FE-SEM. ### 2.5. Adsorption Tests #### 2.5.1. Evaluation of Optimum pH To determine the effect of pH on the adsorption of Pb(II) and Cd(II) ions on PO–Fe3O4MNPs, the following procedure was used: the nanosorbent (0.5 g) was added to 100 mL of 50 mg/L Cd(II) or Pb(II) solution in Erlenmeyer flasks, the pH of the solutions was adjusted from 3 to 8 using 0.1 M sodium hydroxide or nitric acid solution, and the flasks were agitated at a speed of 200 rpm. All the adsorption experiments were carried out at a room temperature of 20 ± 2°C. Finally, a magnet was used to separate the PO–Fe3O4MNPs from the solution, and the samples were filtered using Whatman filter paper (no. 1). The residual amounts of Pb(II) and Cd(II) were determined by inductively coupled plasma optical emission spectrometry (ICPOES) (Optima 2100DV, PerkinElmer, USA). #### 2.5.2. Equilibrium Contact Time A quantity of 0.5 g of PO–Fe3O4MNPs was suspended separately in 100 mL of single-metal-ion solutions (50 mg/L of Pb(II) or Cd(II)) in a number of conical flasks. The solutions were adjusted to pH 6 and agitated at a speed of 200 rpm for different periods [35]. Ten milliliters of sample was taken out of the flasks after 10, 20, 30, 40, 50, 60, and 70 min, and the residual Pb(II) and Cd(II) concentrations were measured using ICPOES. #### 2.5.3. Effect of Initial Metal Concentrations on Adsorption The influence of solution strength on the functional efficiency of PO–Fe3O4MNPs was demonstrated using different concentrations of Pb(II) and Cd(II) ions. Single-metal solutions at concentrations of 10, 50, 100, and 150 mg/L in the presence of 0.5 g of PO–Fe3O4MNPs were agitated at a speed of 200 rpm and 20 ± 2°C for an hour [22]. The magnetic nanoadsorbent was then isolated from the solution using magnets. The residual metal ion concentrations were measured using ICPOES. #### 2.5.4. Equilibrium Isothermal Investigation The equilibrium adsorption isotherm is important for understanding the interaction between adsorbate and adsorbent, as it provides significant information about the surface characteristics and activity of the adsorbent and the mechanism of adsorption [36]. Different amounts of PO–Fe3O4MNPs (0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1, 1.2, and 1.4 g) were placed into a number of conical flasks. Single-metal solutions at concentrations of 50 mg/L were prepared for lead and cadmium ions, respectively, and then 100 mL of each solution was added to each flask. The pH of the metal solutions was adjusted to the optimum value. A thermoshaker was used to agitate the solutions continuously for an hour at a speed of 200 rpm and a temperature of 20 ± 2°C. The PO–Fe3O4MNPs were removed from the solutions with a magnet, and the samples were filtered using Whatman filter paper (no. 1). The Pb(II) and Cd(II) concentrations in the filtrate were determined using ICPOES.
The removal efficiency and adsorption capacity of PO–Fe3O4MNPs were calculated from the experimental data using the following equations [37]:

$$\%RE = \frac{C_o - C_e}{C_o} \times 100 \quad (1)$$

$$q_e = \frac{V_1 (C_o - C_e)}{W} \quad (2)$$

where $C_o$ (mg/L) is the initial concentration of the metal ions Pb(II) and Cd(II), $C_e$ (mg/L) is the equilibrium concentration, $V_1$ (L) is the volume of the metal solution, and $W$ (g) is the dosage of the PO–Fe3O4MNP nanosorbent. The experimental data obtained from the isothermal adsorption experiments were fitted to the common nonlinear adsorption model equations (Freundlich, Langmuir, Toth, and Khan) using statistical software (version 12) to compute the fundamental parameters of each model. The Freundlich isotherm, for heterogeneous surface energy systems, is given as follows:

$$q_e = K C_e^{1/n} \quad (3)$$

where $K$ corresponds to the maximum binding capacity and $n$ characterizes the affinity between the sorbent and sorbate [38]. The Langmuir adsorption isotherm, which describes the adsorbate-adsorbent system at equilibrium, assumes that all adsorbed species interact only with one site and not with each other and that adsorption is confined to a monolayer [21]. The Langmuir model can be represented as

$$q_e = \frac{q_m b C_e}{1 + b C_e} \quad (4)$$

where $q_m$ (mg/g) is the Langmuir maximum adsorption capacity and $b$ (L/mg) is the Langmuir constant [39]. The Khan isotherm model is given by

$$q_e = \frac{q_{max} b_k C_e}{(1 + b_k C_e)^{a_k}} \quad (5)$$

where $b_k$ and $a_k$ are Khan constants and $q_{max}$ is the maximum uptake [40]. The Toth isotherm is expressed as follows:

$$q_e = \frac{q_{max} b_T C_e}{\left[1 + (b_T C_e)^{1/n_T}\right]^{n_T}} \quad (6)$$

where $b_T$ and $n_T$ are constants [41]. #### 2.5.5. Kinetics Adsorption kinetics describes the rate at which an adsorbate is retained on or released from a solid-phase interface in an aqueous environment under various conditions [42]. The kinetic study was carried out using 0.5 g of PO–Fe3O4MNPs in 100 mL of single-metal-ion solutions with a concentration of 50 mg/L for each of Pb(II) and Cd(II) at pH 6. The flasks with their contents were shaken for different adsorption times (10–70 min). The rate of sorption was analyzed with two kinetic models, namely, pseudo-first-order and pseudo-second-order. The pseudo-first-order model assumes that the rate of occupation of sorption sites is proportional to the number of unoccupied sites. The equation is represented as

$$\log(q_e - q_t) = \log q_e - \frac{k_1}{2.303} t \quad (7)$$

where $k_1$ (min−1) is the pseudo-first-order rate constant of adsorption and $q_t$ (mg/g) and $q_e$ (mg/g) are the amounts of metal ions adsorbed at any particular time and at equilibrium, respectively. The pseudo-second-order kinetic model is based on the assumption that chemisorption is the rate-controlling step and is expressed as follows:

$$\frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{t}{q_e} \quad (8)$$

where $k_2$ (g/(mg·min)) is the pseudo-second-order rate constant and $q_e$ (mg/g) and $q_t$ (mg/g) are the amounts of the metal ions adsorbed at equilibrium and at time t, respectively [21, 43]. #### 2.5.6. Thermodynamic Parameters of Adsorption The influence of temperature on Pb(II) and Cd(II) ion adsorption on PO–Fe3O4MNPs was studied at 20, 35, 45, and 55°C. About 100 mL of each solution with a concentration of 50 mg/L was mixed with 0.5 g of PO–Fe3O4MNPs, and the mixtures were then maintained at the different temperatures for an hour while being agitated at a speed of 200 rpm [4]. The nanoparticles were separated, and the Pb(II) and Cd(II) concentrations were measured using ICPOES.
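To make the calculations of Eqs. (1)–(4) concrete, the following is a minimal Python sketch, not the statistical software used by the authors, that computes removal efficiency and equilibrium uptake and fits the Freundlich and Langmuir models by nonlinear least squares. The equilibrium data in the script are hypothetical placeholders, not the paper's measurements.

```python
# Illustrative sketch: Eqs. (1)-(2) plus nonlinear fitting of the
# Freundlich (3) and Langmuir (4) isotherms. All numbers are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def removal_efficiency(c0, ce):
    """Percent removal, Eq. (1): %RE = (C0 - Ce) / C0 * 100."""
    return (c0 - ce) / c0 * 100.0

def adsorption_capacity(c0, ce, volume_l, mass_g):
    """Equilibrium uptake qe (mg/g), Eq. (2): qe = V1 (C0 - Ce) / W."""
    return volume_l * (c0 - ce) / mass_g

def freundlich(ce, k, n):
    """Eq. (3): qe = K * Ce^(1/n)."""
    return k * ce ** (1.0 / n)

def langmuir(ce, qm, b):
    """Eq. (4): qe = qm * b * Ce / (1 + b * Ce)."""
    return qm * b * ce / (1.0 + b * ce)

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g)
ce = np.array([0.5, 2.0, 5.0, 10.0, 20.0, 35.0])
qe = np.array([20.0, 45.0, 70.0, 95.0, 130.0, 160.0])

for name, model, p0 in [("Freundlich", freundlich, (30.0, 2.0)),
                        ("Langmuir", langmuir, (180.0, 0.05))]:
    popt, _ = curve_fit(model, ce, qe, p0=p0, maxfev=10000)
    resid = qe - model(ce, *popt)
    r2 = 1.0 - np.sum(resid**2) / np.sum((qe - qe.mean()) ** 2)
    print(f"{name}: params = {np.round(popt, 3)}, R^2 = {r2:.3f}")
```

Comparing the resulting R² values across models is the same criterion the study uses to conclude that the Freundlich isotherm (R² > 0.94) describes the data best.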
## 3. Results and Discussion

### 3.1. PO–Fe3O4MNP Characterizations

The phase purity and crystal structure of the PO–Fe3O4MNPs were identified by powder XRD. Figure 2 shows the XRD pattern of the dried PO–Fe3O4MNPs. Bragg reflection peaks were detected at 2θ values of 30.11°, 35.47°, 43.11°, 53.48°, 57.01°, 62.61°, and 74.07°, corresponding to the (220), (311), (400), (422), (511), (440), and (533) crystallographic planes and confirming that the structure of the biosynthesized PO–Fe3O4MNPs is cubic (reference code: 96-900-5838). The XRD data are also in line with the data obtained by Rajput et al. [44] and Yew et al. [45].

Figure 2: XRD pattern of PO–Fe3O4MNPs.
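The cubic indexing can be sanity-checked with Bragg's law, $\lambda = 2d\sin\theta$, combined with the cubic plane-spacing relation $d = a/\sqrt{h^2+k^2+l^2}$: every reflection should return the same lattice constant, about 8.39 Å for magnetite. The short sketch below assumes Cu Kα radiation ($\lambda = 1.5406$ Å), which the text does not state explicitly.

```python
import numpy as np

LAMBDA = 1.5406  # Angstrom; Cu K-alpha assumed -- the X-ray wavelength is not given in the text
two_theta = [30.11, 35.47, 43.11, 53.48, 57.01, 62.61, 74.07]   # degrees, from Figure 2
planes = [(2, 2, 0), (3, 1, 1), (4, 0, 0), (4, 2, 2), (5, 1, 1), (4, 4, 0), (5, 3, 3)]

for tt, (h, k, l) in zip(two_theta, planes):
    d = LAMBDA / (2.0 * np.sin(np.radians(tt / 2.0)))  # Bragg's law: lambda = 2 d sin(theta)
    a = d * np.sqrt(h * h + k * k + l * l)             # cubic: a = d * sqrt(h^2 + k^2 + l^2)
    print(f"({h}{k}{l}): d = {d:.4f} A, a = {a:.3f} A")
# All seven reflections give a close to 8.39 A, consistent with the cubic magnetite assignment.
```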
Figures 3(a) and 3(b) show a TEM image of the biosynthesized PO–Fe3O4MNPs and their particle size distribution. The TEM images show that most particles were nearly spherical with slight aggregation. The agglomeration can be attributed to van der Waals forces that bind particles together and to shear forces acting at the nanoscale [46]. The mean particle size is 26.02 nm with a standard deviation of 9.05 nm.

Figure 3: (a) Transmission electron micrographs of PO–Fe3O4MNPs and (b) particle size distribution histogram.

The FESEM images of the PO–Fe3O4MNPs are shown in Figures 4(a) and 4(b). They show that the nanoparticles are bead-like and spherical, with slight aggregation; three such nanoparticles, of 34.52, 34.46, and 46.71 nm, are marked in Figure 4(b). Similar spherical shapes were reported when Fe3O4 was biosynthesized using leaf extract of Mussaenda erythrophylla [47]. X-ray elemental mapping and energy-dispersive X-ray spectroscopy (EDX) were performed to reveal the elemental constituents of the PO–Fe3O4MNPs, as shown in Figures 4(c) and 4(d). The EDX spectrum showed peaks at 0.7, 6.4, and 7.2 keV for elemental iron and a peak at 0.5 keV for elemental oxygen, confirming the formation of the PO–Fe3O4MNPs. The peak at 0.3 keV revealed carbon originating from the biomolecules of the leaf extract.

Figure 4: (a, b) FESEM images at 500 nm and 200 nm resolution, (c) elemental mapping, and (d) EDX spectrum of the PO–Fe3O4MNP magnetic nanoadsorbent.

The surface functional groups and capping agents of the PO–Fe3O4MNPs, which are responsible for reduction and stabilization, were studied by FTIR spectroscopy (Figure 5). Peaks at 582 cm⁻¹ and 790 cm⁻¹ are due to Fe–O vibrations of Fe3O4 [48]. The band at 3394 cm⁻¹ is attributed to the O–H group of polyphenolic compounds [49], and the band at 1662 cm⁻¹ to –C=C– stretching vibrations of alkenes [30]. The band at 1400 cm⁻¹ belongs to C–C groups of aromatic rings found in the POL extract [50], and the bands between 1000 cm⁻¹ and 1300 cm⁻¹ to C–O groups of alcohols, ethers, esters, carboxylic acids, and amides in the extract [46]. These bands confirm the formation of PO–Fe3O4MNPs and show that they were covered with polyphenols and other organic compounds, which improved their stability.

Figure 5: FTIR spectrum of PO–Fe3O4MNPs.

### 3.2. Adsorption Process

#### 3.2.1. Effect of pH

pH is the main factor affecting the efficacy of the adsorption process. In the present study, the impact of pH on Pb(II) and Cd(II) removal efficiency using biosynthesized PO–Fe3O4MNPs was examined at pH values ranging from 3 to 8. Pb(II) and Cd(II) adsorption was low at pH values around 3, which is attributed to electrostatic repulsion between the adsorbent and the metal ions: as the hydrogen ion concentration of the solution increases, the adsorbent sites are occupied by hydrogen ions instead of metal ions, and this protonation of the active sites reduces metal adsorption [37]. As presented in Figure 6, the removal efficiencies for Pb(II) and Cd(II) at pH 3 were 15.42% and 4.8%, respectively. Maximum adsorption was recorded at pH 6, namely 100% for Pb(II) and 95.32% for Cd(II).

Figure 6: Effect of pH on Pb(II) and Cd(II) removal (C₀ = 50 mg/L, PO–Fe3O4MNP dosage = 0.5 g, contact time = 60 min, temperature = 20 ± 2°C).

Beyond pH 6, solute ions precipitate as insoluble metal hydroxides, which makes true sorption studies impossible; this must be avoided during sorption tests because it makes distinguishing between sorption and metal precipitation difficult [51]. As a result, pH 6 was chosen as the optimum and used in all subsequent experiments. These results agree with those obtained by Lung et al. [52].
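As a worked check of these percentages, the removal efficiency and equilibrium uptake follow directly from Eqs. (1) and (2) of Section 2.5.4. The snippet below back-calculates the Cd(II) uptake at pH 6 from the quoted 95.32% removal; it is a consistency check under the stated batch conditions, not new data.

```python
def removal_efficiency(c0, ce):
    """Eq. (1): %RE = (C0 - Ce) / C0 * 100."""
    return (c0 - ce) / c0 * 100.0

def uptake(c0, ce, volume_l, mass_g):
    """Eq. (2): qe = V * (C0 - Ce) / W, in mg per g of adsorbent."""
    return volume_l * (c0 - ce) / mass_g

# Batch conditions used for the pH study: C0 = 50 mg/L, V = 100 mL, W = 0.5 g.
c0, v, w = 50.0, 0.100, 0.5
ce_cd = c0 * (1.0 - 95.32 / 100.0)  # Ce implied by the reported 95.32% Cd(II) removal

print(f"%RE = {removal_efficiency(c0, ce_cd):.2f} %")   # 95.32 %
print(f"qe  = {uptake(c0, ce_cd, v, w):.2f} mg/g")      # about 9.53 mg/g Cd(II) at pH 6
```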
#### 3.2.2. Effect of Contact Time

The influence of contact time on Pb(II) and Cd(II) removal efficiency by PO–Fe3O4MNPs was investigated by varying the contact time from 10 to 70 minutes. As shown in Figure 7, the adsorption of both metal ions by PO–Fe3O4MNPs increased with increasing contact time: with more time, the metal ions have more opportunities for contact with the adsorbent surface and with the active groups present on it [22]. The removal of Pb(II) was rapid during the first 30 min, with no significant further increase in the adsorption rate thereafter. The concentration of Cd(II) decreased within 50 min and remained almost constant after an hour, implying that adsorption is rapid and reaches saturation within an hour. This is a promising result, because a short equilibrium time is critical for economically viable wastewater treatment plants [53].

Figure 7: Effect of contact time on removal of Pb(II) and Cd(II) (C₀ = 50 mg/L, PO–Fe3O4MNP dosage = 0.5 g, pH = 6, temperature = 20 ± 2°C).

#### 3.2.3. Effect of Initial Metal Concentration

The effect of the initial metal concentration on the removal efficiency by 0.5 g PO–Fe3O4MNPs at the optimal pH was investigated using solutions with initial metal concentrations of 10, 50, 100, and 150 mg/L. Figure 8 shows that the adsorption of lead and cadmium ions decreases with increasing initial metal ion concentration. The initial metal concentration plays an important role in the removal efficiency, since a given mass of adsorbent offers a fixed number of active binding sites at which only a fixed amount of metal can be adsorbed. Thus, increasing the metal concentration against the same quantity of adsorbent decreases the removal efficiency. These results agree with those obtained by Ebrahim et al. [35] and Das and Rebecca [54].

Figure 8: Effect of initial metal concentration on removal of Pb(II) and Cd(II) (PO–Fe3O4MNP dosage = 0.5 g, contact time = 60 min, pH = 6, temperature = 20 ± 2°C).

#### 3.2.4. Effect of PO–Fe3O4MNP Dose

The adsorbent concentration is one of the most important factors influencing the efficiency of the process and the adsorption capacity. In this study, the effect of varying the amount of biosynthesized PO–Fe3O4MNPs from 0.05 to 1.4 g (Section 2.5.4) on the removal efficiency of Pb(II) and Cd(II) ions was studied. Figure 9 shows that the removal percentage of both Pb(II) and Cd(II) ions increases with increasing nanoadsorbent mass, because higher dosages provide more available sites on the nanoadsorbent surface [55].

Figure 9: Effect of adsorbent dose on removal of Pb(II) and Cd(II) (C₀ = 50 mg/L, contact time = 60 min, pH = 6, temperature = 20 ± 2°C).

#### 3.2.5. Adsorption Isotherms

Adsorption isotherms provide important information concerning the adsorption capacity, the adsorption mechanism between the contaminant and the adsorbent, and the distribution of the contaminant between the adsorbent and the solution [4]. Adsorption isotherms were determined by fitting the experimental data obtained at equilibrium with the Freundlich, Langmuir, Toth, and Khan isotherm models. The Langmuir adsorption isotherm is based on monolayer adsorption and assumes a homogeneous sorbent surface [39]. The Toth isotherm is an empirical modification of the Langmuir equation aimed at reducing the error between experimental and predicted equilibrium data; it is especially useful for describing heterogeneous adsorption systems [40]. The Freundlich isotherm applies to adsorption on heterogeneous surfaces; it implies multilayer adsorption and an exponential distribution of active sites on the sorbent surface [56]. Finally, the Khan isotherm combines features of the Langmuir and Freundlich models and was suggested for adsorbate adsorption from pure solutions [40].
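For reproducibility, the four isotherms of Eqs. (3)-(6) can also be fitted with open-source tools instead of the commercial statistics package mentioned in Section 2.5.4. The sketch below uses scipy.optimize.curve_fit on placeholder equilibrium data (the study's raw Ce-qe pairs are not tabulated in the paper); the initial guesses in p0 are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(ce, K, n):                      # Eq. (3)
    return K * ce ** (1.0 / n)

def langmuir(ce, qm, b):                       # Eq. (4)
    return qm * b * ce / (1.0 + b * ce)

def khan(ce, qm, bk, ak):                      # Eq. (5)
    return qm * bk * ce / (1.0 + bk * ce) ** ak

def toth(ce, qm, bt, nt):                      # Eq. (6)
    return qm * bt * ce / (1.0 + (bt * ce) ** (1.0 / nt)) ** nt

# Placeholder equilibrium data (Ce in mg/L, qe in mg/g), NOT the study's measurements.
ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 35.0])
qe = np.array([25.0, 38.0, 52.0, 68.0, 80.0, 92.0, 100.0])

models = {
    "Freundlich": (freundlich, [40.0, 2.0]),
    "Langmuir":   (langmuir,   [110.0, 0.5]),
    "Khan":       (khan,       [5.0, 100.0, 0.5]),
    "Toth":       (toth,       [100.0, 1.0, 1.0]),
}

for name, (model, p0) in models.items():
    # Nonlinear least squares; parameters constrained to be positive.
    popt, _ = curve_fit(model, ce, qe, p0=p0, bounds=(0, np.inf), maxfev=20000)
    ss_res = np.sum((qe - model(ce, *popt)) ** 2)
    r2 = 1.0 - ss_res / np.sum((qe - qe.mean()) ** 2)
    print(f"{name:10s} params = {np.round(popt, 4)}  R^2 = {r2:.4f}")
```

Ranking the models by R² computed this way mirrors the comparison reported in Table 2.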
Figures 10(a) and 10(b) compare the Freundlich, Langmuir, Khan, and Toth models for lead and cadmium ions, respectively. Based on the correlation coefficients, the Freundlich model fits the experimental data better than the Langmuir, Khan, and Toth models, which indicates multilayer adsorption of Pb(II) and Cd(II). The values of n for cadmium and lead were 1.301 and 2.236, respectively, indicating favorable adsorption between the PO–Fe3O4MNPs and the metal ions. Table 1 compares the adsorption capacity of the PO–Fe3O4MNPs with other adsorbents reported in the literature. The adsorption capacity was 177.48 mg/g for cadmium and 108.23 mg/g for lead (Table 2), which is higher than that of most of the adsorbents listed in Table 1.

Figure 10: Comparison of isotherm models for adsorption of (a) Pb(II) and (b) Cd(II) ions onto PO–Fe3O4MNPs.

Table 1: Comparison of the maximum adsorption capacity of the investigated PO–Fe3O4MNPs for Cd(II) and Pb(II) with other adsorbents reported in the literature.

| Metal | Adsorbent | Maximum adsorption capacity (mg/g) | Reference |
|---|---|---|---|
| Cd(II) | Iron oxide nanoparticles (IONPs) | 18.32 | [27] |
| Cd(II) | Iron oxide nanoparticles | 15.5 | [37] |
| Cd(II) | Magnetite green Fe3O4 nanoparticles | 18.73 | [52] |
| Cd(II) | [email protected] | | [58] |
| Cd(II) | SBA-15@Fe3O4@Isa | 140 | [59] |
| Cd(II) | c-MCM-41 | 32.3 | [60] |
| Cd(II) | MCM-48 | 29.13 | [61] |
| Cd(II) | PO–Fe3O4MNPs | 177.48 | This study |
| Pb(II) | Magnetite green Fe3O4 nanoparticles | 0.16 | [52] |
| Pb(II) | SBA-15@Fe3O4@Isa | 110 | [59] |
| Pb(II) | c-MCM-41 | 58.5 | [60] |
| Pb(II) | MCM-48 | 50.39 | [61] |
| Pb(II) | Iron nanocomposites (T-Fe3O4) | 100.0 | [62] |
| Pb(II) | Phytogenic magnetic nanoparticles (PMNPs) | 68.41 | [63] |
| Pb(II) | Fe3O4 nanoadsorbents | 64.97 | [64] |
| Pb(II) | PO–Fe3O4MNPs | 108.23 | This study |

Table 2: Parameters of the Langmuir, Freundlich, Khan, and Toth isotherm models for Pb(II) and Cd(II) adsorption onto PO–Fe3O4MNPs.

| Model | Parameter | Pb(II) | Cd(II) |
|---|---|---|---|
| Langmuir | qm (mg/g) | 108.2267 | 177.4800 |
| | b (L/mg) | 0.706296 | 0.006401 |
| | R² | 0.9439 | 0.92926 |
| Freundlich | K ((mg/g)(L/mg)^(1/n)) | 41.48618 | 2.044152 |
| | n | 2.236529 | 1.301127 |
| | R² | 0.97246 | 0.94333 |
| Khan | qmax (mg/g) | 4.025708 | 7.391429 |
| | bK (L/mg) | 194.4072 | 0.187339 |
| | aK | 0.556249 | 0.201856 |
| | R² | 0.97245 | 0.93299 |
| Toth | qmax (mg/g) | 721.740333 | 92.02625 |
| | bT | 2.087398 | 0.011593 |
| | nT | 4.567778 | 0.632009 |
| | R² | 0.96952 | 0.92807 |

As illustrated in Figure 9, a high removal of lead was already achieved at a low dose of PO–Fe3O4MNPs, and lead showed a more rapid affinity for the nanoparticles than cadmium, indicating electrostatic attraction between the lead cations and the negatively charged adsorption sites. Additionally, the lead ion possesses the smaller hydration radius; this agrees with the concept that ions with a small hydration radius are preferentially selected and accumulated at the interface [51, 57].
#### 3.2.6. Kinetics

The kinetics of Cd(II) and Pb(II) adsorption on PO–Fe3O4MNPs were studied using the pseudo-first-order and pseudo-second-order kinetic models, as shown in Figures 11 and 12. The kinetic parameters and correlation coefficients R² are given in Table 3. The Cd(II) and Pb(II) adsorption data were well fitted by the pseudo-second-order equation (R² > 0.99 for Pb(II) and 0.95 for Cd(II)). Thus, chemisorption, through sharing or exchange of electrons between the sorbent and the sorbate, is the rate-determining step for Cd(II) and Pb(II) adsorption on PO–Fe3O4MNPs [26, 65].

Figure 11: Adsorption kinetics of Pb(II) on PO–Fe3O4MNPs: (a) pseudo-first-order and (b) pseudo-second-order plots.

Figure 12: Adsorption kinetics of Cd(II) on PO–Fe3O4MNPs: (a) pseudo-first-order and (b) pseudo-second-order plots.

Table 3: Kinetic parameters for Pb(II) and Cd(II) ion biosorption onto PO–Fe3O4MNPs.

| Metal | Pseudo-first-order qe (mg/g) | k1 (min⁻¹) | R² | Pseudo-second-order qe (mg/g) | k2 (g/mg·min) | R² |
|---|---|---|---|---|---|---|
| Pb(II) | 0.7656 | 0.1016 | 0.9234 | 10.0280 | 0.5179 | 0.9999 |
| Cd(II) | 3.0280 | 0.0198 | 0.2874 | 9.0876 | 0.0166 | 0.9593 |

#### 3.2.7. Thermodynamic Analysis

Temperature is another important factor that influences the remediation efficiency of the adsorption process. Figure 13(a) shows the variation of the percentage removal efficiency with temperature. Changing the temperature from 20 to 35°C has no significant effect on sorption, so the adsorption experiments can be carried out at room temperature without any adjustment; similar results were reported by Rasheed and Ebrahim [51]. Beyond 35°C, however, the removal efficiency decreases, owing to a reduction in the number of active surface sites available for adsorption [66] and because the attractive forces between adsorbent and adsorbate become weaker at higher temperatures [22]. Over the temperature range of 20-55°C, the thermodynamics of Pb(II) and Cd(II) adsorption on PO–Fe3O4MNPs was evaluated to determine whether the process is endothermic or exothermic. The thermodynamic parameters, Gibbs free energy (ΔG°), enthalpy (ΔH°), and entropy (ΔS°), were obtained from the following equations:

$$\Delta G^\circ = -RT \ln K_c, \qquad K_c = \frac{C_{ad}}{C_e}, \tag{9}$$

$$\Delta G^\circ = \Delta H^\circ - T \Delta S^\circ. \tag{10}$$

Figure 13: (a) Effect of temperature on Pb(II) and Cd(II) removal; thermodynamic plots for (b) Pb(II) and (c) Cd(II) adsorption.

In these equations, R is the ideal gas constant, T is the absolute temperature (K), Kc is the equilibrium constant, Cad is the amount of Pb(II) or Cd(II) adsorbed on the PO–Fe3O4MNPs per liter of solution (mg/L), and Ce is the equilibrium concentration of Pb(II) or Cd(II) in the solution (mg/L). The enthalpy and entropy changes were calculated from the slope and intercept of the plot of ln Kc versus 1/T, as shown in Figures 13(b) and 13(c). The calculated ΔG°, ΔH°, and ΔS° values are listed in Table 4. The negative ΔG° values demonstrate the spontaneity and feasibility of the adsorption at each temperature, and the increase of ΔG° with temperature indicates a decreasing degree of feasibility of Pb(II) and Cd(II) adsorption; to be thermodynamically feasible, ΔG° must be negative [21]. The negative ΔH° values, -176.32 kJ/mol for Pb(II) and -34.23 kJ/mol for Cd(II), show that the adsorption is exothermic. Moreover, the ΔS° values for Pb(II) and Cd(II) are -0.5162 and -0.0940 kJ/mol/K, respectively; these negative values indicate decreasing randomness at the adsorbate-adsorbent interface during Pb(II) and Cd(II) adsorption.

Table 4: Thermodynamic parameters for Pb(II) and Cd(II) adsorption onto PO–Fe3O4MNPs.

| Metal | Temperature (K) | ΔG° (kJ/mol) | ΔH° (kJ/mol) | ΔS° (kJ/mol/K) | R² |
|---|---|---|---|---|---|
| Pb(II) | 293 | -26.357 | -176.323 | -0.51622 | 0.9476 |
| | 308 | -15.711 | | | |
| | 318 | -10.322 | | | |
| | 328 | -9.194 | | | |
| Cd(II) | 293 | -7.042 | -34.229 | -0.09403 | 0.9145 |
| | 308 | -4.691 | | | |
| | 318 | -4.058 | | | |
| | 328 | -3.873 | | | |
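Eqs. (9) and (10) combine into the van't Hoff relation $\ln K_c = -\Delta H^\circ/(RT) + \Delta S^\circ/R$, so ΔH° and ΔS° follow from a straight-line fit of ln Kc against 1/T. A minimal sketch of that calculation follows; the Kc values are illustrative placeholders, not the study's measurements.

```python
import numpy as np

R = 8.314e-3  # ideal gas constant, kJ/(mol*K)

# Temperatures used in the study and illustrative equilibrium constants Kc = Cad/Ce.
# The Kc values are placeholders chosen to mimic decreasing sorption with temperature.
T = np.array([293.0, 308.0, 318.0, 328.0])   # K (20, 35, 45, 55 degC)
Kc = np.array([49.0, 9.0, 5.0, 4.0])

dG = -R * T * np.log(Kc)   # Eq. (9): Gibbs free energy change at each T, kJ/mol

# van't Hoff fit (combining Eqs. (9) and (10)): ln Kc = -dH/(R*T) + dS/R
slope, intercept = np.polyfit(1.0 / T, np.log(Kc), 1)
dH = -R * slope        # enthalpy change, kJ/mol
dS = R * intercept     # entropy change, kJ/(mol*K)

print("dG (kJ/mol):", np.round(dG, 2))
print(f"dH = {dH:.2f} kJ/mol, dS = {dS:.4f} kJ/(mol*K)")
# Negative dG, dH, and dS mirror the spontaneous, exothermic trend reported in Table 4.
```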
#### 3.2.8. Characterizations of Pb(II)- and Cd(II)-Loaded PO–Fe3O4MNPs

Pb(II) and Cd(II) uptake onto the PO–Fe3O4MNPs was confirmed by XRD, TEM, FESEM, EDX, and elemental mapping of the Pb(II)- and Cd(II)-loaded PO–Fe3O4MNPs. About 0.5 g of PO–Fe3O4MNPs was agitated in 50 mg/L Pb(II) or Cd(II) solution under optimal conditions (temperature: 20 ± 2°C; pH: 6; agitation speed: 200 rpm; equilibrium time: 60 min). The suspension was filtered using Whatman filter paper (no. 1), and the residue was dried in an oven at 50°C overnight.

The XRD patterns of the Pb(II)- and Cd(II)-loaded PO–Fe3O4MNPs are shown in Figure 14. After adsorption of Cd(II), three additional peaks centered at 32°, 48°, and 55° were observed, attributed to the presence of Cd(II) ions; after Pb(II) adsorption, two additional peaks centered at 31° and 48° appeared. The characteristic PO–Fe3O4MNP peaks themselves were unchanged, confirming that the Cd(II) and Pb(II) ions were adsorbed on the surface of the PO–Fe3O4MNPs.

Figure 14: XRD patterns of (a) lead-loaded PO–Fe3O4MNPs, (b) cadmium-loaded PO–Fe3O4MNPs, and (c) pristine PO–Fe3O4MNPs.

The TEM images of the PO–Fe3O4MNPs after Cd(II) and Pb(II) adsorption are shown in Figures 15(a) and 15(b). Many dark particles were observed, which may be attributed to high concentrations of surface-bound metal ions; these cause slight agglomeration of the particles after adsorption and also reflect the magnetic character of the PO–Fe3O4MNPs. Similar results were reported by Lin et al. [27].

Figure 15: TEM images of (a) cadmium-loaded and (b) lead-loaded PO–Fe3O4MNPs; FESEM images of (c) cadmium-loaded and (d) lead-loaded PO–Fe3O4MNPs.

FESEM was used to examine the morphology and topographic features of the Pb(II)- and Cd(II)-loaded PO–Fe3O4MNPs. The FESEM images revealed some aggregation and a slight increase in the dimensions of the PO–Fe3O4MNPs, as shown in Figures 15(c) and 15(d). However, the morphology of the individual nanoparticles barely changed, indicating that Pb(II) and Cd(II) removal proceeded by adsorption.

The EDX analysis of the PO–Fe3O4MNPs after adsorption of Cd(II) and Pb(II) is presented in Figures 16(a) and 16(b). The EDX patterns show magnesium, oxygen, sulfur, iron, calcium, and cadmium on the surface of the cadmium-loaded PO–Fe3O4MNPs, and oxygen, magnesium, calcium, sulfur, iron, and lead on the surface of the lead-loaded PO–Fe3O4MNPs, clearly confirming the successful adsorption of Pb(II) and Cd(II) on the PO–Fe3O4MNP surfaces. These results agree well with those obtained by Bagbi et al. [67]. The elemental mapping (Pb(II) and Cd(II) shown in green) also confirmed the adsorption, as shown in Figures 16(c) and 16(d); the Pb(II) and Cd(II) ions are uniformly distributed on the surface of the PO–Fe3O4MNPs.

Figure 16: EDX spectra of (a) cadmium-loaded and (b) lead-loaded PO–Fe3O4MNPs; elemental mapping images of (c) cadmium-loaded and (d) lead-loaded PO–Fe3O4MNPs.
## 4. Conclusion

In this study, Portulaca oleracea leaf extract was successfully used as the reductant in the synthesis of PO–Fe3O4MNPs. The biosynthesized PO–Fe3O4MNPs were characterized and used as adsorbents for the removal of Pb(II) and Cd(II) ions from aqueous solution in a batch adsorption system. The batch experiments showed that the removal efficiency of Pb(II) and Cd(II) by PO–Fe3O4MNPs increased with increasing pH (up to 6), contact time, and PO–Fe3O4MNP dosage, whereas increasing metal concentration and temperature reduced the removal efficiency. Isotherm studies revealed that the Freundlich model properly describes the adsorption equilibrium data, indicating multilayer adsorption.
The kinetic studies indicated that the adsorption follows pseudo-second-order kinetics, while the thermodynamic studies demonstrated that it is exothermic and spontaneous. This study shows that PO–Fe3O4MNPs can be considered a fast, efficient, and biocompatible nanoadsorbent with promising future applications in environmental remediation processes and nanobiotechnology.

--- *Source: 1024554-2022-08-16.xml*
--- ## Abstract Magnetic nanoparticles of iron oxide (Fe3O4 NPs) were prepared using a biosynthetic method to investigate their potential use as an adsorbent for adsorption of Pb(II) and Cd(II) from the aqueous solution. The present study for the first time used the magnetite nanoparticles from leaf extract of Portulaca oleracea for the removal of Pb(II) and Cd(II) metal ions. Characterizations for the prepared Fe3O4 NPs (PO-Fe3O4MNPs) were achieved by using X-ray diffraction (XRD), field emission scanning electron microscope (FESEM), energy-dispersive X-ray spectroscopy (EDX), transmittance electron microscopy (TEM), and Fourier transform infrared (FTIR) spectroscopy. The batch adsorption process has been performed to study the effect of various parameters, such as contact time, pH, temperature, initial metal concentration, and adsorbent dose. The optimum pH for Cd(II) and Pb(II) adsorption was 6. The removal of heavy metals was found to increase with adsorbent dosage and contact time and reduced with increasing initial concentration. Langmuir, Freundlich, Khan, and Toth isotherms were used as adsorption isotherm models. The adsorption data fitted well with the Freundlich isotherm model with correlation coefficient (R2>0.94). The maximum adsorption capacities (Qmax) at equilibrium were 177.48 mg/g and 108.2267 mg/g for Cd(II) and Pb(II), respectively. The kinetic analysis showed that the overall adsorption process was successfully fitted with the pseudo-second-order kinetic model. The calculated thermodynamic parameters (∆G°, ∆H°, and ∆S°) showed that the adsorption of Cd(II) and Pb(II) ions onto PO-Fe3O4MNPs was exothermic and spontaneous. These results demonstrate that biogenic synthesized PO-Fe3O4MNPs are highly efficient adsorbents for the removal of Pb(II) and Cd(II) ions from contaminated water. --- ## Body ## 1. Introduction Water is the most important molecule on the earth and is a source of sustainable life. However, millions of people face water scarcity daily [1]. Rapid population growth demands a rapid expansion in the agricultural and industrial sectors, resulting in increased demand for water, which is necessary for the survival of all living forms on this blue planet [2]. Due to the release of untreated organic/inorganic harmful effluents into freshwater bodies, rapid growth in industrialization, population, and urbanization has resulted in a severe exponential increase in environmental pollution [3–5].When water is contaminated, removing the pollutants is costly, difficult, and often impossible. Water pollution is not only harming the organisms but also destroying entire ecosystems [6, 7]. Heavy metals are well recognized among the various pollutants that contribute to environmental damage, owing to their persistence in the environment, toxicity, and bioaccumulative nature, all of which lead to negative consequences for human health and the ecosystem [8, 9]. Lead, chromium, nickel, cadmium, and arsenic in the environment pose a serious threat to plants, animals, and even humans due to their bioaccumulation, nonbiodegradability, and toxicity even at trace concentrations [10]. Human exposure to even trace concentrations may result in conditions such as cardiovascular problems, depression, gastrointestinal and renal failure, neurological damage, osteoporosis, tubular and glomerular dysfunction, and various cancers [11, 12]. Cadmium and lead are two of the most toxic metals for plants, animals, and humans. 
They are also harmful at low concentrations because they disrupt enzyme functioning, replace essential metals in pigments, and produce reactive oxygen species [3, 13]. In response to these serious issues, many effective methods to remove heavy metals have been developed, including ion exchange, membrane filtration, adsorption, photodegradation, coagulation–flocculation, electrodeposition, and electrooxidation [14–18]. To date, adsorption remains the preferred method for water purification owing to its efficiency, low operational cost, and applicability in both small- and large-scale operations [19]. Various kinds of adsorption materials have been used for water remediation, including carbon-based materials, clays, biological materials, bentonite, zeolites, metal oxides, magnetic nanoparticles, agricultural residues, and mesoporous substances such as MCM-41, MCM-48, and SBA-15 [20, 21]. Magnetite nanoparticles (NPs) are strong adsorbents for removing pollutants from wastewater. Owing to their magnetic properties, they can easily be isolated from the reaction medium by applying an external magnetic field. Furthermore, magnetic separation of the nanoadsorbents provides the crucial benefit of rapid removal of toxic metals from wastewater [22]. Different methods, including coprecipitation, thermal decomposition, microemulsion, and solvothermal techniques, can be used to fabricate magnetite Fe3O4 NPs [23]. These procedures rely on hazardous chemicals, organometallic precursors, and harsh reaction conditions, such as high pressure or high temperature, and they suffer from several disadvantages, including high toxicity, low nanoparticle stability, low dispersion rates, and unsuitability for scaled-up applications [24, 25]. Therefore, the fabrication of magnetite NPs by economically and environmentally sustainable processes is necessary. The application of leaf extracts for nanoparticle synthesis has attracted much attention in recent years because of their low cost, nontoxicity, wide availability, strong metal-capping affinity, biodegradability, and nonmutagenicity [26, 27]. Alcohol, aldehyde, amine, carboxyl, ketone, hydroxyl, and sulfhydryl groups are the functional groups that intervene in the synthesis of NPs; therefore, nearly any biological substance containing these groups can be used to convert metal ions into NPs. Some molecules, such as terpenoids, flavonoids, various heterocyclic compounds, polyphenols, reducing sugars, and ascorbate, are directly involved in the synthesis of NPs, while others, such as proteins, serve as stabilizing agents [28]. Examples of plant extracts that have been used in the biosynthesis of NPs for the remediation of various pollutants in water include Camellia sinensis, Quercus virginiana, Punica granatum, and Eucalyptus globulus [24], Moringa oleifera [29], and Calliandra haematocephala [30]. In this study, Portulaca oleracea (family: Portulacaceae; common name: purslane) leaf extract was employed to synthesize magnetite Fe3O4 NPs. Because it contains various active phytochemicals such as alkaloids, phenols, flavonoids, coumarins, and terpenoids, Portulaca oleracea is commonly used in medical applications, such as drugs and medicine [31]. Only one study has investigated the preparation of Fe3O4 NPs using Portulaca oleracea leaf extract [32]. In this work, we report for the first time the use of magnetite Fe3O4 NPs from Portulaca oleracea leaf extract as efficient adsorbents for cadmium and lead from aqueous solutions.
The Fe3O4 NPs were prepared successfully and characterized by X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM), energy-dispersive X-ray spectroscopy (EDX), transmission electron microscopy (TEM), and Fourier transform infrared (FTIR) spectroscopy. The effects of different parameters, such as pH, temperature, initial metal concentration, adsorbent dose, and contact time, were studied. Furthermore, isotherm, kinetic, and thermodynamic studies were carried out to understand the mechanism of adsorption of the metal ions by the biofabricated Fe3O4 NPs.

## 2. Experimental Section

### 2.1. Materials

The leaves of Portulaca oleracea were obtained from the local market in Sulaymaniyah, Iraq. The following chemicals were used as received without further purification: iron sulfate heptahydrate (FeSO4·7H2O) (Himedia, India), sodium hydroxide (NaOH) pellets (Merck, Germany), nitric acid (HNO3) (Sigma-Aldrich, Germany), lead nitrate (Pb(NO3)2) (Fluka), and cadmium nitrate (Cd(NO3)2) (BDH, England). Stock solutions of lead and cadmium ions with concentrations of 1000 mg/L were prepared by dissolving 1.598 g of lead nitrate and 2.744 g of cadmium nitrate in 200 mL of deionized water. Ten milliliters of concentrated HNO3 was added to the metal ion solutions, which were then diluted to 1000 mL with distilled water. Solutions of HCl and NaOH (0.1–1 M) were used for pH adjustment.

### 2.2. Preparation of Portulaca oleracea Leaf Extract

Fresh Portulaca oleracea leaves were washed with tap water to remove dirt and surface-adherent materials and then with double-distilled water. The thoroughly washed leaves were air-dried for about an hour. The dried leaves were cut into small pieces and heated in 500 mL of deionized water to release the phenolic biomolecules, which rendered the solution yellow. After cooling, the clear yellow filtrate, named Portulaca oleracea leaf (POL) extract, was stored at 4°C for further use [33].

### 2.3. Synthesis of Magnetic Fe3O4 Nanoparticles

A simple and eco-friendly method was used to prepare the magnetite Fe3O4 nanoparticles. In a glass beaker, a freshly prepared 0.1 M FeSO4 solution (100 mL) was added to the POL extract at a volume ratio of 1:1 at room temperature. The pH of this mixture was raised to 11 by adding NaOH. The solution was then placed in a water bath for 60 min at 90°C. The formation of a black precipitate indicated the formation of magnetic Fe3O4 nanoparticles (PO–Fe3O4MNPs). The PO–Fe3O4MNPs were isolated using a magnet and then washed several times with deionized water and ethanol. After that, the particles were dried overnight at 80°C in an air oven [30, 34]. Figure 1 illustrates the green synthesis scheme for PO–Fe3O4MNPs.

Figure 1 Biosynthesis of PO–Fe3O4MNPs.

### 2.4. Characterization

An FTIR spectrometer (Thermo Scientific Nicolet iS10) was used to investigate the surface functional groups and capping agents of the PO–Fe3O4MNPs. The crystalline structure of the PO–Fe3O4MNPs was examined using an X-ray diffractometer (PANalytical X'Pert Pro, Netherlands) with Cu-Kα radiation (λ = 1.54 Å); the diffracted intensities of all samples were recorded in the 2θ range of 10°–80° with a step size of 0.1° and a scanning speed of 1 step/second. Field emission scanning electron microscopy (FE-SEM) (Quanta 4500, FEI) and transmission electron microscopy (TEM) (JEM-2100, JEOL, Japan) were used to examine the morphological characteristics and size of the powdered PO–Fe3O4MNPs.
The purity and constituent elements of the prepared nanoparticles were confirmed by energy-dispersive X-ray spectroscopy (EDX) combined with FE-SEM.

### 2.5. Adsorption Tests

#### 2.5.1. Evaluation of Optimum pH

To determine the effect of pH on the adsorption of Pb(II) and Cd(II) ions on PO–Fe3O4MNPs, the following procedure was used: nanosorbent (0.5 g) was added to 100 mL of 50 mg/L Cd(II) and Pb(II) solution in Erlenmeyer flasks, the pH of the solutions was adjusted from 3 to 8 using 0.1 M sodium hydroxide or nitric acid solution, and the flasks were agitated at a speed of 200 rpm. All the adsorption experiments were carried out at room temperature (20 ± 2°C). Finally, a magnet was used to separate the PO–Fe3O4MNPs from the solution, and the samples were filtered using Whatman filter paper (no. 1). The residual amount of Pb(II) and Cd(II) was determined by inductively coupled plasma optical emission spectrometry (ICPOES; Optima 2100 DV, PerkinElmer, USA).

#### 2.5.2. Equilibrium Contact Time

Portions of 0.5 g of PO–Fe3O4MNPs were suspended separately in 100 mL of single-metal-ion solutions (50 mg/L of Pb(II) or Cd(II)) in a series of conical flasks. The solutions were adjusted to pH 6 and agitated at a speed of 200 rpm for different periods [35]. Ten milliliters of sample was taken out of the flasks after 10, 20, 30, 40, 50, 60, and 70 min, and the residual Pb(II) and Cd(II) concentrations were measured using ICPOES.

#### 2.5.3. Effect of Initial Metal Concentrations on Adsorption

The influence of initial solution concentration on the removal efficiency of PO–Fe3O4MNPs was demonstrated using different concentrations of Pb(II) and Cd(II) ions. Single-metal solutions at concentrations of 10, 50, 100, and 150 mg/L in the presence of 0.5 g of PO–Fe3O4MNPs were agitated at a speed of 200 rpm and 20 ± 2°C for an hour [22]. The magnetic nanoadsorbent was then isolated from the solution using magnets, and the residual metal ion concentrations were measured using ICPOES.

#### 2.5.4. Equilibrium Isothermal Investigation

The equilibrium adsorption isotherm is important for understanding the interaction between adsorbate and adsorbent, as it provides significant information about the surface characteristics, the activity of the adsorbent, and the mechanism of adsorption [36]. Different amounts of PO–Fe3O4MNPs (0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1, 1.2, and 1.4 g) were put into a series of conical flasks. Single-metal solutions at concentrations of 50 mg/L were prepared separately for lead and cadmium ions, and 100 mL of each solution was added to each flask. The pH of the metal solutions was adjusted to the optimum value. A thermoshaker was used to agitate the solutions continuously for an hour at a speed of 200 rpm and a temperature of 20 ± 2°C. The PO–Fe3O4MNPs were removed from the solutions with a magnet, and the samples were filtered using Whatman filter paper (no. 1). The Pb(II) and Cd(II) concentrations in the filtrate were determined using ICPOES. The removal efficiency and adsorption capacity of PO–Fe3O4MNPs were calculated from the experimental data using the following equations [37].
$$\%RE = \frac{C_o - C_e}{C_o} \times 100, \quad (1)$$

$$q_e = \frac{V_1 (C_o - C_e)}{W_{nanosorbent}}, \quad (2)$$

where $C_o$ (mg/L) is the initial concentration of the metal ions Pb(II) and Cd(II), $C_e$ (mg/L) is the equilibrium concentration, $V_1$ (L) is the volume of the metal solution, and $W$ (g) is the dosage of PO–Fe3O4MNPs.

The experimental data obtained from the isothermal adsorption experiments were fitted to the common nonlinear adsorption model equations (Freundlich, Langmuir, Toth, and Khan) using statistical software (version 12) to compute the fundamental parameters of each model. The Freundlich isotherm for heterogeneous surface energy systems is given as follows:

$$q_e = K C_e^{1/n}, \quad (3)$$

where $K$ corresponds to the maximum binding capacity and $n$ characterizes the affinity between the sorbent and sorbate [38]. The Langmuir adsorption isotherm, which describes the adsorbate–adsorbent system equilibrium, assumes that all adsorbed species interact only with one site and not with each other, and that adsorption is confined to a monolayer [21]. The Langmuir model can be represented as

$$q_e = \frac{q_m b C_e}{1 + b C_e}, \quad (4)$$

where $q_m$ (mg/g) is the Langmuir maximum adsorption capacity and $b$ (L/mg) is the Langmuir constant [39]. The Khan isotherm model is given by

$$q_e = \frac{q_{max} b_k C_e}{(1 + b_k C_e)^{a_k}}, \quad (5)$$

where $b_k$ and $a_k$ are Khan constants and $q_{max}$ is the maximum uptake [40]. The Toth isotherm is expressed as follows:

$$q_e = \frac{q_{max} b_T C_e}{\left(1 + (b_T C_e)^{1/n_T}\right)^{n_T}}, \quad (6)$$

where $b_T$ and $n_T$ are constants [41].

#### 2.5.5. Kinetics

Adsorption kinetics describes the rate at which an adsorbate is retained on or released from a solid-phase interface in an aqueous environment under various conditions [42]. The kinetic study was carried out using 0.5 g of PO–Fe3O4MNPs in 100 mL of single-metal-ion solutions with a concentration of 50 mg/L of Pb(II) or Cd(II) at pH 6. The flasks with their contents were shaken for different adsorption times (10–70 min). The rate of sorption was analyzed with two kinetic models, namely, pseudo-first-order and pseudo-second-order. The pseudo-first-order model assumes that the rate of occupation of sorption sites is proportional to the number of unoccupied sites. The equation is represented as

$$\log(q_e - q_t) = \log q_e - \frac{k_1}{2.303} t, \quad (7)$$

where $k_1$ (min−1) is the pseudo-first-order rate constant of adsorption and $q_t$ (mg/g) and $q_e$ (mg/g) are the amounts of metal ions adsorbed at any particular time and at equilibrium, respectively. The pseudo-second-order kinetic model is based on the assumption that chemisorption is the rate-controlling step and is expressed as follows:

$$\frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{1}{q_e} t, \quad (8)$$

where $k_2$ (g/mg·min) is the pseudo-second-order rate constant and $q_e$ (mg/g) and $q_t$ (mg/g) are the amounts of the metal ions adsorbed at equilibrium and at time t, respectively [21, 43].

#### 2.5.6. Thermodynamic Parameters of Adsorption

The influence of temperature on Pb(II) and Cd(II) ion adsorption on PO–Fe3O4MNPs was studied at 20, 35, 45, and 55°C. About 100 mL of each solution with a concentration of 50 mg/L was mixed with 0.5 g of PO–Fe3O4MNPs, and the mixtures were maintained at temperatures ranging from 20 to 55°C for an hour and agitated at a speed of 200 rpm [4]. The nanoparticles were separated, and the Pb(II) and Cd(II) concentrations were measured using ICPOES.
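To make Equations (1), (2), (7), and (8) concrete, the following minimal Python sketch computes the removal efficiency and uptake from hypothetical residual concentrations and fits both kinetic models in their integrated nonlinear forms with scipy.optimize.curve_fit. The numerical values, and the use of scipy rather than the statistical package mentioned above, are illustrative assumptions, not the study's actual data or code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical time-series data; placeholders, not the study's measurements.
t = np.array([10., 20., 30., 40., 50., 60., 70.])     # contact time (min)
Ct = np.array([22.0, 12.5, 7.0, 4.4, 3.1, 2.6, 2.5])  # residual conc. (mg/L)

Co, V1, W = 50.0, 0.1, 0.5        # initial conc. (mg/L), volume (L), dose (g)
RE = (Co - Ct) / Co * 100.0       # Equation (1): removal efficiency (%)
qt = V1 * (Co - Ct) / W           # Equation (2) evaluated at each time (mg/g)

def pseudo_first_order(t, qe, k1):
    # Integrated nonlinear form of Equation (7): qt = qe * (1 - exp(-k1 t))
    return qe * (1.0 - np.exp(-k1 * t))

def pseudo_second_order(t, qe, k2):
    # Nonlinear form of Equation (8): qt = k2 qe^2 t / (1 + k2 qe t)
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

print(f"final removal efficiency = {RE[-1]:.1f}%")
for name, model in [("pseudo-first-order", pseudo_first_order),
                    ("pseudo-second-order", pseudo_second_order)]:
    popt, _ = curve_fit(model, t, qt, p0=(qt.max(), 0.05), maxfev=10000)
    residuals = qt - model(t, *popt)
    r2 = 1.0 - np.sum(residuals**2) / np.sum((qt - qt.mean())**2)
    print(f"{name}: qe = {popt[0]:.3f} mg/g, "
          f"rate constant = {popt[1]:.4f}, R^2 = {r2:.4f}")
```

The linearized forms of Equations (7) and (8) give the same parameters up to fitting error; the nonlinear fit is used here only to keep the sketch short.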
## 3. Results and Discussion

### 3.1. PO–Fe3O4MNP Characterizations

The phase purity and crystalline structure of the PO–Fe3O4MNPs were identified by powder XRD. Figure 2 shows the XRD pattern of the dried PO–Fe3O4MNPs. Bragg reflection peaks were detected at 2θ values of 30.11°, 35.47°, 43.11°, 53.48°, 57.01°, 62.61°, and 74.07°, corresponding to the (220), (311), (400), (422), (511), (440), and (533) crystallographic planes, which confirms that the structure of the biosynthesized PO–Fe3O4MNPs is cubic (reference code: 96-900-5838). The XRD data are also in line with the data obtained by Rajput et al. [44] and Yew et al. [45].

Figure 2 XRD pattern of PO–Fe3O4MNPs.

Figures 3(a) and 3(b) show a TEM image of the biosynthesized PO–Fe3O4MNPs and their particle size distribution. The TEM images show that most of the particles were almost spherical with slight aggregation. The agglomeration could be attributed to van der Waals forces that bind particles together and to shear forces acting at the nanoscale [46]. The mean particle size is 26.0196 nm with a standard deviation of 9.05132 nm.

Figure 3 (a) Transmission electron micrographs of PO–Fe3O4MNPs and (b) particle size distribution histogram.

The FESEM images of the PO–Fe3O4MNPs are shown in Figures 4(a) and 4(b). The FESEM images clearly show that the nanoparticles are bead-like and spherical in shape, with slight aggregation. Three such nanoparticles, with diameters of 34.52, 34.46, and 46.71 nm, are marked in Figure 4(b). Similar spherical shapes were also reported for the biosynthesis of Fe3O4 using leaf extract of Mussaenda erythrophylla [47]. X-ray elemental mapping and energy-dispersive spectroscopy (EDX) were performed to reveal the elemental constituents of the PO–Fe3O4MNPs, as shown in Figures 4(c) and 4(d). The EDX spectrum of the nanoparticles showed peaks at 0.7, 6.4, and 7.2 keV for elemental iron and a peak at 0.5 keV for elemental oxygen, which confirmed the formation of the PO–Fe3O4MNPs.
The peak at 0.3 keV revealed the existence of carbon originating from the biomolecules of the leaf extract.

Figure 4 (a, b) FESEM images of the PO–Fe3O4MNP magnetic nanoadsorbent at 500 nm and 200 nm scales, (c) elemental mapping, and (d) EDX spectrum.

The surface functional groups and capping agents of the PO–Fe3O4MNPs, which are responsible for reduction and stabilization, were studied by FTIR spectroscopy. Figure 5 shows the FTIR spectrum of the PO–Fe3O4MNPs. Peaks at 582 cm−1 and 790 cm−1 are due to Fe–O vibrations of Fe3O4 [48]. The band at 3394 cm−1 is attributed to the O–H group of polyphenolic compounds [49]. The band at 1662 cm−1 is assigned to the –C=C– stretching vibration of alkenes [30]. The band at 1400 cm−1 belongs to C–C groups derived from aromatic rings found in the POL extract [50]. The bands between 1000 cm−1 and 1300 cm−1 are attributed to the C–O functional group in alcohols, ethers, esters, carboxylic acids, and amides in the extract [46]. These bands confirm the formation of PO–Fe3O4MNPs and show that they were covered with polyphenols and other organic compounds, which improved their stability.

Figure 5 FTIR spectra of PO–Fe3O4MNPs.

### 3.2. Adsorption Process

#### 3.2.1. Effect of pH

pH is the main factor affecting the efficacy of the adsorption process. In the present study, the impact of pH on Pb(II) and Cd(II) removal efficiency using the biosynthesized PO–Fe3O4MNPs was examined at pH values ranging from 3 to 8. Pb(II) and Cd(II) adsorption was low at pH values around 3, which was attributed to electrostatic repulsion between the adsorbent and the metal ions in the solution. When the concentration of hydrogen ions in the solution increases, the adsorbent sites are occupied by hydrogen ions instead of metal ions; this protonation of active sites tends to decrease metal adsorption [37]. As presented in Figure 6, the removal efficiencies for Pb(II) and Cd(II) at pH 3 were 15.42% and 4.8%, respectively. Maximum adsorption was recorded at pH 6, where the removal of Pb(II) and Cd(II) reached 100% and 95.32%, respectively.

Figure 6 Effect of pH on Pb(II) and Cd(II) removal (Co = 50 mg/L, PO–Fe3O4MNP dosage = 0.5 g, contact time = 60 min, temperature = 20 ± 2°C).

Beyond pH 6, metal ions precipitate as insoluble hydroxides, which makes true sorption studies impossible; this must be avoided during sorption tests because it makes distinguishing between sorption and metal precipitation difficult [51]. As a result, a pH value of 6 was chosen as the optimum and used in all subsequent experiments. These results are in agreement with the results obtained by Lung et al. [52].

#### 3.2.2. Effect of Contact Time

The influence of contact time on Pb(II) and Cd(II) removal efficiency by PO–Fe3O4MNPs was investigated by varying the contact time from 10 to 70 minutes. As observed in Figure 7, the adsorption of both metal ions by PO–Fe3O4MNPs increased with increasing contact time, since longer contact gives the metal ions more opportunity to reach the active groups on the adsorbent surface [22]. The removal of Pb(II) was rapid during the first 30 min. However, no significant increase in the adsorption rate was found after 30 min.
The concentration of Cd(II) decreased within 50 min and remained almost constant after an hour, implying that adsorption is rapid and reaches saturation within an hour. This is a promising result because a short equilibrium time is critical for economically viable wastewater treatment plants [53].

Figure 7 Effect of contact time on removal of Pb(II) and Cd(II) (Co = 50 mg/L, PO–Fe3O4MNP dosage = 0.5 g, pH = 6, temperature = 20 ± 2°C).

#### 3.2.3. Effect of Initial Metal Concentration

The effect of the initial metal concentration on the removal efficiency by 0.5 g of PO–Fe3O4MNPs at the optimal pH was investigated using solutions with varying initial metal concentrations (10, 50, 100, and 150 mg/L). Figure 8 shows that the adsorption of lead and cadmium ions decreased with increasing initial metal ion concentration. The initial metal concentration plays an important role in the removal efficiency, since a given mass of adsorbent offers a fixed number of active binding sites and can therefore adsorb only a fixed amount of metal. Thus, increasing the metal concentration in solution against the same quantity of adsorbent decreases the removal efficiency. These results agree with the results obtained by Ebrahim et al. [35] and Das and Rebecca [54].

Figure 8 Effect of initial metal concentration on removal of Pb(II) and Cd(II) (PO–Fe3O4MNP dosage = 0.5 g, contact time = 60 min, pH = 6, temperature = 20 ± 2°C).

#### 3.2.4. Effect of PO–Fe3O4MNP Dose

The concentration of the adsorbent is one of the most important factors influencing the efficiency of the process and the adsorption capacity. In this study, the impact of varying the amount of biosynthesized PO–Fe3O4MNPs from 0.02 to 1.4 g on the removal efficiency of Pb(II) and Cd(II) ions was studied. Figure 9 shows that the adsorption percentages of both Pb(II) and Cd(II) ions increase with increasing nanoadsorbent mass. This is because, at higher dosages, more sites are available on the surface of the nanoadsorbents [55].

Figure 9 Effect of adsorbent dose on removal of Pb(II) and Cd(II) (Co = 50 mg/L, contact time = 60 min, pH = 6, temperature = 20 ± 2°C).

#### 3.2.5. Adsorption Isotherms

The adsorption isotherms provide important information concerning the adsorption capacity, the adsorption mechanism between the contaminant and the adsorbent, and the contaminant distribution between the adsorbent and the solution [4]. Adsorption isotherms were determined by fitting the experimental data obtained at equilibrium time with the Freundlich, Langmuir, Toth, and Khan isotherm models. The Langmuir adsorption isotherm is based on monolayer adsorption and assumes a homogeneous sorbent surface [39]. The Toth isotherm is an empirical modification of the Langmuir equation aimed at reducing the error between experimental and predicted equilibrium data; it is especially useful for describing systems with heterogeneous adsorption [40]. The Freundlich isotherm can be applied to adsorption on heterogeneous surfaces; it embodies the concept of multilayer adsorption and an exponential distribution of active sites on the sorbent surface [56]. Finally, the Khan isotherm combines features of the Langmuir and Freundlich models and is suggested for adsorption of adsorbates from pure solutions [40].

Figures 10(a) and 10(b) show the comparison of the Freundlich, Langmuir, Khan, and Toth models for lead and cadmium ions, respectively.
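As a rough illustration of how such a nonlinear model comparison can be reproduced, the sketch below fits all four isotherms of Equations (3)–(6) to placeholder equilibrium data with scipy.optimize.curve_fit and ranks them by R2. The data points and starting guesses are assumptions for demonstration only, not the measurements behind Figure 10.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder equilibrium data (not the measurements behind Figure 10).
Ce = np.array([0.5, 1.2, 3.8, 9.5, 21.0, 38.0])        # equilibrium conc. (mg/L)
qe = np.array([28.0, 44.0, 70.0, 98.0, 132.0, 160.0])  # uptake (mg/g)

def freundlich(Ce, K, n):            # Equation (3)
    return K * Ce**(1.0 / n)

def langmuir(Ce, qm, b):             # Equation (4)
    return qm * b * Ce / (1.0 + b * Ce)

def khan(Ce, qmax, bk, ak):          # Equation (5)
    return qmax * bk * Ce / (1.0 + bk * Ce)**ak

def toth(Ce, qmax, bT, nT):          # Equation (6)
    return qmax * bT * Ce / (1.0 + (bT * Ce)**(1.0 / nT))**nT

models = {"Freundlich": (freundlich, (30.0, 2.0)),
          "Langmuir":   (langmuir,   (200.0, 0.1)),
          "Khan":       (khan,       (200.0, 0.1, 0.5)),
          "Toth":       (toth,       (200.0, 0.1, 1.0))}

for name, (model, p0) in models.items():
    # Positive-parameter bounds keep the power-law terms well defined.
    popt, _ = curve_fit(model, Ce, qe, p0=p0, bounds=(0.0, np.inf))
    residuals = qe - model(Ce, *popt)
    r2 = 1.0 - np.sum(residuals**2) / np.sum((qe - qe.mean())**2)
    print(f"{name:10s} parameters = {np.round(popt, 4)}, R^2 = {r2:.4f}")
```

Selecting the model with the highest R2 on such a run mirrors the comparison summarized in Table 2 below.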
Based on the correlation coefficients, the Freundlich model fits the experimental data better than the Langmuir, Khan, and Toth models, which indicates multilayer adsorption of Pb(II) and Cd(II). The values of n for cadmium and lead were 1.301 and 2.236, respectively, indicating favorable adsorption between PO–Fe3O4MNPs and the metal ions. Table 1 compares the adsorption capacity of PO–Fe3O4MNPs with that of other adsorbents reported in the literature. The maximum adsorption capacity was 177.48 mg/g for cadmium and 108.2267 mg/g for lead (Table 2), higher than that of most of the adsorbents listed in Table 1.

Figure 10 Comparison of isotherm models for adsorption of (a) Pb(II) and (b) Cd(II) ions onto PO–Fe3O4MNPs.

Table 1 Comparison of the maximum adsorption capacity of the investigated PO–Fe3O4MNPs for Cd(II) and Pb(II) with other adsorbents reported in the literature.

| Metal | Adsorbent | Maximum adsorption capacity (mg/g) | Reference |
|---|---|---|---|
| Cd(II) | Iron oxide nanoparticles (IONPs) | 18.32 | [27] |
| Cd(II) | Iron oxide nanoparticles | 15.5 | [37] |
| Cd(II) | Magnetite green Fe3O4 nanoparticles | 18.73 | [52] |
| Cd(II) | [email protected] | | [58] |
| Cd(II) | SBA-15@Fe3O4@Isa | 140 | [59] |
| Cd(II) | c-MCM-41 | 32.3 | [60] |
| Cd(II) | MCM-48 | 29.13 | [61] |
| Cd(II) | PO–Fe3O4MNPs | 177.48 | This study |
| Pb(II) | Magnetite green Fe3O4 nanoparticles | 0.16 | [52] |
| Pb(II) | SBA-15@Fe3O4@Isa | 110 | [59] |
| Pb(II) | c-MCM-41 | 58.5 | [60] |
| Pb(II) | MCM-48 | 50.39 | [61] |
| Pb(II) | Iron nanocomposites (T-Fe3O4) | 100.0 | [62] |
| Pb(II) | Phytogenic magnetic nanoparticles (PMNPs) | 68.41 | [63] |
| Pb(II) | Fe3O4 nanoadsorbents | 64.97 | [64] |
| Pb(II) | PO–Fe3O4MNPs | 108.2267 | This study |

Table 2 Parameters of the Langmuir, Freundlich, Khan, and Toth isotherm models for Pb(II) and Cd(II) adsorption onto PO–Fe3O4MNPs.

| Model | Parameter | Pb(II) | Cd(II) |
|---|---|---|---|
| Langmuir | qm (mg/g) | 108.2267 | 177.4800 |
| Langmuir | b (L/mg) | 0.706296 | 0.006401 |
| Langmuir | R2 | 0.9439 | 0.92926 |
| Freundlich | K ((mg/g)(mg/L)−1/n) | 41.48618 | 2.044152 |
| Freundlich | n | 2.236529 | 1.301127 |
| Freundlich | R2 | 0.97246 | 0.94333 |
| Khan | qm (mg/g) | 4.025708 | 7.391429 |
| Khan | bk (L/mg) | 194.4072 | 0.187339 |
| Khan | ak | 0.556249 | 0.201856 |
| Khan | R2 | 0.97245 | 0.93299 |
| Toth | qm (mg/g) | 721.7403 | 3392.02625 |
| Toth | bT | 2.087398 | 0.011593 |
| Toth | nT | 4.567778 | 0.632009 |
| Toth | R2 | 0.96952 | 0.92807 |

As illustrated in Figure 9, greater adsorption removal of lead was achieved at a low dose of PO–Fe3O4MNPs, and lead showed a more rapid affinity towards the nanoparticles than cadmium ions, which reveals electrostatic attraction between the lead cations and the negatively charged adsorption sites. Additionally, the lead ion possesses the smaller hydration radius; this agrees with the notion that ions with a small hydration radius are preferentially selected and accumulated at the interface [51, 57].

#### 3.2.6. Kinetics

The kinetics of Cd(II) and Pb(II) adsorption on PO-Fe3O4MNPs were studied using the pseudo-first-order and pseudo-second-order kinetic models, as demonstrated in Figures 11 and 12. The kinetic parameters and correlation coefficients R2 are documented in Table 3. The Cd(II) and Pb(II) adsorption data were well fitted by the pseudo-second-order equation (R2 > 0.99 for Pb(II) and 0.95 for Cd(II)). Thus, chemisorption through the sharing or exchange of electrons between the sorbent and the sorbate is the rate-determining step for Cd(II) and Pb(II) adsorption on PO-Fe3O4MNPs [26, 65].

Figure 11 Adsorption kinetics for the adsorption of Pb(II) onto PO-Fe3O4MNPs: (a) pseudo-first-order; (b) pseudo-second-order.

Figure 12 Adsorption kinetics for the adsorption of Cd(II) onto PO-Fe3O4MNPs: (a) pseudo-first-order; (b) pseudo-second-order.

Table 3 Kinetic parameters for Pb(II) and Cd(II) ion biosorption onto PO–Fe3O4MNPs.
| Metal | Pseudo-first-order qe (mg/g) | K1 (min−1) | R2 | Pseudo-second-order qe (mg/g) | K2 (g/mg·min) | R2 |
|---|---|---|---|---|---|---|
| Pb(II) | 0.7656 | 0.1016 | 0.9234 | 10.0280 | 0.5179 | 0.9999 |
| Cd(II) | 3.0280 | 0.0198 | 0.2874 | 9.0876 | 0.0166 | 0.9593 |

#### 3.2.7. Thermodynamic Analysis

Temperature is another important factor that influences the remediation efficiency of the adsorption process. Figure 13(a) shows the variation of the percentage removal efficiency with temperature. The results show that changing the temperature from 20 to 35°C has no significant effect on sorption, so the adsorption experiments can be carried out at room temperature without temperature adjustment. Similar results were reported by Rasheed and Ebrahim [51]. However, the removal efficiency decreases beyond 35°C owing to a reduction in the number of active surface sites available for adsorption [66]. Furthermore, at high temperatures, the attractive forces between the adsorbent and the adsorbate become weaker, and therefore sorption decreases [22]. In the temperature range of 20–55°C, the thermodynamics of the adsorption of Pb(II) and Cd(II) ions on PO–Fe3O4MNPs were determined to establish whether the process is endothermic or exothermic. The thermodynamic parameters, including the Gibbs free energy (ΔG°), enthalpy (ΔH°), and entropy (ΔS°), were obtained using the following equations:

$$\Delta G^\circ = -RT \ln K_c, \quad K_c = \frac{C_{ad}}{C_e}, \quad (9)$$

$$\Delta G^\circ = \Delta H^\circ - T \Delta S^\circ. \quad (10)$$

Figure 13 (a) Effect of temperature on Pb(II) and Cd(II) ion removal; (b) thermodynamic plot for Pb(II); (c) thermodynamic plot for Cd(II) adsorption.

In these equations, R is the ideal gas constant, T is the absolute temperature (K), Kc is the equilibrium constant, Cad is the amount of Pb(II) and Cd(II) adsorbed on PO–Fe3O4MNPs per liter of solution (mg/L), and Ce is the equilibrium concentration of Pb(II) and Cd(II) in the solution (mg/L). The enthalpy and entropy change values were calculated from the slope and intercept of the plot of ln Kc versus 1/T, as shown in Figures 13(b) and 13(c). The calculated ΔG°, ΔH°, and ΔS° values are listed in Table 4. The negative values of ΔG° illustrate the spontaneity and feasibility of the adsorption reaction at a given temperature, and the increase of ΔG° with increasing temperature indicates a decrease in the feasibility of Pb(II) and Cd(II) adsorption. To be thermodynamically acceptable, ΔG° must always be negative [21]. The negative values of ΔH°, -176.323312 kJ/mol for Pb(II) ions and -34.228738 kJ/mol for Cd(II) ions, show that the adsorption is exothermic. Moreover, the ΔS° values for Pb(II) and Cd(II) ions at the mentioned temperatures are -0.51621626 and -0.09403134 kJ/mol/K, respectively. These negative values indicate reduced randomness at the adsorbate–adsorbent interface during Pb(II) and Cd(II) adsorption.

Table 4 Thermodynamic parameters for Pb(II) and Cd(II) adsorption onto PO–Fe3O4MNPs.

| Metal | Temperature (K) | ΔG° (kJ/mol) | ΔH° (kJ/mol) | ΔS° (kJ/mol/K) | R2 |
|---|---|---|---|---|---|
| Pb(II) | 293 | -26.35695282 | -176.323312 | -0.51621626 | 0.9476 |
| Pb(II) | 308 | -15.71120935 | | | |
| Pb(II) | 318 | -10.32196239 | | | |
| Pb(II) | 328 | -9.193894743 | | | |
| Cd(II) | 293 | -7.042494622 | -34.228738 | -0.09403134 | 0.9145 |
| Cd(II) | 308 | -4.691228444 | | | |
| Cd(II) | 318 | -4.05764763 | | | |
| Cd(II) | 328 | -3.873400017 | | | |

#### 3.2.8. Characterizations of Pb(II)- and Cd(II)-Loaded PO–Fe3O4MNPs

Pb(II) and Cd(II) uptake onto PO–Fe3O4MNPs was confirmed by XRD, TEM, FESEM, EDX, and elemental mapping of the Pb(II)- and Cd(II)-loaded PO–Fe3O4MNPs. About 0.5 g of PO–Fe3O4MNPs was agitated in 50 mg/L Pb(II) and Cd(II) solutions under optimal conditions (temperature: 20 ± 2°C, pH: 6, agitation speed: 200 rpm, equilibrium time: 60 min).
The suspension was filtered using Whatman filter paper (no. 1), and the residue was dried in an oven at 50°C overnight.

The XRD patterns of the Pb(II)- and Cd(II)-loaded PO–Fe3O4MNPs are shown in Figure 14. After adsorption of Cd(II) by PO–Fe3O4MNPs, three new peaks centered at 32°, 48°, and 55° were observed, attributed to the presence of Cd(II) ions. In the case of Pb(II) adsorption, two new peaks centered at 31° and 48° were observed. In addition, the XRD spectra showed no changes in the characteristic peaks of PO–Fe3O4MNPs, confirming that the Cd(II) and Pb(II) ions were adsorbed on the surface of the PO–Fe3O4MNPs.

Figure 14 XRD spectra of (a) lead-loaded PO–Fe3O4MNPs, (b) cadmium-loaded PO–Fe3O4MNPs, and (c) PO–Fe3O4MNPs.

The TEM images of PO–Fe3O4MNPs after Cd(II) and Pb(II) adsorption are shown in Figures 15(a) and 15(b). Many dark particles were observed, which may be attributed to high concentrations of surface-bound metal ions causing slight accumulation of the particles after adsorption; this also reflects the magnetic character of the PO–Fe3O4MNPs. Similar results were reported by Lin et al. [27].

Figure 15 TEM images of (a) cadmium-loaded PO–Fe3O4MNPs and (b) lead-loaded PO–Fe3O4MNPs; FESEM images of (c) cadmium-loaded PO–Fe3O4MNPs and (d) lead-loaded PO–Fe3O4MNPs.

FESEM was used to show the morphology and topographic features of the Pb(II)- and Cd(II)-loaded PO–Fe3O4MNPs. The FESEM images revealed some aggregation and a slight increase in the dimensions of the PO–Fe3O4MNPs, as shown in Figures 15(c) and 15(d). However, the morphology of the individual nanoparticles barely changed, indicating that the removal of Pb(II) and Cd(II) ions proceeded through adsorption.

The EDX analysis of PO–Fe3O4MNPs after adsorption of Cd(II) and Pb(II) is presented in Figures 16(a) and 16(b). The EDX patterns show the presence of magnesium, oxygen, sulfur, iron, calcium, and cadmium on the surface of the cadmium-loaded PO–Fe3O4MNPs, and the distribution of oxygen, magnesium, calcium, sulfur, iron, and lead on the surface of the lead-loaded PO–Fe3O4MNPs, which clearly confirms the successful adsorption of Pb(II) and Cd(II) on the PO–Fe3O4MNP surfaces. The results show good agreement with those obtained by Bagbi et al. [67]. The elemental mapping also confirmed the adsorption of Pb(II) (in green) and Cd(II) (in green), as shown in Figures 16(c) and 16(d). Pb(II) and Cd(II) ions are uniformly distributed on the surface of the PO–Fe3O4MNPs.

Figure 16 EDX spectra of (a) cadmium-loaded PO–Fe3O4MNPs and (b) lead-loaded PO–Fe3O4MNPs; elemental mapping images of (c) cadmium-loaded PO–Fe3O4MNPs and (d) lead-loaded PO–Fe3O4MNPs.
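Returning to the thermodynamic analysis of Section 3.2.7, the van 't Hoff treatment behind Table 4 can be reproduced in a few lines: Equation (9) gives ΔG° at each temperature, and rearranging Equation (10) to ln Kc = -ΔH°/(RT) + ΔS°/R means that a linear fit of ln Kc against 1/T yields ΔH° from the slope and ΔS° from the intercept. The Kc values below are hypothetical placeholders, not the values underlying Table 4.

```python
import numpy as np

R = 8.314e-3  # ideal gas constant (kJ/mol/K)

# Hypothetical equilibrium constants Kc = Cad/Ce at each temperature;
# placeholders, not the study's measured values.
T = np.array([293.0, 308.0, 318.0, 328.0])   # temperature (K)
Kc = np.array([50.0, 6.5, 2.4, 1.6])

dG = -R * T * np.log(Kc)                     # Equation (9): ΔG° (kJ/mol)

# Equation (10) rearranged: ln Kc = -ΔH°/(R T) + ΔS°/R
slope, intercept = np.polyfit(1.0 / T, np.log(Kc), 1)
dH = -slope * R                              # ΔH° (kJ/mol)
dS = intercept * R                           # ΔS° (kJ/mol/K)

print("ΔG° (kJ/mol):", np.round(dG, 3))
print(f"ΔH° = {dH:.3f} kJ/mol, ΔS° = {dS:.5f} kJ/mol/K")
```

With Kc decreasing as the temperature rises, as in this placeholder series, the fitted ΔH° and ΔS° come out negative, matching the qualitative exothermic and order-increasing behavior reported above.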
From the TEM images, it is obvious that most of the particles were almost spherical with slight aggregation. The presence of agglomeration could be attributed to van der Waals forces that bind particles together and also shear forces that can be applied on the nanoscale [46]. The mean particle size is 26.0196 nm with a 9.05132 nm standard deviation.Figure 3 (a) Transmission electron micrographs of PO–Fe3O4MNPs and (b) particle size distribution histogram. (a)(b)The FESEM images of the PO–Fe3O4MNPs are shown in Figures 4(a) and 4(b). From FESEM images, it was clearly observed that the nanoparticles are bead-like, spherical in shape, with slight aggregation. Three of such nanoparticles, 34.52, 34.46, and, 46.71 nm, are mentioned in Figure 4(b). Similar spherical shapes were also reported when biosynthesis of Fe3O4 was conducted using leaf extract of Mussaenda erythrophylla [47]. X-ray elemental mapping and energy-dispersive spectroscopy (EDX) were performed to reveal the presence of elemental constituents in the PO–Fe3O4MNPs as shown in Figures 4(c) and 4(d). The resultant EDX spectrum of nanoparticles showed the peaks at 0.7, 6.4, and 7.2 keV for elemental iron and the peak at 0.5 keV for elemental oxygen, which confirmed the formation of the PO–Fe3O4MNPs. The peak at 0.3 keV revealed the existence of carbon that originated from the biomolecules of the leaf extract.Figure 4 (a) FESEM images in 500 nm, (b) 200 nm resolutions, (c) mapping, and (d) EDX of PO–Fe3O4MNP magnetic nanoadsorbent. (a)(b)(c)(d)The surface functional groups and capping agents of PO–Fe3O4MNPs which are responsible for the reduction and stabilization were studied by FTIR spectroscopy. Figure 5 illustrates the FTIR spectrum of PO–Fe3O4MNPs. The peaks were observed at 582 cm-1 and 790 cm-1 due to Fe-O vibrations of Fe3O4 [48]. The band at 3394 cm-1 is attributed to the O–H group of polyphenolic compounds [49]. The band at 1662 cm–1 is assigned to –C=C– stretching vibration of alkenes [30]. The band at 1400 cm-1 belongs to C-C groups derived from aromatic rings found in the POL extract [50]. The bands between 1000 cm-1 and 1300 cm-1 were attributed to the C-O functional group in alcohols, ethers, ester, carboxylic acids, and amides in the extract [46]. These bands confirm the formation of PO–Fe3O4MNPs and showed that they were covered with polyphenols and other organic compounds which improved their stability.Figure 5 FTIR spectra of PO–Fe3O4MNPs. ## 3.2. Adsorption Process ### 3.2.1. Effect of pH pH is the main factor that affects the efficacy of the adsorption process. In the present study, the impact of pH on Pb(II) and Cd(II) removal efficiency using biosynthesized PO–Fe3O4MNPs was verified at pH values ranging from 3 to 8. Pb(II) and Cd(II) adsorption was low at pH values around 3, which was attributed to electrostatic repulsion between the adsorbent and metal ions in the solution. When the concentration of hydrogen ions in the solution increases, the adsorbent sites are occupied by the hydrogen ions instead of metal ions. The protonation of active sites thus tends to decrease the metal adsorption [37]. As presented in Figure 6, the removal efficiency for Pb(II) and Cd(II) at pH 3 is found to be 15.42% and 4.8%, respectively. 
Maximum adsorption was recorded at higher pH 6, i.e., maximum adsorptions for Pb(II) and Cd(II) were 100% and 95.32%, respectively.Figure 6 Effect of pH on Pb(II) and Cd(II) removal (Co=50mg/L, PO–Fe3O4MNP dosage=0.5g, contacttime=60min, temperature=20±2°C).Beyond the value of pH 6, solute ions will precipitate due to the formation of insoluble metal hydroxides, which then start precipitating from the solutions at higher pH values and make the true sorption studies impossible. This should be avoided during sorption tests because it makes distinguishing between sorption and metal precipitation difficult [51]. As a result, the pH value of 6 was chosen as the optimum and used in all subsequent experiments. These results are in agreement with the results obtained by Lung et al. [52]. ### 3.2.2. Effect of Contact Time The influence of contact time on Pb(II) and Cd(II) removal efficiency by PO–Fe3O4MNPs was investigated using varying the contact time from 10 to 70 minutes. As was observed in Figure 7, the adsorption of both metal ions by PO–Fe3O4MNPs was increased with increasing contact time. Both metal ions have more opportunities for contact with the adsorbent surface, when time increases the availability of more active groups that are present on the surface of the adsorbent increases [22]. The removal of Pb(II) was rapid during the first 30 min. However, no significant increase in the adsorption rate was found after 30 min. The concentration of Cd(II) decreased within 50 min and remained almost constant after an hour, implying that adsorption is rapid and reaches saturation within an hour. This is a promising result because equilibrium time is critical in wastewater treatment plants that are economically viable [53].Figure 7 Effect of contact time on removal of Pb(II) and Cd(II) (Co=50mg/L, PO–Fe3O4MNP dosage=0.5g, pH=6, temperature=20±2°C). ### 3.2.3. Effect of Initial Metal Concentration The effect of the initial metal concentration on the removal efficiency by 0.5 g PO–Fe3O4MNPs at optimal pH was investigated using solutions with varying initial metal concentrations (10, 50, 100, and 150 mg/L). Figure 8 shows that there is a lowering in adsorption of lead and cadmium ions with increasing in initial metal ion concentration in solution. The initial metal concentration plays an important role in the removal efficiency since there is a constant active binding site for a given mass of adsorbent where the fixed amount of metals can be adsorbed. Thus, increasing the metal concentration in solution against the same quantity of adsorbent decreases the adsorption capacity. These results agreed with the results obtained by Ebrahim et al. [35] and Das and Rebecca [54].Figure 8 Effect of initial metal concentration on removal of Pb(II) and Cd(II) (PO–Fe3O4MNP dosage=0.5g, contacttime=60min, pH=6, temperature=20±2°C). ### 3.2.4. Effect of PO–Fe3O4MNP Dose The concentration of the adsorbent is one of the most important factors influencing the efficiency of the process and adsorption capacity. In this study, the impact of the various amounts of the biosynthesized PO–Fe3O4MNPs varying from 0.02 to 1.4 g was studied on the removal efficiency of the Pb(II) and Cd(II) ions. Figure 9 shows that, with increasing nanoadsorbent mass, the percentage of nanoadsorbents and the adsorption percent of both Pb(II) and Cd(II) ions increase. 
This is due to the fact that at increasing dosages, there are more available sites on the surface of nanoadsorbents [55].Figure 9 Effect of adsorbent dose on removal of Pb(II) and Cd(II) (Co=50mg/L, contacttime=60min, pH=6, temperature=20±2°C). ### 3.2.5. Adsorption Isotherms The adsorption isotherms provide important information concerning adsorption capacity, the adsorption mechanism between the contaminant and the adsorbent, and the contaminant distribution between the adsorbent and the solution [4]. Adsorption isotherms were determined by fitting the experimental data obtained at equilibrium time, with the isotherm models including Freundlich, Langmuir, Toth, and Khan. The Langmuir adsorption isotherm is based on monolayer adsorption on the surface and assumes a homogenous sorbent surface [39]. The Toth isotherm is an empirical modification of the Langmuir equation that is aimed at reducing the error between experimental and predicted equilibrium data. This method is especially useful for explaining systems with heterogeneous adsorption [40]. The Freundlich isotherm can be applied to adsorption processes on heterogonous surfaces; it gives the concept of multilayer adsorption and the exponential distribution of active sites on the surface of the sorbent [56]. Finally, the Khan isotherm represents both the Langmuir and Freundlich models, suggested for adsorbate adsorption from pure solutions [40].Figures10(a) and 10(b) show the comparison of Freundlich, Langmuir, Khan, and Toth models for lead and cadmium ions, respectively. The Freundlich model fits the experimental data better than the Langmuir, Khan, and Toth models based on the correlation coefficients, which denotes multilayer adsorption of Pb(II) and Cd(II). It was noticed that values of n for cadmium and lead were 1.301 and 2.236, respectively, indicating favorable adsorption between PO–Fe3O4MNPs and the metal ions. Table 1 illustrates the adsorption capacity of PO–Fe3O4MNPs compared to other adsorbents that have been reported in various literature. The adsorption capacity was 177.48 mg/g for cadmium and 108.2267 mg/g for lead (Table 2). This indicates that the adsorption capacity of PO–Fe3O4MNPs is higher than that of most of the adsorbents listed in Table 1.Figure 10 Comparison of isotherm models for adsorption of (a) Pb(II) and (b) Cd(II) ions onto PO–Fe3O4MNPs. (a)(b)Table 1 Comparison of maximum adsorption capacity of investigated PO–Fe3O4MNPs for Cd(II) and Pb(II) with other adsorbents reported in the various literature. MetalAdsorbentMaximum adsorption capacity (mg/g)ReferenceCd(II)Iron oxide nanoparticles (IONPs)18.32[27]Iron oxide nanoparticles15.5[37]Magnetite Green Fe3O4 nanoparticles18.73[52][email protected][58]SBA-15@Fe3O4@Isa140[59]c-MCM-4132.3[60]MCM-4829.13[61]PO–Fe3O4MNPs177.48This studyPb(II)Magnetite green Fe3O4 nanoparticles0.16[52]SBA-15@Fe3O4@Isa110[59]c-MCM-4158.5[60]MCM-4850.39[61]Iron nanocomposites (T-Fe3O4)100.0[62]Phytogenic magnetic nanoparticles (PMNPs)68.41[63]Fe3O4 nanoadsorbents64.97[64]PO–Fe3O4MNPs108.2267This studyTable 2 Parameters of Langmuir, Freundlich, Khan, and Toth isotherm models for Pb(II) and Cd(II) ions adsorption onto PO–Fe3O4MNPs. 
### 3.2.6. Kinetics

The kinetics of Cd(II) and Pb(II) adsorption on PO–Fe3O4MNPs were studied using the pseudo-first-order and pseudo-second-order kinetic models, as demonstrated in Figures 11 and 12. The kinetic parameters and correlation coefficients R² are documented in Table 3. The Cd(II) and Pb(II) adsorption data were well fitted by the pseudo-second-order equation (R² = 0.9999 for Pb(II) and 0.9593 for Cd(II)). Thus, chemisorption, through the sharing or exchange of electrons between the sorbent and the sorbate, is the rate-determining step for Cd(II) and Pb(II) adsorption on PO–Fe3O4MNPs [26, 65].

Figure 11 Adsorption kinetics for the adsorption of Pb(II) onto PO–Fe3O4MNPs: (a) pseudo-first-order; (b) pseudo-second-order.

Figure 12 Adsorption kinetics for the adsorption of Cd(II) onto PO–Fe3O4MNPs: (a) pseudo-first-order; (b) pseudo-second-order.

Table 3 Kinetic parameters for Pb(II) and Cd(II) ion biosorption onto PO–Fe3O4MNPs.

| Metal | Pseudo-first-order qe (mg/g) | K1 (min⁻¹) | R² | Pseudo-second-order qe (mg/g) | K2 (g/mg·min) | R² |
|---|---|---|---|---|---|---|
| Pb(II) | 0.7656 | 0.1016 | 0.9234 | 10.0280 | 0.5179 | 0.9999 |
| Cd(II) | 3.0280 | 0.0198 | 0.2874 | 9.0876 | 0.0166 | 0.9593 |
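The two rate laws in Table 3 can also be fitted directly in their nonlinear integrated forms, qt = qe(1 − e^(−K1·t)) and qt = qe²·K2·t/(1 + qe·K2·t). The sketch below does so with SciPy; the uptake-versus-time data are hypothetical placeholders, and the paper does not specify whether linearized or nonlinear fitting was used.

```python
# Pseudo-first-order and pseudo-second-order kinetic fits (nonlinear forms).
# The t/qt values are illustrative placeholders, not the study's measurements.
import numpy as np
from scipy.optimize import curve_fit

def pfo(t, qe, k1):
    # qt = qe * (1 - exp(-k1 * t))
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):
    # qt = qe^2 * k2 * t / (1 + qe * k2 * t); chemisorption-controlled uptake
    return qe**2 * k2 * t / (1.0 + qe * k2 * t)

t = np.array([10, 20, 30, 40, 50, 60, 70], dtype=float)  # contact time, min
qt = np.array([7.8, 9.0, 9.6, 9.8, 9.9, 10.0, 10.0])      # mg/g, illustrative

for name, model in [("PFO", pfo), ("PSO", pso)]:
    popt, _ = curve_fit(model, t, qt, p0=(10.0, 0.1), maxfev=10000)
    r2 = 1.0 - np.sum((qt - model(t, *popt))**2) / np.sum((qt - qt.mean())**2)
    print(f"{name}: qe = {popt[0]:.3f} mg/g, k = {popt[1]:.4f}, R^2 = {r2:.4f}")
```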
### 3.2.7. Thermodynamic Analysis

Temperature is another important factor that influences the remediation efficiency of the adsorption process. Figure 13(a) shows the variation of the percentage removal efficiency with temperature. The results show that changing the temperature from 20 to 35°C has no significant effect on sorption, so the adsorption experiments can be carried out at room temperature without any adjustment. Similar results were reported by Rasheed and Ebrahim [51]. However, the removal efficiency decreases beyond 35°C owing to a reduction in the number of active surface sites available for adsorption [66]. Furthermore, at high temperatures the attractive forces between the adsorbent and the adsorbate become weaker, and sorption therefore decreases [22]. In the temperature range of 20–55°C, the thermodynamics of the adsorption of Pb(II) and Cd(II) ions on PO–Fe3O4MNPs were determined to establish whether the process is endothermic or exothermic. The thermodynamic parameters, namely, the Gibbs free energy (ΔG°), enthalpy (ΔH°), and entropy (ΔS°), are obtained from the following equations:

$$\Delta G^{\circ} = -RT \ln K_c, \qquad K_c = \frac{C_{ad}}{C_e}, \tag{9}$$

$$\Delta G^{\circ} = \Delta H^{\circ} - T \Delta S^{\circ}. \tag{10}$$

Figure 13 (a) Effect of temperature on Pb(II) and Cd(II) ion removal; (b) thermodynamic plot for Pb(II); (c) thermodynamic plot for Cd(II) adsorption.

In these equations, R is the ideal gas constant, T is the absolute temperature (K), Kc is the equilibrium constant, Cad is the amount of Pb(II) or Cd(II) adsorbed on PO–Fe3O4MNPs per liter of solution (mg/L), and Ce is the equilibrium concentration of Pb(II) or Cd(II) in solution (mg/L). The enthalpy and entropy changes were calculated from the slope and intercept of the plot of ln Kc versus 1/T shown in Figures 13(b) and 13(c). The calculated ΔG°, ΔH°, and ΔS° values are arranged in Table 4. The negative values of ΔG° illustrate the spontaneity and feasibility of the adsorption reaction at a given temperature, and the increase of ΔG° with temperature indicates a decreasing degree of feasibility for Pb(II) and Cd(II) adsorption; for the process to be thermodynamically favorable, ΔG° must be negative [21]. The negative ΔH° values, −176.32 kJ/mol for Pb(II) and −34.23 kJ/mol for Cd(II), show that the adsorption is exothermic. Moreover, the ΔS° values for Pb(II) and Cd(II) are −0.51622 and −0.09403 kJ/mol/K, respectively; these negative values indicate reduced randomness at the adsorbate–adsorbent interface during Pb(II) and Cd(II) adsorption.

Table 4 Thermodynamic parameters for Pb(II) and Cd(II) adsorption onto PO–Fe3O4MNPs.

| Metal | Temperature (K) | ΔG° (kJ/mol) | ΔH° (kJ/mol) | ΔS° (kJ/mol/K) | R² |
|---|---|---|---|---|---|
| Pb(II) | 293 | −26.357 | −176.323 | −0.51622 | 0.9476 |
| | 308 | −15.711 | | | |
| | 318 | −10.322 | | | |
| | 328 | −9.194 | | | |
| Cd(II) | 293 | −7.042 | −34.229 | −0.09403 | 0.9145 |
| | 308 | −4.691 | | | |
| | 318 | −4.058 | | | |
| | 328 | −3.873 | | | |
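The van't Hoff treatment behind Table 4 is a single linear regression. The sketch below reproduces it with NumPy under Eqs. (9) and (10); the Kc values are illustrative, back-calculated from the Pb(II) ΔG° magnitudes in Table 4, so the fitted ΔH° and ΔS° come out near the tabulated values.

```python
# Thermodynamic parameters from a van't Hoff plot:
# ln Kc = -dH/(R*T) + dS/R, and dG = -R*T*ln Kc (Eqs. (9)-(10)).
# Kc values are illustrative, back-calculated from Table 4's Pb(II) dG values.
import numpy as np

R = 8.314e-3  # ideal gas constant, kJ/(mol K)

T = np.array([293.0, 308.0, 318.0, 328.0])   # K
Kc = np.array([50000.0, 460.0, 50.0, 29.0])  # Kc = Cad / Ce, illustrative

# Linear fit of ln Kc against 1/T: slope = -dH/R, intercept = dS/R.
slope, intercept = np.polyfit(1.0 / T, np.log(Kc), 1)
dH = -R * slope           # kJ/mol; negative here, i.e., exothermic
dS = R * intercept        # kJ/(mol K)
dG = -R * T * np.log(Kc)  # kJ/mol at each temperature

print(f"dH = {dH:.2f} kJ/mol, dS = {dS:.5f} kJ/(mol K)")
print("dG (kJ/mol):", np.round(dG, 2))
```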
### 3.2.8. Characterizations of Pb(II)- and Cd(II)-Loaded PO–Fe3O4MNPs

Pb(II) and Cd(II) uptake onto PO–Fe3O4MNPs was confirmed by XRD, TEM, FESEM, EDX, and elemental mapping of the Pb(II)- and Cd(II)-loaded PO–Fe3O4MNPs. About 0.5 g of PO–Fe3O4MNPs was agitated in 50 mg/L Pb(II) and Cd(II) solutions under optimal conditions (temperature: 20 ± 2°C; pH: 6; agitation speed: 200 rpm; equilibrium time: 60 min). The suspension was filtered using Whatman filter paper (no. 1), and the residue was dried in an oven at 50°C overnight.

The XRD patterns of the Pb(II)- and Cd(II)-loaded PO–Fe3O4MNPs are shown in Figure 14. After adsorption of Cd(II) by PO–Fe3O4MNPs, three new peaks centered at 32, 48, and 55° were observed, attributed to the presence of Cd(II) ions. In the case of Pb(II) adsorption, two peaks centered at 31 and 48° were observed. In addition, the XRD spectra showed no changes in the characteristic peaks of PO–Fe3O4MNPs, confirming that the Cd(II) and Pb(II) ions are adsorbed on the surface of the PO–Fe3O4MNPs.

Figure 14 XRD spectra of (a) lead-loaded PO–Fe3O4MNPs, (b) cadmium-loaded PO–Fe3O4MNPs, and (c) PO–Fe3O4MNPs.

The TEM images of PO–Fe3O4MNPs after Cd(II) and Pb(II) adsorption are shown in Figures 15(a) and 15(b). Many black particles were observed, which may be attributed to high concentrations of surface-bound metal ions causing slight agglomeration of the particles after adsorption, and which also reflects the magnetic character of the PO–Fe3O4MNPs. Similar results were reported by Lin et al. [27].

Figure 15 TEM images of (a) cadmium-loaded PO–Fe3O4MNPs and (b) lead-loaded PO–Fe3O4MNPs; FESEM images of (c) cadmium-loaded PO–Fe3O4MNPs and (d) lead-loaded PO–Fe3O4MNPs.

FESEM was used to examine the morphology and topographic features of the Pb(II)- and Cd(II)-loaded PO–Fe3O4MNPs. The FESEM images revealed some aggregation and a slight increase in the dimensions of the PO–Fe3O4MNPs, as shown in Figures 15(c) and 15(d). However, the morphology of the individual nanoparticles barely changed, indicating that the removal of Pb(II) and Cd(II) ions occurred by adsorption.

The EDX analysis of PO–Fe3O4MNPs after adsorption of Cd(II) and Pb(II) is presented in Figures 16(a) and 16(b). The EDX patterns show magnesium, oxygen, sulfur, iron, calcium, and cadmium on the surface of the cadmium-loaded PO–Fe3O4MNPs, and oxygen, magnesium, calcium, sulfur, iron, and lead on the surface of the lead-loaded PO–Fe3O4MNPs, clearly confirming the successful adsorption of Pb(II) and Cd(II) on the PO–Fe3O4MNP surfaces. These results agree well with those obtained by Bagbi et al. [67]. The elemental mapping (Figures 16(c) and 16(d)) also confirmed the adsorption of Pb(II) and Cd(II) (shown in green) and shows that the Pb(II) and Cd(II) ions are uniformly distributed on the surface of the PO–Fe3O4MNPs.

Figure 16 EDX spectra of (a) cadmium-loaded PO–Fe3O4MNPs and (b) lead-loaded PO–Fe3O4MNPs; elemental mapping images of (c) cadmium-loaded PO–Fe3O4MNPs and (d) lead-loaded PO–Fe3O4MNPs.
## 4. Conclusion

In this study, Portulaca oleracea leaf extract was successfully used as a reductant in the synthesis of PO–Fe3O4MNPs. The biosynthesized PO–Fe3O4MNPs were characterized and used as adsorbents for the removal of Pb(II) and Cd(II) ions from aqueous solution in a batch adsorption system. The batch experiments showed that the removal efficiency of Pb(II) and Cd(II) by PO–Fe3O4MNPs increased with increasing pH (up to 6), contact time, and PO–Fe3O4MNP dosage. On the other hand, increases in metal concentration and temperature reduced the Pb(II) and Cd(II) removal efficiency of the PO–Fe3O4MNPs. Isotherm studies revealed that the Freundlich model properly predicts the adsorption equilibrium data, which indicates multilayer adsorption.
The kinetic studies indicated pseudo-second-order adsorption, while the thermodynamic studies showed the adsorption to be exothermic and spontaneous. Overall, this study shows that PO–Fe3O4MNPs can be considered a fast, efficient, and biocompatible nanoadsorbent with promising future applications in environmental remediation and nanobiotechnology.

---

*Source: 1024554-2022-08-16.xml*
2022
# Flavonoid Naringenin: A Potential Immunomodulator for Chlamydia trachomatis Inflammation

**Authors:** Abebayehu N. Yilma; Shree R. Singh; Lisa Morici; Vida A. Dennis
**Journal:** Mediators of Inflammation (2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/102457

---

## Abstract

Chlamydia trachomatis, the agent of bacterial sexually transmitted infections, can manifest itself as acute cervicitis, pelvic inflammatory disease, or a chronic asymptomatic infection. Inflammation induced by C. trachomatis contributes greatly to the pathogenesis of disease. Here we evaluated the anti-inflammatory capacity of naringenin, a polyphenolic compound, to modulate inflammatory mediators produced by mouse J774 macrophages infected with live C. trachomatis. Infected macrophages produced a broad spectrum of inflammatory cytokines (GM-CSF, TNF, IL-1β, IL-1α, IL-6, IL-12p70, and IL-10) and chemokines (CCL4, CCL5, CXCL1, CXCL5, and CXCL10), which were downregulated by naringenin in a dose-dependent manner. The enhanced protein and mRNA expression of TLR2 and TLR4, as well as of the CD86 costimulatory molecule, on infected macrophages was modulated by naringenin. Pathway-specific inhibition studies disclosed that p38 mitogen-activated protein kinase (MAPK) is involved in the production of inflammatory mediators by infected macrophages. Notably, naringenin inhibited the ability of C. trachomatis to phosphorylate p38 in macrophages, suggesting a potential mechanism for its attenuation of the concomitantly produced inflammatory mediators. Our data demonstrate that naringenin is an immunomodulator of inflammation triggered by C. trachomatis, possibly mediated upstream by modulation of the TLR2, TLR4, and CD86 receptors on infected macrophages and downstream via the p38 MAPK pathway.

---

## Body

## 1. Introduction

Sexually transmitted Chlamydia trachomatis infection is of widespread public health concern because of its prevalence and potentially devastating reproductive consequences, including pelvic inflammatory disease (PID), infertility, and ectopic pregnancy [1–3]. The negatively charged elementary bodies (EBs), the infectious particles of C. trachomatis, invade the mucosal surfaces of the female genital tract and persist there for a long time [2]. Abundant in vitro data suggest that the inflammatory response to Chlamydiae is initiated and sustained by actively infected host cells, including epithelial cells and resident macrophages [4]. C. trachomatis has the ability to infect both epithelial cells and resident macrophages. These infected host cells act as first responders that initiate and propagate immune responses, which later participate in the initiation of adaptive immune responses. Activation of adaptive immune responses consequently leads to the accumulation of effector T and B cells at the site of Chlamydia infection and plays a critical role in controlling the infection [5, 6]. However, C. trachomatis uses various strategies to escape the host immune response and persist for a prolonged period of time, subsequently leading to the many disease manifestations associated with the infection. This is a common scenario for most intracellular organisms, such as Mycobacteria, where infected cells produce excessive inflammatory mediators that contribute to disease manifestation by damaging neighboring cells [7]. For example, results from studies using the murine model of C. trachomatis revealed that tubal dilation frequently occurred as an end result of a primary infection, suggesting that the inflammatory process resulting from a single C. trachomatis infection is sufficient to cause long-term tissue damage [8].
As with other infectious microorganisms, inflammatory mediators are documented hallmarks of C. trachomatis infection and its pathogenesis [4–6]. Because of the inherent difficulties in acquiring human tissue samples for study, researchers have taken advantage of multiple animal models of Chlamydia infection to examine the nature and timing of the inflammatory response. We have shown by in vitro experiments that primary Chlamydia infection of human epithelial cells and mouse macrophages occurs within 2 days of infection and is characterized by significant production of IL-6, TNF, and IL-8 [9]. It is well documented that inflammatory cytokines and chemokines play critical roles in the recruitment and chemoattraction of neutrophils and other leukocytes. Neutrophils have the capability to destroy accessible EBs, and when recruited in high numbers, they release matrix metalloproteinases (MMPs) and neutrophil elastase, which have been shown to contribute to tissue damage [10, 11].

To control inflammation triggered by infectious organisms, alternative strategies that could balance the levels of inflammatory mediators released during infection are of intense interest. Recently, active compounds with the capacity to modulate host inflammatory responses have received considerable attention as potential new therapeutic agents for the treatment of inflammatory diseases [12–15]. Naringenin is a naturally occurring polyphenolic compound containing two benzene rings linked together by a heterocyclic pyrone ring [16]. Naringenin is a normal constituent of the human diet in grapefruit and tomatoes and is known to exhibit a variety of biological activities, including enzyme-inhibitory, antioxidant, anticancer, and anti-inflammatory effects [17–21].

Since its discovery, naringenin's wide range of pharmacological properties has attracted the attention of many researchers, particularly its anti-inflammatory properties, which have been actively studied in macrophage and ex vivo human whole-blood models [22–24]. In this study, we investigated the anti-inflammatory capacity of naringenin to regulate cytokines and chemokines produced by mouse J774 macrophages infected with live C. trachomatis (MoPn Nigg II). We used multiplex ELISA to determine the broad range of inflammatory cytokines and chemokines produced during the interaction of C. trachomatis with macrophages. We then assessed the ability of naringenin to regulate the production levels of these mediators. Next, we explored the potential mechanism(s) by which naringenin may modulate inflammatory mediators by investigating its effect on the TLR2, TLR4, and CD86 receptors, as well as on the p38 MAPK pathway. The findings from our study are discussed here in the context of naringenin as a potential new immunomodulator of C. trachomatis induced inflammation.

## 2. Materials and Methods

### 2.1. Cell Culture and Infectivity

Mouse J774 macrophages were obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA) and cultured as already described [9]. C. trachomatis MoPn Nigg II was purchased from ATCC (ATCC VR-123) and propagated as previously described [9].
To establish infection, macrophages (10^6 cells/well) were seeded in 24-well plates for 24 h, after which they were infected with live C. trachomatis infectious particles (10^5) in 500 μL of growth media/well. The cells were then incubated at 37°C under 5% CO2, and culture supernatants were collected at 48 h after infection. The optimum bacterial dose and duration of infection were determined as reported [9]. As a positive control, macrophages (10^6 cells/well) were stimulated with E. coli LPS (1 μg/mL), and culture supernatants were collected at 48 h after stimulation. Collected supernatants were centrifuged at 450 ×g for 10 min at 4°C and stored at −80°C until used.

### 2.2. Preparation of Naringenin

The stock solution of naringenin (Sigma, St. Louis, MO, USA) was prepared by dissolving 40 mg of naringenin in 1 mL of dimethyl sulfoxide (DMSO). After a 2-day infection of macrophages with C. trachomatis, the media were replaced with fresh media containing various concentrations (0.01, 0.1, 1, and 10 μg/mL) of naringenin. Cell-free supernatants were collected after an additional 48 h of incubation following centrifugation at 450 ×g for 10 min at 4°C and stored at −80°C until used.

### 2.3. Inflammatory Cytokines and Chemokines

Milliplex mouse 32-plex cytokine and chemokine detection reagent (catalogue number MPXMCYTO-70K-PMX32) was purchased from Millipore (EMD Millipore Corporation, Billerica, MA, USA), and the assay was performed as described [25].

### 2.4. Cytotoxicity Studies

The cytotoxicity of naringenin to mouse J774 macrophages was measured using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) dye reduction assay and the CellTiter 96 Cell Proliferation Assay kit (Promega, Madison, WI, USA). Cells were seeded in a 96-well plate at a density of 10^5 cells/well in 50 μL media and incubated overnight at 37°C under 5% CO2. Naringenin was added to the cells at concentrations ranging from 0.1 to 100 μg/mL; after 48 h, the supernatants were removed, the cells were washed twice with sterile PBS, 15 μL of MTT dye solution was added to each well, and the cells were further incubated for 3 h at 37°C under 5% CO2. To stop the reaction, 100 μL of solubilization solution/stop mixture was added to each well, and the plates were incubated for 30 min at room temperature (RT). Absorbance at 570 nm was measured using a Tecan Sunrise plate reader (TECAN US Inc., Durham, NC, USA). The percentage of cell viability was obtained from the optical density readings of naringenin-treated cells compared to those of normal (control) cells, where % viability = [A]test/[A]control × 100, [A]test is the absorbance of the test sample, and [A]control is the absorbance of the control sample.
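The viability calculation in Section 2.4 is a simple ratio over replicate wells. The sketch below illustrates it; the absorbance readings and the doses shown are hypothetical placeholders, not the study's raw data.

```python
# Percent viability from MTT absorbance readings at 570 nm, as defined in
# Section 2.4: % viability = A_test / A_control * 100.
# All absorbance values below are illustrative placeholders.
import numpy as np

a_control = np.array([0.82, 0.79, 0.85])   # untreated control wells

# Naringenin dose (ug/mL) -> replicate test-well absorbances (hypothetical)
a_test = {0.1: [0.80, 0.81, 0.78],
          1.0: [0.78, 0.76, 0.80],
          10.0: [0.74, 0.72, 0.75],
          100.0: [0.21, 0.18, 0.23]}

ctrl_mean = a_control.mean()
for dose, wells in a_test.items():
    viability = np.mean(wells) / ctrl_mean * 100.0
    print(f"{dose:>6} ug/mL: {viability:.1f} % viable")
```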
### 2.5. Flow Cytometry

Mouse J774 macrophages (10^6 cells/mL) were left uninfected or were infected with C. trachomatis, and after 48 h of infection the media were removed and replenished with fresh media containing 1 μg/mL of naringenin. Following incubation for an additional 48 h, cells were scraped from the wells, washed, and then blocked with Fc blocking antibody (BD Bioscience) in FACS (fluorescence-activated cell sorting) buffer (PBS containing 0.1% NaN3 and 1% fetal bovine serum) for 15 min at 4°C. Cells were next washed two times, followed by staining with fluorochrome-conjugated antibodies (50 μL in FACS buffer) against mouse TLR2 (PE), TLR4 (FITC), CD80 (PE-Cy7), and CD86 (APC) (eBiosciences). The optimum concentrations of all fluorochromes were predetermined in our laboratory. Cells were incubated with the fluorochrome antibodies for 30 min at 4°C, washed 2 times, and then fixed using 2% paraformaldehyde solution. Data were acquired on a BD FACSCanto II flow cytometer (BD Bioscience) with at least 10^5 events for each sample. TLR2-, TLR4-, CD80-, and CD86-positive cells and their mean fluorescence intensity (MFI) were analyzed using FlowJo software (TreeStar Inc., Ashland, OR, USA).

### 2.6. RNA Extraction and Quantitative Real-Time PCR (qRT-PCR)

Mouse J774 macrophages (3 × 10^6 cells/well) were infected with live C. trachomatis (3 × 10^5 IFU/well) in 6-well plates for 48 h, followed by replacement with fresh media containing 1 μg/mL of naringenin. RNA was extracted from the cell pellets using the Qiagen RNeasy Kit (Qiagen Inc., Valencia, CA, USA), which included a DNase-I digestion step. qRT-PCR was employed to quantify the mRNA gene transcripts of CD86 and TLR2 using the TaqMan RNA-to-CT 1-step kit in combination with TaqMan gene expression assays (Applied Biosystems by Life Technologies, Foster City, CA, USA) as reported [25]. Amplification of gene transcripts was performed according to the manufacturer's protocol using an ABI ViiA 7 real-time PCR system (Applied Biosystems by Life Technologies) and standard amplification conditions. The relative changes in gene expression were calculated using the 2^−ΔΔCT method, where all values were normalized with respect to the mRNA levels of the "housekeeping" gene GAPDH. Amplification using 50 ng RNA was performed in a total volume of 20 μL. Each real-time PCR assay was performed in triplicate, and the results are expressed as the mean ± SD.
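For readers unfamiliar with the 2^−ΔΔCT (Livak) method named above, the sketch below spells out the arithmetic. The function name and all Ct values are hypothetical placeholders, not the study's measurements.

```python
# Relative expression by the 2^-ddCt (Livak) method, as used in Section 2.6:
# dCt  = Ct(target) - Ct(GAPDH)
# ddCt = dCt(treated sample) - dCt(calibrator, e.g., uninfected cells)
# Ct values below are illustrative placeholders.

def rel_expression(ct_target, ct_gapdh, ct_target_cal, ct_gapdh_cal):
    """Fold change = 2^-[(Ct_t - Ct_gapdh) - (Ct_t_cal - Ct_gapdh_cal)]."""
    ddct = (ct_target - ct_gapdh) - (ct_target_cal - ct_gapdh_cal)
    return 2.0 ** (-ddct)

# Hypothetical TLR2 Ct values: infected cells vs. uninfected calibrator.
fold = rel_expression(ct_target=24.1, ct_gapdh=18.0,
                      ct_target_cal=26.3, ct_gapdh_cal=18.1)
print(f"TLR2 fold change vs uninfected: {fold:.2f}x")
```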
### 2.7. Inhibition of the p38 MAP Kinase Pathway

To determine whether the p38 MAPK pathway is employed by C. trachomatis to trigger the production of cytokines and chemokines by mouse J774 macrophages, we blocked p38 MAPK signaling with its specific inhibitor, SB203350 (EMD Millipore Corporation, Billerica, MA, USA). Mouse J774 macrophages (10^6 cells/well) were preincubated with 20 μM SB203350 for 24 h, infected with C. trachomatis (10^5 IFU/well), and incubated for an additional 72 h. Cell-free supernatants were collected by centrifugation, and the production levels of randomly selected cytokines (IL-6, TNF, IL-12p70, and IL-1β) and chemokines (CCL5 and CXCL10) were determined using single ELISAs as described previously [9]. The 20 μM concentration and the 24 h inhibition time point used for SB203350 were optimal conditions predetermined in our laboratory.

### 2.8. Phosphorylation of p38 MAPK by C. trachomatis

Mouse J774 macrophages (3 × 10^6 cells/well) were seeded in 6-well plates and infected with live C. trachomatis (3 × 10^5 IFU/well) for 15, 30, and 60 min. Cells were lysed at the different time points using 1x RIPA buffer (Sigma) supplemented with phosphatase inhibitors (Sigma). Lysates were immediately transferred to microcentrifuge tubes, sonicated for 15 s to shear DNA and reduce sample viscosity, and centrifuged at 450 ×g for 10 min at 4°C. Protein concentrations were determined by the bicinchoninic acid (BCA) assay (Thermo Scientific, Rockford, IL, USA). Proteins were separated by SDS-PAGE, transferred to nitrocellulose membranes, and blocked with blocking buffer (tris-buffered saline (TBS) containing 0.1% Tween-20 and 5% w/v nonfat milk). After blocking for 1 h, the membrane was washed 3 times for 5 min each with wash buffer (TBS, 0.1% Tween-20) and incubated overnight with gentle agitation at 4°C with phospho-p38 or total p38 primary antibodies (Cell Signaling Technology Inc., Beverly, MA, USA), each at a dilution of 1:1000 in primary antibody dilution buffer (1x TBS, 0.1% Tween-20, 5% bovine serum albumin (BSA), and dH2O). Following overnight incubation, the membrane was washed 3 times and incubated with HRP-conjugated secondary antibody (Cell Signaling) at 1:2000 (diluted in blocking buffer) with gentle agitation for 1 h at RT. After 3 washes, protein bands were visualized using LumiGLO substrate (Cell Signaling) on scientific imaging film (Kodak Inc., Rochester, NY, USA). The sizes of total p38 and phospho-p38 were determined from a biotinylated protein ladder. The optimum antibody concentrations were used according to the manufacturer's suggestions. Biotinylated secondary antibody (1:1000, diluted in blocking buffer) was used to detect the protein markers. For some experiments, macrophages were infected with C. trachomatis in the presence and absence of naringenin at 1 μg/mL to determine whether naringenin may exert its anti-inflammatory activity by blocking the p38 MAPK pathway. Protein lysates were collected and used in western blotting to detect the phosphorylation of p38 MAPK as described in the preceding paragraph.

### 2.9. Statistical Analysis

The two-tailed unpaired Student's t-test was used to compare the data. P < 0.05 was considered significant.
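The comparison described in Section 2.9 maps directly onto SciPy's equal-variance t-test. The sketch below shows the call; the cytokine readings are illustrative placeholders, not the study's data.

```python
# Two-tailed unpaired Student's t-test (Section 2.9) with SciPy.
# The cytokine readings (pg/mL) are illustrative placeholders.
from scipy import stats

infected = [1850.0, 1920.0, 1780.0]   # hypothetical CT-infected wells
treated = [1100.0, 1040.0, 1180.0]    # hypothetical CT + naringenin wells

t_stat, p_value = stats.ttest_ind(infected, treated)  # two-sided, equal var
print(f"t = {t_stat:.3f}, P = {p_value:.4f}")
print("significant at P < 0.05" if p_value < 0.05 else "not significant")
```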
## 3. Results

### 3.1. The Effect of Naringenin on the Levels of Inflammatory Cytokines and Chemokines Produced by C. trachomatis Infected Macrophages
Like other infectious agents, C. trachomatis induces the secretion of various inflammatory mediators upon infection of macrophages. In the present study, we employed multiplex ELISA to identify and quantify cytokines and chemokines in supernatants from macrophages infected with live C. trachomatis. Infected macrophages produced significant (P < 0.001) levels of cytokines (IL-6, TNF, IL-10, IL-12p70, IL-1α, IL-1β, and GM-CSF) and chemokines (CCL4, CXCL10, CXCL5, CCL5, and CXCL1) (Figures 1(a) and 1(b)). However, the production levels of these mediators were reduced in a dose-dependent manner in the presence of added naringenin (Figures 1(a) and 1(b)). Supernatants of C. trachomatis infected macrophages that contained 10 μg/mL of added naringenin showed a significant reduction in the levels of cytokines and chemokines (P < 0.001) (Figures 1(a) and 1(b)). Significant (P < 0.01) inhibitory activity was observed with as little as 1 μg/mL of naringenin (Figure 1(a)), underscoring its potency even at low concentrations. Naringenin similarly reduced the production levels of cytokines and chemokines in a dose-dependent manner (P < 0.001) when LPS was used as the stimulant, especially at 10 μg/mL (Figures 1(a) and 1(b)). Overall, our results indicate that naringenin has an anti-inflammatory effect against C. trachomatis induced inflammatory mediators in macrophages.

Figure 1 Naringenin downregulates inflammatory mediators in C. trachomatis infected mouse J774 macrophages. Macrophages (10^6 cells/mL) were seeded in 24-well plates and either infected with live C. trachomatis (10^5 IFU/well) or stimulated with LPS at 1 μg/mL. After a 2-day infection, naringenin at 0.01 to 10 μg/mL was added to the cell cultures, and the production levels of cytokines (a) and chemokines (b) were quantified in supernatants collected 2 days later by multiplex ELISA. *** indicates a significant difference (P < 0.001) between C. trachomatis treated cells and those treated with various concentrations of naringenin using the two-tailed unpaired Student's t-test. Each bar represents the average of samples run in duplicate, and the data are representative of three separate experiments.

### 3.2. The Anti-Inflammatory Effect of Naringenin Is Not due to Cell Death

To ensure that the inhibitory effect of naringenin is not attributable to cell death, cytotoxicity studies were performed employing the MTT assay and J774 macrophages exposed to various concentrations of naringenin (0.01 to 100 μg/mL). With the exception of the 100 μg/mL concentration, all tested concentrations yielded between 85% and 100% cell viability, indicating that naringenin is effectively nontoxic to macrophages at these concentrations (Figure 2(a)). Figure 2(b) depicts a representative 96-well plate with cell death occurring in the presence of 100 μg/mL naringenin (yellow color) versus viable cells at the other naringenin concentrations (dark purple color). Overall, these results demonstrate that naringenin's effect on the inflammatory mediators produced by C. trachomatis infected macrophages is not attributable to cell death but rather to alternative mechanisms.

Figure 2 Naringenin toxicity to mouse J774 macrophages is concentration dependent. Macrophages were seeded in a 96-well plate at a density of 10^5 cells/well/50 μL in the presence or absence of naringenin at concentrations ranging from 0.1 to 100 μg/mL. The CellTiter 96 Cell Proliferation Assay kit was used to determine cell viability.
Absorbance was read at 570 nm, and the percentage of cell viability was calculated from the optical density readings compared to normal cells (a). A representative plate before the absorbance readings, where dark purple and yellow wells depict live and dead cells, respectively (b). The data are representative of three separate experiments.

### 3.3. Naringenin Downregulates the Expression Levels of CD86, TLR2, and TLR4 on J774 Macrophages

Receptors on host cell surfaces, such as TLRs, recognize extracellular stimuli for subsequent intracellular signaling. Multiple studies have shown that TLR2 and TLR4 play pivotal roles in the recognition of C. trachomatis [26–29]. To begin to understand the mechanism(s) by which naringenin modulates inflammatory mediators, we first asked whether naringenin affects the putative TLR2 and TLR4 receptors expressed on C. trachomatis infected mouse J774 macrophages. Compared to unstimulated cells, C. trachomatis infected cells expressed more TLR2 and TLR4 receptors, and this expression was markedly downregulated in the presence of added naringenin, especially for TLR2 (Figures 3(a) and 3(c)). In addition, the MFI of TLR2 and TLR4 on C. trachomatis infected cells was significantly increased (P < 0.05), as shown by ratios of 22 and 16, respectively, in comparison to those of untreated J774 cells and uninfected cells exposed to naringenin only (Figure 3(e)). When naringenin was added to C. trachomatis infected macrophages, the MFI of TLR2 and TLR4 was significantly reduced (P < 0.05) compared with that of C. trachomatis infected macrophages (Figure 3(e)), indicating the ability of naringenin to downregulate the expression of these receptors. Our results provide evidence that naringenin diminishes the recognition of C. trachomatis by its putative TLR2 and TLR4 receptors, possibly exerting its anti-inflammatory downstream effects during reinfection of cells by C. trachomatis.

Figure 3 Naringenin downregulates the expression levels of CD86, TLR2, and TLR4 in C. trachomatis infected mouse J774 macrophages. Macrophages (10^6 cells/mL) were seeded in 24-well plates and infected with C. trachomatis (10^5 IFU/well) or left uninfected. After a 2-day infection, 1 μg/mL naringenin was added to the cell cultures, and 2 days later the samples were analyzed by flow cytometry as described in Section 2. Shown are the expression shifts of TLR2 (a), CD80 (b), TLR4 (c), and CD86 (d) and their mean fluorescence intensity (MFI) (e) before and after infection of macrophages with C. trachomatis (CT) in the presence and absence of naringenin. *P < 0.05 is considered significant as compared to untreated cells (J774) and to cells treated with CT or CT + naringenin. The data are representative of two separate experiments.

Activated T cells produce additional inflammatory cytokines and chemokines to direct immune responses. For T cells to be fully activated, antigen-presenting cells must express costimulatory molecules such as CD80 and CD86 [30]. Therefore, downregulating the expression of CD80, CD86, or both may negatively impact the activation of T cells. Here we tested whether naringenin may impact T-cell activation by downregulating CD80 and CD86 expression on C. trachomatis infected macrophages. Our flow cytometric results show that naringenin at 1 μg/mL downregulates the expression of CD86 induced on C. trachomatis infected macrophages, but not that of CD80, as compared to macrophages exposed only to C. trachomatis (Figures 3(b) and 3(d)).
Moreover, naringenin significantly reduced (P < 0.05) the MFI of CD86 on C. trachomatis infected cells from 18 to 9 (Figure 3(e)). On the other hand, naringenin did not reduce the MFI of CD80 on infected cells (Figure 3(e)), indicating its selective modulation of costimulatory molecules on C. trachomatis infected cells. This finding further suggests that naringenin's anti-inflammatory effect is not limited to innate immune responses but extends to adaptive immune responses, since the expression of CD80 and/or CD86 plays a critical role in the activation of T cells during adaptive immune responses.

### 3.4. Effect of Naringenin on the mRNA Expression Levels of CD86 and TLR2

As further validation of our flow cytometric results, we next determined the effect of naringenin on the mRNA gene transcript expression levels of TLR2 and CD86 in C. trachomatis infected J774 macrophages. C. trachomatis enhanced the gene transcript expression levels of TLR2 and CD86, which were both significantly (P < 0.05) downregulated (up to a 2-fold decrease) in the presence of naringenin (1 μg/mL) (Figure 4). Together, these findings suggest that naringenin downregulates TLR2 and CD86 expression at both the protein and mRNA levels, underscoring its role in regulating C. trachomatis inflammation in macrophages.

Figure 4 Naringenin reduces the transcriptional activation of TLR2 and CD86 by C. trachomatis in mouse J774 macrophages. Macrophages (3 × 10^6 cells/mL) were left uninfected or were infected with live C. trachomatis (3 × 10^5 IFU/well). After a 2-day infection of macrophages with C. trachomatis (CT), naringenin (1 μg/mL) was added to the macrophage cultures, and 2 days later total RNA was extracted as described in Section 2. One-step qRT-PCR was used to quantify the mRNA gene transcripts of TLR2 and CD86. *P < 0.05 is considered significant when compared to untreated cells (J774) and to cells treated with CT or CT + naringenin. Data shown are an average of triplicate runs representative of two separate experiments.

### 3.5. C. trachomatis Uses the p38 MAPK Pathway to Induce Inflammatory Mediators

Among the many MAPK pathways, a strong link has been established between the p38 signaling pathway and inflammation [31]. Multiple studies have suggested that p38 is a key MAPK pathway activated by intracellular pathogens to induce inflammatory mediators [31–33]. To investigate whether the p38 pathway is exploited by C. trachomatis for the production of its concomitantly elicited inflammatory mediators, we treated J774 macrophages with a p38-specific inhibitor, followed by quantification of randomly selected cytokines and chemokines in the collected supernatants. With the exception of IL-1β, the levels of IL-6, IL-12p70, TNF, CCL5, and CXCL10 were significantly reduced (P < 0.05) when macrophages were treated with the p38 inhibitor (Figure 5), suggesting that this pathway is used by C. trachomatis to drive their production by macrophages.

Figure 5 C. trachomatis employs the p38 MAPK pathway for the production of inflammatory mediators in mouse J774 macrophages. Macrophages (10^6 cells/mL) were seeded in 24-well plates in the presence and absence of the SB203350 p38 inhibitor for 24 h, after which they were left uninfected or were infected with live C. trachomatis (10^5 IFU/mL). Three days after infection with C. trachomatis (CT), supernatants were collected and the levels of inflammatory mediators were determined by single ELISAs.
### 3.5. C. trachomatis Uses the p38 MAPK Pathway to Induce Inflammatory Mediators

Among the many MAPK pathways, a strong link has been established between the p38 signaling pathway and inflammation [31]. Multiple studies have suggested that p38 is a key MAPK pathway that is activated by intracellular pathogens to induce inflammatory mediators [31–33]. To investigate whether the p38 pathway is exploited by C. trachomatis for production of its concomitantly elicited inflammatory mediators, we treated J774 macrophages with a p38-specific inhibitor and then quantified randomly selected cytokines and chemokines in collected supernatants. With the exception of IL-1β, our results show that the levels of IL-6, IL-12p70, TNF, CCL5, and CXCL10 were significantly reduced (P<0.05) when macrophages were treated with the p38 inhibitor (Figure 5), suggesting that this pathway is used by C. trachomatis to drive their production by macrophages.

Figure 5: C. trachomatis employs the p38 MAPK pathway for production of inflammatory mediators in mouse J774 macrophages. Macrophages (10^6 cells/mL) were seeded in 24-well plates in the presence and absence of the p38 inhibitor SB203580 for 24 h, after which they were left uninfected or infected with live C. trachomatis (10^5 IFU/well). Three days after infection with C. trachomatis (CT), supernatants were collected and the levels of inflammatory mediators were determined by single ELISAs. *P<0.05 is considered significant when compared to untreated cells (J774) and to cells treated with CT or CT + SB. Data shown are averages of duplicate runs and are representative of two separate experiments.

### 3.6. Naringenin Downregulates C. trachomatis-Induced Phosphorylation of p38 MAPK

Given that p38 MAPK mediates, in part, the production of inflammatory mediators by C. trachomatis infected macrophages, we investigated whether this pathway may be targeted by naringenin to exert its anti-inflammatory effect in macrophages. We first confirmed that C. trachomatis could indeed induce the phosphorylation of p38 MAPK in J774 macrophages for the production of its inflammatory mediators. Our time-kinetics experiment shows that C. trachomatis infected macrophages displayed the highest p38 phosphorylation at 60 min (Figure 6(a)). However, in the presence of naringenin, the phosphorylation of p38 was reduced, as indicated by the reduced band intensity (Figure 6(b)). Similarly, LPS induced the phosphorylation of p38 at 60 min of stimulation, but naringenin reduced its ability to induce phosphorylation of p38 (Figure 6(c)). Overall, our results show increased phosphorylation of p38 MAP kinase in C. trachomatis infected macrophages, which was downregulated by naringenin, suggesting a potential downstream mechanism by which naringenin regulates inflammatory mediators.

Figure 6: Naringenin downregulates the phosphorylation of p38 MAPK in C. trachomatis infected mouse J774 macrophages. Macrophages (3 × 10^6 cells/well) were seeded in 6-well plates and then infected with C. trachomatis (3 × 10^5 IFU/well) or stimulated with LPS (3 μg/well). After infection of macrophages, protein lysates were collected as indicated in Section 2. The presence of total p38 (p38; internal control) and phosphorylated p38 (p-p38) was determined by western blotting. Shown are the band intensities for the internal control and p-p38 at different time points for macrophages treated with C. trachomatis (CT) and LPS (a). Blots shown in (b) and (c) were developed 1 h after exposing macrophages to CT or LPS in the presence and absence of naringenin. The 43 kDa p-p38 and p38 proteins were identified by reference to a biotinylated protein ladder. (a) (b) (c)
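Band-intensity statements such as those in Figure 6 are conventionally quantified by densitometry, normalizing each phospho-p38 band to the total p38 loading control from the same lane. The following is a minimal sketch of that normalization; the lane intensities are hypothetical placeholders, and the snippet illustrates the general approach rather than the analysis pipeline used here.

```python
# Minimal sketch: densitometric normalization of western blot bands.
# Each phospho-p38 (p-p38) band is divided by the total-p38 band from
# the same lane, then expressed relative to the infected-only lane.
# All intensities are hypothetical placeholders, not measured data.

lanes = {
    "uninfected":      {"p_p38": 120.0, "p38": 1000.0},
    "CT":              {"p_p38": 850.0, "p38": 1020.0},
    "CT + naringenin": {"p_p38": 310.0, "p38": 990.0},
}

# Normalize p-p38 to the loading control in each lane.
norm = {lane: v["p_p38"] / v["p38"] for lane, v in lanes.items()}

# Express phosphorylation relative to the CT-only lane.
baseline = norm["CT"]
for lane, value in norm.items():
    print(f"{lane}: p-p38/p38 = {value:.2f} "
          f"({100 * value / baseline:.0f}% of CT)")
```

Normalizing to total p38 in the same lane corrects for unequal protein loading before lanes are compared.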
## 4. Discussion

Inflammatory responses to C. trachomatis are initiated and sustained by actively infected host cells, including epithelial cells and resident macrophages [4]. The influx of inflammatory cells in pathogen-induced diseases can be either beneficial or detrimental to the host [28]. Therefore, immunointervention strategies that can reduce the influx of inflammatory cells in a beneficial fashion could potentially impact the pathogenesis of disease. Along with other control strategies, our laboratory is interested in evaluating anti-inflammatory molecules to control C. trachomatis inflammation. Previously we showed that the anti-inflammatory cytokine IL-10 downregulates essential inflammatory mediators produced by epithelial cells infected with live C. trachomatis [9]. In the present paper we explored the natural flavonoid naringenin as a potential anti-inflammatory agent to regulate inflammatory mediators produced by C. trachomatis infected macrophages. Among the numerous structurally diverse flavonoids, we selected naringenin based on its abundance in nature and its potential applications in medicine. The following observations were made here: (1) multiplex ELISA revealed a spectrum of cytokines and chemokines that may perpetuate an early C. trachomatis inflammation; (2) naringenin downregulated the cytokines and chemokines produced by C. trachomatis infected macrophages; (3) naringenin downregulated TLR2 and TLR4 as well as the costimulatory molecule CD86 on infected macrophages; and (4) naringenin inhibited the ability of C. trachomatis to phosphorylate p38 MAPK for production of its inflammatory mediators by macrophages.

Activation of immune cells, especially macrophages, with microbial stimuli influences the nature and progression of disease. In this study, analysis of supernatants from C. trachomatis infected macrophages revealed increased levels of GM-CSF, IL-1α, IL-1β, IL-6, TNF, IL-12p70, and IL-10 after a 2-day infection, with TNF, IL-6, and IL-1α being the most robustly produced (Figure 1(a)). This observation is not surprising, since cytokines are secreted at different magnitudes during the infection process, and it is well reported that each secreted cytokine has its own specific role during infection [1, 4–8].
One plausible explanation for the lower levels of IL-12p70, IL-10, IL-1β, and GM-CSF may be differences in the time kinetics of their optimum secretion during the infection process. Interestingly, this finding is in agreement with previous studies in which lower levels of IL-10 were detected during Borrelia infection of human monocytes [34] and C. trachomatis infection of human epithelial cells and macrophages [9]. The heightened secretion of TNF, IL-6, and IL-1α by C. trachomatis infected macrophages may be relevant to the initiation of Chlamydia inflammation. It has been demonstrated that IL-6, TNF, and IL-1α have crucial roles in increasing intercellular adhesion molecule (ICAM) expression [4]. Infection of nonimmune host epithelial cells and resident tissue innate immune cells with Chlamydia results in an increase in adhesion molecules, whereby these molecules promote binding of small proteins such as chemokines on cell surfaces [4].

Chemokines are also produced during an infection to amplify the inflammatory process. Chemokines play a critical role in attracting leukocytes to the site of infection, where the presence of leukocytes can be either beneficial or detrimental to the host. The main leukocytes recruited and attracted by chemokines during an early inflammatory process are macrophages and neutrophils [1–6]. Our results show that C. trachomatis infected macrophages produced greater quantities of CCL4, CXCL10, CCL5, CXCL1, and CXCL5 (Figure 1(b)). The production levels of most chemokines are typically influenced by the type of cytokines present in the inflammatory milieu. The profiles of chemokines produced by infected macrophages in this study correlated with the high levels of IL-6, TNF, and IL-1α. High levels of IL-6, TNF, and IL-1α apparently cause chemokines to adhere to endothelial cell surfaces for efficient leukocyte attraction, mainly owing to an increase in ICAM [4]. Overall, our results clearly demonstrate that the spectrum of cytokines and chemokines produced by C. trachomatis infected macrophages may have significant roles in initiating the inflammatory process and thus the pathogenesis of disease.

Naringenin has broad-spectrum medicinal applications against bacterial, parasitic, and viral infections. Lakshmi et al. [35] showed antifilarial activity of naringenin against the filarial parasite Brugia malayi. Naringenin has also been shown to exhibit antimicrobial activity against pathogenic bacteria such as Listeria monocytogenes, Escherichia coli O157:H7, and Staphylococcus aureus [36]. Similarly, antiviral activity of naringenin has been shown against herpes simplex virus type-1 (HSV-1), poliovirus, parainfluenza virus type-3, and respiratory syncytial virus (RSV) [37]. Du and colleagues [21] demonstrated that naringenin regulates immune system function in a lung cancer model, where it reduced IL-4 but increased IL-2 and IFN-γ levels. In a different study, Shi et al. [38] showed that naringenin displayed an inhibitory role in allergen-induced airway inflammation by reducing IL-4, IL-13, CCL5, and CCL11. In the present study we have shown, for the first time, that naringenin has an anti-inflammatory effect in an in vitro C. trachomatis infection model. Naringenin reduced in a dose-dependent manner the levels of major inflammatory mediators secreted by C. trachomatis infected macrophages, an effect that was not attributed to cell death. These studies suggest that naringenin has broad immune-regulatory properties in different disease models, especially inflammatory diseases.
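As a concrete illustration of the viability criterion underlying the cell-death control, the sketch below applies the percent-viability formula from Section 2 (% viability = [A]test/[A]control × 100) to hypothetical A570 readings; the values are placeholders, not the measurements behind Figure 2.

```python
# Minimal sketch of the MTT percent-viability calculation used for
# Figure 2: % viability = A_test / A_control * 100, where A is the
# absorbance at 570 nm. All OD readings below are hypothetical.

od_control = 1.20  # mean A570 of untreated (normal) macrophages

# Mean A570 of macrophages exposed to naringenin (ug/mL -> OD);
# chosen so that only 100 ug/mL falls below the ~85% viability seen
# at all lower concentrations in Figure 2(a).
od_naringenin = {0.01: 1.18, 0.1: 1.15, 1: 1.12, 10: 1.05, 100: 0.35}

for conc, od in od_naringenin.items():
    viability = od / od_control * 100
    flag = "" if viability >= 85 else "  <- cytotoxic"
    print(f"{conc:>6} ug/mL: {viability:5.1f}% viable{flag}")
```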
In this study, we have clearly demonstrated that naringenin altered the levels of numerous cytokines and chemokines in C. trachomatis infected macrophages through its modulation of multiple inflammatory pathways. Induction of an inflammatory pathway initially starts when invasive pathogens are recognized by host cell surface receptor molecules such as TLRs, followed by activation of various signaling pathways. It is well documented that C. trachomatis is recognized by TLRs, specifically TLR2 and TLR4, on macrophages to induce secretion of inflammatory mediators, which can be either beneficial or detrimental to the host [29, 39]. In the present study we show enhanced expression of both TLR2 and TLR4 on C. trachomatis infected macrophages, whose expression levels were reduced by naringenin (Figures 3 and 4). Our study suggests the capacity of naringenin to inhibit the interaction of C. trachomatis with its upstream putative receptors to potentially mediate its anti-inflammatory effect in macrophages.

TLR-stimulated macrophages upregulate costimulatory molecules such as CD40, CD80, and CD86 to drive T-cell activation and proliferation. The CD28-mediated costimulatory signal can result in enhanced T-cell proliferation and cytokine production, which contributes to the development of various inflammatory diseases [40–42]. Our flow cytometry results demonstrate that C. trachomatis induced the expression of CD80 and CD86, with only CD86 expression being modulated by naringenin (Figures 3 and 4). Although not shown in this study, inhibiting CD80 and CD86 expression could impair the activation of T cells and eventually block effectors of the adaptive immune system. Lim and coworkers documented a significant reduction in the levels of IL-2 and IFN-γ when both CD80 and CD86 costimulatory molecules were inhibited, confirming the key role played by costimulatory molecules in functional T-cell activation [43]. Weakened T-cell activation is directly associated with reduced interaction between antigen-presenting cells (APCs) and T cells. Thus, our data provide mechanistic insights into C. trachomatis engulfment by macrophages, as indicated by the heightened expression of CD80 and CD86, which eventually contributes to the activation of adaptive immune responses.

Downregulation of only CD86 expression in the presence of naringenin provides evidence for its broader capability in modulating inflammatory responses during C. trachomatis infection. However, a perplexing question remains as to why naringenin inhibited CD86 but not CD80 expression, even though both are costimulatory molecules required for T-cell activation and both contribute to the cell-to-cell binding forces that underlie their recognition. It has been reported that treatment with CD80/86 blocking antibodies reduced the interaction force of cell : cell conjugates [43, 44]. Both CD80 and CD86 can bind to the T-cell stimulatory receptor CD28 [44] and to the inhibitory receptor CTLA4 [45]. CD86 appeared to strengthen APC : T-cell interactions more markedly than CD80, since a greater force reduction was observed after blocking CD86 alone than after disrupting CD80 alone [44, 45].
Therefore, the ability of CD86, and not CD80, to induce stronger APC : T-cell interactions indicates its crucial role in initiating immune responses.

Upon microbial recognition by TLRs, MAPK signaling pathways are activated to produce inflammatory mediators. Of the many MAPK pathways, p38 is considered an important pathway for inducing inflammatory mediators during C. trachomatis infection [46]. Our inhibition study supports this idea: in the presence of a p38 inhibitor, the levels of IL-12p70, IL-6, TNF, CCL5, and CXCL10 (Figure 5) were significantly reduced, suggesting that this pathway is employed by C. trachomatis to induce these respective inflammatory mediators. Furthermore, the phosphorylation of p38 by C. trachomatis in macrophages in this study (Figure 6) underscores that it triggers this pathway for producing its concomitant inflammatory mediators. Of utmost significance, naringenin inhibited the ability of C. trachomatis to phosphorylate p38 in macrophages, suggesting a possible mechanism for its attenuation of the concomitantly produced cytokines and chemokines. Other investigators have reported that naringenin's inhibitory role in allergen-induced airway inflammation is associated with its downregulation of NF-κB pathway activation via the MAPK pathway [38]. In other studies, naringenin was shown to manifest its anti-inflammatory functions in vitro by inhibiting NF-κB in macrophages [47, 48]. Shi et al. [38] also reported that naringenin can suppress mucus production by inhibiting NF-κB activity in a murine model of asthma. Overall, our findings, coupled with the above-mentioned reports, provide evidence that inflammatory signaling pathways including the MAPKs, especially p38, and NF-κB are potential targets for naringenin's anti-inflammatory effects.

C. trachomatis has a prolonged and unique developmental life cycle, which takes 24–72 h to complete after entry into target cells. This process involves lysis and reinfection of cells by the released EBs [4] after binding to their cognate cell surface receptors. Reinfection is reportedly one of the major characteristics of persistent C. trachomatis infection [4, 31], contributing to the pathogenesis of disease. The ability of naringenin to reduce cell surface receptor expression and the associated inflammatory signaling 48 h after infection of cells with C. trachomatis attests to its regulation of inflammatory mediators during the reinfection process. Even though we focused on selected cell surface receptors and signaling pathways in this study, we cannot dismiss the involvement of other receptors, such as the nucleotide-binding site/leucine-rich repeat (NBS/LRR) protein NOD2, which recognizes C. trachomatis [49] (and our unpublished observation), or the NF-κB signaling pathway that reportedly mediates naringenin's anti-inflammatory actions [38].

Admittedly, the precise mechanisms by which naringenin downregulates surface receptors and signaling pathways were not investigated here. Nevertheless, we cannot rule out the possibility that naringenin's regulatory activity may be a direct consequence of its reducing the C. trachomatis infectious load in macrophages, ultimately resulting in less induction of inflammatory mediators. Indeed, naringenin has been shown to have antibacterial activity against several pathogenic bacteria [36]. Whether or not naringenin has antibacterial activity against C. trachomatis in macrophages is the topic of our ongoing investigations.
In summary, intracellular microorganisms, including C. trachomatis, perpetuate themselves in host cells while inducing unwanted immune responses that amplify disease progression. In such scenarios, immunointervention approaches that focus on reducing unwanted host immune responses are attractive and can be viewed as alternative means to prevent or control severe inflammatory responses. Our findings presented here are the first, to our knowledge, to demonstrate that naringenin is an immunomodulator of inflammatory responses triggered by C. trachomatis in macrophages. Reduction of these inflammatory mediators by naringenin is mediated upstream by modulation of the TLR2, TLR4, and CD86 macrophage surface receptors and downstream via the p38 MAPK signaling pathway. More studies are warranted to further explore the in vivo relevance of naringenin in controlling severe inflammatory responses induced not only by C. trachomatis but also by other similar pathogenic microorganisms.

---
*Source: 102457-2013-05-23.xml*
--- ## Abstract Chlamydia trachomatis, the agent of bacterial sexually transmitted infections, can manifest itself as either acute cervicitis, pelvic inflammatory disease, or a chronic asymptomatic infection. Inflammation induced by C. trachomatis contributes greatly to the pathogenesis of disease. Here we evaluated the anti-inflammatory capacity of naringenin, a polyphenolic compound, to modulate inflammatory mediators produced by mouse J774 macrophages infected with live C. trachomatis. Infected macrophages produced a broad spectrum of inflammatory cytokines (GM-CSF, TNF, IL-1β, IL-1α, IL-6, IL-12p70, and IL-10) and chemokines (CCL4, CCL5, CXCL1, CXCL5, and CXCL10) which were downregulated by naringenin in a dose-dependent manner. Enhanced protein and mRNA gene transcript expressions of TLR2 and TLR4 in addition to the CD86 costimulatory molecule on infected macrophages were modulated by naringenin. Pathway-specific inhibition studies disclosed that p38 mitogen-activated-protein kinase (MAPK) is involved in the production of inflammatory mediators by infected macrophages. Notably, naringenin inhibited the ability of C. trachomatis to phosphorylate p38 in macrophages, suggesting a potential mechanism of its attenuation of concomitantly produced inflammatory mediators. Our data demonstrates that naringenin is an immunomodulator of inflammation triggered by C. trachomatis, which possibly may be mediated upstream by modulation of TLR2, TLR4, and CD86 receptors on infected macrophages and downstream via the p38 MAPK pathway. --- ## Body ## 1. Introduction Sexually transmittedChlamydia trachomatis infection is of widespread public health concern because of its prevalence and potentially devastating reproductive consequences, including pelvic inflammatory disease (PID), infertility, and ectopic pregnancy [1–3]. The negatively charge elementary bodies (EB), infectious particles of C. trachomatis, invade the mucosal surface of the female genital tract and persist in them for a long time [2]. Abundant in vitro data suggests that the inflammatory response to Chlamydiae is initiated and sustained by actively infected host cells including epithelial cells and resident macrophages [4].C. trachomatis has the ability to infect both epithelial cells and resident macrophages. These infected host cells act as first responders to initiate and propagate immune responses, which later participate in initiation of adaptive immune responses. Activation of adaptive immune responses consequently leads to accumulation of effector T and B cells at the site of Chlamydia infection and plays critical roles in controlling the infection [5, 6]. However, C. trachomatis uses various strategies to escape the host immune response and persist for a prolonged period of time, subsequently leading to the many disease manifestations associated with the infection. This is a common scenario for most intracellular organisms such as Mycobacteria, where cells produce excessive inflammatory mediators to contribute to disease manifestation by damaging neighboring cells [7]. For example, results from studies using the murine model of C. trachomatis revealed that tubal dilation frequently occurred as an end result for a primary infection, suggesting that the inflammatory process resulting from a single C. trachomatis infection is sufficient to result in long-term tissue damage [8].Like other infectious microorganisms, inflammatory mediators have been documented to be hallmarks ofC. trachomatis infection and its pathogenesis [4–6]. 
Because of the inherent difficulties in acquiring human tissue samples for study, researchers have taken advantage of multiple animal models of Chlamydia infection to examine the nature and timing of the inflammatory response. We have shown by in vitroexperiments that primary Chlamydia infection of human epithelial cells and mouse macrophages occurs within 2 days of infection and is characterized by significant production of IL-6, TNF, and IL-8 [9]. It is well documented that inflammatory cytokines and chemokines play critical role for the recruitment and chemoattractant of neutrophils and other leukocytes. Neutrophils have the capability to destroy accessible EBs, and when recruited in high numbers, they release matrix metalloprotease (MMPs) molecules and neutrophil elastase, which have been shown to contribute to tissue damage [10, 11].To control inflammation triggered by infectious organisms, alternative strategies that could balance the levels of inflammatory mediators released during infection are of intense interest. Recently active compounds with the capacity to modulate host inflammatory responses have received considerable attention as they may be potential new therapeutic agents for the treatment of inflammatory diseases [12–15]. Naringenin is a naturally occurring polyphenolic compound containing two benzene rings linked together with a heterocyclic pyrone ring [16]. Naringenin is a normal constituent of the human diet in grapefruit and tomatoes and is known to exhibit a variety of biological activities, such as enzyme inhibitors, antioxidants, anticancer, and as an anti-inflammatory agent [17–21].Since its discovery, naringenin’s wide ranges of pharmacological properties have attracted the attentions of many researchers because of its anti-inflammatory properties. Its anti-inflammatory property is actively studied in macrophages andex vivo human whole-blood models [22–24]. In this study, we investigated the anti-inflammatory capacity of naringenin to regulate cytokines and chemokines produced by mouse J774 macrophages infected with live C. trachomatis (MoPn Nigg II). We used multiplex ELISA to determine a broad range of inflammatory cytokines and chemokines produced during the interaction of C. trachomatis and macrophages. We then assessed the ability of naringenin to regulate the production level of these mediators. Next, we determined the potential mechanism(s) by which naringenin may modulate inflammatory mediators by investigating its effect on TLR2, TLR4, and CD86 receptors, as well as the p38 MAPK pathway. The findings from our study are discussed here in the context of naringenin as a potential new immunomodulator of C. trachomatis induced inflammation. ## 2. Materials and Methods ### 2.1. Cell Culture and Infectivity Mouse J774 macrophages were obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA) and cultured as already described [9]. C. trachomatis MoPn Nigg II was purchased from ATCC (ATCC VR-123) and propagated as previously described [9]. To establish infection, macrophages (106 cells/well) were seeded in 24-well plates for 24 h after which they were infected with live C. trachomatis infectious particles (105) in 500 μL of growth media/well. The cells were then incubated at 37°C under 5% CO2 and culture supernatants were collected at 48 h after infection. The optimum bacterium dose and duration of infection were determined as reported [9]. As a positive control, macrophages (106 cells/well) were stimulated with E. 
coli LPS (1 μg/mL) and culture supernatants were collected at 48 h after stimulation. Collected supernatants were centrifuged at 450 ×g for 10 min at 4°C and stored at −80°C until used. ### 2.2. Preparation of Naringenin The stock solution of naringenin (Sigma, St. Louis, MO, USA) was prepared by dissolving 40 mg of naringenin in 1 mL dimethyl sulfoxide (DMSO). After 2-day infection of macrophages withC. trachomatis, the media were replaced with fresh media containing various concentrations (0.01, 0.1, 1, and 10 μg/mL) of naringenin. Cell-free supernatants were collected after an additional 48 h incubation following centrifugation at 450 ×g for 10 min at 4°C and stored at 80°C until used. ### 2.3. Inflammatory Cytokines and Chemokines Milliplex mouse 32-plex cytokine and chemokines detection reagent (catalogue number MPXMCYTO-70 K-PMX32) was purchased from Millipore (EMD Millipore Corporation, Billerica, MA, USA) and the assay was performed as described [25]. ### 2.4. Cytotoxicity Studies Cytotoxicity of naringenin to mouse J774 macrophages was measured using the 3-(4, 5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) dye reduction assay and the CellTiter 96 Cell Proliferation Assay kit (Promega, Madison, WI, USA). Cells were seeded in a 96-well plate at a density of 105 cells/well in 50 μL media and incubated overnight at 37°C under 5% CO2. Naringenin was added to cells in concentrations ranging from 0.1 to 100 μg/mL and after 48 h supernatants were removed, cells were washed twice with sterile PBS, followed by addition of 15 μL of MTT dye solution to each well, and cells were further incubated for 3 h at 37°C under 5% CO2. To stop the reaction, 100 μL of solubilization solution/stop mixture was added to each well and plates incubated for 30 min at room temperature (RT). Absorbance at 570 nm was measured using a Tecan Sunrise plate reader (TECAN US Inc., Durham, NC, USA). The percentage of cell viability was obtained using the optical density readings of naringenin treated cells compared to those of normal cells (control), where % viability = [A]test/[A]control×100, where [A]test is the absorbance of the test sample and [A]control is the absorbance of control sample. ### 2.5. Flow Cytometry Mouse J774 macrophages (106 cells/mL) were left uninfected or infected with C. trachomatis and after 48 h infection the media were removed and replenished with fresh media containing 1 μg/mL of naringenin. Following incubation for an additional 48 h, cells were scraped from wells, washed, and then blocked with Fc blocking antibody (BD Bioscience) in FACS (fluorescence-activated cell sorting) buffer (PBS Containing 0.1% NaN3 and 1% fetal bovine serum) for 15 min at 4°C. Cells were next washed two times followed by staining with fluorochrome-conjugated antibodies (50 μL in FACS buffer) against mouse TLR2 (PE), TLR4 (FITC), CD80 (PE-Cy7), and CD86 (APC) (eBiosciences). The optimum concentrations of all fluorochromes were predetermined in our laboratory. Cells were incubated with fluorochrome antibodies for 30 min at 4°C, washed 2 times, and then fixed using 2% paraformaldehyde solution. Data were acquired on a BD FACSCanto II flow cytometer (BD Bioscience) with at least 105 events for each sample. TLR2, TLR4, CD80, and CD86 positive cells and their mean fluorescence intensity (MFI) were analyzed using FlowJo software (TreeStar Inc., Ashland, OR, USA). ### 2.6. RNA Extraction and Quantitative Real-Time PCR (qRT-PCR) Mouse J774 macrophages (3× 106 cells/well) were infected with live C. 
trachomatis (3 × 105 IFU/well) in 6-well plates for 48 h followed by replacement of fresh media containing 1 μg/mL of naringenin. RNA was extracted from the cell pellets using Qiagen RNeasy Kit (Qiagen Inc., Valencia, CA, USA), which included a DNase-I digestion step. qRT-PCR was employed to quantify mRNA gene transcripts of CD86 and TLR2 using TaqMan RNA-to-CT 1-step kit in combination with TaqMan gene expression assays (Applied Biosystems by Life Technologies, Foster City, CA, USA) as reported [25]. Amplification of gene transcripts was performed according to the manufacturer’s protocol using ABI ViiA 7 real-time PCR (Applied Biosystem by Life Technologies) and standard amplification conditions. The relative changes in gene expression were calculated using the following equation: 2-ΔΔCT where all values were normalized with respect to the “housekeeping” gene GAPDH mRNA levels. Amplification using 50 ng RNA was performed in a total volume of 20 μL. Each real-time PCR assay was performed in triplicates and the results are expressed as the mean ± SD. ### 2.7. Inhibition of p38 MAP Kinase Pathway To determine if the p38 MAPK pathway is employed byC. trachomatis to trigger production of cytokines and chemokines by mouse J774 macrophages, we next blocked p38 MAPK signaling with its specific inhibitor, SB203350 (EMD Millipore Corporation, Billerica, MA, USA). Mouse J774 macrophages (106 cells/well) were preincubated with 20 μM of SB203350 for 24 h, infected with C. trachomatis (105 IFU/well), and incubated for an additional 72 h. Cell-free supernatants were collected by centrifugation and the production levels of randomly selected cytokines (IL-6, TNF, IL-12p70, and IL-1β) and chemokines (CCL5 and CXCL10) were determined using single ELISAs as described previously [9]. The 20 μM concentration and 24 h inhibition time point used for SB203350 were optimal conditions predetermined in our laboratory. ### 2.8. Phosphorylation of p38 MAPK byC. trachomatis Mouse J774 macrophages (3× 106 cells/well) were seeded in 6-well plates and infected with live C. trachomatis (3 × 105 IFU/well) for 15, 30, and 60 min. Cells were lysed at different time points using 1x RIPA buffer (Sigma) supplemented with phosphatase inhibitors (Sigma). Immediately cells were transferred to microcentrifuge tubes, sonicated for 15 sec to shear DNA and reduce sample viscosity followed by centrifugation at 450 g for 10 min at 4°C. The concentrations of proteins were determined by the bicinchoninic acid assay (BCA) (Thermo Scientific, Rockford, IL, USA). Proteins were separated by SDS-PAGE, transferred to nitrocellulose membranes, and blocked with blocking buffer (tris-buffered saline (TBS)) containing 0.1% Tween-20 and 5% w/v nonfat milk. After blocking for 1 h, the membrane was washed 3 times for 5 min each with wash buffer (TBS, 0.1% Tween-20) and incubated overnight with gentle agitation at 4°C with phospho-p38 or total p38 primary antibodies (Cell Signaling Technology Inc., Beverly, MA, USA) each at a dilution of 1 : 1000 (diluted in primary antibody dilution buffer (1x TBS, 0.1% Tween-20, 5% bovine serum album (BSA), and dH2O). Following overnight incubation, the membrane was washed 3 times and incubated with HRP-conjugated secondary antibody (Cell Signaling) at 1 : 2000 (diluted in blocking buffer) with gentile agitation for 1 h at RT. After 3 washes, protein bands were visualized using LumiGLO substrate (Cell Signaling) on scientific imaging film (Kodak Inc., Rochester, NY, USA). 
The sizes of total p38 and phospho-p38 were determined from the biotinylated protein ladder. The optimum concentrations for antibodies were used according to the manufactures suggestion. Biotinylated secondary antibody (1 : 1000 diluted in blocking buffer) was used to detect the protein markers. For some experiments, macrophages were infected with C. trachomatis in the presence and absence of naringenin at 1 μg/mL to determine if naringenin may exert its anti-inflammatory activity by blocking the p38 MAPK pathway. Protein lysates were collected and used in western blotting to detect the phosphorylation of p38 MAPK as described in the preceding paragraph. ### 2.9. Statistics Analysis The two-tailed unpaired Student’st-test was used to compare the data. P<0.05 was considered significant. ## 2.1. Cell Culture and Infectivity Mouse J774 macrophages were obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA) and cultured as already described [9]. C. trachomatis MoPn Nigg II was purchased from ATCC (ATCC VR-123) and propagated as previously described [9]. To establish infection, macrophages (106 cells/well) were seeded in 24-well plates for 24 h after which they were infected with live C. trachomatis infectious particles (105) in 500 μL of growth media/well. The cells were then incubated at 37°C under 5% CO2 and culture supernatants were collected at 48 h after infection. The optimum bacterium dose and duration of infection were determined as reported [9]. As a positive control, macrophages (106 cells/well) were stimulated with E. coli LPS (1 μg/mL) and culture supernatants were collected at 48 h after stimulation. Collected supernatants were centrifuged at 450 ×g for 10 min at 4°C and stored at −80°C until used. ## 2.2. Preparation of Naringenin The stock solution of naringenin (Sigma, St. Louis, MO, USA) was prepared by dissolving 40 mg of naringenin in 1 mL dimethyl sulfoxide (DMSO). After 2-day infection of macrophages withC. trachomatis, the media were replaced with fresh media containing various concentrations (0.01, 0.1, 1, and 10 μg/mL) of naringenin. Cell-free supernatants were collected after an additional 48 h incubation following centrifugation at 450 ×g for 10 min at 4°C and stored at 80°C until used. ## 2.3. Inflammatory Cytokines and Chemokines Milliplex mouse 32-plex cytokine and chemokines detection reagent (catalogue number MPXMCYTO-70 K-PMX32) was purchased from Millipore (EMD Millipore Corporation, Billerica, MA, USA) and the assay was performed as described [25]. ## 2.4. Cytotoxicity Studies Cytotoxicity of naringenin to mouse J774 macrophages was measured using the 3-(4, 5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) dye reduction assay and the CellTiter 96 Cell Proliferation Assay kit (Promega, Madison, WI, USA). Cells were seeded in a 96-well plate at a density of 105 cells/well in 50 μL media and incubated overnight at 37°C under 5% CO2. Naringenin was added to cells in concentrations ranging from 0.1 to 100 μg/mL and after 48 h supernatants were removed, cells were washed twice with sterile PBS, followed by addition of 15 μL of MTT dye solution to each well, and cells were further incubated for 3 h at 37°C under 5% CO2. To stop the reaction, 100 μL of solubilization solution/stop mixture was added to each well and plates incubated for 30 min at room temperature (RT). Absorbance at 570 nm was measured using a Tecan Sunrise plate reader (TECAN US Inc., Durham, NC, USA). 
The percentage of cell viability was obtained using the optical density readings of naringenin treated cells compared to those of normal cells (control), where % viability = [A]test/[A]control×100, where [A]test is the absorbance of the test sample and [A]control is the absorbance of control sample. ## 2.5. Flow Cytometry Mouse J774 macrophages (106 cells/mL) were left uninfected or infected with C. trachomatis and after 48 h infection the media were removed and replenished with fresh media containing 1 μg/mL of naringenin. Following incubation for an additional 48 h, cells were scraped from wells, washed, and then blocked with Fc blocking antibody (BD Bioscience) in FACS (fluorescence-activated cell sorting) buffer (PBS Containing 0.1% NaN3 and 1% fetal bovine serum) for 15 min at 4°C. Cells were next washed two times followed by staining with fluorochrome-conjugated antibodies (50 μL in FACS buffer) against mouse TLR2 (PE), TLR4 (FITC), CD80 (PE-Cy7), and CD86 (APC) (eBiosciences). The optimum concentrations of all fluorochromes were predetermined in our laboratory. Cells were incubated with fluorochrome antibodies for 30 min at 4°C, washed 2 times, and then fixed using 2% paraformaldehyde solution. Data were acquired on a BD FACSCanto II flow cytometer (BD Bioscience) with at least 105 events for each sample. TLR2, TLR4, CD80, and CD86 positive cells and their mean fluorescence intensity (MFI) were analyzed using FlowJo software (TreeStar Inc., Ashland, OR, USA). ## 2.6. RNA Extraction and Quantitative Real-Time PCR (qRT-PCR) Mouse J774 macrophages (3× 106 cells/well) were infected with live C. trachomatis (3 × 105 IFU/well) in 6-well plates for 48 h followed by replacement of fresh media containing 1 μg/mL of naringenin. RNA was extracted from the cell pellets using Qiagen RNeasy Kit (Qiagen Inc., Valencia, CA, USA), which included a DNase-I digestion step. qRT-PCR was employed to quantify mRNA gene transcripts of CD86 and TLR2 using TaqMan RNA-to-CT 1-step kit in combination with TaqMan gene expression assays (Applied Biosystems by Life Technologies, Foster City, CA, USA) as reported [25]. Amplification of gene transcripts was performed according to the manufacturer’s protocol using ABI ViiA 7 real-time PCR (Applied Biosystem by Life Technologies) and standard amplification conditions. The relative changes in gene expression were calculated using the following equation: 2-ΔΔCT where all values were normalized with respect to the “housekeeping” gene GAPDH mRNA levels. Amplification using 50 ng RNA was performed in a total volume of 20 μL. Each real-time PCR assay was performed in triplicates and the results are expressed as the mean ± SD. ## 2.7. Inhibition of p38 MAP Kinase Pathway To determine if the p38 MAPK pathway is employed byC. trachomatis to trigger production of cytokines and chemokines by mouse J774 macrophages, we next blocked p38 MAPK signaling with its specific inhibitor, SB203350 (EMD Millipore Corporation, Billerica, MA, USA). Mouse J774 macrophages (106 cells/well) were preincubated with 20 μM of SB203350 for 24 h, infected with C. trachomatis (105 IFU/well), and incubated for an additional 72 h. Cell-free supernatants were collected by centrifugation and the production levels of randomly selected cytokines (IL-6, TNF, IL-12p70, and IL-1β) and chemokines (CCL5 and CXCL10) were determined using single ELISAs as described previously [9]. The 20 μM concentration and 24 h inhibition time point used for SB203350 were optimal conditions predetermined in our laboratory. 
## 2.8. Phosphorylation of p38 MAPK byC. trachomatis Mouse J774 macrophages (3× 106 cells/well) were seeded in 6-well plates and infected with live C. trachomatis (3 × 105 IFU/well) for 15, 30, and 60 min. Cells were lysed at different time points using 1x RIPA buffer (Sigma) supplemented with phosphatase inhibitors (Sigma). Immediately cells were transferred to microcentrifuge tubes, sonicated for 15 sec to shear DNA and reduce sample viscosity followed by centrifugation at 450 g for 10 min at 4°C. The concentrations of proteins were determined by the bicinchoninic acid assay (BCA) (Thermo Scientific, Rockford, IL, USA). Proteins were separated by SDS-PAGE, transferred to nitrocellulose membranes, and blocked with blocking buffer (tris-buffered saline (TBS)) containing 0.1% Tween-20 and 5% w/v nonfat milk. After blocking for 1 h, the membrane was washed 3 times for 5 min each with wash buffer (TBS, 0.1% Tween-20) and incubated overnight with gentle agitation at 4°C with phospho-p38 or total p38 primary antibodies (Cell Signaling Technology Inc., Beverly, MA, USA) each at a dilution of 1 : 1000 (diluted in primary antibody dilution buffer (1x TBS, 0.1% Tween-20, 5% bovine serum album (BSA), and dH2O). Following overnight incubation, the membrane was washed 3 times and incubated with HRP-conjugated secondary antibody (Cell Signaling) at 1 : 2000 (diluted in blocking buffer) with gentile agitation for 1 h at RT. After 3 washes, protein bands were visualized using LumiGLO substrate (Cell Signaling) on scientific imaging film (Kodak Inc., Rochester, NY, USA). The sizes of total p38 and phospho-p38 were determined from the biotinylated protein ladder. The optimum concentrations for antibodies were used according to the manufactures suggestion. Biotinylated secondary antibody (1 : 1000 diluted in blocking buffer) was used to detect the protein markers. For some experiments, macrophages were infected with C. trachomatis in the presence and absence of naringenin at 1 μg/mL to determine if naringenin may exert its anti-inflammatory activity by blocking the p38 MAPK pathway. Protein lysates were collected and used in western blotting to detect the phosphorylation of p38 MAPK as described in the preceding paragraph. ## 2.9. Statistics Analysis The two-tailed unpaired Student’st-test was used to compare the data. P<0.05 was considered significant. ## 3. Results ### 3.1. The Effect of Naringenin on the Levels of Inflammatory Cytokines and Chemokines Produced byC. trachomatis Infected Macrophages Like other infection agents,C. trachomatis induces the secretion of various inflammatory mediators upon its infection of macrophages. In the present study, we employed multiplex ELISA to identify and quantify cytokines and chemokines in supernatants from macrophages infected with live C. trachomatis. Infected macrophages produced significant (P<0.001) levels of cytokines (IL-6, TNF, IL-10, IL-12p70, IL-1α, IL-1β, and GM-CSF) and chemokines (CCL4, CXCL10, CXCL5, CCL5, and CXCL1) (Figures 1(a) and 1(b)). However, the production levels of these mediators were reduced in a dose-dependent manner in the presence of added naringenin (Figures 1(a) and 1(b)). Supernatants of C. trachomatis infected macrophages that contained 10 μg/mL of added naringenin showed a significant reduction in the levels of cytokines and chemokines (P<0.001) (Figures 1(a) and 1(b)). 
The inhibitory activity of naringenin was significantly (P<0.01) observed with as little as 1 μg/mL (Figure 1(a)), suggesting the potency of naringenin even at low concentrations. Naringenin similarly reduced the production levels of cytokines and chemokines in a dose-dependent manner (P<0.001) when LPS was used as the stimulant, especially at 10 μg/mL (Figures 1(a) and 1(b)). Overall, our results indicate that naringenin has an anti-inflammatory effect against C. trachomatis induced inflammatory mediators by macrophages.Naringenin downregulates inflammatory mediators inC. trachomatis infected mouse J774 macrophages. Macrophages (106 cells/mL) were seeded in 24-well plates and were either infected with live C. trachomatis (105 IFU/well) or LPS at 1 μg/mL. After 2-day infection, naringenin at 0.01 to 10 μg/mL was added to cell cultures and the production levels of cytokines (a) and chemokines (b) were quantified in supernatants collected 2 days later employing multiplex ELISA. ***indicates significant difference (P<0.001) between C. trachomatis treated cells and those treated with various concentrations of naringenin using the two-tailed unpaired Student’s t-test. Each bar represents the average of samples run in duplicates and the data are representative of three separate experiments. (a) (b) ### 3.2. The Anti-Inflammatory Effect of Naringenin Is Not due to Cell Death To ensure that the inhibitory effect of naringenin is not attributed to cell death, cytotoxicity studies were performed employing the MTT assay and J774 macrophages exposed to various concentrations of naringenin (0.01 to 100μg/mL). With the exception of the 100 μg/mL naringenin concentration, all other tested concentrations exhibited between 85% and 100% cell viability, suggesting that naringenin is effectively nontoxic to macrophages at these concentrations (Figure 2(a)). Figure 2(b) depicts a representative 96-well plate with cell death occurring in the presence of 100 μg/mL of naringenin (yellow color) versus viable cells at other naringenin concentrations (dark purple color). Overall, these results demonstrate that naringenin’s anti-inflammatory effect on inflammatory mediators produced by C. trachomatis infected macrophages is not attributed to cell death but rather to alternative mechanisms.Naringenin toxicity to mouse J774 macrophages is concentration dependent. Macrophages were seeded in a 96-well plate at a density of 105 cells/well/50 μL in the presence or absence of naringenin in concentrations ranging from 0.1 to 100 μg/mL. The CellTiter 96 Cell Proliferation Assay kit was used to determine cell viability. Absorbance was read at 570 nm and the percentage of cell viability was calculated by using the optical density readings compared to normal cells (a). A representative plate before the absorbance readings where dark purple and yellow wells are depictions of live and dead cells, respectively (b). The data are representative of three separate experiments. (a) (b) ### 3.3. Naringenin Downregulates the Expression Levels of CD86, TLR2, and TLR4 on J774 Macrophages Receptors on host cell surfaces such as TLRs recognize extracellular stimuli for subsequent intracellular signaling processes. Multiple studies have shown that TLR2 and TLR4 play pivotal roles in the recognition ofC. trachomatis [26–29]. To begin to understand the mechanism(s) by which naringenin modulates inflammatory mediators, we first focused on whether or not naringenin will affect the putative TLR2 and TLR4 receptors expressed on C. 
### 3.3. Naringenin Downregulates the Expression Levels of CD86, TLR2, and TLR4 on J774 Macrophages

Receptors on host cell surfaces such as TLRs recognize extracellular stimuli for subsequent intracellular signaling processes. Multiple studies have shown that TLR2 and TLR4 play pivotal roles in the recognition of C. trachomatis [26–29]. To begin to understand the mechanism(s) by which naringenin modulates inflammatory mediators, we first examined whether naringenin affects the putative TLR2 and TLR4 receptors expressed on C. trachomatis infected mouse J774 macrophages. As compared to unstimulated cells, C. trachomatis infected cells expressed more TLR2 and TLR4 receptors, which were markedly downregulated in the presence of added naringenin, especially TLR2 (Figures 3(a) and 3(c)). In addition, the MFI for TLR2 and TLR4 on C. trachomatis infected cells was significantly increased (P<0.05), as shown by ratios of 22 and 16, respectively, in comparison to those of untreated J774 cells and uninfected cells exposed to naringenin only (Figure 3(e)). When naringenin was added to C. trachomatis infected macrophages, the MFI of TLR2 and TLR4 was significantly reduced (P<0.05) as compared with that of C. trachomatis infected macrophages (Figure 3(e)), suggesting the ability of naringenin to downregulate the expression of these receptors. Our result provides evidence that naringenin diminishes the recognition of C. trachomatis by its putative TLR2 and TLR4 receptors, possibly exerting its anti-inflammatory downstream effects during reinfection of cells by C. trachomatis.

Figure 3. Naringenin downregulates the expression levels of CD86, TLR2, and TLR4 in C. trachomatis infected mouse J774 macrophages. Macrophages (10⁶ cells/mL) were seeded in 24-well plates and infected with C. trachomatis (10⁵ IFU/well) or left uninfected. After 2-day infection, 1 μg/mL naringenin was added to cell cultures, and 2 days later samples were analyzed by flow cytometry as described in Section 2. Shown are the expression shifts of TLR2 (a), CD80 (b), TLR4 (c), and CD86 (d) and their mean fluorescence intensity (MFI) (e) before and after infection of macrophages with C. trachomatis (CT) in the presence and absence of naringenin. *P<0.05 was considered significant as compared to untreated cells (J774) and to cells treated with CT or CT + naringenin. The data are representative of two separate experiments. (a) (b) (c) (d) (e)

Activated T cells produce additional inflammatory cytokines and chemokines to direct immune responses. For T cells to be fully activated, antigen presenting cells must express costimulatory molecules such as CD80 and CD86 [30]. Therefore, downregulating the expression of either CD80 or CD86 or both may negatively impact the activation of T cells. Here we tested whether naringenin may impact T-cell activation by downregulating CD80 and CD86 expression levels on C. trachomatis infected macrophages. Our flow cytometric results show that naringenin at 1 μg/mL downregulates the expression of CD86 induced on C. trachomatis infected macrophages but not that of CD80, as compared to macrophages exposed only to C. trachomatis (Figures 3(b) and 3(d)). Moreover, naringenin significantly reduced (P<0.05) the MFI of CD86 on C. trachomatis infected cells from 18 to 9 (Figure 3(e)). On the other hand, naringenin did not reduce the MFI of CD80 on infected cells (Figure 3(e)), indicating its selective modulation of costimulatory molecules on C. trachomatis infected cells. This finding further suggests that naringenin's anti-inflammatory effect is not limited to innate immune responses but extends to adaptive immune responses, since the expression of CD80 and/or CD86 plays a critical role in the activation of T cells during adaptive immune responses.
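How the MFI ratios quoted above can be derived is illustrated below; the MFI values are hypothetical placeholders chosen only to mirror the magnitudes reported for Figure 3(e), and the normalization to uninfected control cells is an assumption about the analysis.

```python
# A minimal sketch of the MFI comparison summarized in Figure 3(e): the MFI
# of a stained marker on treated cells divided by the MFI on uninfected
# control cells. All values are hypothetical placeholders.
mfi = {
    "J774 uninfected": 1.0,   # control (normalized)
    "CT infected": 22.0,      # e.g., TLR2 after C. trachomatis infection
    "CT + naringenin": 8.0,   # infected cells treated with naringenin
}

control = mfi["J774 uninfected"]
for condition, value in mfi.items():
    print(f"{condition}: {value / control:.1f}-fold over control")
```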
### 3.4. Effect of Naringenin on the mRNA Expression Levels of CD86 and TLR2

As a further validation of our flow cytometric results, we next determined the effect of naringenin on the mRNA transcript expression levels of TLR2 and CD86 in C. trachomatis infected J774 macrophages. C. trachomatis enhanced the transcript expression levels of TLR2 and CD86, which were both significantly (P<0.05) downregulated (up to a 2-fold decrease) in the presence of naringenin (at 1 μg/mL) (Figure 4). Together, these findings suggest that naringenin downregulates TLR2 and CD86 expression at both the protein and mRNA transcript levels, underscoring its role in regulating C. trachomatis inflammation in macrophages.

Figure 4. Naringenin reduces the transcriptional activation of TLR2 and CD86 by C. trachomatis in mouse J774 macrophages. Macrophages (3 × 10⁶ cells/mL) were left uninfected or infected with live C. trachomatis (3 × 10⁵ IFU/well). After 2-day infection of macrophages with C. trachomatis (CT), naringenin (1 μg/mL) was added to macrophage cultures, and 2 days later total RNA was extracted as described in Section 2. One-step qRT-PCR was used to quantify the mRNA transcripts of TLR2 and CD86. *P<0.05 was considered significant when compared to untreated cells (J774) and to cells treated with CT or CT + naringenin. Data shown are averages of triplicate runs representative of two separate experiments.
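The paper does not state the quantification algorithm used for the one-step qRT-PCR data, so the sketch below assumes the common 2^(−ΔΔCt) relative-quantification method with a housekeeping-gene normalizer; the method choice and all Ct values are assumptions/hypothetical placeholders.

```python
# A minimal sketch of relative transcript quantification by the common
# 2^(-ddCt) method. The normalizer gene and the Ct values are hypothetical
# placeholders, not data from this study.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene versus a control condition, each
    normalized to a reference (housekeeping) gene."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# TLR2 in CT + naringenin macrophages relative to CT-only macrophages
fc = relative_expression(ct_target=24.0, ct_ref=18.0,           # treated sample
                         ct_target_ctrl=23.0, ct_ref_ctrl=18.0)  # CT only
print(f"relative TLR2 expression: {fc:.2f}-fold")  # 0.50, i.e., a 2-fold decrease
```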
### 3.5. C. trachomatis Uses the p38 MAPK Pathway to Induce Inflammatory Mediators

Among the many MAPK pathways, a strong link has been established between the p38 signaling pathway and inflammation [31]. Multiple studies have suggested that p38 is a key MAPK pathway activated by intracellular pathogens to induce inflammatory mediators [31–33]. To investigate whether the p38 pathway is exploited by C. trachomatis for production of its concomitantly elicited inflammatory mediators, we treated J774 macrophages with a p38-specific inhibitor, followed by quantification of randomly selected cytokines and chemokines in collected supernatants. With the exception of IL-1β, our results show that the levels of IL-6, IL-12p70, TNF, CCL5, and CXCL10 were significantly reduced (P<0.05) when macrophages were treated with the p38 inhibitor (Figure 5), suggesting that this pathway is used by C. trachomatis for their production by macrophages.

Figure 5. C. trachomatis employs the p38 MAPK pathway for production of inflammatory mediators in mouse J774 macrophages. Macrophages (10⁶ cells/mL) were seeded in 24-well plates in the presence and absence of the p38 inhibitor SB203580 for 24 h, after which they were left uninfected or infected with live C. trachomatis (10⁵ IFU/mL). Three days after infection with C. trachomatis (CT), supernatants were collected and the levels of inflammatory mediators were determined by single ELISAs. *P<0.05 was considered significant when compared to untreated cells (J774) and to cells treated with CT or CT + SB. Data shown are averages of duplicate runs representative of two separate experiments.

### 3.6. Naringenin Downregulates C. trachomatis-Induced Phosphorylation of p38 MAPK

Given that p38 MAPK mediates, in part, the production of inflammatory mediators by C. trachomatis infected macrophages, we investigated whether this pathway may be targeted by naringenin to exert its anti-inflammatory effect in macrophages. We first confirmed that C. trachomatis indeed induces the phosphorylation of p38 MAPK in J774 macrophages for the production of its inflammatory mediators. Our time-kinetics experiment shows that C. trachomatis infected macrophages expressed the highest p38 phosphorylation at 60 min (Figure 6(a)). However, in the presence of naringenin, the phosphorylation of p38 was reduced, as indicated by the reduced band intensity (Figure 6(b)). Similarly, LPS induced the phosphorylation of p38 at 60 min of stimulation, but naringenin reduced its ability to induce phosphorylation of p38 (Figure 6(c)). Overall, our results show increased phosphorylation of p38 MAP kinase in C. trachomatis infected macrophages, which was downregulated by naringenin, suggesting a potential downstream mechanism for naringenin to regulate inflammatory mediators.

Figure 6. Naringenin downregulates the phosphorylation of p38 MAPK in C. trachomatis infected mouse J774 macrophages. Macrophages (3 × 10⁶ cells/well) were seeded in 6-well plates and then infected with C. trachomatis (3 × 10⁵ IFU/well) or stimulated with LPS (3 μg/well). After infection of macrophages, protein lysates were collected as indicated in Section 2. The presence of total p38 (p38) (internal control) and phosphorylated p38 (Pp38) was determined by western blotting. Shown are the band intensities for the internal control and Pp38 at different time points for macrophages treated with C. trachomatis (CT) and LPS (a). Blots shown in (b) and (c) were developed 1 h after exposing macrophages to CT or LPS in the presence and absence of naringenin. The 43 kDa Pp38 and p38 proteins were identified from a known biotinylated protein ladder. (a) (b) (c)
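Band-intensity comparisons like those in Figure 6 are conventionally made by normalizing the phospho-p38 signal to the total-p38 loading control; the sketch below illustrates that ratio with hypothetical densitometry values. The paper reports band intensities qualitatively, so this analysis style is an assumption, not the authors' stated method.

```python
# A minimal sketch of a conventional densitometric readout for blots like
# Figure 6: phospho-p38 band intensity normalized to the total-p38 loading
# control. All intensity values are hypothetical placeholders.
bands = {
    "CT 60 min":              {"pp38": 5400.0, "p38": 6000.0},
    "CT 60 min + naringenin": {"pp38": 1800.0, "p38": 5900.0},
}

for condition, b in bands.items():
    ratio = b["pp38"] / b["p38"]  # normalized phospho-p38 signal
    print(f"{condition}: pp38/p38 = {ratio:.2f}")
```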
## 4. Discussion

Inflammatory responses to C. trachomatis are initiated and sustained by actively infected host cells, including epithelial cells and resident macrophages [4]. The influx of inflammatory cells in pathogen-induced diseases can be either beneficial or detrimental to the host [28]. Therefore, immunointervention strategies that can reduce the influx of inflammatory cells in a beneficial fashion could potentially impact the pathogenesis of disease. Along with other control strategies, our laboratory is also interested in evaluating anti-inflammatory molecules to control C. trachomatis inflammation. Previously, we have shown that the anti-inflammatory cytokine IL-10 downregulates essential inflammatory mediators produced by epithelial cells infected with live C. trachomatis [9]. In the present paper we explored the natural flavonoid naringenin as a potential anti-inflammatory agent to regulate inflammatory mediators produced by C. trachomatis infected macrophages. Among the numerous structurally diverse flavonoids, we selected naringenin based on its abundance in nature and potential application in medicine. The following observations were made here: (1) multiplex ELISA revealed a spectrum of cytokines and chemokines that may perpetuate an early C. trachomatis inflammation, (2) naringenin downregulated the cytokines and chemokines produced by C. trachomatis infected macrophages, (3) naringenin downregulated TLR2 and TLR4, as well as the CD86 costimulatory molecule, on infected macrophages, and (4) naringenin inhibited the ability of C. trachomatis to phosphorylate p38 MAPK for production of its inflammatory mediators by macrophages.

Activation of immune cells, especially macrophages, with microbial stimuli influences the nature and progression of disease. In this study, analysis of C. trachomatis infected macrophages revealed increased levels of GM-CSF, IL-1α, IL-1β, IL-6, TNF, IL-12p70, and IL-10 after a 2-day infection, with TNF, IL-6, and IL-1α being most robustly produced (Figure 1(a)). This observation is not surprising, since cytokines are secreted at different magnitudes during the infection process. It is well reported that all secreted cytokines have their own specific roles during the infection process [1, 4–8]. One plausible explanation for the lower levels of IL-12p70, IL-10, IL-1β, and GM-CSF may be differences in the time kinetics of their optimum secretion during the infection process. Interestingly, this finding is in agreement with previous studies in which lower levels of IL-10 were detected during Borrelia infection of human monocytes [34] and C. trachomatis infection of human epithelial cells and macrophages [9]. The heightened secretion of TNF, IL-6, and IL-1α by C. trachomatis infected macrophages may be relevant to the initiation of Chlamydia inflammation. It has been demonstrated that IL-6, TNF, and IL-1α have crucial roles in increasing intercellular adhesion molecule (ICAM) expression [4]. Infection of nonimmune host epithelial cells and resident tissue innate immune cells with Chlamydia results in an increase in adhesion molecules, whereby these molecules promote binding of small proteins such as chemokines on cell surfaces [4].

Chemokines are also produced during an infection to amplify the inflammation process.
Chemokines play a critical role in attracting leukocytes to the site of infection, where the presence of leukocytes can be either beneficial or detrimental to the host. The main leukocytes recruited and attracted by chemokines during an early inflammatory process are macrophages and neutrophils [1–6]. Our results show that C. trachomatis infected macrophages produced greater quantities of CCL4, CXCL10, CCL5, CXCL1, and CXCL5 (Figure 1(b)). The production levels of most chemokines are typically influenced by the type of cytokines present in the inflammatory milieu. The profiles of chemokines produced by infected macrophages in this study correlated with the high levels of IL-6, TNF, and IL-1α. High levels of IL-6, TNF, and IL-1α apparently cause chemokines to adhere to endothelial cell surfaces for efficient attraction, mainly due to an increase in ICAM [4]. Overall, our results clearly demonstrate that the spectrum of cytokines and chemokines produced by C. trachomatis infected macrophages may have significant roles in initiating its inflammatory process and thus the pathogenesis of disease.

Naringenin has broad-spectrum medicinal application against bacterial, parasitic, and viral infections. Lakshmi et al. [35] showed antifilarial activity of naringenin against the filarial parasite Brugia malayi. Naringenin was also shown to exhibit antimicrobial activity against pathogenic bacteria such as Listeria monocytogenes, Escherichia coli O157:H7, and Staphylococcus aureus [36]. Similarly, antiviral activity of naringenin was shown against herpes simplex virus type-1 (HSV-1), poliovirus, parainfluenza virus type-3, and respiratory syncytial virus (RSV) [37]. Du and colleagues [21] demonstrated that naringenin regulates immune system function in a lung cancer model, where it reduced IL-4 but increased IL-2 and IFN-γ levels. In a different study, Shi et al. [38] showed that naringenin plays an inhibitory role in allergen-induced airway inflammation by reducing IL-4, IL-13, CCL5, and CCL11. In the present study we have shown, for the first time, that naringenin has an anti-inflammatory effect in an in vitro C. trachomatis infection model. Naringenin reduced in a dose-dependent manner the levels of major inflammatory mediators secreted by C. trachomatis infected macrophages, and this reduction was not attributable to cell death. These studies suggest that naringenin has broader immunoregulatory properties in different disease models, especially inflammatory diseases.

In this study, we have clearly demonstrated that naringenin altered the levels of numerous cytokines and chemokines in C. trachomatis infected macrophages through its alteration of multiple inflammatory pathways. Induction of inflammatory pathways starts when invasive pathogens are recognized by cell surface receptor molecules such as TLRs in the host, followed by activation of various signaling pathways. It is well documented that C. trachomatis is recognized by TLRs, specifically TLR2 and TLR4, on macrophages to induce secretion of inflammatory mediators, which can be either beneficial or detrimental to the host [29, 39]. In the present study we show enhanced expression of both TLR2 and TLR4 on C. trachomatis infected macrophages, whose expression levels were reduced by naringenin (Figures 3 and 4). Our study suggests the capacity of naringenin to inhibit the interaction of C. trachomatis with its upstream putative receptors to potentially mediate its anti-inflammatory effect in macrophages.
TLR-stimulated macrophages induce effectors of the adaptive immune system such as CD40, CD80, and CD86 to drive T-cell activation and proliferation. The CD28-mediated costimulatory signal can result in enhanced T-cell proliferation and cytokine production, which contributes to the development of various inflammatory diseases [40–42]. Our flow cytometry results demonstrate that C. trachomatis induced the expression of CD80 and CD86, with only CD86 expression being modulated by naringenin (Figures 3 and 4). Although not shown in this study, inhibiting CD80 and CD86 expression could impair the activation of T cells and eventually block effectors of the adaptive immune system. Lim and coworkers documented a significant reduction in the levels of IL-2 and IFN-γ when both CD80 and CD86 costimulatory molecules were inhibited, confirming the key role played by costimulatory molecules in functional T-cell activation [43]. Weakened T-cell activation is directly associated with reduced interaction between antigen presenting cells (APCs) and T cells. Thus, our data provide mechanistic insights into C. trachomatis engulfment by macrophages, as indicated by the heightened expression of CD80 and CD86, which eventually contributes to the activation of adaptive immune responses.

Downregulation of only CD86 expression in the presence of naringenin provides evidence for its broader capability in modulating the inflammatory response during C. trachomatis infection. However, the perplexing question remains as to why naringenin inhibited CD86 but not CD80 expression, even though both are costimulatory molecules highly needed for T-cell activation and on whose recognition cell-to-cell binding forces depend. It has been reported that treatment with CD80/86 blocking antibodies reduced the interaction force of cell : cell conjugates [43, 44]. Both CD80 and CD86 can bind to the T-cell stimulatory receptor CD28 [44] and to the inhibitory receptor CTLA4 [45]. CD86 appeared to strengthen APC : T-cell interactions more markedly than CD80, since a greater force reduction was observed after blocking CD86 alone than after disrupting CD80 alone [44, 45]. Therefore, the ability of CD86, and not CD80, to induce stronger APC : T-cell interactions indicates its crucial role in initiating immune responses.

Upon microbial recognition by TLRs, MAPK signaling pathways are activated to produce inflammatory mediators. Of the many MAPK pathways, p38 is considered an important pathway for inducing inflammatory mediators during C. trachomatis infection [46]. Our inhibition study supports this idea: in the presence of a p38 inhibitor, the levels of IL-12p70, IL-6, TNF, CCL5, and CXCL10 (Figure 5) were significantly reduced, suggesting that this pathway is employed by C. trachomatis to induce these respective inflammatory mediators. Furthermore, the phosphorylation of p38 by C. trachomatis in macrophages in this study (Figure 6) underscores that it triggers this pathway for producing its concomitant inflammatory mediators. Of utmost significance, naringenin inhibited the ability of C. trachomatis to phosphorylate p38 in macrophages, possibly explaining its attenuation of the concomitantly produced cytokines and chemokines.
Other investigators have reported that naringenin's inhibitory role in allergen-induced airway inflammation is associated with its downregulation of NF-κB pathway activation via the MAPK pathway [38]. In another study, naringenin was shown to manifest its anti-inflammatory functions in vitro by inhibiting NF-κB in macrophages [47, 48]. Shi et al. [38] also reported that naringenin can suppress mucus production by inhibiting NF-κB activity in a murine model of asthma. Overall, our findings, coupled with the above mentioned reports, provide evidence that inflammatory signaling pathways, including MAPK (especially p38) and NF-κB, are potential targets for naringenin's anti-inflammatory effects.

C. trachomatis has a prolonged and unique developmental life cycle, which takes 24–72 h to complete after entry into target cells. This process involves lysis and reinfection of cells by the released EBs [4] after binding to their cognate cell surface receptors. Reinfection is reportedly one of the major characteristics of persistent C. trachomatis infection [4, 31], contributing to the pathogenesis of disease. The ability of naringenin to reduce cell surface receptor expression and associated inflammatory signaling pathways 48 h after infection of cells with C. trachomatis attests to naringenin's regulation of inflammatory mediators during the reinfection process. Even though we focused on selected cell surface receptors and signaling pathways in this study, we cannot dismiss the involvement of other receptors, such as the nucleotide binding site/leucine-rich repeat (NBS/LRR) protein NOD2, which recognizes C. trachomatis [49] (and our unpublished observation), or the NF-κB signaling pathway, which reportedly mediates naringenin's anti-inflammatory actions [38].

Admittedly, the precise mechanisms by which naringenin downregulates surface receptors and signaling pathways were not investigated here. We also cannot rule out the possibility that naringenin's regulatory activity may be a direct consequence of its reducing the C. trachomatis infectious load in macrophages, ultimately resulting in less induction of inflammatory mediators. Indeed, naringenin has been shown to have antibacterial activity against several pathogenic bacteria [36]. Whether or not naringenin has bactericidal activity against C. trachomatis in macrophages is the topic of our ongoing investigations.

In summary, most intracellular microorganisms, including C. trachomatis, perpetuate themselves in cells while inducing unwanted immune responses that amplify disease progression. In such scenarios, immunointervention approaches that focus on reducing unwanted host immune responses are attractive and can be viewed as alternative means to prevent or control severe inflammatory responses. Our findings presented here are, to our knowledge, the first to demonstrate that naringenin is an immunomodulator of inflammatory responses triggered by C. trachomatis in macrophages. The reduction of these inflammatory mediators by naringenin is mediated upstream by modulating the TLR2, TLR4, and CD86 macrophage surface receptors and downstream via the p38 MAPK signaling pathway. More studies are warranted to further explore the in vivo relevance of naringenin in controlling severe inflammatory responses induced not only by C. trachomatis but also by other similar pathogenic microorganisms.

---
*Source: 102457-2013-05-23.xml*
2013
# Importance and Limits of Ischemia in Renal Partial Surgery: Experimental and Clinical Research

**Authors:** Fernando P. Secin
**Journal:** Advances in Urology (2008)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2008/102461

---

## Abstract

Introduction. The objective is to determine the clinical and experimental evidence of the renal responses to warm and cold ischemia, kidney tolerability, and available practical techniques for protecting the kidney during nephron-sparing surgery. Materials and methods. Review of the English and non-English literature using MEDLINE, MD Consult, and urology textbooks. Results and discussion. There are three main mechanisms of ischemic renal injury: persistent vasoconstriction with an abnormal endothelial cell compensatory response, tubular obstruction with backflow of urine, and reperfusion injury. Controversy persists on the maximal kidney tolerability to warm ischemia (WI), which can be influenced by surgical technique, patient age, presence of collateral vascularization, integrity of the arterial bed, and so forth. Conclusions. When WI time is expected to exceed 20 to 30 minutes, especially in patients whose baseline medical characteristics put them at potentially higher, though unproven, risk of ischemic damage, local renal hypothermia should be used.

---

## Body

## 1. Introduction

Nephron-sparing surgery in the oncologic setting entails complete local resection of a renal tumor while leaving the largest possible amount of normal functioning parenchyma in the involved kidney. Different surgical techniques can be employed for performing partial nephrectomy, but all of them require adherence to the basic principles of early vascular control, avoidance of ischemic renal damage, complete tumor excision with free margins, precise closure of the collecting system, careful hemostasis, and closure with or without tamponading of the renal defect with adjacent fat, fascia, or any available artificial sealant [1, 2]. Observance of all these principles is extremely important; however, prevention of ischemic renal damage is key to the final success of the procedure. Ischemia is the leading cause of postoperative acute and chronic renal failure in patients undergoing nephron-sparing surgery, for which no specific medical treatment modality has been established to date. At the same time, surgeons need to apply transitory occlusion of the renal artery, as it not only diminishes intraoperative parenchymal bleeding but also improves visualization and facilitates access to intrarenal structures by causing the kidney to contract and by reducing renal tissue fullness. Surgeons performing this approach require an understanding of renal responses to warm ischemia (WI) and of available methods of protecting the kidney when the period of arterial occlusion exceeds normal parenchymal tolerability [3]. In order to decrease the exposure of the spared parenchyma to ischemia, the surgeon should have a complete preoperative and intraoperative assessment of the relationship of the tumor and its vascular supply to the collecting system and adjacent normal renal parenchyma [4–6]. There is no question that less ischemia is better whenever the philosophy of preserving as much functioning renal tissue as possible is followed.
This manuscript seeks to determine the clinical and experimental evidence of the renal responses to warm and cold ischemia, kidney tolerability, and available practical techniques for protecting the kidney when the period of arterial occlusion surpasses that which may be safely tolerated during nephron-sparing renal surgery.

## 2. Materials and Methods

Biomedical and related databases were queried, including MEDLINE, MD Consult, and urology textbooks. Manuscripts and library archives were retrieved from the Nathan Cummings Center, Memorial Sloan-Kettering Cancer Center, NY, USA. A Medline search, combined with additional references from non-Medline-indexed journals, included the following key words: “nephron-sparing surgery,” “partial nephrectomy,” “warm ischemia and kidney,” and “ischemia time and kidney,” as well as links to related articles. Non-English articles and letters to editors were reviewed as well. These references formed the basis of the article. Following selection and deletion based on the relevance of the subject and the importance of the studies, a library of 115 references remained.

## 3. Results and Discussion

### 3.1. Intraoperative Renal Ischemia: Pathophysiology of Injury

In recent years, there have been significant insights into the pathophysiologic process of renal ischemia [7, 8]. Ischemic insult to the kidney often results in damage to cells of the nephron and renal vasculature. Cells are lost through the processes of necrosis and apoptosis, inevitably leading to renal failure. Renal failure is characterized by a decline in glomerular filtration rate, retention of nitrogenous waste products, and perturbation of extracellular fluid volume and electrolyte and acid-base homeostasis. Renal failure is only diagnosed when these pathophysiologic perturbations are advanced enough to manifest as biochemical abnormalities in the blood. The pathophysiologic response to cell death dictates the prevailing level of renal functional impairment [9]. Therefore, a clear understanding of the extent of postischemic kidney damage and associated inflammation is needed to prevent this hitherto intractable condition, which will ultimately impact overall survival [10]. For didactic purposes, three interrelated main mechanisms through which ischemia damages the kidney are described here, based on a recent review by Abuelo [7]. The first mechanism is purely vascular, caused by persistent vasoconstriction and an abnormal compensatory response of endothelial cells. The second is obstructive: sloughed tubular epithelial cells and brush-border-membrane debris form casts that obstruct tubules, and glomerular filtrate leaks from the tubular lumen across denuded tubular walls into capillaries and the circulation (back-leak), causing a reduction in the “effective” GFR, the latter defined as the rate at which filtrate is delivered into final urine. The third involves reperfusion injury after blood flow is restored [7, 11].

#### 3.1.1. Vascular Mechanism

Both animal and human studies have found that a multi-inflammatory response is involved in ischemia/reperfusion injury of the kidney [12].
The inflammatory reaction incurred after an ischemic insult precipitates more damage to the tissue and impedes intrarenal blood flow through vasoconstriction and vascular congestion, leading to a vicious cycle [13]. This damage mainly takes place in endothelial cells of the peritubular capillaries, especially in the outer medulla, which is marginally oxygenated under normal circumstances. This oxidant injury, together with a shift in the balance of vasoactive substances toward vasoconstrictors such as endothelin, results in vasoconstriction, congestion, hypoperfusion, and expression of adhesion molecules. The expression of adhesion molecules, in turn, initiates leukocyte infiltration, augmented by proinflammatory and chemotactic cytokines generated by ischemic tubular cells [7].

Inciting stimuli induce kidney macrophages and probably renal parenchymal cells to release inflammatory cytokines, such as tumor necrosis factor-α (TNF-α) and interleukin-1 (IL-1). TNF-α and IL-1 promote renal parenchymal damage by directly inducing apoptosis in epithelial cells, recruiting neutrophils that release reactive oxygen metabolites and proteases, and upregulating adhesion receptors on endothelial cells and leukocytes [14, 15]. These cytokines also stimulate renal cortical epithelial cells to release the chemoattractant interleukin-8 [16, 17]. The arrival of additional leukocytes obstructs the microcirculation and releases more cytotoxic cytokines, reactive oxygen species, and proteolytic enzymes, which damage the tubular cells [7].

Endothelial injury results in cell swelling and enhanced expression of cell adhesion molecules. This, together with leukocyte activation, leads to enhanced leukocyte-endothelial cell interactions, which can promote injury and swelling of the endothelial cell. Endothelial swelling contributes to the production of local factors promoting vasoconstriction and adds to the effects of vasoconstriction and tubule cell metabolism by physically impeding blood flow, perpetuating the vicious cycle [18].

Heterogeneity of intrarenal blood flow contributes to the pathophysiology of ischemic renal failure. An imbalance between the vasodilator nitric oxide and the vasoconstrictor endothelin impairs medullary blood flow, especially in the outer medulla, where tubules have high oxygen requirements, resulting in cellular injury due to a mismatch between oxygen delivery and demand. Endothelial activation and injury, together with increased leukocyte-endothelial cell interactions and activation of coagulation pathways, may have a greater effect on outer medullary ischemia than arteriolar vasoconstriction, as oxygen delivery to the outer medulla can be markedly impaired despite adequate renal blood flow [18].

The arteriolar response to vasoactive substances can also be altered during endothelial injury. The basal tone of arterioles is increased in postischemic kidneys, as is their reactivity to vasoconstrictive agents. These arterioles also have decreased vasodilatory responses compared with arterioles from normal kidneys. Alterations in local levels of vasoconstrictors (angiotensin II, thromboxane A2, leukotrienes, adenosine, endothelin-1) have been implicated in abnormal vascular tone [19]. Angiotensin II seems to play a key role by activating endothelin B or prostaglandin H2-thromboxane A2 receptors.
Systemic endothelin-1 levels increase with ischemia, and administration of antiendothelin antibodies or endothelin receptor antagonists has been reported to protect against ischemia-reperfusion injury [20]. Saralasin, an angiotensin II receptor antagonist, can also attenuate the vasoconstricting effect of angiotensin II [21]. Nitric oxide, an endothelium-derived relaxing factor, plays a theoretical protective role against ischemic renal injury by means of its vasodilatory effect and by decreasing endothelin expression and secretion in the vascular endothelium. Of interest, endothelial nitric oxide synthase is inhibited during endothelial injury [22]. A combination therapy consisting of 5-aminoimidazole-4-carboxamide-1-beta-D-ribonucleoside (AICAR) and N-acetyl cysteine (NAC), drugs that inhibit the induction of proinflammatory cytokines and nitric oxide synthase and block tumor necrosis factor-alpha induced apoptotic cell death, has been shown to attenuate ischemia-reperfusion injury in a canine model of autologous renal transplantation [23]. Early studies showed no conclusive evidence that vasodilators (such as diltiazem or dopamine) or other compounds have any clinical utility in either preventing or treating ischemic renal failure in humans [24–26]. More recently, however, the highly selective dopamine type 1 agonist fenoldopam mesylate [27] and the antianginal medication trimetazidine [28] appeared to aid in restoring renal function to baseline values in patients with prolonged WI time. Further research is needed.

#### 3.1.2. Obstructive Mechanism

Normally, cells are bathed in an extracellular solution high in sodium and low in potassium. This ratio is maintained by a sodium pump (Na⁺/K⁺-ATPase) that uses much of the adenosine triphosphate (ATP) energy derived from oxidative phosphorylation. ATP is required for the cellular sodium pump to maintain a high intracellular concentration of potassium and a low concentration of sodium.
The sodium pump effectively makes Na⁺ an impermeant outside the cell that counteracts the colloidal osmotic pressure derived from intracellular proteins and other anions [29].

The ischemic insult causes a failure of oxidative phosphorylation and ATP depletion, leading to malfunctioning of the sodium pump. When the sodium pump is impaired, sodium chloride and water passively diffuse into the cells, resulting in cellular swelling and the “no-reflow” phenomenon after renal reperfusion. Cellular potassium and magnesium are lost, calcium is gained, anaerobic glycolysis and acidosis occur, and lysosomal enzymes are activated. This results in cell death. During reperfusion, hypoxanthine, a product of ATP degradation, is oxidized to xanthine with the formation of free radicals that cause further cell damage [29] (see below).

As mentioned, the mechanism whereby ischemia and oxygen depletion injure tubular cells starts with ATP depletion, which activates a number of critical alterations in metabolism, causing cytoskeletal disruption and loss of those properties that normally render the tubule cell monolayer impermeable to certain components of filtrate. Cytoskeletal disruption causes not only loss of brush-border microvilli and cell junctions but also mislocation of integrins and the sodium pump from the basal surface to the apical surface. In addition, impaired sodium reabsorption by injured tubular epithelial cells increases the sodium concentration in the tubular lumen. The increased intratubular sodium concentration polymerizes Tamm-Horsfall protein, which is normally secreted by the loop of Henle, forming a gel and contributing to cast formation. As a result, sloughed brush-border membranes and cells obstruct tubules downstream. As mentioned before, this debris forms casts that obstruct tubules, and glomerular filtrate leaks from the tubular lumen across denuded tubular walls into capillaries and the circulation (back-leak), causing a reduction in the “effective” GFR. ATP depletion also activates harmful proteases and phospholipases, which, with reperfusion, cause oxidant injury to tubular cells, the so-called reperfusion injury [7].

#### 3.1.3. Reperfusion Injury

A WI insult followed by restoration of blood flow to the ischemic tissue frequently results in a secondary reperfusion injury. Despite WI causing significant renal dysfunction, reperfusion injury has been shown to be as damaging as or even more detrimental than renal ischemia itself, producing an inflammatory response that worsens local kidney damage and leads to a systemic insult [30, 31].

Reperfusion injury can be mediated by several mechanisms, including the generation of reactive oxygen species, cellular derangement, microvessel congestion and compression, polymorphonuclear (PMN)-mediated damage, and hypercoagulation. Reperfusion, with the resulting reintroduction of molecular oxygen into constricted microvessels, leads to congestion and red cell trapping. This vascular effect can reduce renal blood flow by as much as 50% [32].

During the reperfusion period, superoxide production in the kidney is markedly enhanced by the transformation of xanthine dehydrogenase to xanthine oxidase and the increase in free electrons in mitochondria, prostaglandin H, and lipoxygenase, with the coexistence of NAD(P)H and infiltrated neutrophils. Superoxide initiates chain reactions that produce hydroxyl radicals and other reactive oxygen species (ROS), or interacts with nitric oxide (NO), produced by macrophage-inducible NO synthase, to generate the highly toxic radical peroxynitrite. These ROS- and NO-derived species consume tissue antioxidants and decrease organ reducing activity [33].

The exact magnitude of reperfusion injury is still unclear. Some authors state that the role of free radical-mediated injury in kidneys may not be as significant as in other organs, given the low relative activity of renal xanthine oxidase compared with the high endogenous activity of superoxide dismutase [29].

Notwithstanding, nicaraven (N,N′-propylenebisnicotinamide), a drug that may actively trap free radicals and prevent vascular constriction due to lipid peroxide [34], and edaravone (3-methyl-1-phenyl-2-pyrazolin-5-one, MCI-186), a synthetic free radical scavenger, have been shown in in vitro experiments to protect endothelial cells against ischemic injury in different organs, including ischemically damaged kidneys [35, 36]. Clinical studies are eagerly awaited.
Renal failure is characterized by a decline in glomerular filtration rate, retention of nitrogenous waste products, perturbation of extra cellular fluid volume, and electrolyte and acid-base homeostasis. Renal failure is only diagnosed when these pathophysiologic perturbations are advanced enough to manifest biochemical abnormalities in the blood. The pathophysiologic response to cell death dictates the prevailing level of renal functional impairment [9]. Therefore, a clear understanding of the extent of post ischemic kidney damage and associated inflammation is needed to prevent this hitherto intractable condition, which will ultimately impact on overall survival [10].For understanding and didactic purposes, three interrelated main mechanisms through which ischemia damages the kidney are herein described based on a recent review by Abuelo [7]. One mechanism is merely vascular, caused by persistent vasoconstriction and an abnormal response of endothelial cells to compensatory means. The second is obstructive, where soughed tubular epithelial cells and brush-border-membrane debris form casts that obstruct tubules, and glomerular filtrate leaks from the tubular lumen across denuded tubular walls into capillaries and the circulation (back-leak) causing a reduction in the “effective” GFR, where the latter is defined as the rate at which filtrate is delivered into final urine. The third has to do with reperfusion injury after blood flow is restored [7, 11]. ### 3.1.1. Vascular Mechanism Both animal and human studies have found that a multi-inflammatory response is involved in ischemia/reperfusion injury of the kidney [12]. The inflammatory reaction incurred after an ischemic insult precipitates more damage to the tissue and impedes intrarenal blood flow caused by vasoconstriction and vascular congestion, leading to a vicious cycle [13].This damage mainly takes place in endothelial cells of the peritubular capillaries, especially in the outer medulla, which is marginally oxygenated under normal circumstances. This oxidant injury, together with a shift in the balance of vasoactive substances toward vasoconstrictors such as endothelin, results in vasoconstriction, congestion, hypoperfusion, and expression of adhesion molecules. The expression of adhesion molecules, in turn, initiates leukocyte infiltration, augmented by proinflammatory and chemotactic cytokines generated by ischemic tubular cells [7].Inciting stimuli induce kidney macrophages and probably renal parenchymal cells to release inflammatory cytokines, such as tumor necrosis factor-α (TNF-α) and interleukin-1 (IL-1). TNF-α and IL-1 promote renal parenchymal damage by directly inducing apoptosis in epithelial cells, recruitment of neutrophils that release reactive oxygen metabolites and proteases, and up regulating adhesion receptors on endothelial cells and leukocytes [14, 15]. These cytokines also stimulate renal cortical epithelial cells to release the chemoattractant interleukin-8 [16, 17]. The arrival of additional leukocytes obstructs the microcirculation and releases more cytotoxic cytokines, reactive oxygen species, and proteolytic enzymes, which damage the tubular cells [7].Endothelial injury results in cell swelling and enhanced expression of cell adhesion molecules. This, together with leukocyte activation, leads to enhanced leukocyte-endothelial cell interactions, which can promote injury and swelling of the endothelial cell. 
Endothelial swelling contributes to the production of local factors promoting vasoconstriction and adds to the effects of vasoconstriction and tubule cell metabolism by physically impeding blood flow, perpetuating that vicious cycle [18].Heterogeneity of intrarenal blood flow contributes to the pathophysiology of ischemic renal failure. An imbalance between the vasodilator nitric oxide and the vasoconstrictor endothelin impairs medullary blood flow, especially in the outer medulla, where tubules have high oxygen requirements, resulting in cellular injury due to a mismatch between oxygen delivery and demand. Endothelial activation and injury together with increased leukocyte-endothelial cell interactions and activation of coagulation pathways may have a greater effect on outer medullary ischemia than arteriolar vasoconstriction, as there can be markedly impaired oxygen delivery to the outer medulla despite adequate renal blood flow [18].The arteriolar response to vasoactive substances can also be altered during endothelial injury. The basal tone of arterioles is increased in post ischemic kidneys as well as their reactivity to vasoconstrictive agents. These arterioles also have decreased vasodilatory responses compared with arterioles from normal kidneys. Alterations in local levels of vasoconstrictors (angiotensin II, thromboxane A2, leukotrienes, adenosine, endothelin-1) have been implicated in abnormal vascular tone [19]. Angiotensin II seems to play a key role by activating endothelin B or prostaglandin H2-thromboxane A2 receptors. Systemic endothelin-1 levels increase with ischemia, and administration of antiendothelin antibodies or endothelin receptor antagonists has been reported to protect against ischemia-reperfusion injury [20]. Saralasin, an angiotensin II receptor antagonist, could also attenuate angiotensin II vasoconstricting effect [21]. Nitric oxide, an endothelial-derived relaxing factor, plays a theoretical protective role against ischemic renal injury, by means of its vasodilatory effect and by decreasing endothelin expression and secretion in the vascular endothelium. Of interest, endothelial nitric oxide synthase is inhibited during endothelial injury [22]. A combination therapy consisting of 5-aminoimidazole-4-carboxamide-1-beta-D-ribonucleoside (AICAR) and N-acetyl cysteine (NAC), drugs that inhibit the induction of proinflammatory cytokines and nitric oxide synthase, and block tumor necrosis factor-alpha induced apoptotic cell death, has shown to attenuate ischemia-reperfusion injury in a canine model of autologous renal transplantation [23]. Early studies showed no conclusive evidence that vasodilators (such as diltiazem or dopamine) or other compounds have any clinical utility in either preventing or treating ischemic renal failure in humans thus far [24–26]. More recently, however, the highly selective dopamine type 1 agonist fenoldopam mesylate [27] and the antianginal medication trimetazidine [28] appeared to aid in restoring renal function to baseline values in patients with prolonged WI time. Further research is needed. ### 3.1.2. Obstructive Mechanism Normally, the cells are bathed in an extra cellular solution high in sodium and low in potassium. This ratio is maintained by a sodium pump (Na+-K + ATPase pump) which uses much of the adenosine triphosphate (ATP) energy derived from oxidative phosphorylation. ATP is required for the cellular sodium pump to maintain a high intracellular concentration of potassium and a low concentration of sodium. 
The sodium pump effectively makes Na+ an impermeant outside the cell that counteracts the colloidal osmotic pressure derived from intracellular proteins and other anions [29].

The ischemic insult causes a failure of oxidative phosphorylation and ATP depletion, leading to malfunctioning of the sodium pump. When the sodium pump is impaired, sodium chloride and water passively diffuse into the cells, resulting in cellular swelling and the “no-reflow” phenomenon after renal reperfusion. Cellular potassium and magnesium are lost, calcium is gained, anaerobic glycolysis and acidosis occur, and lysosomal enzymes are activated. This results in cell death. During reperfusion, hypoxanthine, a product of ATP degradation, is oxidized to xanthine with the formation of free radicals that cause further cell damage [29] (see the discussion of reperfusion injury below).

As mentioned, the mechanism whereby ischemia and oxygen depletion injure tubular cells starts with ATP depletion, which triggers a number of critical alterations in metabolism, causing cytoskeletal disruption and loss of those properties that normally render the tubule cell monolayer impermeable to certain components of filtrate. Cytoskeletal disruption causes not only loss of brush-border microvilli and cell junctions but also mislocation of integrins and the sodium pump from the basal surface to the apical surface.

In addition, impaired sodium reabsorption by injured tubular epithelial cells increases the sodium concentration in the tubular lumen. The increased intratubular sodium concentration polymerizes Tamm-Horsfall protein, which is normally secreted by the loop of Henle, forming a gel and contributing to cast formation. As a result, sloughed brush-border membranes and cells obstruct tubules downstream. As mentioned before, this debris forms casts that obstruct tubules, and glomerular filtrate leaks from the tubular lumen across denuded tubular walls into capillaries and the circulation (back-leak), causing a reduction in the “effective” GFR. ATP depletion also activates harmful proteases and phospholipases, which, with reperfusion, cause oxidant injury to tubular cells, the so-called reperfusion injury [7].

### 3.1.3. Reperfusion Injury

A WI insult followed by restoration of blood flow to the ischemic tissue frequently results in a secondary reperfusion injury. Although WI itself causes significant renal dysfunction, reperfusion injury has been shown to be as damaging as, or even more detrimental than, renal ischemia itself, producing an inflammatory response that worsens local kidney damage and leads to a systemic insult [30, 31].

Reperfusion injury can be mediated by several mechanisms, including the generation of reactive oxygen species, cellular derangement, microvessel congestion and compression, polymorphonuclear (PMN) leukocyte-mediated damage, and hypercoagulation. Reperfusion, with the resulting reintroduction of molecular oxygen into constricted microvessels, leads to congestion and red cell trapping. This vascular effect can reduce renal blood flow by as much as 50% [32].

During the reperfusion period, superoxide production in the kidney is markedly enhanced by the transformation of xanthine dehydrogenase to xanthine oxidase and by the increase in free electrons in mitochondria, prostaglandin H synthase, and lipoxygenase in the coexistence of NAD(P)H and infiltrated neutrophils. Superoxide initiates further chain reactions, producing hydroxyl radicals and other reactive oxygen species (ROS), or interacts with nitric oxide (NO), which is produced by macrophage-inducible NO synthase, generating the highly toxic radical peroxynitrite. These ROS- and NO-derived species consume tissue antioxidants and decrease the reducing activity of the organ [33]; the principal reactions are sketched below.
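The cascade just described is standard free-radical chemistry and can be summarized as follows (textbook reactions, not equations taken from the cited studies):

```latex
% Single-electron reduction of O2 (e.g., by xanthine oxidase) yields superoxide:
\[ \mathrm{O_2} + e^{-} \longrightarrow \mathrm{O_2^{\bullet-}} \]
% Dismutation (spontaneous or via superoxide dismutase) yields hydrogen peroxide:
\[ 2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^{+}} \longrightarrow \mathrm{H_2O_2} + \mathrm{O_2} \]
% Fenton chemistry generates the highly reactive hydroxyl radical:
\[ \mathrm{Fe^{2+}} + \mathrm{H_2O_2} \longrightarrow \mathrm{Fe^{3+}} + \mathrm{OH^{\bullet}} + \mathrm{OH^{-}} \]
% Superoxide combines with nitric oxide to form peroxynitrite:
\[ \mathrm{O_2^{\bullet-}} + \mathrm{NO^{\bullet}} \longrightarrow \mathrm{ONOO^{-}} \]
```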
The exact magnitude of reperfusion injury is still unclear. Some authors state that the role of free radical-mediated injury in kidneys may not be as significant as in other organs, given the low relative activity of renal xanthine oxidase compared with the high endogenous activity of superoxide dismutase [29].

Notwithstanding, nicaraven (N,N′-propylenebisnicotinamide), a drug that may actively trap free radicals and prevent vascular constriction due to lipid peroxide [34], and edaravone (3-methyl-1-phenyl-2-pyrazolin-5-one, MCI-186), a synthetic free radical scavenger, have been shown in in vitro experiments to protect endothelial cells against ischemic injury in different organs, including ischemically damaged kidneys [35, 36]. Clinical studies are eagerly awaited.
## 4. For How Long Can the Kidney Tolerate Warm Ischemia?

Despite several animal studies [37–39] and clinical reports [40, 41] demonstrating kidney tolerance to warm ischemia times beyond 30 minutes, concern remains regarding the potential for full renal function recovery after this time period [42]. The rigid 30-minute cutoff has been questioned by some authors [43] on the grounds that kidneys harvested from non-heart-beating donors (NHBDs) have shown favorable recovery of renal function in transplanted kidneys that sustained warm ischemia times well over 30 minutes [44–46]. Nishikido et al. [45] found that the risk factors for significant graft loss were a WI time of more than 20 minutes, donor age above 50 years, and donor serum creatinine at admission above 1.0 mg/dL. Today, most non-heart-beating donor programs exclude donors with a WI time exceeding 40 minutes [45, 47–49].

Although laparoscopic surgeons are gaining further experience and are more ambitious in performing partial nephrectomy for larger and deeper tumors, the 30-minute cutoff still remains the accepted safe limit beyond which irreversible kidney damage occurs in the absence of renal cooling [50–52].

Although early observations in dog models showed that there may be substantial variation in kidney tolerance up to two or three hours of ischemia [53], there is no doubt that the extent of renal damage after transitory arterial occlusion depends chiefly on the duration of the ischemic insult [25, 54, 55]. The literature also demonstrates that, even within a tolerable period of WI, the longer the WI time, the longer it takes for the kidney to recover (or approach) its preoperative function [55]. Notwithstanding, the maximum tolerable limit of renal warm ischemia time that can render complete functional recovery remains to be established in humans.

The study by Ward [56] is commonly cited by opinion leaders to support a maximum 30-minute tolerance of the kidney to WI. That study showed in dogs that warm ischemic intervals of up to 30 minutes can be sustained with eventual full recovery of renal function. However, the study was not strictly designed to establish the most accurate length of time a kidney can sustain reversible damage following ischemic injury. What the authors actually concluded was that no additional protection against ischemia could be gained by cooling below 15 degrees centigrade. Thus, they recommended 15 degrees as the optimum temperature for clinical renal hypothermia.

Research in rats, pigs, and monkeys has also been conducted by other investigators. Laven et al. [38] found renal resilience to WI beyond the traditionally accepted 30 minutes in a solitary kidney pig model.
Prolonged renal WI time increased the incidence of renal dysfunction during the initial 72 hours after the ischemic insult. However, by 2 weeks after the WI insult, renal function had returned to baseline in the 30-, 60-, and 90-minute WI groups. In contrast, the same study group found that a prolonged WI time of 120 minutes produced significant loss of renal function and mortality [43]. Martin et al. [57] demonstrated potential kidney WI tolerability of up to 35 minutes in a single-kidney monkey model. Studies by Haisch et al. [58] in dog models suggested that the window of reversible WI injury could be as long as 2 hours after the insult.

The question remains whether findings in animal studies can be extrapolated to humans. One limitation has to do with finding a reliable method to differentiate between ischemic injury and the loss of renal volume secondary to tumor excision. The ideal method to evaluate residual function in the operated kidney is still undefined. While most authors use serum creatinine assays or 99mTc-labeled mercaptoacetyltriglycine (MAG3) renal scintigraphy with split renal function, others, like Abukora et al. [59], proposed estimation of parenchymal transit time (PTT) as a good indicator of ischemic injury. Transit time is the time that a tracer remains within the kidney or within a part of the kidney. However, the international consensus committee on renal transit time, from the subcommittee of the International Scientific Committee of Radionuclides in Nephrourology, recently concluded that the value of delayed transit remains controversial, and the committee recommended further research [60].

Bhayani et al. [40] evaluated 118 patients with a single, unilateral, sporadic renal tumor and a normal contralateral kidney, who underwent laparoscopic partial nephrectomy (LPN), to assess the effect of variable durations of WI on long-term renal function. Patients were divided into 3 groups based on WI time: group 1, no renal occlusion (n = 42); group 2, WI < 30 minutes (n = 48); and group 3, WI > 30 minutes (n = 28). At a median followup of 28 months (minimum followup of 6 months), median creatinine had not increased significantly postoperatively, and none of the 118 patients progressed to renal insufficiency or required dialysis after LPN. The authors concluded that WI times of up to 55 minutes did not significantly influence long-term renal function after LPN. A main limitation of this study is that all patients had a normal contralateral kidney, so that creatinine values 6 months postoperatively could have reflected contralateral kidney function.

A similar study was conducted by Shekarriz et al. [61] on a substantially lower number of patients (n = 17); however, the authors assessed kidney function using a 99mTc-labeled diethylenetriaminepentaacetic acid (DTPA) renal scan with differential function 1 month before and 3 months after surgery in all patients. They found that all their patients preserved adequate renal function in the affected kidney following temporary hilar clamping of up to 44 minutes (the mean WI time was 22.5 minutes). In line with these findings, Kane et al. [62] showed that temporary arterial occlusion did not appear to affect short-term renal function (mean followup: 130 days) in a series of laparoscopic partial nephrectomies (LPNs) with a mean WI of 43 minutes (range: 25–65 minutes). Serum creatinine itself is, however, an insensitive endpoint, as the sketch below illustrates.
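To see why a seemingly modest creatinine rise can mask a substantial loss of clearance (the limitation noted above for patients with a normal contralateral kidney), here is a minimal sketch using the classic Cockcroft-Gault estimate; the formula is standard, but the patient numbers are purely illustrative and not drawn from the cited studies:

```python
def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, female: bool = False) -> float:
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault formula."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Hypothetical 60-year-old, 80 kg male undergoing partial nephrectomy:
baseline = cockcroft_gault(60, 80, 1.0)  # ~88.9 mL/min at creatinine 1.0 mg/dL
postop = cockcroft_gault(60, 80, 1.5)    # ~59.3 mL/min at creatinine 1.5 mg/dL

# A 0.5 mg/dL rise in creatinine corresponds here to roughly a one-third
# drop in estimated clearance; with a healthy contralateral kidney, even
# this rise may fail to appear despite unilateral ischemic damage.
print(f"baseline ~{baseline:.0f} mL/min, postoperative ~{postop:.0f} mL/min")
```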
Desai et al. [50] retrospectively assessed the effect of WI on renal function after LPN for tumor and evaluated the influence of various risk factors on renal function in 179 patients under WI conditions. No kidney was lost because of ischemic sequelae with clamping of the renal artery and vein for up to 55 minutes. The mean WI time was 31 minutes. Nonetheless, the authors concluded that advancing age and pre-existing azotaemia increased the risk of renal dysfunction after LPN, especially when the warm ischemia exceeded 30 minutes.

In contrast, Kondo et al. [63] found that patient age did not influence residual function in patients undergoing partial nephrectomy, while tumor size was the only significant factor that inversely correlated with the relative 99mTc-labeled dimercaptosuccinic acid (DMSA) uptake.

Porpiglia et al. [52] assessed kidney damage in 18 patients 1 year after LPN with a WI time between 31 and 60 minutes. The authors evaluated the contribution of the operated kidney to overall renal function by radionuclide scintigraphy with 99mTc-MAG3. They observed an initial significant drop of approximately 11% in the operated kidney’s contribution to overall function, followed by a constant and progressive recovery that never reached the preoperative value (42.8% at 1 year versus 48.3% before surgery). Based on logistic regression analysis, the authors stated that the loss of function of the operated kidney depended mostly on the WI time and, less importantly, on the maximum thickness of resected healthy parenchyma. Unfortunately, the full regression model, which included 6 variables to predict an event in only 18 patients, is not shown in the manuscript.

Recently, Thompson et al. [42] performed a retrospective review of 537 patients with solitary kidneys who underwent open nephron-sparing surgery by more than 20 different surgeons from the Cleveland Clinic, Ohio, USA, and the Mayo Clinic, Minnesota, USA, to evaluate the renal effects of vascular clamping in patients with solitary kidneys. After adjusting for tumor complexity and tumor size, the authors found in a subsequent analysis [64] that patients with more than 20 minutes of WI were significantly more likely to have acute renal failure (24% versus 6%, p = 0.002) compared to those requiring less than 20 minutes, and this risk remained significant even after adjusting for tumor size (odds ratio 3.4, p = 0.025). Additionally, patients with more than 20 minutes of WI were significantly more likely to progress to chronic renal failure (odds ratio 2.9, p = 0.008) and were more than 4 times more likely to experience a postoperative increase in creatinine of greater than 0.5 mg/dL (odds ratio 4.3, p = 0.001) compared to those requiring less than 20 minutes of WI. After adjusting for tumor size, the risk of chronic renal failure (odds ratio 2.6, p = 0.03) and of an increase in creatinine of greater than 0.5 mg/dL (odds ratio 4.6, p = 0.002) remained statistically significant if more than 20 minutes of WI were needed. The authors concluded that WI should be restricted to less than 20 minutes when technically feasible, especially in patients with solitary kidneys. (How such odds ratios relate to the reported raw proportions is sketched below.)
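As a back-of-envelope check on those figures, the unadjusted odds ratio implied by the raw proportions of acute renal failure can be computed directly; note that it differs from the reported value of 3.4 because the latter is adjusted for tumor size:

```latex
% Unadjusted odds ratio implied by 24% vs 6% acute renal failure
% (>20 min of WI versus <20 min of WI):
\[ \mathrm{OR}_{\mathrm{unadj}}
   = \frac{p_{1}/(1-p_{1})}{p_{0}/(1-p_{0})}
   = \frac{0.24/0.76}{0.06/0.94}
   \approx \frac{0.316}{0.064}
   \approx 4.9 \]
```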
## 5. What Are the Factors Affecting Tolerance to Warm Ischemia?

It often goes without saying that there may be individual variation in WI tolerance. Baldwin et al. [37] observed that some of 16 solitary porcine kidneys showed a rapid return to a dark red color, while other animals demonstrated minimal color change during the several minutes following complete hilar clamp removal, despite all of them receiving a similar surgical technique and ischemia time. Beyond this potential for individual variation, multiple other factors can affect tolerance to WI; these are described below.

It has been suggested that patients with solitary kidneys might safely tolerate longer periods of ischemia than patients with both kidneys, as the result of the development of a collateral vascular supply [65–67]; however, the presence of vascular collateralization secondary to vascular occlusive disease [68], or of other clinical entities like hypertension [69], should warn the surgeon of the possibility of a kidney less resistant to WI injury, owing to the likely presence of panvascular disease and/or occult chronic renal insufficiency.

Another factor that can influence ischemic damage is the method employed to achieve vascular control of the kidney. When technically possible, depending on the size and location of the tumor, it is helpful to leave the renal vein patent throughout the operation. This measure has been proven to decrease intraoperative renal ischemia and, by allowing venous backbleeding, to facilitate hemostasis by enabling identification of small transected renal veins [1–3, 5].

Animal studies have shown that functional impairment is least when the renal artery alone is occluded. Although some authors found no difference [70], simultaneous occlusion of the renal artery and vein for an equivalent time interval is more damaging because it prevents, as mentioned, retrograde perfusion of the kidney through the renal vein and may also produce venous congestion of the kidney [2, 3, 71–73]. However, this benefit may not be observed in patients undergoing LPN, since the pressure of the pneumoperitoneum may cause partial occlusion of the renal vein, thus negating the advantage of clamping the renal artery only [72].

Intermittent clamping of the renal artery with short periods of recirculation may also be more damaging than continuous arterial occlusion, possibly because of the release and trapping of damaging vasoconstrictor agents within the kidney [39, 55, 71, 74–77].

Manual (or instrumental) compression of the kidney parenchyma to control intraoperative hemorrhage (as an alternative to clamping of the pedicle) has the theoretical advantages of avoiding WI of the normal parenchyma while allowing the surgeon to operate in an almost bloodless field, something that could be particularly useful for peripherally located tumors. Although animal studies have shown that renal parenchymal compression may be more deleterious than simple arterial occlusion [71, 76], this technique has recently been “resuscitated” by some authors both in open kidney surgery [78–82] and in the laparoscopic setting [83].

When the surgeon anticipates a WI time exceeding the “classical” 30 minutes, local renal hypothermia is used to protect against ischemic renal injury. Hypothermia has been the most effective and universally used means of protecting the kidney from the ischemic insult. Hypothermia reduces basal cell metabolism and the energy-dependent metabolic activity of cortical cells, with a resultant decrease in the consumption of both oxygen and ATP [84–86]; a rough quantitative illustration follows.
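The magnitude of this metabolic slowing can be approximated with the standard Q10 temperature coefficient; the Q10 value of 2 used here is a common physiological assumption, not a figure from the cited references:

```latex
% Relative metabolic rate at temperature T (in degrees centigrade),
% referenced to 37 degrees, under the standard Q10 approximation:
\[ \frac{R(T)}{R(37)} = Q_{10}^{\,(T-37)/10} \]
% With an assumed Q10 of 2:
%   T = 20:  2^{-1.7} \approx 0.31  (about 31% of baseline metabolism)
%   T = 15:  2^{-2.2} \approx 0.22  (about 22% of baseline metabolism)
```

On this rough basis, a kidney cooled to 15–20 degrees centigrade should tolerate roughly three to four times as long an ischemic interval as at body temperature, which is broadly consistent with the 2 to 3 hours of protection quoted below.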
There are multiple ways of achieving hypothermia. Surrounding the fully mobilized kidney with crushed ice (ice slush) is the most frequently used technique because of its ease and simplicity [87, 88]. When using ice slush to reduce kidney temperature, it is recommended to keep the entire kidney covered with ice for 10 to 15 minutes immediately after occluding the renal artery and before commencing the resection of the tumor, in order to allow core renal temperature to decrease to approximately 20 degrees centigrade or less [2]. Mannitol, with or without the addition of furosemide, should be administered intravenously 5 to 15 minutes before renal arterial clamping, as it increases renal plasma flow, decreases intrarenal vascular resistance and intracellular edema, and promotes an osmotic diuresis when renal circulation is restored [89]. Regular use of heparin to prevent intrarenal vascular thrombosis has not been found to be useful [2, 3, 56].

Methods other than ice slush for achieving renal hypothermia have also been explored, including application of ice slurry [90, 91], antegrade perfusion of the renal artery either via preoperative renal artery catheterization [92] or via intraoperative renal artery cannulation [93], and retrograde perfusion of the collecting system with cold solutions [94, 95] or near-freezing saline irrigation delivered with a standard irrigator-aspirator [96], among others, some of them used particularly in the laparoscopic setting. Very few studies have compared kidney cooling techniques [97–100]; however, hypothermia achieved by properly applying ice to the renal surface seems to be equivalent to hypothermia by perfusion [98]. Perfusion of the kidney with a cold solution instilled via the renal artery not only carries a theoretical risk of tumor dissemination but also requires the participation of an interventional radiology team to perform preoperative renal artery catheterization, adding complexity and potential complications to the procedure [3]. On the other hand, continuous renal perfusion might have the advantage of providing a more homogeneous and effective hypothermia for a more extended period of time [99, 100]. It is generally accepted, based on data extrapolated from the kidney stone literature, that adequate hypothermia provides up to 2 to 3 hours of renal protection from circulatory arrest [99, 101–104].

Needless to say, generous preoperative and intraoperative hydration, prevention of intraoperative hypotension, and avoidance of unnecessary manipulation or traction on the renal artery, as well as the aforementioned administration of mannitol, are necessary to keep the kidney adequately perfused before and after the ischemic insult.

Ischemic preconditioning (IP) has emerged as a powerful method of ameliorating ischemia/reperfusion injury not only in the myocardium (as initially described) [105] but also in other organs, including the kidney. IP is a physiologic phenomenon by which cells develop defense strategies that allow them to survive in a hypoxic environment. The original IP hypothesis stated that multiple brief ischemic episodes applied to an organ would actually protect it (originally the myocardium) during a subsequent sustained ischemic insult, so that, in effect, ischemia could be exploited to protect that organ (originally the heart) from ischemic injury [105]. The “preconditioned” cells would become more tolerant to ischemia by adjusting their energy balance to a new, lower steady-state equilibrium.
Specifically, preconditioned tissues exhibit reduced energy requirements, altered energy metabolism, better electrolyte homeostasis, and genetic reorganization, giving rise to the concept of “ischemia tolerance.” IP also induces “reperfusion tolerance,” with fewer reactive oxygen species and activated neutrophils released, reduced apoptosis, and better microcirculatory perfusion compared to nonpreconditioned tissue. Systemic reperfusion injury is also diminished by preconditioning [31]. A review by Pasupathy and Homer-Vanniasinkam [31] showed that IP utilizes endogenous mechanisms in skeletal muscle, liver, lung, kidney, intestine, and brain in animal models to convey varying degrees of protection from reperfusion injury. To date, there are few human studies, but some reports suggest that human liver, lung, and skeletal muscle acquire similar protection after IP. IP appears to be ubiquitous, but more research is required to fully translate these findings to the clinical arena.

Some authors propose that, during laparoscopy, the increase of intra-abdominal pressure due to the pneumoperitoneum may create an IP-like situation that might increase kidney tolerance to subsequent WI and reduce tissue injury [106–110]. For this reason, it might theoretically be possible to increase WI time during LPN, compared to open surgery, something which is still very far from being demonstrated [30, 109–111].

In contrast, other studies have expressed some concern about the potential harm of pneumoperitoneum and increased intra-abdominal pressure (IAP) to kidney function. Several experimental animal studies have investigated the effect of pneumoperitoneum on renal function. While some authors demonstrated that increased IAP by insufflation of CO2 gas resulted in decreased renal blood flow that may lead to ischemia and a subsequently decreased glomerular filtration rate [112], others denied such an effect [37, 113].

Kirsch et al. [112] showed a decrease in urine output and GFR with increasing IAP. A pneumoperitoneum of 15 mmHg for 4 hours resulted in a decrease in renal blood flow to 70% of baseline. Even IAPs of 4 and 10 mmHg resulted in reductions of the renal circulation of 34% and 41%, respectively. Although the decreased urinary output during prolonged IAP of greater than or equal to 15 mmHg in the animal model was associated with a corresponding decrease in renal vein flow, it did not appear to be associated with any permanent renal derangement or any transient histological changes [114]. After the release of the pneumoperitoneum or pneumoretroperitoneum, renal function and urine output return to normal with no long-term sequelae, even in patients with pre-existing renal disease [115].

Lind et al. [113] found that a WI time of 20 minutes did not impair graft function and histomorphology during 1 year of followup after renal transplantation in a syngeneic rat model. Most importantly, WI in combination with pneumoperitoneum did not result in an additive negative effect on long-term graft function.

In addition, Baldwin et al. [37] observed that the temporary serum creatinine elevation evident after 60 and 90 minutes of ischemia normalized within 7 days in 16 farm pigs that had been nephrectomized 14 days prior to the laparoscopically applied ischemic insult. No difference from the controls was noted in those pigs receiving 30 minutes of ischemia during the laparoscopic procedure. Of note, insufflation had been maintained for 150 minutes at 15 mmHg in all animals. Those findings suggested that, in laparoscopic renal surgery, WI times of up to 90 minutes (and a pneumoperitoneum of up to 150 minutes) might be well tolerated and followed by complete renal recovery. A simple pressure-gradient argument, sketched below, helps explain why even moderate IAP can measurably reduce filtration.
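One hedged way to rationalize the Kirsch et al. observations is the filtration-gradient approximation used in the abdominal compartment syndrome literature; it is imported from that literature for illustration, not a relation derived in the studies cited here:

```latex
% Abdominal perfusion pressure (APP) and renal filtration gradient (FG)
% as approximated in the abdominal compartment syndrome literature;
% MAP = mean arterial pressure, IAP = intra-abdominal pressure.
% IAP both reduces inflow pressure and raises tubular back-pressure:
\[ \mathrm{APP} = \mathrm{MAP} - \mathrm{IAP}, \qquad
   \mathrm{FG} = \mathrm{MAP} - 2\,\mathrm{IAP} \]
% Example: at MAP = 90 mmHg, raising IAP from 0 to 15 mmHg lowers the
% estimated filtration gradient from 90 to 60 mmHg (a one-third reduction).
```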
The reader is referred to the excellent review by Dunn and McDougall [115] for further information on the impact of pneumoperitoneum on renal physiology.

## 6. Conclusions

The maximal duration of WI allowable before the onset of irreversible renal damage continues to be a topic of debate, irrespective of the surgical approach. In addition, there seems to be variation among patients, possibly related to surgical technique, patient age, presence of collateral vascularization, and integrity of the arterial bed, among other factors. Unfortunately, no method exists for preoperative prediction or intraoperative monitoring of renal injury. Surgeons should make every effort to keep warm ischemia time as short as possible. When WI time is expected to exceed 20 to 30 minutes, especially in patients whose baseline medical characteristics put them at potentially higher, though unproven, risk of ischemic damage, the time-tested safeguard has been renal hypothermia, regardless of what the exact time limit may be.
102461-2008-07-15_102461-2008-07-15.md
59,475
Importance and Limits of Ischemia in Renal Partial Surgery: Experimental and Clinical Research
Fernando P. Secin
Advances in Urology (2008)
Medical & Health Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2008/102461
102461-2008-07-15.xml
--- ## Abstract Introduction. The objective is to determine the clinical and experimental evidences of the renal responses to warm and cold ischemia, kidney tolerability, and available practical techniques of protecting the kidney during nephron-sparing surgery. Materials and methods. Review of the English and non-English literature using MEDLINE, MD Consult, and urology textbooks. Results and discussion. There are three main mechanisms of ischemic renal injury, including persistent vasoconstriction with an abnormal endothelial cell compensatory response, tubular obstruction with backflow of urine, and reperfusion injury. Controversy persists on the maximal kidney tolerability to warm ischemia (WI), which can be influenced by surgical technique, patient age, presence of collateral vascularization, indemnity of the arterial bed, and so forth. Conclusions. When WI time is expected to exceed from 20 to 30 minutes, especially in patients whose baseline medical characteristics put them at potentially higher, though unproven, risks of ischemic damage, local renal hypothermia should be used. --- ## Body ## 1. Introduction Nephron-sparing surgery in the oncologic setting entails complete local resection of a renal tumor while leaving the largest possible amount of normal functioning parenchyma in the involved kidney. Different surgical techniques can be employed for performing partial nephrectomy, but all of them require adherence to basic principles of early vascular control, avoidance of ischemic renal damage with complete tumor excision with free margins, precise closure of the collecting system, careful hemostasis, and closure with or without tamponading of the renal defect with adjacent fat, fascia, or any available artificial sealant [1, 2].Observance of all these principles is extremely important, however, prevention of ischemic renal damage is a key to the final success of the procedure. Ischemia is the leading cause of postoperative acute and chronic renal failure in patients undergoing nephron sparing surgery, for which no specific medical treatment modality has been established to date.By the same token, surgeons need to apply transitory occlusion of the renal artery as it not only diminishes intraoperative parenchymal bleeding but also improves visualization and facilitates access to intrarenal structures by causing the kidney to contract and by reducing renal tissue fullness. Surgeons performing this approach require an understanding of renal responses to warm ischemia (WI) and available methods of protecting the kidney when the period of arterial occlusion exceeds normal parenchyma tolerability [3].In order to decrease the exposure of the spared parenchyma to ischemia, the surgeon should have a complete preoperative and intraoperative assessment of the relationship of the tumor and its vascular supply to the collecting system and adjacent normal renal parenchyma [4–6].There is no question that the less the better, whenever the philosophy to preserve as much functioning renal tissue as possible is followed. This manuscript seeks to determine the clinical and experimental evidences of the renal responses to warm and cold ischemia, kidney tolerability, and available practical techniques of protecting the kidney when the period of arterial occlusion surpasses that which may be safely tolerated during renal nephron sparing surgery. ## 2. Material and Methods Biomedical and related databases were queried including MEDLINE, MD Consult, and urology textbooks. 
Manuscripts and library archives were retrieved from the Nathan Cummings Center, Memorial Sloan-Kettering Cancer Center, NY, USA.A Medline search in combination with additional references of non-Medline-indexed journals included the following key words: “nephron-sparing surgery,” “partial nephrectomy,” “warm ischemia and kidney,” and “ischemia time and kidney,” as well as links to related articles. Non-English articles and letters to editors were reviewed as well. These references formed the basis of the article. Following selection and deletion based on relevance of the subject and importance of the studies, a library of 115 references remained. ## 3. Results and Discussion ### 3.1. Intraoperative Renal Ischemia: Pathophysiology of Injury In recent years, there have been significant insights into the pathophysiologic process of renal ischemia [7, 8]. Ischemic insult to the kidney often results in damage to cells of nephron and renal vasculature. Cells are lost through the processes of necrosis and apoptosis, inevitably leading to renal failure. Renal failure is characterized by a decline in glomerular filtration rate, retention of nitrogenous waste products, perturbation of extra cellular fluid volume, and electrolyte and acid-base homeostasis. Renal failure is only diagnosed when these pathophysiologic perturbations are advanced enough to manifest biochemical abnormalities in the blood. The pathophysiologic response to cell death dictates the prevailing level of renal functional impairment [9]. Therefore, a clear understanding of the extent of post ischemic kidney damage and associated inflammation is needed to prevent this hitherto intractable condition, which will ultimately impact on overall survival [10].For understanding and didactic purposes, three interrelated main mechanisms through which ischemia damages the kidney are herein described based on a recent review by Abuelo [7]. One mechanism is merely vascular, caused by persistent vasoconstriction and an abnormal response of endothelial cells to compensatory means. The second is obstructive, where soughed tubular epithelial cells and brush-border-membrane debris form casts that obstruct tubules, and glomerular filtrate leaks from the tubular lumen across denuded tubular walls into capillaries and the circulation (back-leak) causing a reduction in the “effective” GFR, where the latter is defined as the rate at which filtrate is delivered into final urine. The third has to do with reperfusion injury after blood flow is restored [7, 11]. #### 3.1.1. Vascular Mechanism Both animal and human studies have found that a multi-inflammatory response is involved in ischemia/reperfusion injury of the kidney [12]. The inflammatory reaction incurred after an ischemic insult precipitates more damage to the tissue and impedes intrarenal blood flow caused by vasoconstriction and vascular congestion, leading to a vicious cycle [13].This damage mainly takes place in endothelial cells of the peritubular capillaries, especially in the outer medulla, which is marginally oxygenated under normal circumstances. This oxidant injury, together with a shift in the balance of vasoactive substances toward vasoconstrictors such as endothelin, results in vasoconstriction, congestion, hypoperfusion, and expression of adhesion molecules. 
The expression of adhesion molecules, in turn, initiates leukocyte infiltration, augmented by proinflammatory and chemotactic cytokines generated by ischemic tubular cells [7].Inciting stimuli induce kidney macrophages and probably renal parenchymal cells to release inflammatory cytokines, such as tumor necrosis factor-α (TNF-α) and interleukin-1 (IL-1). TNF-α and IL-1 promote renal parenchymal damage by directly inducing apoptosis in epithelial cells, recruitment of neutrophils that release reactive oxygen metabolites and proteases, and up regulating adhesion receptors on endothelial cells and leukocytes [14, 15]. These cytokines also stimulate renal cortical epithelial cells to release the chemoattractant interleukin-8 [16, 17]. The arrival of additional leukocytes obstructs the microcirculation and releases more cytotoxic cytokines, reactive oxygen species, and proteolytic enzymes, which damage the tubular cells [7].Endothelial injury results in cell swelling and enhanced expression of cell adhesion molecules. This, together with leukocyte activation, leads to enhanced leukocyte-endothelial cell interactions, which can promote injury and swelling of the endothelial cell. Endothelial swelling contributes to the production of local factors promoting vasoconstriction and adds to the effects of vasoconstriction and tubule cell metabolism by physically impeding blood flow, perpetuating that vicious cycle [18].Heterogeneity of intrarenal blood flow contributes to the pathophysiology of ischemic renal failure. An imbalance between the vasodilator nitric oxide and the vasoconstrictor endothelin impairs medullary blood flow, especially in the outer medulla, where tubules have high oxygen requirements, resulting in cellular injury due to a mismatch between oxygen delivery and demand. Endothelial activation and injury together with increased leukocyte-endothelial cell interactions and activation of coagulation pathways may have a greater effect on outer medullary ischemia than arteriolar vasoconstriction, as there can be markedly impaired oxygen delivery to the outer medulla despite adequate renal blood flow [18].The arteriolar response to vasoactive substances can also be altered during endothelial injury. The basal tone of arterioles is increased in post ischemic kidneys as well as their reactivity to vasoconstrictive agents. These arterioles also have decreased vasodilatory responses compared with arterioles from normal kidneys. Alterations in local levels of vasoconstrictors (angiotensin II, thromboxane A2, leukotrienes, adenosine, endothelin-1) have been implicated in abnormal vascular tone [19]. Angiotensin II seems to play a key role by activating endothelin B or prostaglandin H2-thromboxane A2 receptors. Systemic endothelin-1 levels increase with ischemia, and administration of antiendothelin antibodies or endothelin receptor antagonists has been reported to protect against ischemia-reperfusion injury [20]. Saralasin, an angiotensin II receptor antagonist, could also attenuate angiotensin II vasoconstricting effect [21]. Nitric oxide, an endothelial-derived relaxing factor, plays a theoretical protective role against ischemic renal injury, by means of its vasodilatory effect and by decreasing endothelin expression and secretion in the vascular endothelium. Of interest, endothelial nitric oxide synthase is inhibited during endothelial injury [22]. 
A combination therapy consisting of 5-aminoimidazole-4-carboxamide-1-beta-D-ribonucleoside (AICAR) and N-acetyl cysteine (NAC), drugs that inhibit the induction of proinflammatory cytokines and nitric oxide synthase, and block tumor necrosis factor-alpha induced apoptotic cell death, has shown to attenuate ischemia-reperfusion injury in a canine model of autologous renal transplantation [23]. Early studies showed no conclusive evidence that vasodilators (such as diltiazem or dopamine) or other compounds have any clinical utility in either preventing or treating ischemic renal failure in humans thus far [24–26]. More recently, however, the highly selective dopamine type 1 agonist fenoldopam mesylate [27] and the antianginal medication trimetazidine [28] appeared to aid in restoring renal function to baseline values in patients with prolonged WI time. Further research is needed. #### 3.1.2. Obstructive Mechanism Normally, the cells are bathed in an extra cellular solution high in sodium and low in potassium. This ratio is maintained by a sodium pump (Na+-K + ATPase pump) which uses much of the adenosine triphosphate (ATP) energy derived from oxidative phosphorylation. ATP is required for the cellular sodium pump to maintain a high intracellular concentration of potassium and a low concentration of sodium. The sodium pump effectively makes Na+ an impermeant outside the cell that counteracts the colloidal osmotic pressure derived from intracellular proteins and other anions [29].The ischemic insult causes a failure of oxidative phosphorylation and ATP depletion, leading to malfunctioning of the sodium pump. When the sodium pump is impaired, sodium chloride and water passively diffuse into the cells, resulting in cellular swelling and the “no-reflow” phenomenon after renal reperfusion. Cellular potassium and magnesium are lost, calcium is gained, anaerobic glycolysis and acidosis occur, and lysosomal enzymes are activated. This results in cell death. During reperfusion, hypoxanthine, a product of ATP degradation, is oxidized to xanthine with the formation of free radicals that cause further cell damage [29]. (See later.)As mentioned, the mechanism whereby ischemia and oxygen depletion injure tubular cells starts with ATP depletion, which activates a number of critical alterations in metabolism, causing cytoskeletal disruption and loss of those properties that normally render the tubule cell monolayer impermeable to certain components of filtrate. Cytoskeletal disruption causes not only loss of brush-border microvilli and cell junctions but also mislocation of integrins and the sodium pump from the basal surface to the apical surface.In addition, impaired sodium reabsorption by injured tubular epithelial cells increases the sodium concentration in the tubular lumen. The increased intratubular sodium concentration polymerizes Tamm-Horsfall protein, which is normally secreted by the loop of Henle, forming a gel and contributing to cast formation. As a result, brush-border membranes and cells slough obstruct tubules downstream. As mentioned before, these debris form casts that obstruct tubules, and glomerular filtrate leaks from the tubular lumen across denuded tubular walls into capillaries and the circulation (back-leak) causing a reduction in the “effective” GFR. ATP depletion also activates harmful proteases and phospholipases, which, with reperfusion, cause oxidant injury to tubular cells, the so-called reperfusion injury [7]. #### 3.1.3. 
Reperfusion Injury WI insult followed by restoration of blood flow to the ischemic tissue frequently results in a secondary reperfusion injury. Despite WI causing significant renal dysfunction, reperfusion injury has been shown to be as damaging or even more detrimental than renal ischemia itself, producing an inflammatory response that worsens local kidney damage and leads to a systemic insult [30, 31].The reperfusion injury can be mediated by several mechanisms including the generation of reactive oxygen species, cellular derangement, microvessel congestion and compression, polimorphonuclear (PMN)-mediated damage, and hypercoagulation. Reperfusion with the resulting reintroduction of molecular oxygen of constricted microvessels leads to congestion and red cell trapping. This vascular effect can reduce renal blood flow by as much as 50% [32].During the reperfusion period, superoxide production in the kidney is markedly enhanced by the transformation of xanthine dehydrogenase to xanthine oxidase and the increase in free electrons in mitochondria, prostaglandin H, and lipoxygenase with the coexistence of NAD(P)H and infiltrated neutrophils. Superoxide raises the following chain reactions, producing hydroxyl radicals or other reactive oxygen species (ROSs), or interacts with nitric oxide (NO), which is produced by macrophage inducible NO synthase, generating a highly toxic radical peroxynitrite. These ROS and NO derived species consume tissue antioxidants and decrease organ reducing activity [33].The exact magnitude of reperfusion injury is still unclear. Some authors state that the role of free radicals mediated injury in kidneys may not be as significant as in other organs given the low relative activity of renal xanthine oxidase compared with the high endogenous activity of superoxide dismutase [29].Notwithstanding, nicaraven (N,N9-propylenebisnicotinamide), a drug that may actively trap free radicals and prevent vascular constriction due to lipid peroxide [34] and edaravone (3-methyl-1-phenyl-2-pyrazolin-5-one, MCI-186), a synthetic free radical scavenger, have shown in vitro experiments to protect endothelial cells against ischemic injury in different organs, including ischemically damaged kidneys [35, 36]. Clinical studies are eagerly awaited. ## 3.1. Intraoperative Renal Ischemia: Pathophysiology of Injury In recent years, there have been significant insights into the pathophysiologic process of renal ischemia [7, 8]. Ischemic insult to the kidney often results in damage to cells of nephron and renal vasculature. Cells are lost through the processes of necrosis and apoptosis, inevitably leading to renal failure. Renal failure is characterized by a decline in glomerular filtration rate, retention of nitrogenous waste products, perturbation of extra cellular fluid volume, and electrolyte and acid-base homeostasis. Renal failure is only diagnosed when these pathophysiologic perturbations are advanced enough to manifest biochemical abnormalities in the blood. The pathophysiologic response to cell death dictates the prevailing level of renal functional impairment [9]. Therefore, a clear understanding of the extent of post ischemic kidney damage and associated inflammation is needed to prevent this hitherto intractable condition, which will ultimately impact on overall survival [10].For understanding and didactic purposes, three interrelated main mechanisms through which ischemia damages the kidney are herein described based on a recent review by Abuelo [7]. 
One mechanism is merely vascular, caused by persistent vasoconstriction and an abnormal response of endothelial cells to compensatory means. The second is obstructive, where soughed tubular epithelial cells and brush-border-membrane debris form casts that obstruct tubules, and glomerular filtrate leaks from the tubular lumen across denuded tubular walls into capillaries and the circulation (back-leak) causing a reduction in the “effective” GFR, where the latter is defined as the rate at which filtrate is delivered into final urine. The third has to do with reperfusion injury after blood flow is restored [7, 11]. ### 3.1.1. Vascular Mechanism Both animal and human studies have found that a multi-inflammatory response is involved in ischemia/reperfusion injury of the kidney [12]. The inflammatory reaction incurred after an ischemic insult precipitates more damage to the tissue and impedes intrarenal blood flow caused by vasoconstriction and vascular congestion, leading to a vicious cycle [13].This damage mainly takes place in endothelial cells of the peritubular capillaries, especially in the outer medulla, which is marginally oxygenated under normal circumstances. This oxidant injury, together with a shift in the balance of vasoactive substances toward vasoconstrictors such as endothelin, results in vasoconstriction, congestion, hypoperfusion, and expression of adhesion molecules. The expression of adhesion molecules, in turn, initiates leukocyte infiltration, augmented by proinflammatory and chemotactic cytokines generated by ischemic tubular cells [7].Inciting stimuli induce kidney macrophages and probably renal parenchymal cells to release inflammatory cytokines, such as tumor necrosis factor-α (TNF-α) and interleukin-1 (IL-1). TNF-α and IL-1 promote renal parenchymal damage by directly inducing apoptosis in epithelial cells, recruitment of neutrophils that release reactive oxygen metabolites and proteases, and up regulating adhesion receptors on endothelial cells and leukocytes [14, 15]. These cytokines also stimulate renal cortical epithelial cells to release the chemoattractant interleukin-8 [16, 17]. The arrival of additional leukocytes obstructs the microcirculation and releases more cytotoxic cytokines, reactive oxygen species, and proteolytic enzymes, which damage the tubular cells [7].Endothelial injury results in cell swelling and enhanced expression of cell adhesion molecules. This, together with leukocyte activation, leads to enhanced leukocyte-endothelial cell interactions, which can promote injury and swelling of the endothelial cell. Endothelial swelling contributes to the production of local factors promoting vasoconstriction and adds to the effects of vasoconstriction and tubule cell metabolism by physically impeding blood flow, perpetuating that vicious cycle [18].Heterogeneity of intrarenal blood flow contributes to the pathophysiology of ischemic renal failure. An imbalance between the vasodilator nitric oxide and the vasoconstrictor endothelin impairs medullary blood flow, especially in the outer medulla, where tubules have high oxygen requirements, resulting in cellular injury due to a mismatch between oxygen delivery and demand. 
Endothelial activation and injury together with increased leukocyte-endothelial cell interactions and activation of coagulation pathways may have a greater effect on outer medullary ischemia than arteriolar vasoconstriction, as there can be markedly impaired oxygen delivery to the outer medulla despite adequate renal blood flow [18].The arteriolar response to vasoactive substances can also be altered during endothelial injury. The basal tone of arterioles is increased in post ischemic kidneys as well as their reactivity to vasoconstrictive agents. These arterioles also have decreased vasodilatory responses compared with arterioles from normal kidneys. Alterations in local levels of vasoconstrictors (angiotensin II, thromboxane A2, leukotrienes, adenosine, endothelin-1) have been implicated in abnormal vascular tone [19]. Angiotensin II seems to play a key role by activating endothelin B or prostaglandin H2-thromboxane A2 receptors. Systemic endothelin-1 levels increase with ischemia, and administration of antiendothelin antibodies or endothelin receptor antagonists has been reported to protect against ischemia-reperfusion injury [20]. Saralasin, an angiotensin II receptor antagonist, could also attenuate angiotensin II vasoconstricting effect [21]. Nitric oxide, an endothelial-derived relaxing factor, plays a theoretical protective role against ischemic renal injury, by means of its vasodilatory effect and by decreasing endothelin expression and secretion in the vascular endothelium. Of interest, endothelial nitric oxide synthase is inhibited during endothelial injury [22]. A combination therapy consisting of 5-aminoimidazole-4-carboxamide-1-beta-D-ribonucleoside (AICAR) and N-acetyl cysteine (NAC), drugs that inhibit the induction of proinflammatory cytokines and nitric oxide synthase, and block tumor necrosis factor-alpha induced apoptotic cell death, has shown to attenuate ischemia-reperfusion injury in a canine model of autologous renal transplantation [23]. Early studies showed no conclusive evidence that vasodilators (such as diltiazem or dopamine) or other compounds have any clinical utility in either preventing or treating ischemic renal failure in humans thus far [24–26]. More recently, however, the highly selective dopamine type 1 agonist fenoldopam mesylate [27] and the antianginal medication trimetazidine [28] appeared to aid in restoring renal function to baseline values in patients with prolonged WI time. Further research is needed. ### 3.1.2. Obstructive Mechanism Normally, the cells are bathed in an extra cellular solution high in sodium and low in potassium. This ratio is maintained by a sodium pump (Na+-K + ATPase pump) which uses much of the adenosine triphosphate (ATP) energy derived from oxidative phosphorylation. ATP is required for the cellular sodium pump to maintain a high intracellular concentration of potassium and a low concentration of sodium. The sodium pump effectively makes Na+ an impermeant outside the cell that counteracts the colloidal osmotic pressure derived from intracellular proteins and other anions [29].The ischemic insult causes a failure of oxidative phosphorylation and ATP depletion, leading to malfunctioning of the sodium pump. When the sodium pump is impaired, sodium chloride and water passively diffuse into the cells, resulting in cellular swelling and the “no-reflow” phenomenon after renal reperfusion. Cellular potassium and magnesium are lost, calcium is gained, anaerobic glycolysis and acidosis occur, and lysosomal enzymes are activated. 
This results in cell death. During reperfusion, hypoxanthine, a product of ATP degradation, is oxidized to xanthine with the formation of free radicals that cause further cell damage [29]. (See below.) As mentioned, the mechanism whereby ischemia and oxygen depletion injure tubular cells starts with ATP depletion, which activates a number of critical alterations in metabolism, causing cytoskeletal disruption and loss of those properties that normally render the tubule cell monolayer impermeable to certain components of filtrate. Cytoskeletal disruption causes not only loss of brush-border microvilli and cell junctions but also mislocalization of integrins and the sodium pump from the basal surface to the apical surface. In addition, impaired sodium reabsorption by injured tubular epithelial cells increases the sodium concentration in the tubular lumen. The increased intratubular sodium concentration polymerizes Tamm-Horsfall protein, which is normally secreted by the loop of Henle, forming a gel and contributing to cast formation. As a result, sloughed brush-border membranes and cells obstruct tubules downstream. As mentioned before, this debris forms casts that obstruct tubules, and glomerular filtrate leaks from the tubular lumen across denuded tubular walls into capillaries and the circulation (back-leak), causing a reduction in the “effective” GFR. ATP depletion also activates harmful proteases and phospholipases, which, with reperfusion, cause oxidant injury to tubular cells, the so-called reperfusion injury [7].

### 3.1.3. Reperfusion Injury

A WI insult followed by restoration of blood flow to the ischemic tissue frequently results in a secondary reperfusion injury. Although WI itself causes significant renal dysfunction, reperfusion injury has been shown to be as damaging as, or even more detrimental than, renal ischemia itself, producing an inflammatory response that worsens local kidney damage and leads to a systemic insult [30, 31]. Reperfusion injury can be mediated by several mechanisms, including the generation of reactive oxygen species, cellular derangement, microvessel congestion and compression, polymorphonuclear (PMN) leukocyte-mediated damage, and hypercoagulation. Reperfusion, with the resulting reintroduction of molecular oxygen into constricted microvessels, leads to congestion and red cell trapping. This vascular effect can reduce renal blood flow by as much as 50% [32]. During the reperfusion period, superoxide production in the kidney is markedly enhanced by the transformation of xanthine dehydrogenase to xanthine oxidase and the increase in free electrons in mitochondria, prostaglandin H, and lipoxygenase with the coexistence of NAD(P)H and infiltrated neutrophils. Superoxide initiates chain reactions producing hydroxyl radicals or other reactive oxygen species (ROS), or interacts with nitric oxide (NO), which is produced by macrophage-inducible NO synthase, generating the highly toxic radical peroxynitrite. These ROS- and NO-derived species consume tissue antioxidants and decrease organ reducing activity [33]. The exact magnitude of reperfusion injury is still unclear.
Some authors state that the role of free radical-mediated injury in kidneys may not be as significant as in other organs, given the low relative activity of renal xanthine oxidase compared with the high endogenous activity of superoxide dismutase [29]. Notwithstanding, nicaraven (N,N′-propylenebisnicotinamide), a drug that may actively trap free radicals and prevent vascular constriction due to lipid peroxides [34], and edaravone (3-methyl-1-phenyl-2-pyrazolin-5-one, MCI-186), a synthetic free radical scavenger, have been shown in in vitro experiments to protect endothelial cells against ischemic injury in different organs, including ischemically damaged kidneys [35, 36]. Clinical studies are eagerly awaited.
## 4. For How Long Can the Kidney Tolerate Warm Ischemia?

Despite several animal studies [37–39] and clinical reports [40, 41] demonstrating kidney tolerance to warm ischemia times beyond 30 minutes, concern still remains regarding the potential for full renal function recovery after this time period [42]. The rigid 30-minute cutoff has been questioned by some authors [43] on the grounds that kidneys harvested from non-heart-beating donors (NHBDs) have shown favorable recovery of renal function in transplanted kidneys that sustained warm ischemia times well over 30 minutes [44–46]. Nishikido et al. [45] found that the risk factors affecting significant graft loss were a WI time of more than 20 minutes, donor age above 50 years, and donor serum creatinine at admission above 1.0 mg/dL. Most non-heart-beating donor programs currently exclude donors with a WI time exceeding 40 minutes [45, 47–49]. Although laparoscopic surgeons are gaining further experience and are increasingly ambitious in performing partial nephrectomy for larger and deeper tumors, the 30-minute cutoff still remains the accepted safe limit beyond which irreversible kidney damage occurs in the absence of renal cooling [50–52]. Although early observations in dog models showed that there may be substantial variation in kidney tolerance up to two or three hours of ischemia [53], there is no doubt that the extent of renal damage after transitory arterial occlusion depends primarily on the duration of the ischemic insult [25, 54, 55]. The literature also demonstrates that, even within a tolerable period of WI, the longer the WI time, the longer it takes for the kidney to recover (or approach) its preoperative function [55]. Notwithstanding, the maximum tolerable limit of renal warm ischemia time that can render complete function recovery remains to be established in humans. The study by Ward [56] is commonly cited by opinion leaders to state a maximum 30-minute tolerance of the kidney to WI. This study showed in dogs that warm ischemic intervals of up to 30 minutes can be sustained with eventual full recovery of renal function. However, it was not strictly designed to establish the most accurate length of time a kidney would be able to sustain reversible damage following ischemic injury. What the authors actually concluded was that no additional protection against ischemia could be gained by cooling below 15°C. Thus, they recommended 15°C as the optimum temperature for use in clinical renal hypothermia. Research in rats, pigs, and monkeys has also been conducted by other investigators. Laven et al. [38] found renal resilience to WI beyond the traditionally accepted 30 minutes in a solitary kidney pig model.
Prolonged renal WI time increased the incidence of renal dysfunction during the initial 72 hours after the ischemic insult. However, by 2 weeks after the WI insult, renal function had returned to baseline in the 30-, 60-, and 90-minute WI groups. In contrast, the same study group found that a prolonged WI time of 120 minutes produced significant loss of renal function and mortality [43]. Martin et al. [57] demonstrated potential kidney WI tolerability of up to 35 minutes in a single-kidney monkey model. Studies by Haisch et al. [58] in dog models suggested that the window of reversible WI injury could be as long as 2 hours after the insult. The question remains whether findings in animal studies can be extrapolated to humans. One limitation is the lack of a reliable method to differentiate between ischemic injury and the loss of renal volume secondary to tumor excision. The ideal method to evaluate residual kidney function in the operated kidney is still undefined. While most authors use serum creatinine assay or 99mtechnetium-labeled mercaptoacetyl triglycine (MAG3) renal scintigraphy with split renal function, others, like Abukora et al. [59], proposed estimation of parenchymal transit time (PTT) as a good indicator of ischemic injury. Transit time is the time that a tracer remains within the kidney or within a part of the kidney. However, the international consensus committee on renal transit time, from the subcommittee of the International Scientific Committee of Radionuclides in Nephrourology, recently concluded that the value of delayed transit remains controversial, and the committee recommended further research [60]. Bhayani et al. [40] evaluated 118 patients, with a single, unilateral, sporadic renal tumor and a normal contralateral kidney, who underwent laparoscopic partial nephrectomy (LPN) to assess the effect of variable durations of WI on long-term renal function. Patients were divided into 3 groups based on WI time: group 1, no renal occlusion (n = 42); group 2, WI < 30 minutes (n = 48); and group 3, WI > 30 minutes (n = 28). At a median followup of 28 months (minimum followup of 6 months), median creatinine had not increased significantly postoperatively, and none of the 118 patients progressed to renal insufficiency or required dialysis after LPN. The authors concluded that WI time of up to 55 minutes did not significantly influence long-term renal function after LPN. A main limitation of this study is that all patients had a normal contralateral kidney, so that creatinine values obtained 6 months postoperatively could have reflected contralateral kidney function. A similar study was conducted by Shekarriz et al. [61] on a substantially lower number of patients (n = 17); however, the authors assessed kidney function using a 99mtechnetium-labeled diethylenetriaminepentaacetic acid (DTPA) renal scan with differential function 1 month before and 3 months after surgery in all patients. The authors found that all their patients preserved adequate renal function in the affected kidney following temporary hilar clamping of up to 44 minutes. (The mean WI time was 22.5 minutes.) In line with these authors, Kane et al. [62] showed that temporary arterial occlusion did not appear to affect short-term renal function (mean followup: 130 days) in a series of laparoscopic partial nephrectomies (LPNs) with a mean WI of 43 minutes (range: 25–65 minutes). Desai et al.
[50] retrospectively assessed the effect of WI on renal function after LPN for tumor and evaluated the influence of various risk factors on renal function in 179 patients under WI conditions. No kidney was lost because of ischemic sequelae with clamping of the renal artery and vein of up to 55 minutes. The mean WI time was 31 minutes. Nonetheless, the authors concluded that advancing age and pre-existing azotaemia increased the risk of renal dysfunction after LPN, especially when the warm ischemia exceeded 30 minutes. In contrast, Kondo et al. [63] found that patient age did not influence residual function in patients undergoing partial nephrectomy, while tumor size was the only significant factor that inversely correlated with the relative 99mtechnetium-labeled dimercaptosuccinic acid (DMSA) uptake. Porpiglia et al. [52] assessed kidney damage in 18 patients 1 year after LPN with a WI time between 31 and 60 minutes. The authors evaluated the contribution of the operated kidney to the overall renal function by radionuclide scintigraphy with 99mTc-MAG3. They observed that there was an initial significant drop of approximately 11% in the operated kidney’s contribution to overall function, followed by a constant and progressive recovery that never reached the preoperative value (42.8% at 1 year versus 48.3% before surgery). Based on logistic regression analysis, the authors stated that the loss of function of the operated kidney depended mostly on the WI time and less importantly on the maximum thickness of resected healthy parenchyma. Unfortunately, the full regression model, which included 6 variables to predict an event in only 18 patients, is not shown in the manuscript. Recently, Thompson et al. [42] performed a retrospective review of 537 patients with solitary kidneys who underwent open nephron-sparing surgery by more than 20 different surgeons from both the Cleveland Clinic, Ohio, USA, and the Mayo Clinic, Minnesota, USA, to evaluate the renal effects of vascular clamping in patients with solitary kidneys. After adjusting for tumor complexity and tumor size, the authors found in a subsequent analysis [64] that patients with more than 20 minutes of WI were significantly more likely to have acute renal failure (24% versus 6%, p = 0.002) compared to those requiring less than 20 minutes, and this risk remained significant even after adjusting for tumor size (odds ratio 3.4, p = 0.025). Additionally, patients with more than 20 minutes of WI were significantly more likely to progress to chronic renal failure (odds ratio 2.9, p = 0.008) and were more than 4 times more likely to experience a postoperative increase in creatinine of greater than 0.5 mg/dL (odds ratio 4.3, p = 0.001) compared to those requiring less than 20 minutes of WI. After adjusting for tumor size, the risk of chronic renal failure (odds ratio 2.6, p = 0.03) and of an increase in creatinine of greater than 0.5 mg/dL (odds ratio 4.6, p = 0.002) remained statistically significant if more than 20 minutes of WI were needed. The authors concluded that WI should be restricted to less than 20 minutes when technically feasible, especially in patients with solitary kidneys.
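As a rough arithmetic cross-check of these figures, the unadjusted odds ratio implied by the reported acute renal failure rates (24% versus 6%) can be computed directly. The sketch below is illustrative only and is not from the original study; note that the odds ratio Thompson et al. actually reported (3.4) was adjusted for tumor size, so the two values need not agree.

```python
def odds_ratio(p_exposed: float, p_unexposed: float) -> float:
    """Unadjusted odds ratio computed from two event proportions."""
    odds_exposed = p_exposed / (1.0 - p_exposed)
    odds_unexposed = p_unexposed / (1.0 - p_unexposed)
    return odds_exposed / odds_unexposed

# Acute renal failure rates reported for WI > 20 min vs. WI < 20 min:
print(odds_ratio(0.24, 0.06))  # ~4.9, versus the tumor-size-adjusted OR of 3.4
```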
## 5. What Are the Factors Affecting Tolerance to Warm Ischemia?

It often goes without saying that there may be individual variation in WI tolerance. Baldwin et al. [37] observed that some of the 16 solitary porcine kidneys showed a rapid return to their dark red color, whereas others demonstrated minimal color change during the several minutes following complete hilar clamp removal, despite all animals receiving the same surgical technique and ischemia time. Having acknowledged the potential for individual variation, there are multiple other factors that can affect tolerance to WI, which are described herein. It has been suggested that patients with solitary kidneys might safely tolerate longer periods of ischemia than patients with both kidneys as the result of the development of a collateral vascular supply [65–67]; however, the presence of vascular collateralization secondary to vascular occlusive disease [68] or other clinical entities like hypertension [69] should alert the surgeon to the possibility of a kidney less resistant to WI injury, owing to the likely presence of panvascular disease and/or occult chronic renal insufficiency. Another factor that can impact ischemic damage is the method employed to achieve vascular control of the kidney. When technically possible, depending on the size and location of the tumor, it is helpful to leave the renal vein patent throughout the operation. This measure has been proven to decrease intraoperative renal ischemia and, by allowing venous backbleeding, facilitates hemostasis by enabling identification of small, transected renal veins [1–3, 5]. Animal studies have shown that functional impairment is least when the renal artery alone is occluded. Although some authors found no difference [70], simultaneous occlusion of the renal artery and vein for an equivalent time interval is more damaging because it prevents, as mentioned, retrograde perfusion of the kidney through the renal vein and may also produce venous congestion of the kidney [2, 3, 71–73]. However, this benefit may not be observed in patients undergoing LPN, since the pressure of the pneumoperitoneum may cause partial occlusion of the renal vein, thus negating the advantage of renal artery clamping only [72]. Intermittent clamping of the renal artery with short periods of recirculation may also be more damaging than continuous arterial occlusion, possibly because of the release and trapping of damaging vasoconstrictor agents within the kidney [39, 55, 71, 74–77]. Manual (or instrumental) compression of the kidney parenchyma to control intraoperative hemorrhage (as an alternative to clamping of the pedicle) has the theoretical advantages of avoiding WI of the normal parenchyma while allowing the surgeon to operate in an almost bloodless field, something that could be particularly useful in peripherally located tumors. Although animal studies have shown that the use of renal parenchyma compression may be more deleterious than simple arterial occlusion [71, 76], this technique has recently been “resuscitated” by some authors, both in open kidney surgery [78–82] and in the laparoscopic setting [83]. When the surgeon anticipates a WI time exceeding the “classical” 30 minutes, local renal hypothermia is used to protect against ischemic renal injury. Hypothermia has been the most effective and universally used means of protecting the kidney from the ischemic insult. Hypothermia reduces basal cell metabolism and the energy-dependent metabolic activity of the cortical cells, with a resultant decrease in the consumption of both oxygen and ATP [84–86]. There are multiple ways of achieving hypothermia.
Surrounding the fully mobilized kidney with crushed ice (ice slush) is the most frequently used technique because of its ease and simplicity [87, 88]. When using ice slush to reduce kidney temperature, it is recommended to keep the entire kidney covered with ice for 10 to 15 minutes immediately after occluding the renal artery and before commencing the resection of the tumor, in order to allow core renal temperature to decrease to approximately 20 degrees centigrade or less [2]. Mannitol, with or without the addition of furosemide, should be administered intravenously 5 to 15 minutes before renal arterial clamping, as it increases renal plasma flow, decreases intrarenal vascular resistance and intracellular edema, and promotes an osmotic diuresis when renal circulation is restored [89]. Regular use of heparin to prevent intrarenal vascular thrombosis has not been found to be useful [2, 3, 56]. Methods other than ice slush to achieve renal hypothermia have also been explored, including application of ice slurry [90, 91], antegrade perfusion of the renal artery either via preoperative renal artery catheterization [92] or via intraoperative renal artery cannulation [93], and retrograde perfusion of the collecting system with cold solutions [94, 95] or near-freezing saline irrigation delivered with a standard irrigator-aspirator [96], among others, some of them used particularly in the laparoscopic setting. Very few studies have compared kidney cooling techniques [97–100]; however, hypothermia by properly applying ice to the renal surface seems to be equivalent to hypothermia by perfusion [98]. Perfusion of the kidney with a cold solution instilled via the renal artery not only carries a theoretical risk of tumor dissemination but also requires the participation of an interventional radiology team to perform preoperative renal artery catheterization, adding complexity and risks of potential complications to the procedure [3]. By contrast, continuous renal perfusion might have the advantage of providing a more homogeneous and effective hypothermia for a more extended period of time [99, 100]. It is generally accepted, founded on data extrapolated from the kidney stone literature, that adequate hypothermia provides up to 2 to 3 hours of renal protection from circulatory arrest [99, 101–104]. Needless to say, generous preoperative and intraoperative hydration, prevention of intraoperative hypotension, and avoidance of unnecessary manipulation or traction on the renal artery, as well as the aforementioned administration of mannitol, are necessary to keep the kidney adequately perfused before and after the ischemic insult. Ischemic preconditioning (IP) has emerged as a powerful method of ameliorating ischemia/reperfusion injury not only in the myocardium (as initially described) [105] but also in other organs, including the kidney. IP is a physiologic phenomenon by which cells develop defense strategies that allow them to survive in a hypoxic environment. The original IP hypothesis stated that multiple brief ischemic episodes applied to an organ would actually protect it (originally the myocardium) during a subsequent sustained ischemic insult so that, in effect, ischemia could be exploited to protect that organ (originally the heart) from ischemic injury [105]. The “preconditioned” cells would become more tolerant to ischemia by adjusting their energy balance to a new, lower steady-state equilibrium.
Specifically, preconditioned tissues exhibit reduced energy requirements, altered energy metabolism, better electrolyte homeostasis, and genetic reorganization, giving rise to the concept of “ischemia tolerance.” IP also induces “reperfusion tolerance,” with fewer reactive oxygen species and activated neutrophils released, reduced apoptosis, and better microcirculatory perfusion compared to non-preconditioned tissue. Systemic reperfusion injury is also diminished by preconditioning [31]. A review by Pasupathy and Homer-Vanniasinkam [31] showed that IP utilizes endogenous mechanisms in skeletal muscle, liver, lung, kidney, intestine, and brain in animal models to convey varying degrees of protection from reperfusion injury. To date, there are few human studies, but some reports suggest that human liver, lung, and skeletal muscle acquire similar protection after IP. IP is ubiquitous, but more research is required to fully translate these findings to the clinical arena. Some authors propose that during laparoscopy, the increase of intra-abdominal pressure due to the pneumoperitoneum may create an IP-like situation that might increase kidney tolerance to subsequent WI and reduce tissue injury [106–110]. For this reason, it might theoretically be possible to increase WI time during LPN, compared to open surgery, something which is still very far from being demonstrated [30, 109–111]. In contrast, other studies expressed some concern about the potential harm of pneumoperitoneum and increased intra-abdominal pressure (IAP) on kidney function. Several experimental animal studies have investigated the effect of pneumoperitoneum on renal function. While some authors demonstrated that increased IAP by insufflation of CO2 gas resulted in decreased renal blood flow that may lead to ischemia and a subsequently decreased glomerular filtration rate [112], others denied such an effect [37, 113]. Kirsch et al. [112] showed a decrease in urine output and GFR with increasing IAP. A pneumoperitoneum of 15 mmHg for 4 hours resulted in a decrease in renal blood flow to 70% of baseline. Even IAPs of 4 and 10 mmHg resulted in a reduction of the renal circulation of 34% and 41%, respectively. Although the decreased urinary output during prolonged IAP greater than or equal to 15 mmHg in the animal model was associated with a corresponding decrease in renal vein flow, it did not appear to be associated with any permanent renal derangement or any transient histological changes [114]. After the release of the pneumoperitoneum or pneumoretroperitoneum, renal function and urine output return to normal with no long-term sequelae, even in patients with pre-existing renal disease [115]. Lind et al. [113] found that a WI time of 20 minutes did not impair graft function and histomorphology during 1 year of followup after renal transplantation in a syngeneic rat model. Most importantly, WI in combination with pneumoperitoneum did not result in an additive negative effect on long-term graft function. In addition, Baldwin et al. [37] observed that the temporary serum creatinine elevation evident after 60 and 90 minutes of ischemia normalized within 7 days in 16 farm pigs which had been nephrectomized 14 days prior to the laparoscopically applied ischemic insult. No difference from the controls was noted in those pigs receiving 30 minutes of ischemia during the laparoscopic procedure. Of note, insufflation had been maintained for 150 minutes at 15 mmHg in all animals.
Those findings suggested that in laparoscopic renal surgery, WI times of up to 90 minutes (and a pneumoperitoneum of up to 150 minutes) might be well tolerated and followed by complete renal recovery. The reader is referred to the excellent review by Dunn and McDougall [115] for further information on the impact of pneumoperitoneum on renal physiology.

## 6. Conclusions

The maximal duration of WI allowable before the onset of irreversible renal damage continues to be a topic of debate, irrespective of the surgical approach. In addition, there seems to be variation among patients, possibly related to surgical technique, patient age, presence of collateral vascularization, and integrity of the arterial bed, among others. Unfortunately, no method exists for preoperative prediction or intraoperative monitoring of renal injury. Surgeons should exert extreme efforts to keep warm ischemia time as short as possible. When WI time is expected to exceed 20 to 30 minutes, especially in patients whose baseline medical characteristics put them at potentially higher, though unproven, risk of ischemic damage, the time-tested way around this limit has been renal hypothermia, regardless of what the exact time limit may be.

---

*Source: 102461-2008-07-15.xml*
2008
# CFTR Expression Analysis for Subtyping of Human Pancreatic Cancer Organoids

**Authors:** Alexander Hennig; Laura Wolf; Beatrix Jahnke; Heike Polster; Therese Seidlitz; Kristin Werner; Daniela E. Aust; Jochen Hampe; Marius Distler; Jürgen Weitz; Daniel E. Stange; Thilo Welsch
**Journal:** Stem Cells International (2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1024614

---

## Abstract

Background. Organoid cultures of human pancreatic ductal adenocarcinoma (PDAC) have become a promising tool for tumor subtyping and individualized chemosensitivity testing. PDACs have recently been grouped into different molecular subtypes with clinical impact based on cytokeratin-81 (KRT81) and hepatocyte nuclear factor 1A (HNF1A). However, a suitable antibody for HNF1A is currently unavailable. The present study is aimed at establishing subtyping in PDAC organoids using an alternative marker. Methods. A PDAC organoid biobank was generated from human primary tumor samples containing 22 lines. Immunofluorescence staining for cystic fibrosis transmembrane conductance regulator (CFTR) and KRT81 was established and performed for 10 organoid lines. Quantitative real-time PCR (qPCR) was performed for CFTR and HNF1A. A chemotherapeutic drug response analysis was done using gemcitabine, 5-FU, oxaliplatin, and irinotecan. Results. A biobank of patient-derived PDAC organoids was established. The efficiency was 71% (22/31), with 68% for surgical resections and 83% for fine needle aspirations. Organoids could be categorized into the established quasimesenchymal, exocrine-like, and classical subtypes based on KRT81 and CFTR immunoreactivity. CFTR protein expression was confirmed on the transcript level. CFTR and HNF1A transcript expression levels positively correlated (n=10; r=0.927; p=0.001). PDAC subtypes of the primary tumors and the corresponding organoid lines were identical for most of the cases analyzed (6/7). Treatment with chemotherapeutic drugs revealed tendencies but no significant differences regarding drug responses. Conclusions. Human PDAC organoids can be classified into known subtypes based on KRT81 and CFTR immunoreactivity. CFTR and HNF1A mRNA levels correlated well. Furthermore, subtype-specific immunoreactivity matched well between PDAC organoids and the respective primary tumor tissue. Subtyping of human PDACs using CFTR might constitute an alternative to HNF1A and should be further investigated.

---

## Body

## 1. Introduction

Despite advances with multimodal treatment modalities such as the FOLFIRINOX regimen, pancreatic ductal adenocarcinoma (PDAC) still remains the cancer with the worst prognosis. Today, combination chemotherapy in the neoadjuvant or adjuvant setting is critical for optimal outcome (Conroy et al. [1] and Hackert et al. [2]). However, PDAC is heterogeneous regarding its genetic alterations and molecular expression profile, leading to subtype-specific differences in response to single chemotherapeutic agents and in survival [3–6]. Recently, an immunohistochemical (IHC) subtyping of PDAC using cytokeratin-81 (KRT81) and hepatocyte nuclear factor 1A (HNF1A) was found to match with the transcriptional subtypes quasimesenchymal (QM; KRT81+HNF1A-), classical (KRT81-HNF1A-), and exocrine-like (KRT81-HNF1A+) [4, 5]. In cohorts of surgically treated PDAC patients, the HNF1A+ subtype was associated with the best, and the KRT81+ subtype with the worst, survival prognosis [4, 5, 7].
Thus, early subtyping after diagnosis based on KRT81/HNF1A IHC of PDAC could guide individualized combination chemotherapy. Within recent years, modern patient-derived 3D cell cultures named organoids have emerged as a promising model for personalized tumor analysis and drug screening [8–11]. Organoid cultures enable tumor cultivation, propagation, and timely chemosensitivity testing [12]. Furthermore, due to the possibility of repeated freeze-thaw cycles, they allow the establishment of living tumor biobanks for large-scale testing of drug panels [13]. Based on the response evaluation, personalized treatment strategies for individual patients can be designed [14]. To date, immunosubtyping of human PDAC organoids using KRT81 and HNF1A has not been established. The immunosubtyping has further become difficult because the originally described HNF1A antibody (H-205; sc-8986) is no longer commercially available and alternative HNF1A antibodies failed to produce reproducible results. Another potential marker of the exocrine-like (HNF1A+) subtype—which might permit subtype differentiation—is the cystic fibrosis transmembrane conductance regulator (CFTR) [4]. However, CFTR expression in PDAC organoids and its correlation with HNF1A expression is unknown. In the normal pancreas, the cAMP-regulated chloride channel is apically expressed by the ductal epithelial cells [15]. CFTR mutations are linked to an increased risk for the development of PDAC [16]. The present study is aimed at analyzing KRT81, HNF1A, and CFTR expression in human PDAC organoids in order to enable routine organoid subtyping for personalized treatment.

## 2. Materials and Methods

### 2.1. Human PDAC Samples

This study was performed with human specimens obtained from patients admitted to the Department of Visceral, Thoracic and Vascular Surgery or the Medical Department I at the University Hospital Carl Gustav Carus, Technische Universität Dresden, Germany. All samples were diagnosed as PDAC according to the World Health Organization criteria by a board-certified pathologist. Tissue collection, organoid culture, and analysis were approved by the local ethics committee (#EK451122014 and #EK68022018).

### 2.2. Generation and Cultivation of Human PDAC Organoids

Tumor specimens were cut into pieces smaller than 1 mm3 and digested with dispase II (2.5 mg/ml, Roche) and collagenase II (0.625 mg/ml, Sigma-Aldrich) in DMEM/F12+++ medium (DMEM/F12 (Invitrogen) supplemented with 1x HEPES (Invitrogen), 1x Pen/Strep (Invitrogen), and 1x GlutaMAX (Invitrogen)) at 37°C for 30-120 minutes depending on sample size. After several washing steps with DMEM/F12+++ medium, the remaining cell pellet was resuspended in GFR Matrigel (Corning) and cultivated in human PDAC organoid medium, consisting of DMEM/F12+++ supplemented with Wnt3a-conditioned medium (50% v/v), noggin-conditioned medium (10% v/v), RSPO1-conditioned medium (10% v/v), B27 (1x, Invitrogen), nicotinamide (10 mM, Sigma-Aldrich), gastrin (1 nM, Sigma-Aldrich), N-acetyl-L-cysteine (1 mM, Sigma-Aldrich), primocin (1 mg/ml, InvivoGen), recombinant murine epidermal growth factor (mEGF, 50 ng/ml, Invitrogen), recombinant human fibroblast growth factor 10 (hFGF10, 100 ng/ml, PeproTech), A-83-01 (0.5 μM, Tocris Bioscience), and N2 (1x, Invitrogen).

### 2.3. Immunohistochemistry (IHC) Stainings and Imaging

Sections from paraffin-embedded primary PDAC tissue samples were provided by the Tumor- and Normal Tissue Bank of the Institute of Pathology, University Hospital Carl Gustav Carus.
The hematoxylin-eosin (H&E) and IHC stainings for KRT81 (Santa Cruz, #sc100929, 1 : 150) and CFTR (Abcam, #ab131553, 1 : 300) were performed according to a standard protocol on deparaffinized tissue sections. Images were taken by an EVOS FL Auto (Life Technologies) microscope. CFTR expression was considered to be positive if a medium to strong staining was detected in more than 10% of the epithelial cells. Analysis of KRT81 stainings was done according to the criteria of Muckenhuber and colleagues [7]. In brief, only a strong staining of KRT81 in at least 30% of epithelial cells leads to the classification of a “KRT81-positive PDAC.” Organoid lines NR002, NR005, and NR006 were derived from fine needle aspirations, so no primary tumor tissue was available to perform IHC stainings.

### 2.4. Immunofluorescence Stainings

Whole PDAC organoids were collected in 15 ml Falcon tubes, fixed in 2% formaldehyde (Sigma-Aldrich) overnight at 4°C, permeabilized with 0.3% Triton X-100 (Sigma-Aldrich) for 20 minutes, and blocked with 1% BSA (Thermo Fisher Scientific) and 0.1% Triton X-100 in PBS (Sigma-Aldrich) for 1 hour. Samples were then incubated with primary antibodies against KRT81 (Santa Cruz, #sc100929) or CFTR (Abcam, #ab131553), both diluted 1 : 50 in blocking buffer, for 2 hours at room temperature, followed by a 1-hour incubation step with the secondary goat-anti-rabbit Alexa-Fluor 488 antibody (Life Technologies), diluted 1 : 200 in blocking buffer. PDAC organoids were additionally stained with DAPI (Thermo Fisher Scientific) and Alexa-Fluor 568 phalloidin (Thermo Fisher Scientific). Images were taken with a Zeiss LSM 510/880 confocal microscope and analyzed with ImageJ (NIH). All images were acquired with identical settings.

### 2.5. In Vitro Drug Assays

Mechanically dissociated PDAC organoids were plated in 384-well plates in 15 μl Matrigel supplied with 40 μl PDAC organoid medium. 24 h later, drug treatment was started with conventional chemotherapeutic drugs diluted as follows: gemcitabine (1 μM, 200 nM, 100 nM, 50 nM, 25 nM, 10 nM, and 1 nM), 5-fluorouracil (5-FU; 50 μM, 25 μM, 10 μM, 5 μM, 1 μM, 100 nM, and 10 nM), oxaliplatin (500 μM, 100 μM, 50 μM, 25 μM, 10 μM, 1 μM, and 100 nM), and irinotecan (250 μM, 25 μM, 10 μM, 1 μM, 100 nM, 10 nM, and 1 nM). Negative controls and each drug dilution were done in triplicate. Medium was replaced after 72 h. Readout was done after 144 h of incubation by measuring the metabolic activity using PrestoBlue cell viability reagent (Invitrogen) following the manufacturer’s protocol. Briefly, organoids were incubated for 2 hours at 37°C with PrestoBlue, and fluorescence was measured at 560/590 nm using a Varioskan LUX (Thermo Scientific). Relative viability was calculated after blank subtraction by normalizing to the mean of the negative control. All drug assays were carried out three times. In order to dissect subtype-specific drug responses, assay results from the quasimesenchymal (KRT81+) and double-positive (KRT81+/CFTR+) organoid lines were combined. The same was done for the drug assay results from the exocrine-like (CFTR+) and classical (KRT81-/CFTR-) PDAC organoid lines.

### 2.6. Quantitative Real-Time PCR (qPCR)

Total RNA was isolated from organoid cultures using the RNeasy Mini Kit (Qiagen) following the recommended user instructions. cDNA synthesis from 0.5 μg RNA was done with the qScript cDNA SuperMix (Quantabio).
qPCR was carried out using the GoTaq qPCR Master Mix (Promega) on a StepOnePlus Real-Time PCR System (Thermo Fisher Scientific) for expression analysis of the following genes: GAPDH (5′-GCA CCA CCA ACT GCT TAG-3′ (sense), 5′-ATG ATG TTC TGG AGA GCC CC-3′ (antisense)); ACTB1 (5′-AAA TCT GGC ACC ACA CCT TC-3′ (sense), 5′-AGA GGC GTA CAG GGA TAG CA-3′ (antisense)); HNF1A (5′-ACG ACG ATG GGG AAG ACT TC-3′ (sense), 5′-GAC TTG ACC ATC TTC GCC AC-3′ (antisense)); and CFTR (5′-CGT CAT CAA AGC ATG CCA AC-3′ (sense), 5′-TCG TTG ACC TCC ACT CAG TG-3′ (antisense)). Calculation of the relative gene expression was done as described by Hellemans and colleagues [17]. Briefly, arithmetic means were calculated for each gene from all analyzed samples for conversion of quantification cycle values into relative quantities (RQs). Next, the geometric mean of the RQs from the two housekeeping genes was calculated, resulting in the sample-specific normalization factor (NF). The relative expression was determined by dividing the RQ by the NF.

### 2.7. Statistical Analysis

Correlation analysis was done by calculating the Pearson correlation coefficient in GraphPad Prism (version 6.02), assuming a normal distribution of the data. The confidence interval was 95% (two-tailed) for p value calculation.
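To make the normalization procedure above concrete, here is a minimal Python sketch (not part of the original study, which used GraphPad Prism; the gene names come from the methods, all Cq values are invented for illustration, and 100% amplification efficiency, i.e., E = 2, is assumed). It converts quantification cycle values into relative quantities, normalizes them by the geometric mean of the two housekeeping genes, and computes the Pearson correlation between HNF1A and CFTR as in the statistical analysis:

```python
import numpy as np
from scipy import stats

def relative_expression(cq, housekeeping):
    """Cq -> relative expression following Hellemans et al. [17]:
    (1) convert Cq values into relative quantities (RQs) against the
        arithmetic mean Cq of each gene, assuming E = 2;
    (2) take the geometric mean of the housekeeping-gene RQs as the
        sample-specific normalization factor (NF);
    (3) divide each target-gene RQ by the NF."""
    rq = {gene: 2.0 ** (np.mean(vals) - np.asarray(vals, dtype=float))
          for gene, vals in cq.items()}
    nf = stats.gmean(np.vstack([rq[g] for g in housekeeping]), axis=0)
    return {g: rq[g] / nf for g in cq if g not in housekeeping}

# Invented Cq values for three samples (one value per sample and gene)
cq = {"GAPDH": [18.0, 18.5, 18.2], "ACTB1": [17.5, 17.8, 17.6],
      "HNF1A": [25.0, 28.0, 24.5], "CFTR":  [24.0, 27.5, 23.8]}
rel = relative_expression(cq, housekeeping=["GAPDH", "ACTB1"])

# Pearson correlation of HNF1A and CFTR relative expression (cf. Figure 3(b))
r, p = stats.pearsonr(rel["HNF1A"], rel["CFTR"])
```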
## 3. Results

### 3.1. Generation of a Human Pancreatic Cancer Organoid Biobank

We collected human primary tumor samples from 31 PDAC patients that were treatment-naïve: 25 specimens from surgical tumor resections and 6 from endoscopic ultrasound- (EUS-) guided fine needle aspiration (FNA) (Figure 1). The total organoid generation efficiency was 71%, yielding 22 PDAC organoid lines (68% for surgical resections and 83% for FNA). All primary cancers were histologically confirmed PDACs. Criteria for new PDAC organoid lines were stable growth for more than 10 passages and the presence of mutated KRAS, analyzed by Sanger sequencing or—if needed—Illumina panel sequencing, thus excluding growth of normal pancreatic organoids, a common problem in PDAC organoid generation [11]. Specimens from FNAs showed a higher outgrowth efficiency compared to resection specimens. Overall, growth rates are comparable to previously published organoid biobanks [8, 11].

Figure 1 Establishing a human PDAC organoid bank. Phase-contrast images of three representative established PDAC (passage > 10) organoids derived from (a) surgical resection specimens (DD314 and DD394) and (b) EUS-guided fine needle aspiration (NR005). Scale bars represent 500 μm.

### 3.2. CFTR Might Constitute an Alternative to HNF1A as a Biomarker for PDAC Subtyping

Due to the lack of a suitable HNF1A antibody, we searched for an alternative marker for subtyping PDAC. CFTR is part of the PDAssigner gene set defining the exocrine-like subtype [4]. We therefore performed immunofluorescence (IF) stainings of CFTR as well as KRT81 in 10 organoid lines (Figure 2 and Supplementary Figure S1). The organoid lines DD314, DD376, DD385, DD394, and DD442 were CFTR positive (CFTR+), whereas no significant expression of KRT81 (KRT81-) was observed. On the other hand, the organoid lines DD337, DD439, and NR006 exhibited no CFTR immunoreactivity (CFTR-) but a strong KRT81 positivity (KRT81+). Expression of neither marker (KRT81-/CFTR-) was observed for NR005, while NR002 expressed both markers (KRT81+/CFTR+).
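The marker combinations just described map directly onto the transcriptional subtypes. The following minimal sketch summarizes that mapping (the function name and boolean encoding are our own; the subtype labels follow the definitions given in the introduction and results, with CFTR standing in for HNF1A):

```python
def pdac_subtype(krt81_positive: bool, cftr_positive: bool) -> str:
    """Map KRT81/CFTR immunoreactivity to a PDAC subtype label."""
    if krt81_positive and cftr_positive:
        return "double-positive"        # e.g., NR002
    if krt81_positive:
        return "quasimesenchymal"       # KRT81+/CFTR-
    if cftr_positive:
        return "exocrine-like"          # CFTR+/KRT81-
    return "classical"                  # double-negative, e.g., NR005

assert pdac_subtype(False, True) == "exocrine-like"      # e.g., DD385
assert pdac_subtype(True, False) == "quasimesenchymal"   # e.g., DD337
```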
Thus, CFTR and KRT81 showed a mutually exclusive expression pattern, assigning nearly all PDAC organoids to the described two most frequently occurring subtypes: exocrine-like (CFTR+/KRT81−) and quasimesenchymal (CFTR−/KRT81+). In addition, one double-negative “classical” (KRT81-/CFTR-) and one double-positive organoid line were contained within the analyzed cohort.

Figure 2 Confocal CFTR and KRT81 immunofluorescence analysis of human pancreatic cancer organoids. Representative stainings of two PDAC organoids (DD385 and DD337) depicting CFTR+/KRT81− (a–h) and CFTR−/KRT81+ (i–p) subtypes, respectively. Scale bars represent 200 μm.

To further establish CFTR as an alternative marker for HNF1A, we analyzed the mRNA expression levels of both genes in all PDAC organoid lines by qPCR (Figure 3(a)).

Figure 3 CFTR and HNF1A expression correlate in human PDAC organoids. (a) Relative gene expression of HNF1A and CFTR in 10 analyzed human PDAC organoid lines. Organoids have been established from surgical resection specimens (DD314-DD442) or EUS-guided FNA samples (NR002-NR006). RT-qPCR data were analyzed by including the two housekeeping genes, GAPDH and ACTB1. (b) Calculation of the Pearson correlation coefficient shows a high linear relationship between the mRNA levels of HNF1A and CFTR (r=0.927) within the analyzed PDAC organoid lines (p value = 0.001 (two-tailed), α=0.05). (a) (b)

Based on a cutoff value for HNF1A and CFTR positivity of 2-fold overexpression, the organoid lines DD314, DD376, DD385, DD394, DD442, and NR005 were judged positive for both genes, while the organoid lines DD337, DD439, and NR006 were considered to be negative for both genes. The organoid line NR002 was positive for HNF1A, while negative for CFTR. A strong linear correlation between the expression levels of HNF1A and CFTR was detected (r=0.927; p=0.001; Figure 3(b)). Comparing IF and qPCR results, the organoid lines DD314, DD376, DD385, DD394, and DD442 were CFTR positive at both the mRNA and protein levels, whereas DD337, DD439, and NR006 were negative at both levels. For two cases (NR002, NR005), mRNA levels did not match the corresponding IF stainings. NR002 showed a weak but clearly present expression of CFTR at the protein level, while it was judged negative based on the mRNA level. The opposite was seen for NR005, where mRNA levels of CFTR were judged positive, while no protein expression was detected.

### 3.3. Preservation of Subtypes between Organoid Lines and Their Corresponding Primary Tumors

To answer the question of whether PDAC organoids express the same subtype-specific immunoreactivity as their corresponding primary tumor, we performed immunohistochemical stainings for KRT81 and CFTR on paraffin sections for all patients whose organoids were derived from resection specimens (n=7; Supplementary Figure S2). CFTR and KRT81 expression was consistent with the organoid immunoreactivity for organoid lines DD314, DD337, DD376, DD385, DD394, and DD442. For DD439, IHC staining was positive for both markers, whereas the corresponding organoid line only expressed KRT81. In summary, in 6/7 samples, the subtype was preserved between the primary tumor and the organoid line.

### 3.4. Drug Response Testing to Conventional Chemotherapy

To address whether the different molecular subtypes exhibit a differential drug response towards conventional chemotherapeutics, PDAC organoids were treated with gemcitabine and the single drug compounds of the FOLFIRINOX regimen: irinotecan, oxaliplatin, and 5-FU (Figure 4(a)).
### 3.3. Preservation of Subtypes between Organoid Lines and Their Corresponding Primary Tumors

To answer the question of whether PDAC organoids express the same subtype-specific immunoreactivity as their corresponding primary tumor, we performed immunohistochemical stainings for KRT81 and CFTR on paraffin sections for all patients whose organoids were derived from resection specimens (n = 7; Supplementary Figure S2). CFTR and KRT81 expression was consistent with the organoid immunoreactivity for organoid lines DD314, DD337, DD376, DD385, DD394, and DD442. For DD439, IHC staining was positive for both markers, whereas the corresponding organoid line only expressed KRT81. In summary, in 6/7 samples the subtype was preserved between primary tumor and organoid line.

### 3.4. Drug Response Testing to Conventional Chemotherapy

To address whether the different molecular subtypes exhibit a differential drug response towards conventional chemotherapeutics, PDAC organoids were treated with gemcitabine and the single drug compounds of the FOLFIRINOX regimen, irinotecan, oxaliplatin, and 5-FU (Figure 4(a)). A wide variation in drug response was observed for each drug. In order to perform a group comparison, the KRT81+ quasimesenchymal/double-positive (n = 4) organoid lines and the KRT81− exocrine-like/classical (n = 6) organoid lines were combined. A nonsignificant (p > 0.05; Figure 4(b)) 1.5-fold higher resistance of KRT81+ organoids against 5-FU (mean IC50: KRT81+ 5.01 μM; KRT81− 7.52 μM) and oxaliplatin (mean IC50: KRT81+ 20.88 μM; KRT81− 30.36 μM) was observed, while no difference was detected for gemcitabine (mean IC50: KRT81+ 0.02 μM; KRT81− 0.02 μM) and irinotecan (mean IC50: KRT81+ 7.61 μM; KRT81− 6.33 μM).

Figure 4: Drug response of PDAC organoid lines. (a) PDAC organoid response to different concentrations of gemcitabine, 5-FU, oxaliplatin, and irinotecan. Red curves indicate KRT81+ (quasimesenchymal/double-positive) and green curves KRT81− (exocrine-like/classical) PDAC organoid lines. Relative viability was calculated by normalizing to the mean of the negative control. Drug response curves were calculated in GraphPad Prism (version 6.02) via nonlinear regression (curve fit) analysis, requiring drug concentrations to be log transformed. (b) Comparison of IC50 values between all analyzed quasimesenchymal/double-positive and exocrine-like/classical PDAC organoids for gemcitabine, 5-FU, oxaliplatin, and irinotecan. Box plots show the median and the upper and lower quartile values in each group.
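The curve fitting described in the Figure 4 legend (log-transformed concentrations, nonlinear regression) was performed in GraphPad Prism; an equivalent open-source sketch using a four-parameter logistic model is shown below. This is our own illustration: the viability values are invented, and scipy's curve_fit stands in for Prism's fitting routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_c, top, bottom, log_ic50, hill):
    """Four-parameter logistic dose-response curve on log10 concentration."""
    return bottom + (top - bottom) / (1 + 10 ** ((log_c - log_ic50) * hill))

# Hypothetical gemcitabine dilution series (M) and relative viabilities,
# already normalized to the mean of the negative control.
conc = np.array([1e-9, 1e-8, 2.5e-8, 5e-8, 1e-7, 2e-7, 1e-6])
viability = np.array([0.98, 0.90, 0.71, 0.48, 0.30, 0.22, 0.15])

log_c = np.log10(conc)  # concentrations must be log transformed before fitting
params, _ = curve_fit(four_pl, log_c, viability,
                      p0=[1.0, 0.1, np.median(log_c), 1.0])
ic50 = 10 ** params[2]  # back-transform the fitted midpoint to molar units
print(f"IC50 ~ {ic50:.2e} M")
```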
## 4. Discussion

For many years, classical two-dimensional cell cultures in plastic dishes were the workhorse of cancer research. Based on the identification of Lgr5 as a marker gene of intestinal stem cells [18], a novel three-dimensional culture system named organoids was developed that faithfully recapitulates the tissue of origin [19]. This was made possible by using a matrix (Matrigel) plus a defined cocktail of growth factors and inhibitors based on the growth requirements of normal intestinal stem cells [20]. In the meantime, organoid culture protocols have been published for many different organs [21]. Following the initial establishment of normal tissue organoid cultures, organoids derived from human tumors were also described [13, 22]. These patient-derived cancer organoids open up new opportunities for personalized therapy, as they mimic in vitro, to a high degree, the response of the tumor in vivo [14].

PDAC is a very heterogeneous disease, both on the histological and on the molecular level [23]. Nevertheless, nearly all patients currently receive the same chemotherapeutic treatment, as molecular subtyping has not yet entered the clinical stage. There is a tremendous need for better treatment strategies, as most patients present with advanced disease stages and systemic chemotherapies in general have only a minor effect on overall survival. Grouping of PDAC into different molecular subtypes has been shown to distinguish patients with different survival [4, 7]. Collisson et al. described three different PDAC subtypes: a classical, a quasimesenchymal, and an exocrine-like subtype. This subtyping was based on mRNA expression analyses using microarrays of laser capture-microdissected material. To facilitate the cumbersome and decay-prone mRNA-based subtyping, Noll et al. identified the protein markers HNF1A and KRT81, which can classify PDAC into the established "Collisson" subgroups [5].
In a follow-up study, the authors further established the value of HNF1A/KRT81-based subtyping by documenting differential therapeutic responses of patients to standard-of-care treatment [7].

As all these studies were performed on patient material in a retrospective setting, prospective clinical trials have to show that personalizing treatment according to the described molecular subtypes benefits PDAC patients. In order to set up such a clinical trial, two points are of importance: firstly, the subtyping needs to be rather fast, and secondly, it needs to be very reproducible. Especially in the neoadjuvant setting, only very little tumor material is available, since material is mostly obtained from EUS-guided FNAs. This material does not routinely suffice to perform IHC-based subtyping. Organoids can be generated with high efficiency from FNAs (83% in our current cohort) and therefore constitute a feasible way to expand tumor material ex vivo for subtyping. The second important point is the reproducibility of the IHC stainings. The originally used HNF1A antibody (#sc-8986, Santa Cruz Biotechnology) was discontinued and is no longer available. Testing of alternative antibodies for HNF1A has produced contradictory and inconclusive results. We therefore set out to establish CFTR as a substitute for HNF1A in PDAC organoid cultures. CFTR IF staining gave either very strong or very weak/absent signals, allowing us to classify organoid lines as either positive or negative. As we could not perform HNF1A IHC to compare HNF1A and CFTR stainings side by side, we performed mRNA expression analyses. These revealed a highly significant correlation of the transcript levels of the two genes. In line with previously published work, CFTR expression was mutually exclusive with KRT81 expression in nearly all organoid lines we analyzed. We could therefore assign—assuming that CFTR can indeed substitute for HNF1A—our organoid lines to the quasimesenchymal, the exocrine-like, or the classical subtype. In addition, one double-positive organoid line was detected. The existence of PDACs expressing both markers has been observed previously [5, 7].

However, the existence of the exocrine-like subtype was recently questioned and attributed to contamination of the analyzed samples with normal pancreatic tissue [24]. As our cultures contain only tumor cells, based on the allele frequency found for the KRAS mutation and the homogeneous positivity of the whole cultures in IF stainings, our data nevertheless argue for the existence of this PDAC subtype. Of note, we observed a high concordance between the PDAC subtypes of the primary tumors and the respective organoid lines (6/7). In only one case (DD439), a signal for both CFTR and KRT81 was seen in immunohistochemical stainings of the primary tumor, whereas the corresponding organoid line only showed high KRT81 expression on the protein and mRNA levels. A possible explanation could be a restricted clonality of this PDAC organoid line. A primary tumor was judged CFTR positive if a minimum of 10% of epithelial cells were stained. It is therefore possible that the organoid line was established from a CFTR-negative region, which in this particular case comprises up to 80% of the primary tumor. In any case, one limitation of the present analysis is the lack of microarray-based mRNA expression subtyping of the organoid lines using the original PDAssigner gene set of Collisson et al.
as a control.

Drug assays performed with the chemotherapeutics frequently used in PDAC treatment did not show a statistically significant differential effect between KRT81+ (quasimesenchymal/double-positive) and KRT81− (exocrine-like/classical) PDAC organoid lines. Collisson et al. described quasimesenchymal PDAC 2D cell lines as more sensitive to gemcitabine than the classical subtype [4]. Muckenhuber and colleagues suggest that KRT81+ tumor cells are more resistant to the FOLFIRINOX regimen than the exocrine-like subtype (HNF1A+) [7]. In line with this, we observed a tendency of KRT81+ PDAC organoids in our cohort to be more resistant towards 5-FU and oxaliplatin, which comprise two of the three drugs of the FOLFIRINOX regimen, and we could clearly document the feasibility of drug testing in primary patient-derived tumor models such as the organoid system. Larger PDAC organoid libraries in conjunction with the KRT81/CFTR-based subtyping approach might in the future reveal subtype-specific resistance patterns towards conventional or targeted drugs.

In summary, subtyping of FNA-derived PDAC organoid lines based on CFTR and KRT81 might constitute a feasible way to enable prospective clinical trials evaluating subtype-specific personalized treatment protocols.

---

*Source: 1024614-2019-05-02.xml*
# On P- and p-Convexity of Banach Spaces

**Authors:** Omar Muñiz-Pérez
**Journal:** Abstract and Applied Analysis (2010)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2010/102462

---

## Abstract

We show that every U-space and every Banach space X satisfying δX(1) > 0 are P(3)-convex, and we study the nonuniform version of P-convexity, which we call p-convexity.

---

## Body

## 1. Introduction

Kottman introduced the concept of P-convexity in 1970 [1]. He proved that every P-convex space is reflexive and also that P-convexity follows from uniform convexity, as well as from uniform smoothness. In this paper we study conditions which guarantee the P-convexity of a Banach space and generalize Kottman's result concerning uniform convexity in two different ways: every U-space and every Banach space X satisfying δX(1) > 0 are P(3)-convex. Many convexity conditions on Banach spaces have both a uniform and a nonuniform version; for example, strict convexity is the nonuniform version of uniform convexity, smoothness is the nonuniform version of uniform smoothness, and a u-space is the nonuniform version of a U-space, among others. We also define the concept of p-convexity, which is the nonuniform version of P-convexity, and obtain some interesting results.

## 2. P-Convex Banach Spaces

Throughout this paper we adopt the following notation. (X, ∥·∥) will be a Banach space, and when there is no possible confusion we simply write X. The unit ball {x ∈ X : ∥x∥ ≤ 1} and the unit sphere {x ∈ X : ∥x∥ = 1} are denoted by BX and SX, respectively. B(y, r) will denote the closed ball with center y and radius r. The topological dual space of X is denoted by X*.

### 2.1. P-Convexity

The next concept was given by Kottman in [1].

Definition 2.1. Let X be a Banach space. For each n ∈ ℕ let

(2.1) P(n, X) = sup{r > 0 : there exist n disjoint balls of radius r in BX}.

It is easy to see that P(n, X) ≤ 1/2 for n ≥ 2.
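As a short verification of this bound (our own addition, spelling out the one-line remark above, and assuming the balls are closed), only the triangle inequality is needed:

```latex
% Let B(y_1,r) and B(y_2,r) be two disjoint closed balls of radius r in B_X.
% Containment B(y_i,r) \subseteq B_X forces \|y_i\| \le 1 - r, while
% disjointness forces \|y_1 - y_2\| > 2r (otherwise the midpoint would lie
% in both balls). Combining,
\[
  2r \;<\; \|y_1 - y_2\| \;\le\; \|y_1\| + \|y_2\| \;\le\; 2(1 - r),
\]
% hence r < 1/2, and taking the supremum over all such r gives P(n,X) \le 1/2.
```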
### 2.2. P-Convexity and the Coefficient of Convexity

In [1], Kottman proved that if X is a Banach space satisfying the condition δX(2/3) > 0, then X is P(3)-convex, where δX is the modulus of convexity. In this section we give a result which improves this condition, and we show that the improved assumption is sharp. We recall the following concepts, introduced by J. A. Clarkson in 1936.

Definition 2.5. The modulus of convexity of a Banach space X is the function δX : [0, 2] → [0, 1] defined by (2.3) δX(ε) = inf{1 − ∥(x + y)/2∥ : x, y ∈ BX, ∥x − y∥ ≥ ε}. The coefficient of convexity of a Banach space X is the number ε0(X) defined as (2.4) ε0(X) = sup{ε ∈ [0, 2] : δX(ε) = 0}.

We also need the following definition, given by R. C. James in 1964.

Definition 2.6. X is said to be uniformly nonsquare if there exists α > 0 such that for all ξ, η ∈ SX (2.5) min{∥ξ − η∥, ∥ξ + η∥} ≤ 2 − α.

In order to prove our theorem we need two known results, which can be found in [2].

Lemma 2.7 (Goebel-Kirk). Let X be a Banach space. For each ε ∈ [ε0(X), 2] one has the equality δX(2 − 2δX(ε)) = 1 − ε/2.

Lemma 2.8 (Ullán). Let X be a Banach space. For each 0 ≤ ε2 ≤ ε1 < 2 the following inequality holds: δX(ε1) − δX(ε2) ≤ (ε1 − ε2)/(2 − ε1).

Using these lemmas we obtain the following result.

Theorem 2.9. Let X be a Banach space which satisfies δX(1) > 0, that is, ε0(X) < 1. Then X is P(3)-convex. Moreover, there exists a Banach space X with ε0(X) = 1 which is not P(3)-convex.

Proof. Let t0 = 2 − √(2 − ε0(X)). Clearly ε0(X) < t0 < 1. Let x, y, z ∈ SX, and suppose that ∥x − y∥ > 2 − 2δX(t0) and ∥x − z∥ > 2 − 2δX(t0). By Lemma 2.7, we have (2.6) ∥(x + y)/2∥ ≤ 1 − δX(2 − 2δX(t0)) = 1 − (1 − t0/2) = t0/2. Similarly ∥(x + z)/2∥ ≤ t0/2. Hence we get (2.7) ∥z − y∥ ≤ ∥z + x∥ + ∥x + y∥ ≤ 2t0. Finally, from Lemma 2.8 it follows that (2.8) δX(t0) = δX(t0) − δX(ε0(X)) ≤ (t0 − ε0(X))/(2 − t0) = √(2 − ε0(X)) − 1 = 1 − t0. Then ∥y − z∥ ≤ 2t0 ≤ 2 − 2δX(t0), and thus X is P(3)-convex.

Now consider for each 1 < p < ∞ the space lp,∞ defined as follows. Each element x = {xi}i ∈ lp may be represented as x = x+ − x−, where the respective ith components of x+ and x− are given by (x+)i = max{xi, 0} and (x−)i = max{−xi, 0}. Set ∥x∥p,∞ = max{∥x+∥p, ∥x−∥p}, where ∥·∥p stands for the lp-norm. The space lp,∞ = (lp, ∥·∥p,∞) satisfies ε0(lp,∞) = 1 (see [3]). On the other hand, let x1 = e1 − e3, x2 = −e1 + e2, x3 = −e2 + e3 ∈ Slp,∞, where {ei}i is the canonical basis in lp. These points satisfy ∥xi − xj∥p,∞ = 2 for i ≠ j. Thus lp,∞ is not P(3)-convex.

It is known that if a Banach space X satisfies ε0(X) < 1, then X has normal structure as well as P(3)-convexity. The space X = lp,∞ is an example of a Banach space with ε0(X) = 1 which does not have normal structure (see [3]) and is not P(3)-convex.

Kottman also proved in [1] that every uniformly smooth space is a P-convex space. We obtain a generalization of this fact. Before we show this result we recall the next concept.

Definition 2.10. The modulus of smoothness of a Banach space X is the function ρX : [0, ∞) → [0, ∞) defined by (2.9) ρX(t) = sup{(1/2)(∥x + ty∥ + ∥x − ty∥ − 2) : x, y ∈ SX} for each t ≥ 0. X is called uniformly smooth if limt→0 ρX(t)/t = 0.

The proofs of the following lemmas can be found in [4, 5].

Lemma 2.11. For every Banach space X, one has limt→0 ρX(t)/t = (1/2)ε0(X*).

Lemma 2.12. Let X be a Banach space. X is P(3)-convex if and only if X* is P(3)-convex.

By Theorem 2.9 and the previous lemmas we deduce the next result.

Corollary 2.13. If X is a Banach space satisfying limt→0 ρX(t)/t < 1/2, then X is P(3)-convex.

With respect to P(4)-convex spaces we have the following result, which is easy to prove.

Proposition 2.14. If X is a P(ε, 4)-convex Banach space, then ε0(X) ≤ 2 − ε, and hence X is uniformly nonsquare.

In fact, in two-dimensional normed spaces, P(4)-convexity and uniform nonsquareness coincide. The proof of this involves many calculations and can be seen in [6]. Another technical proof (see [6]) shows that a two-dimensional normed space X is always P(1, 5)-convex. Hence the space X = (ℝ2, ∥·∥∞) is P(1, 5)-convex and ε0(X) = 2, and thus P(5)-convexity does not imply uniform nonsquareness.
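As a concrete illustration of Theorem 2.9 (our example, not part of the original text), the hypothesis δX(1) > 0 is easy to check in a Hilbert space H, where the parallelogram law yields the classical closed form of the modulus of convexity:

```latex
% Modulus of convexity of a Hilbert space H (classical formula; requires amsmath).
\begin{align*}
\delta_H(\varepsilon) &= 1-\sqrt{1-\frac{\varepsilon^{2}}{4}},\qquad 0\le\varepsilon\le 2,
 \qquad\text{so } \varepsilon_0(H)=0<1,\\
\delta_H(1) &= 1-\frac{\sqrt{3}}{2}\approx 0.134>0 .
\end{align*}
% By Theorem 2.9, every Hilbert space is therefore P(3)-convex.
```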
### 2.3. Relation between U-Spaces and P-Convex Spaces

In this section we show that P-convexity follows from U-convexity. The following concept was introduced by Lau in 1978 [7].

Definition 2.15. A Banach space X is called a U-space if for any ε > 0 there exists δ > 0 such that (2.10) x, y ∈ SX, f(x − y) > ε for some f ∈ ∇(x) ⇒ ∥(x + y)/2∥ ≤ 1 − δ, where for each x ∈ X (2.11) ∇(x) = {f ∈ SX* : f(x) = ∥x∥}.

The modulus of this type of convexity was introduced by Gao in [8] and further studied by Mazcuñán-Navarro [9] and Saejung [10]. The following result is proved in [8].

Lemma 2.16. Let X be a Banach space. If X is a U-space, then X is uniformly nonsquare.

From the above we obtain the next theorem, which generalizes Kottman's result in [1] that P(3)-convexity follows from uniform convexity.

Theorem 2.17. If X is a U-space, then X is P(3)-convex.

Proof. By Lemma 2.16 there exists α > 0 such that for all ξ, η ∈ SX (2.12) min{∥ξ − η∥, ∥ξ + η∥} ≤ 2 − α. Since X is a U-space, for ε = α/2 there exists δ > 0 such that (2.13) x, y ∈ SX, f(x − y) ≥ α/2 for some f ∈ ∇(x) ⇒ ∥(x + y)/2∥ ≤ 1 − δ. We claim that X is P(β, 3)-convex, where β = min{α, δ}. Indeed, proceeding by contradiction, assume that there exist x, y, z ∈ SX such that (2.14) min{∥x − y∥, ∥x − z∥, ∥y − z∥} > 2 − β. Define w = −y and u = −z, and let f ∈ ∇(w). If f(w − x) ≥ α/2, then (2.15) ∥(w + x)/2∥ ≤ 1 − δ. Therefore 2 − δ ≤ 2 − β < ∥x − y∥ = ∥w + x∥ ≤ 2 − 2δ, which is not possible. Hence f(w − x) < α/2. Similarly we prove f(w + u) < α/2. Also ∥x + u∥ = ∥x − z∥ > 2 − β ≥ 2 − α, and hence, by (2.12), we have f(x − u) ≤ ∥x − u∥ ≤ 2 − α. By the above we have (2.16) 2 = 2f(w) = f(w − x) + f(x − u) + f(u + w) < α/2 + (2 − α) + α/2 = 2, which is a contradiction.
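The mechanism behind Theorem 2.17 can be seen very explicitly in a Hilbert space H, where the parallelogram law replaces the duality argument; the computation below is our illustration and is not part of the original proof.

```latex
% Three unit vectors x, y, z with pairwise distances > 2 - beta in H (requires amsmath):
\begin{align*}
\|x+y\|^{2} &= 2\|x\|^{2}+2\|y\|^{2}-\|x-y\|^{2} \;<\; 4-(2-\beta)^{2} \;\le\; 4\beta,\\
\|y-z\| &= \|(x+y)-(x+z)\| \;\le\; \|x+y\|+\|x+z\| \;<\; 4\sqrt{\beta},
\end{align*}
% which contradicts ||y - z|| > 2 - beta as soon as 4*sqrt(beta) <= 2 - beta
% (e.g. beta = 1/16), so H is P(1/16, 3)-convex.
```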
### 2.4. The Dual Concept of P-Convexity

In [1], Kottman introduces a property which turns out to be the dual concept of P-convexity. In this section we characterize the dual of a P-convex space in an easier way. We begin by recalling Kottman's characterization.

Definition 2.18. Let X be a Banach space and ε > 0. A convex subset A of BX is said to be ε-flat if A ∩ (1 − ε)BX = ∅. A collection 𝔇 of ε-flats is called complemented if for each pair of ε-flats A and B in 𝔇 we have that A ∪ B has a pair of antipodal points. For any n ∈ ℕ we define (2.17) F(n, X) = inf{ε > 0 : BX has a complemented collection 𝔇 of ε-flats such that Card(𝔇) = n}.

Theorem 2.19 (Kottman). Let X be a Banach space and n ∈ ℕ. Then (a) F(n, X*) = 0 ⇔ P(n, X) = 1/2; (b) P(n, X*) = 1/2 ⇔ F(n, X) = 0.

Now we define P-smoothness and prove that it turns out to be the dual concept of P-convexity. The advantage of this characterization is that it uses only simple concepts, and one does not need ε-flats. Besides, in the proof of the duality we need neither Helly's theorem nor the Hahn-Banach theorem, as Kottman does in Theorem 2.19.

Definition 2.20. Let X be a Banach space and δ > 0. For each f, g ∈ X* set S(f, g, δ) = {x ∈ BX : f(x) ≥ 1 − δ, g(x) ≥ 1 − δ}. Given δ > 0 and n ∈ ℕ, X is said to be P(δ, n)-smooth if for each f1, f2, …, fn ∈ SX* there exist 1 ≤ i, j ≤ n, i ≠ j, such that S(fi, −fj, δ) = ∅. X is said to be P(n)-smooth if it is P(δ, n)-smooth for some δ > 0, and X is said to be P-smooth if it is P(δ, n)-smooth for some δ > 0 and some n ∈ ℕ.

Proposition 2.21. Let X be a Banach space. Then (a) X is P(n)-convex if and only if X* is P(n)-smooth; (b) X is P(n)-smooth if and only if X* is P(n)-convex.

Proof. (a) Let X be a P(ε, n)-convex space. Let x1**, …, xn** ∈ SX**. We will show that there exist 1 ≤ i, j ≤ n, i ≠ j, such that S(xi**, −xj**, ε/4) = ∅. Since X is P-convex, it is also reflexive. Therefore x1** = 𝚥(x1), …, xn** = 𝚥(xn) for some x1, …, xn ∈ SX, where 𝚥 is the canonical injection from X to X**. By hypothesis, there exist 1 ≤ i, j ≤ n, i ≠ j, such that ∥xi − xj∥ ≤ 2 − ε. Therefore it is enough to prove that (2.18) {f ∈ BX* : f(xi) ≥ 1 − ε/4, −f(xj) ≥ 1 − ε/4} = ∅. We proceed by contradiction, supposing that there exists f ∈ BX* such that f(xi) ≥ 1 − ε/4 and −f(xj) ≥ 1 − ε/4. Then (2.19) 2 − ε ≥ ∥xi − xj∥ ≥ f(xi − xj) ≥ 2 − ε/2, which is not possible; consequently X* is P(ε/4, n)-smooth.

Now let X be a Banach space such that X* is P(ε, n)-smooth. Let x1, …, xn ∈ SX. By hypothesis, there exist 1 ≤ i, j ≤ n, i ≠ j, such that S(𝚥(xi), −𝚥(xj), ε) = ∅, that is, for each f ∈ BX* we have f(xi) < 1 − ε or −f(xj) < 1 − ε. We will see that ∥xi − xj∥ ≤ 2 − ε. We again proceed by contradiction, supposing that ∥xi − xj∥ = ∥𝚥(xi − xj)∥ > 2 − ε. There exists f ∈ SX* such that 𝚥(xi − xj)(f) = f(xi) − f(xj) > 2 − ε. If f(xi) < 1 − ε, then (2.20) 1 = ∥f∥∥xj∥ ≥ −f(xj) > 2 − ε − f(xi) > 1, which is not possible. Similarly, if −f(xj) < 1 − ε, we obtain a contradiction. Thus ∥xi − xj∥ ≤ 2 − ε, and consequently X is P(ε, n)-convex. The proof of (b) is analogous to the proof of (a).

Therefore the conditions "X is P(n)-smooth" and "F(n, X) > 0" must be equivalent.
## 3. p-Convex Banach Spaces

In this section we introduce the nonuniform version of P-convexity, which we call p-convexity.

Definition 3.1. Let X be a Banach space and n ∈ ℕ. X is said to be p(n)-convex if for any x1, …, xn ∈ SX there exist 1 ≤ i, j ≤ n, i ≠ j, such that ∥xi − xj∥ < 2. X is said to be p-convex if it is p(n)-convex for some n ∈ ℕ.

Kottman defined the concept of P-convexity in terms of the intersection of balls. We will do something similar to give an equivalent definition of p-convexity. It is easy to see that in a normed space any two closed balls of radius 1/2 contained in the unit ball have nonempty intersection. If the radius is less than 1/2, this can fail for arbitrarily many balls: in l1, for every n and every r < 1/2 there exist n closed balls of radius r in the unit ball such that no two of them intersect. In fact, let {ei}i=1∞ be the canonical basis of l1. Then the closed balls of radius r < 1/2 centered at the points (1/2)ei, i ∈ ℕ, are disjoint and contained in the unit ball.
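Spelling out this l1 claim (our verification, with yi = (1/2)ei and r < 1/2 fixed):

```latex
% Containment in the unit ball of l_1 and pairwise disjointness (requires amsmath):
\begin{align*}
z\in B(y_i,r) &\;\Rightarrow\; \|z\|_{1}\le\|y_i\|_{1}+r=\tfrac12+r\le 1,
 &&\text{so } B(y_i,r)\subset B_{\ell_1};\\
\|y_i-y_j\|_{1} &= \tfrac12\,\|e_i-e_j\|_{1}=1>2r,
 &&\text{so } B(y_i,r)\cap B(y_j,r)=\emptyset,
\end{align*}
% since a common point z would force ||y_i - y_j||_1 <= 2r by the triangle inequality.
```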
However, if X is p(n)-convex, we will see that for any n points in the unit ball there exists r < 1/2 such that, if the n closed balls of radius r centered at these points are contained in the unit ball, then two different balls have nonempty intersection. To prove this we need the following lemma, which was shown in [11].

Lemma 3.2. Let X be a Banach space and x, y ∈ X, x, y ≠ 0. Then (3.1) ∥x/∥x∥ − y/∥y∥∥ ≥ (1/min{∥x∥, ∥y∥})(∥x − y∥ − |∥x∥ − ∥y∥|).

Lemma 3.3. X is a p(n)-convex space if and only if for any y1, …, yn ∈ BX there exists r ∈ (0, 1/2) such that, if B(yi, r) ⊂ BX for all i = 1, …, n, then there are 1 ≤ i, j ≤ n, i ≠ j, so that (3.2) B(yi, r) ∩ B(yj, r) ≠ ∅.

Proof. Assume that X satisfies condition (3.2), and let x1, …, xn ∈ SX. Let r ∈ (0, 1/2) be the number which satisfies condition (3.2) for x1/2, …, xn/2. It is easy to see that B(xi/2, r) ⊂ BX for each i = 1, …, n. Therefore there exist 1 ≤ i, j ≤ n, i ≠ j, such that (3.3) B(xi/2, r) ∩ B(xj/2, r) ≠ ∅. Let (3.4) y ∈ B(xi/2, r) ∩ B(xj/2, r). We have (3.5) ∥(xi − xj)/2∥ ≤ ∥xi/2 − y∥ + ∥xj/2 − y∥ < 2r < 1, and thus X is p(n)-convex.

Now we suppose that there exist y1, …, yn ∈ BX such that for any ρ ∈ (0, 1/2) we have (3.6) B(yi, 1/2 − ρ) ⊂ BX for all i = 1, …, n, and (3.7) B(yi, 1/2 − ρ) ∩ B(yj, 1/2 − ρ) = ∅ for all i, j = 1, …, n, i ≠ j. We verify that X is not p(n)-convex in four steps.

(a) First, ∥yi − yj∥ > 1 − 2ρ for any i, j = 1, …, n, i ≠ j, since otherwise the midpoint of yi and yj would belong to both of the disjoint balls in (3.7).

(b) Second, 1/2 − 3ρ < ∥yi∥ ≤ 1/2 + ρ for all i = 1, …, n. To verify this claim we note that ∥yi/∥yi∥ − yi∥ ≥ 1/2 − ρ for all i, because if ∥yi/∥yi∥ − yi∥ < 1/2 − ρ for some i, then yi/∥yi∥ ∈ int B(yi, 1/2 − ρ) ⊂ int BX, which is not possible. Hence, as ∥yi/∥yi∥ − yi∥ = 1 − ∥yi∥, it follows that ∥yi∥ = 1 − ∥yi/∥yi∥ − yi∥ ≤ 1/2 + ρ for each i = 1, …, n. Now, if ∥yi∥ ≤ 1/2 − 3ρ for some i, we have by (a) that for any j ≠ i, 1 − 2ρ < ∥yi − yj∥ ≤ ∥yi∥ + ∥yj∥ ≤ (1/2 − 3ρ) + (1/2 + ρ) = 1 − 2ρ, which is not possible.

(c) Third, |∥yi∥ − ∥yj∥| < 4ρ for any i, j = 1, …, n, i ≠ j. Indeed, by (b) we get −4ρ = (1/2 − 3ρ) − (1/2 + ρ) < ∥yi∥ − ∥yj∥ < (1/2 + ρ) − (1/2 − 3ρ) = 4ρ.

(d) From (a), (b), (c), and Lemma 3.2, we have (3.8) ∥yi/∥yi∥ − yj/∥yj∥∥ ≥ (1/min{∥yi∥, ∥yj∥})(∥yi − yj∥ − |∥yi∥ − ∥yj∥|) > (2 − 16ρ)/(1 + 2ρ) for any i, j = 1, …, n, i ≠ j. Since ρ > 0 is arbitrary, letting ρ → 0 we obtain ∥yi/∥yi∥ − yj/∥yj∥∥ = 2 for all i, j = 1, …, n, i ≠ j, and thus X is not p(n)-convex.

Next we give some examples of spaces which are not p-convex. The first is not reflexive and the last one is superreflexive.

Example 3.4. c0, and consequently C[0,1] and l∞, are not p-convex spaces. Indeed, let {ei}i=1∞ be the canonical basis in c0. For each n ∈ ℕ we define ui = ∑j=1n λi,j ej, where λi,j = 1 if j ≠ i, λi,i = −1, and i = 1, …, n. Clearly u1, …, un ∈ Sc0, and for each i ≠ j we have ∥ui − uj∥∞ = 2.

Example 3.5. Let X denote the space obtained by renorming l2 as follows. For x = (xi)i∈ℕ ∈ l2 set (3.9) ∥|x|∥ = max{supi,j |xi − xj|, (∑i=1∞ xi^2)^(1/2)}. Then ∥x∥ ≤ ∥|x|∥ ≤ 2∥x∥, where ∥·∥ stands for the l2-norm, and X is superreflexive. On the other hand, the canonical basis {en}n in l2 satisfies ∥|ei − ej|∥ = 2 for each i ≠ j. Thus X is not p-convex.
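To make the distance claim in Example 3.4 explicit (our computation, not in the original): for i ≠ j the vector ui − uj is nonzero in exactly two coordinates.

```latex
% Coordinates of u_i - u_j in c_0 for i != j, with lambda_{i,j} = 1 (j != i) and
% lambda_{i,i} = -1 (requires amsmath):
\begin{align*}
(u_i-u_j)_k=\lambda_{i,k}-\lambda_{j,k}=
\begin{cases}
-2, & k=i,\\
\hphantom{-}2, & k=j,\\
\hphantom{-}0, & \text{otherwise},
\end{cases}
\qquad\text{hence } \|u_i-u_j\|_{\infty}=2,\quad \|u_i\|_{\infty}=1 .
\end{align*}
% Since n is arbitrary, c_0 fails p(n)-convexity for every n.
```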
Now we will mention several properties that imply p-convexity. Recall the following concepts. Let X be a Banach space. X is said to be a u-space if it satisfies the following implication: (3.10) x, y ∈ SX, ∥(x + y)/2∥ = 1 ⇒ ∇(x) = ∇(y). X is said to be smooth if for any x ∈ SX there exists a unique f ∈ SX* such that f(x) = 1; that is, for each x ∈ SX, ∇(x) contains a single point. X is called strictly convex if the following implication holds: (3.11) for all x, y ∈ BX: x ≠ y ⇒ ∥(x + y)/2∥ < 1.

Proposition 3.6. Every smooth space, every strictly convex space, and every u-space is p(3)-convex.

Proof. Every smooth space and every strictly convex space is a u-space, so it suffices to show that p(3)-convexity follows from being a u-space. If X is a u-space, then for any x, y ∈ SX the following inequality holds: min{∥x − y∥, ∥x + y∥} < 2. Indeed, if we suppose that there exist x, y ∈ SX such that ∥x + y∥ = ∥x − y∥ = 2, then ∇(x) = ∇(y) and ∇(x) = ∇(−y), which is not possible. Now suppose that X is not p(3)-convex, so that there exist x, y, z ∈ SX with ∥x − y∥ = ∥y − z∥ = ∥z − x∥ = 2. Since (1/2)∥x − y∥ = (1/2)∥y − z∥ = 1, we have ∇(x) = ∇(−y) = ∇(z). Let f ∈ ∇(−y); then f(x + z) ≤ ∥x + z∥ < 2 (by the inequality above, since ∥x − z∥ = 2), and (3.12) 2 = f(x) + f(−y) = f(x + z) − f(z) + f(−y) = f(x + z) < 2, a contradiction. Thus X is p(3)-convex.

Obviously P-convexity implies p-convexity; however, a p-convex space is not necessarily P-convex, even if the space is reflexive, as the following example shows.

Example 3.7. Let {rk}k=1∞ be a sequence of real numbers such that rk > 1 for each k ∈ ℕ and rk ↓ 1 as k → ∞. Consider the l2-direct sum X = ∑k=1∞ ⊕2 lrk. It is known that this space is strictly convex, hence also p(3)-convex, and that X is reflexive. However, X is not P-convex. Indeed, let ε > 0. We choose k ∈ ℕ such that 2 − ε < 2^(1/rk). If {ei}i=1∞ is the canonical basis of lrk, we have ∥ei − ej∥rk = 2^(1/rk) > 2 − ε for all i, j ∈ ℕ, i ≠ j, and hence X is not a P-convex space.
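The distance computation behind Example 3.7, written out (our addition): the basis vectors ei sit inside the k-th summand lrk, on which the norm of X restricts to the lrk-norm.

```latex
% Pairwise distances of canonical basis vectors in l_{r_k} (requires amsmath):
\begin{align*}
\|e_i-e_j\|_{r_k}=\bigl(|1|^{r_k}+|-1|^{r_k}\bigr)^{1/r_k}=2^{1/r_k}
\;\xrightarrow[k\to\infty]{}\;2\qquad(r_k\downarrow 1).
\end{align*}
% For any eps > 0 and any n, choosing k with 2^{1/r_k} > 2 - eps yields n unit
% vectors of X with pairwise distances > 2 - eps, so X is not P(eps, n)-convex.
```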
We have obtained a result which shows a strong relation between P-convexity and p-convexity with respect to ultrapowers of Banach spaces. We recall the definition and some results regarding ultrapowers, which can be found in [4]. A filter 𝔘 on I is called an ultrafilter on I if 𝔘 is a maximal element, with respect to set inclusion, of the collection 𝒫 of all filters on I. 𝔘 is an ultrafilter on I if and only if for all A ⊂ I either A ∈ 𝔘 or I∖A ∈ 𝔘. Let {Xi}i∈I be a family of Banach spaces, and let (3.13) l∞(Xi) = {{xi}i∈I ∈ ∏i∈I Xi : sup{∥xi∥Xi : i ∈ I} < ∞}. If we define ∥{xi}i∈I∥∞ = sup{∥xi∥Xi : i ∈ I} for each {xi}i∈I ∈ l∞(Xi), then ∥·∥∞ defines a norm on l∞(Xi), and (l∞(Xi), ∥·∥∞) is a Banach space. If 𝔘 is an ultrafilter on I, then for each {xi}i∈I ∈ l∞(Xi) the limit lim𝔘 ∥xi∥ always exists and is unique. Let 𝔘 be an ultrafilter on I, and define (3.14) 𝒩𝔘 = {{xi} ∈ l∞(Xi) : lim𝔘 ∥xi∥ = 0}. 𝒩𝔘 is a closed subspace of l∞(Xi). The ultraproduct of {Xi}i∈I with respect to the ultrafilter 𝔘 on I is the quotient space l∞(Xi)/𝒩𝔘 equipped with the quotient norm; it is denoted by {Xi}𝔘 and its elements by {xi}𝔘. If Xi = X for all i ∈ I, then {X}𝔘 = {Xi}𝔘 is called the ultrapower of X. The quotient norm in {Xi}𝔘, (3.15) ∥{xi}𝔘∥ = inf{∥{xi + yi}i∥∞ : {yi}i ∈ 𝒩𝔘}, satisfies the equality (3.16) ∥{xi}𝔘∥ = lim𝔘 ∥xi∥Xi for each {xi}𝔘 ∈ {Xi}𝔘. If 𝔘 is nontrivial, then X can be embedded into {X}𝔘 isometrically. We will write X̃i instead of {Xi}𝔘 and x̃ instead of {xi}𝔘 unless we need to specify the ultrafilter we are talking about.

It is known that X is uniformly convex if and only if X̃ is strictly convex, X is uniformly smooth if and only if X̃ is smooth, and X is a U-space if and only if X̃ is a u-space (see [12]). Similarly we obtain the following result.

Theorem 3.8. Let X be a Banach space and m ∈ ℕ. The following are equivalent: (a) X̃ is P(m)-convex; (b) X is P(m)-convex; (c) X̃ is p(m)-convex.

Proof. (b)⇒(a). Let ε > 0 be such that X is P(ε, m)-convex. Let {xi(n)}n ∈ x̃i, x̃i ∈ SX̃, i = 1, …, m. Since lim𝔘 ∥xi(n)∥X = ∥x̃i∥X̃ = 1 for all i, there exists a subsequence {xi(nk)}k of {xi(n)}n such that limk→∞ ∥xi(nk)∥X = 1 and ∥xi(nk)∥X > 0 for all k ∈ ℕ. Define (3.17) yi(nk) = xi(nk)/∥xi(nk)∥X and Γi,j = {k ∈ ℕ : ∥yi(nk) − yj(nk)∥X ≤ 2 − ε} for each i, j = 1, …, m, i ≠ j. We verify that there exist 1 ≤ i, j ≤ m, i ≠ j, such that Γi,j ∈ 𝔘. We proceed by contradiction, assuming that Γi,j ∉ 𝔘 for all i ≠ j. Hence ℕ∖Γi,j ∈ 𝔘 for all i ≠ j, and consequently ℕ∖(⋃i≠j Γi,j) = ⋂i≠j(ℕ∖Γi,j) ∈ 𝔘, so this set is nonempty; therefore there exists k0 ∈ ℕ∖(⋃i≠j Γi,j). Thus we have ∥yi(nk0) − yj(nk0)∥ > 2 − ε for each i ≠ j, and since the yi(nk0) are unit vectors, X is not P(ε, m)-convex, which is a contradiction. Therefore there exist 1 ≤ i, j ≤ m, i ≠ j, such that Γi,j ∈ 𝔘, and hence lim𝔘 ∥yi(nk) − yj(nk)∥X ≤ 2 − ε. Finally, note that (3.18) ∥xi(nk) − xj(nk)∥X ≤ ∥xi(nk) − yi(nk)∥X + ∥yi(nk) − yj(nk)∥X + ∥yj(nk) − xj(nk)∥X = |1 − ∥xi(nk)∥X| + |1 − ∥xj(nk)∥X| + ∥yi(nk) − yj(nk)∥X, and hence ∥x̃i − x̃j∥X̃ = lim𝔘 ∥xi(n) − xj(n)∥X = lim𝔘 ∥xi(nk) − xj(nk)∥X ≤ lim𝔘 |1 − ∥xi(nk)∥X| + lim𝔘 |1 − ∥xj(nk)∥X| + lim𝔘 ∥yi(nk) − yj(nk)∥X ≤ 2 − ε. Therefore X̃ is P(m)-convex.

(a)⇒(c) is obvious, since every P(m)-convex space is p(m)-convex.

(c)⇒(b). We argue by contraposition and suppose that X is not P(m)-convex. Hence for any n ∈ ℕ there exist x1(n), …, xm(n) ∈ SX such that ∥xi(n) − xj(n)∥X > 2 − 1/n for all i, j = 1, …, m, i ≠ j. Define x̃i = {xi(n)}𝔘 for each i = 1, …, m. Clearly x̃i ∈ SX̃ for all i, because ∥x̃i∥X̃ = lim𝔘 ∥xi(n)∥X = 1, and also (3.19) ∥x̃i − x̃j∥X̃ = lim𝔘 ∥xi(n) − xj(n)∥X = limn→∞ ∥xi(n) − xj(n)∥X = 2 for each i ≠ j. Hence X̃ is not p(m)-convex.

By the above theorem we can deduce the following known result.

Corollary 3.9. If X is P-convex, then X is superreflexive.

Proof. If X is P-convex, then X̃ is P-convex and therefore reflexive. However, for ultrapowers, reflexivity and superreflexivity are equivalent; hence X̃ is superreflexive, and consequently X is superreflexive.

Now we turn our attention to some results regarding the p-convexity and the P-convexity of quotient spaces. To prove them we need the following concept.

Definition 3.10. A subspace Y of a normed space X is said to be proximinal if for all x ∈ X there exists y ∈ Y such that d(x, Y) = ∥x − y∥.

It is easy to see that every proximinal subspace Y of a Banach space X is closed.

Proposition 3.11. If X is p(n)-convex and Y is a proximinal subspace of X, then X/Y is p(n)-convex.

Proof. Let q : X → X/Y be the quotient map. By the proximinality of Y we have q(BX) = BX/Y. Let x̃1, …, x̃n ∈ SX/Y and choose x1, …, xn ∈ SX such that x̃i = q(xi). Since X is p(n)-convex, there exist 1 ≤ i, j ≤ n, i ≠ j, such that ∥xi − xj∥ < 2, and consequently ∥x̃i − x̃j∥ < 2.

Corollary 3.12. Let X be p(n)-convex and reflexive. If Y is a closed subspace of X, then X/Y is p(n)-convex.

Proof. It is shown in [13] that a Banach space X is reflexive if and only if each closed subspace of X is proximinal, and thus the corollary is a consequence of Proposition 3.11.

Similarly we can prove that if X is P(ε, n)-convex and Y is a closed subspace of X, then X/Y is P(ε, n)-convex.

We obtain two results involving ψ-direct sums of p-convex spaces. We define these sums as in [14] by Saito et al.

Definition 3.13. Set Ψ = {ψ : [0, 1] → ℝ | ψ is a continuous convex function with max{1 − t, t} ≤ ψ(t) ≤ 1 for all 0 ≤ t ≤ 1}. Let (X, ∥·∥X) and (Y, ∥·∥Y) be Banach spaces. For each ψ ∈ Ψ, one defines the norm ∥·∥ψ on X ⊕ Y as ∥(0, 0)∥ψ = 0 and, for each (x, y) ≠ (0, 0), (3.20) ∥(x, y)∥ψ = (∥x∥X + ∥y∥Y) ψ(∥y∥Y/(∥x∥X + ∥y∥Y)). In [15] it is shown that (X ⊕ Y, ∥·∥ψ) is a Banach space, denoted by X ⊕ψ Y and called the ψ-direct sum of X and Y.

The proof of the following theorem is similar to the proof of Theorem 3.5 in [16], which shows the corresponding result for P-convex spaces.

Theorem 3.14. Let X and Y be Banach spaces and ψ ∈ Ψ. Then X ⊕ψ Y is p-convex if and only if X and Y are p-convex.

In [17] there is a theorem stating several equivalent conditions for strict convexity. We prove a similar result for p-convexity.

Lemma 3.15. Let X be a Banach space. The next assertions are equivalent. (a) X is p(n)-convex. (b) For any q ∈ (1, ∞) and for any x1, …, xn ∈ X, not all zero, there exist 1 ≤ i, j ≤ n, i ≠ j, such that ∥xi − xj∥ < 2^((q−1)/q)(∥xi∥^q + ∥xj∥^q)^(1/q). (c) For some q ∈ (1, ∞) and for any x1, …, xn ∈ X, not all zero, there exist 1 ≤ i, j ≤ n, i ≠ j, such that ∥xi − xj∥ < 2^((q−1)/q)(∥xi∥^q + ∥xj∥^q)^(1/q).

Proof. The implications (b)⇒(c)⇒(a) are immediate. We verify (a)⇒(b). Let q ∈ (1, ∞) and x1, …, xn ∈ X, not all zero. If xj = 0 and xi ≠ 0 for some 1 ≤ i, j ≤ n, then it is clear that ∥xi − xj∥ < 2^((q−1)/q)(∥xi∥^q + ∥xj∥^q)^(1/q). Suppose then that x1, …, xn ∈ X∖{0}.
Since X is p(n)-convex, there exist 1 ≤ i, j ≤ n, i ≠ j, such that (3.21) ∥xi/∥xi∥ − xj/∥xj∥∥ < 2. If ∥xj∥ ≤ ∥xi∥, by Lemma 3.2 we get (3.22) ∥xi − xj∥ ≤ ∥xj∥ ∥xi/∥xi∥ − xj/∥xj∥∥ + |∥xi∥ − ∥xj∥| < 2∥xj∥ + ∥xi∥ − ∥xj∥ = ∥xi∥ + ∥xj∥. As the function t ↦ t^q is convex, we obtain (3.23) ∥(xi − xj)/2∥^q < ((∥xi∥ + ∥xj∥)/2)^q ≤ (1/2)(∥xi∥^q + ∥xj∥^q). Thus ∥xi − xj∥ < 2^((q−1)/q)(∥xi∥^q + ∥xj∥^q)^(1/q).

Proposition 3.16. Let {Xi}i∈I be a family of p(n)-convex spaces, where the index set I ≠ ∅ may have arbitrary cardinality. Then the space X = lq(Xi) (1 < q < ∞) is p(n)-convex.

Proof. Let x(k) = {xi(k)}i∈I ∈ X, 1 ≤ k ≤ n, not all zero. Let i0 ∈ I be such that xi0(k) ≠ 0 for some k ∈ {1, …, n}. As Xi0 is a p(n)-convex space, we have by the preceding lemma that there exist 1 ≤ l, m ≤ n such that (3.24) ∥xi0(l) − xi0(m)∥^q < 2^(q−1)(∥xi0(l)∥^q + ∥xi0(m)∥^q). By the above we obtain (3.25) ∥x(l) − x(m)∥q^q = ∑i∈I ∥xi(l) − xi(m)∥^q < ∑i∈I 2^(q−1)(∥xi(l)∥^q + ∥xi(m)∥^q) = 2^(q−1)(∥x(l)∥q^q + ∥x(m)∥q^q); the total inequality is strict because of the i0-th summand, while every summand satisfies the nonstrict estimate ∥u − v∥^q ≤ 2^(q−1)(∥u∥^q + ∥v∥^q). Therefore, by the previous lemma (condition (c)), X is p(n)-convex. --- *Source: 102462-2010-09-22.xml*
102462-2010-09-22_102462-2010-09-22.md
34,997
On P- and p-Convexity of Banach Spaces
Omar Muñiz-Pérez
Abstract and Applied Analysis (2010)
Mathematical Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2010/102462
102462-2010-09-22.xml
--- ## Abstract We show that everyU-space and every Banach space X satisfying δX(1)>0 are P(3)-convex, and we study the nonuniform version of P-convexity, which we call p-convexity. --- ## Body ## 1. Introduction Kottman introduced in 1970 the concept ofP-convexity in [1]. He proved that every P-convex space is reflexive and also that P-convexity follows from uniform convexity, as well as from uniform smoothness. In this paper we study conditions which guarantee the P-convexity of a Banach space and generalize the result of Kottman concerning uniform convexity in two different ways: every U-space and every Banach space X satisfying δX(1)>0 are P(3)-convex. There are many convexity conditions of Banach spaces which have a uniform and also a nonuniform version, for example, strictly convexity is the nonuniform version of uniform convexity, smoothness is the nonuniform version of uniform smoothness, and a u-space is the nonuniform version of a U-space, among others. We also define the concept of p-convexity, which is the nonuniform version of P-convexity and obtain some interesting results. ## 2.P-Convex Banach Spaces Throughout this paper we adopt the following notation.(X,∥·∥) will be a Banach space and when there is no possible confusion, we simply write X. The unit ball {x∈X:∥x∥≤1} and the unit sphere {x∈X:∥x∥=1} are denoted, respectively, by BX and SX. B(y,r) will denote the closed ball with center y and radius r. The topological dual space of X is denoted by X*. ### 2.1.P-Convexity The next concept was given by Kottman in [1].Definition 2.1. LetX be a Banach space. For each n∈ℕ let (2.1)P(n,X)=sup{r>0:thereexistndisjointballsofradiusrinBX}. It is easy to see that P(n,X)≤1/2 for n≥2.Definition 2.2. X is said to be P-convex if P(n,X)<1/2 for some n∈ℕ.The following lemma was proved in [1].Lemma 2.3. LetX be a Banach space and n∈ℕ. Then P(n,X)<1/2 if and only if there exists ε>0 such that for any x1,x2,…,xn∈SX(2.2)min{∥xi-xj∥:1≤i,j≤n,i≠j}≤2-ε. That is, X is P -convex if and only if X satisfies condition (2.2) for some n∈ℕ and some ε>0.Definition 2.4. Givenn∈ℕ and ε>0 we say that X is P(ε,n)-convex if X satisfies (2.2). For each n∈ℕ, X is said to be P(n)-convex if it is P(ε,n)-convex for some ε>0. ### 2.2.P-Convexity and the Coefficient of Convexity In [1], Kottman proved that if X is a Banach space satisfying the condition δX(2/3)>0, then X is P(3)-convex, where δX is the modulus of convexity. In this section we give a result which improves this condition, and we show that this assumption is sharp.We recall the following concepts introduced by J. A. Clarkson in 1936.Definition 2.5. The modulus of convexity of a Banach spaceX is the function δX:[0,2]→[0,1] defined by (2.3)δX(ε)=inf{1-∥x+y2∥:x,y∈BX,∥x-y∥≥ε}. The coefficient of convexity of a Banach space X is the number ε0(X) defined as (2.4)ε0(X)=sup{ε∈[0,2]:δX(ε)=0}. We also need the following definition given by R. C. James in 1964.Definition 2.6. X is said to be uniformly nonsquare if there exists α>0 such that for all ξ,η∈SX(2.5)min{∥ξ-η∥,∥ξ+η∥}≤2-α. In order to prove our theorem we need two known results which can be found in [2].Lemma 2.7 (Goebel-Kirk). LetX be a Banach space. For each ε∈[ε0(X),2], one has the equality δX(2-2δX(ε))=1-ε/2.Lemma 2.8 (Ullán). LetX be a Banach space. For each 0≤ε2≤ε1<2 the following inequality holds: δX(ε1)-δX(ε2)≤(ε1-ε2)/(2-ε1).Using these lemmas we obtain:Theorem 2.9. LetX be a Banach space which satisfies δX(1)>0, that is, ε0(X)<1. Then X is P(3)-convex. 
Moreover, there exists a Banach space X with ε0(X)=1 which is not P(3)-convex.Proof. Lett0=2-2-ε0(X). Clearly ε0(X)<t0<1. Let x,y,z∈SX, and suppose that ∥x-y∥>2-2δX(t0) and ∥x-z∥>2-2δX(t0). By Lemma 2.7, we have (2.6)∥x+y2∥≤1-δ(2-2δX(t0))=1-(1-t02)=t02. Similarly ∥(x+z)/2∥≤t0/2. Hence we get (2.7)∥z-y∥≤∥z+x∥+∥x+y∥≤2t0. Finally, from Lemma 2.8 it follows that (2.8)δX(t0)=δX(t0)-δX(ε0(X))≤t0-ε0(X)2-t0=2-ε0(X)-1=1-t0. Then ∥y-z∥≤2t0≤2-2δX(t0), and thus X is P(3)-convex. Now consider for each1<p<∞ the space lp,∞ defined as follows. Each element x={xi}i∈lp may be represented as x=x+-x-, where the respective ith components of x+ and x- are given by (x+)i=max{xi,0} and (x-)i=max{-xi,0}. Set ∥x∥p,∞=max{∥x+∥p,∥x-∥p} where ∥·∥p stands for the lp-norm. The space lp,∞=(lp,∥·∥p,∞) satisfies ε0(lp,∞)=1 (see [3]). On the other hand let x1=e1-e3, x2=-e1+e2, x3=-e2+e3∈Slp,∞, where {ei}i is the canonical basis in lp. These points satisfy that ∥xi-xj∥p,∞=2, i≠j. Thus lp,∞ is not P(3.2)-convex.It is known that if a Banach spaceX satisfies ε0(X)<1, then X has normal structure as well as P(3)-convexity. The space X=lp,∞ is an example of a Banach space with ε0(X)=1 which does not have normal structure (see [3]) and is not P(3)-convex.Kottman also proved in [1] that every uniformly smooth space is a P-convex space. We obtain a generalization of this fact. Before we show this result we recall the next concept.Definition 2.10. The modulus of smoothness of a Banach spaceX is the function ρX:[0,∞)→[0,∞) defined by (2.9)ρX(t)=sup{12(∥x+ty∥+∥x-ty∥-2):x,y∈SX} for each t≥0. X is called uniformly smooth if limt→0ρX(t)/t=0. The proofs of the following lemmas can be found in [4, 5].Lemma 2.11. For every Banach spaceX, one has limt→0ρX(t)/t=(1/2)ε0(X*).Lemma 2.12. LetX be a Banach space. X is P(3)-convex if and only if X* is P(3)-convex.By Theorem2.9 and by the previous lemmas we deduce the next result.Corollary 2.13. IfX is a Banach space satisfying limt→0ρX(t)/t<1/2, then X is P(3)-convex.With respect toP(4)-convex spaces we have this result, which is easy to prove.Proposition 2.14. IfX is a Banach space P(ε,4)-convex, then ε0(X)≤2-ε, and hence X is uniformly nonsquare.In fact, in bidimensional normed spaces,P(4)-convexity and uniform nonsquareness coincide. The proof of this involves many calculations and can be seen in [6].Another technical proof (see [6]) shows that if X is a bidimensional normed space, then X is always P(1,5)-convex. Hence the space X=(ℝ2,∥·∥∞) is P(1,5)-convex and ε0(X)=2, and thus P(5)-convexity does not imply uniform squareness. ### 2.3. Relation betweenU-Spaces and P-Convex Spaces In this section we show thatP-convexity follows from U-convexity. The following concept was introduced by Lau in 1978 [7].Definition 2.15. A Banach spaceX is called a U-space if for any ε>0 there exists δ>0 such that (2.10)x,y∈SX,f(x-y)>ε,forsomef∈∇(x)⇒∥x+y2∥≤1-δ, where for each x∈X(2.11)∇(x)={f∈SX*:f(x)=∥x∥}. The modulus of this type of convexity was introduced by Gao in [8] and further studied by Mazcuñán-Navarro [9] and Saejung [10]. The following result is proved in [8].Lemma 2.16. LetX be a Banach space. If X is U-space, then X is uniformly nonsquare,From the above we obtain the next theorem which is a generalization of Kottman's result, who showed in [1] that P(3)-convexity follows from uniform convexity.Theorem 2.17. IfX is a U-space, then X is P( 3)-convex.Proof. By Lemma2.16 we have that there exists α>0 such that for all ξ,η∈SX(2.12)min{∥ξ-η∥,∥ξ+η∥}≤2-α. 
Since X is a U-space, for ε=α/2 there exists δ>0 such that (2.13)x,y∈SX,f(x-y)≥α2,forsomef∈∇(x)⇒∥x+y2∥≤1-δ. We claim that X is P(β,3)-convex, where β=min{α,δ}. Indeed, proceeding by contradiction, assume that there exist x,y,z∈SX such that (2.14)min{∥x-y∥,∥x-z∥,∥y-z∥}>2-β. Define w=-y and u=-z, and let f∈∇(w). If f(w-x)≥α/2, then (2.15)∥w+x2∥<1-δ. Therefore 2-δ≤2-β<∥x-y∥<2-2δ, which is not possible. Hence f(w-x)<α/2. Similarly we prove f(w+u)<α/2. Also ∥x+u∥=∥x-z∥>2-β≥2-α, and hence, by (2.12) we have f(x-u)≤∥x-u∥≤2-α. By the above we have (2.16)2=2f(w)=f(w-x)+f(x-u)+f(u+w)<α2+2-α+α2=2 which is a contradiction. ### 2.4. The Dual Concept ofP-Convexity In [1], Kottman introduces a property which turns out to be the dual concept of P-convexity. In this section we characterize the dual of a P-convex space in an easier way. We begin by showing Kottman's characterization.Definition 2.18. LetX be a Banach space and ε>0. A convex subset A of BX is said to be ε-flat if A⋂(1-ε)BX=∅. A collection 𝔇 of ε-flats is called complemented if for each pair of ε-flatsA and B in 𝔇 we have that A⋃B has a pair of antipodal points. For any n∈ℕ we define (2.17)F(n,X)=inf{ε>0:BXhasacomplementedcollection𝔇ofε-flatssuchthatCard(𝔇)=n}.Theorem 2.19 (Kottman). LetX be a Banach space and n∈ℕ. Then (a) F(n,X*)=0⇔P(n,X)=1/2.(b) P(n,X*)=1/2⇔F(n,X)=0.Now we defineP-smoothness and prove that it turns out to be the dual concept of P-convexity. The advantage of this characterization is that it uses only simple concepts, and one does not need ε-flats. Besides in the proof of the duality we do not need Helly's theorem nor the theorem of Hahn-Banach, as Kottman does in Theorem 2.19.Definition 2.20. LetX be a Banach space and δ>0. For each f,g∈X* set S(f,g,δ)={x∈BX:f(x)≥1-δ,g(x)≥1-δ}. Given δ>0 and n∈ℕ, X is said to be P(δ,n)-smooth if for each f1,f2,…,fn∈SX* there exist 1≤i,j≤n,i≠j, such that S(fi,-fj,δ)=∅. X is said to be P(n)-smooth if it is P(δ,n)-smooth for some δ>0, and X is said to be P-smooth if it is P(δ,n)-smooth for some δ>0 and some n∈ℕ.Proposition 2.21. LetX be a Banach space. Then (a) X is P(n)-convex if and only if X* is P(n)-smooth.(b) X is P(n)-smooth if and only if X* is P(n)-convex.Proof. (a) LetX be a P(ε,n)-convex space. Let x1**,…,xn**∈SX**. We will show that there exist 1≤i,j≤n,i≠j, such that S(xi**,-xj**,ε/4)=∅. Since X is P-convex, it is also reflexive. Therefore x1**=𝚥(x1),…,xn**=𝚥𝚥(xn) for some x1,…,xn∈SX, where 𝚥 is the canonical injection from X to X**. By hypothesis, there exist 1≤i,j≤n, i≠j, such that ∥xi-xj∥≤2-ε. Therefore it is enough to prove that (2.18){f∈BX*:f(xi)≥1-ε4,-f(xj)≥1-ε4}=∅. We proceed by contradiction supposing that there exists f∈BX* such that f(xi)≥1-ε/4 and -f(xj)≥1-ε/4. Then (2.19)2-ε≥∥xi-xj∥≥f(xi-xj)≥2-ε2, which is not possible; consequently X* is P(ε/4,n)-smooth. Now letX be a Banach space such that X* is P(ε,n)-smooth. Let x1,…,xn∈SX. By hypothesis, there exist 1≤i,j≤n, i≠j, such that S(𝚥(xi),-𝚥(xj),ε)=∅, that is, for each f∈BX* we have f(xi)<1-ε or -f(xj)<1-ε. We will see that ∥xi-xj∥≤2-ε. We again proceed by contradiction supposing that ∥xi-xj∥=∥𝚥(xi-xj)∥>2-ε. There exists f∈SX* such that 𝚥(xi-xj)(f)=f(xi)-f(xj)>2-ε. If f(xi)<1-ε, then(2.20)1=∥f∥∥xj∥≥-f(xj)>2-ε-f(xi)>1 which is not possible. Similarly if -f(xj)<1-ε, we obtain a contradiction. Thus ∥xi-xj∥≤2-ε, and consequently X is P(ε,n)-convex. The proof of (b) is analogous to the proof of (a).Therefore the conditionsX is P(n)-smooth and F(n,X)>0 must be equivalent. 
## 2.1.P-Convexity The next concept was given by Kottman in [1].Definition 2.1. LetX be a Banach space. For each n∈ℕ let (2.1)P(n,X)=sup{r>0:thereexistndisjointballsofradiusrinBX}. It is easy to see that P(n,X)≤1/2 for n≥2.Definition 2.2. X is said to be P-convex if P(n,X)<1/2 for some n∈ℕ.The following lemma was proved in [1].Lemma 2.3. LetX be a Banach space and n∈ℕ. Then P(n,X)<1/2 if and only if there exists ε>0 such that for any x1,x2,…,xn∈SX(2.2)min{∥xi-xj∥:1≤i,j≤n,i≠j}≤2-ε. That is, X is P -convex if and only if X satisfies condition (2.2) for some n∈ℕ and some ε>0.Definition 2.4. Givenn∈ℕ and ε>0 we say that X is P(ε,n)-convex if X satisfies (2.2). For each n∈ℕ, X is said to be P(n)-convex if it is P(ε,n)-convex for some ε>0. ## 2.2.P-Convexity and the Coefficient of Convexity In [1], Kottman proved that if X is a Banach space satisfying the condition δX(2/3)>0, then X is P(3)-convex, where δX is the modulus of convexity. In this section we give a result which improves this condition, and we show that this assumption is sharp.We recall the following concepts introduced by J. A. Clarkson in 1936.Definition 2.5. The modulus of convexity of a Banach spaceX is the function δX:[0,2]→[0,1] defined by (2.3)δX(ε)=inf{1-∥x+y2∥:x,y∈BX,∥x-y∥≥ε}. The coefficient of convexity of a Banach space X is the number ε0(X) defined as (2.4)ε0(X)=sup{ε∈[0,2]:δX(ε)=0}. We also need the following definition given by R. C. James in 1964.Definition 2.6. X is said to be uniformly nonsquare if there exists α>0 such that for all ξ,η∈SX(2.5)min{∥ξ-η∥,∥ξ+η∥}≤2-α. In order to prove our theorem we need two known results which can be found in [2].Lemma 2.7 (Goebel-Kirk). LetX be a Banach space. For each ε∈[ε0(X),2], one has the equality δX(2-2δX(ε))=1-ε/2.Lemma 2.8 (Ullán). LetX be a Banach space. For each 0≤ε2≤ε1<2 the following inequality holds: δX(ε1)-δX(ε2)≤(ε1-ε2)/(2-ε1).Using these lemmas we obtain:Theorem 2.9. LetX be a Banach space which satisfies δX(1)>0, that is, ε0(X)<1. Then X is P(3)-convex. Moreover, there exists a Banach space X with ε0(X)=1 which is not P(3)-convex.Proof. Lett0=2-2-ε0(X). Clearly ε0(X)<t0<1. Let x,y,z∈SX, and suppose that ∥x-y∥>2-2δX(t0) and ∥x-z∥>2-2δX(t0). By Lemma 2.7, we have (2.6)∥x+y2∥≤1-δ(2-2δX(t0))=1-(1-t02)=t02. Similarly ∥(x+z)/2∥≤t0/2. Hence we get (2.7)∥z-y∥≤∥z+x∥+∥x+y∥≤2t0. Finally, from Lemma 2.8 it follows that (2.8)δX(t0)=δX(t0)-δX(ε0(X))≤t0-ε0(X)2-t0=2-ε0(X)-1=1-t0. Then ∥y-z∥≤2t0≤2-2δX(t0), and thus X is P(3)-convex. Now consider for each1<p<∞ the space lp,∞ defined as follows. Each element x={xi}i∈lp may be represented as x=x+-x-, where the respective ith components of x+ and x- are given by (x+)i=max{xi,0} and (x-)i=max{-xi,0}. Set ∥x∥p,∞=max{∥x+∥p,∥x-∥p} where ∥·∥p stands for the lp-norm. The space lp,∞=(lp,∥·∥p,∞) satisfies ε0(lp,∞)=1 (see [3]). On the other hand let x1=e1-e3, x2=-e1+e2, x3=-e2+e3∈Slp,∞, where {ei}i is the canonical basis in lp. These points satisfy that ∥xi-xj∥p,∞=2, i≠j. Thus lp,∞ is not P(3.2)-convex.It is known that if a Banach spaceX satisfies ε0(X)<1, then X has normal structure as well as P(3)-convexity. The space X=lp,∞ is an example of a Banach space with ε0(X)=1 which does not have normal structure (see [3]) and is not P(3)-convex.Kottman also proved in [1] that every uniformly smooth space is a P-convex space. We obtain a generalization of this fact. Before we show this result we recall the next concept.Definition 2.10. 
The modulus of smoothness of a Banach spaceX is the function ρX:[0,∞)→[0,∞) defined by (2.9)ρX(t)=sup{12(∥x+ty∥+∥x-ty∥-2):x,y∈SX} for each t≥0. X is called uniformly smooth if limt→0ρX(t)/t=0. The proofs of the following lemmas can be found in [4, 5].Lemma 2.11. For every Banach spaceX, one has limt→0ρX(t)/t=(1/2)ε0(X*).Lemma 2.12. LetX be a Banach space. X is P(3)-convex if and only if X* is P(3)-convex.By Theorem2.9 and by the previous lemmas we deduce the next result.Corollary 2.13. IfX is a Banach space satisfying limt→0ρX(t)/t<1/2, then X is P(3)-convex.With respect toP(4)-convex spaces we have this result, which is easy to prove.Proposition 2.14. IfX is a Banach space P(ε,4)-convex, then ε0(X)≤2-ε, and hence X is uniformly nonsquare.In fact, in bidimensional normed spaces,P(4)-convexity and uniform nonsquareness coincide. The proof of this involves many calculations and can be seen in [6].Another technical proof (see [6]) shows that if X is a bidimensional normed space, then X is always P(1,5)-convex. Hence the space X=(ℝ2,∥·∥∞) is P(1,5)-convex and ε0(X)=2, and thus P(5)-convexity does not imply uniform squareness. ## 2.3. Relation betweenU-Spaces and P-Convex Spaces In this section we show thatP-convexity follows from U-convexity. The following concept was introduced by Lau in 1978 [7].Definition 2.15. A Banach spaceX is called a U-space if for any ε>0 there exists δ>0 such that (2.10)x,y∈SX,f(x-y)>ε,forsomef∈∇(x)⇒∥x+y2∥≤1-δ, where for each x∈X(2.11)∇(x)={f∈SX*:f(x)=∥x∥}. The modulus of this type of convexity was introduced by Gao in [8] and further studied by Mazcuñán-Navarro [9] and Saejung [10]. The following result is proved in [8].Lemma 2.16. LetX be a Banach space. If X is U-space, then X is uniformly nonsquare,From the above we obtain the next theorem which is a generalization of Kottman's result, who showed in [1] that P(3)-convexity follows from uniform convexity.Theorem 2.17. IfX is a U-space, then X is P( 3)-convex.Proof. By Lemma2.16 we have that there exists α>0 such that for all ξ,η∈SX(2.12)min{∥ξ-η∥,∥ξ+η∥}≤2-α. Since X is a U-space, for ε=α/2 there exists δ>0 such that (2.13)x,y∈SX,f(x-y)≥α2,forsomef∈∇(x)⇒∥x+y2∥≤1-δ. We claim that X is P(β,3)-convex, where β=min{α,δ}. Indeed, proceeding by contradiction, assume that there exist x,y,z∈SX such that (2.14)min{∥x-y∥,∥x-z∥,∥y-z∥}>2-β. Define w=-y and u=-z, and let f∈∇(w). If f(w-x)≥α/2, then (2.15)∥w+x2∥<1-δ. Therefore 2-δ≤2-β<∥x-y∥<2-2δ, which is not possible. Hence f(w-x)<α/2. Similarly we prove f(w+u)<α/2. Also ∥x+u∥=∥x-z∥>2-β≥2-α, and hence, by (2.12) we have f(x-u)≤∥x-u∥≤2-α. By the above we have (2.16)2=2f(w)=f(w-x)+f(x-u)+f(u+w)<α2+2-α+α2=2 which is a contradiction. ## 2.4. The Dual Concept ofP-Convexity In [1], Kottman introduces a property which turns out to be the dual concept of P-convexity. In this section we characterize the dual of a P-convex space in an easier way. We begin by showing Kottman's characterization.Definition 2.18. LetX be a Banach space and ε>0. A convex subset A of BX is said to be ε-flat if A⋂(1-ε)BX=∅. A collection 𝔇 of ε-flats is called complemented if for each pair of ε-flatsA and B in 𝔇 we have that A⋃B has a pair of antipodal points. For any n∈ℕ we define (2.17)F(n,X)=inf{ε>0:BXhasacomplementedcollection𝔇ofε-flatssuchthatCard(𝔇)=n}.Theorem 2.19 (Kottman). LetX be a Banach space and n∈ℕ. Then (a) F(n,X*)=0⇔P(n,X)=1/2.(b) P(n,X*)=1/2⇔F(n,X)=0.Now we defineP-smoothness and prove that it turns out to be the dual concept of P-convexity. 
The advantage of this characterization is that it uses only simple concepts, and one does not need ε-flats. Besides in the proof of the duality we do not need Helly's theorem nor the theorem of Hahn-Banach, as Kottman does in Theorem 2.19.Definition 2.20. LetX be a Banach space and δ>0. For each f,g∈X* set S(f,g,δ)={x∈BX:f(x)≥1-δ,g(x)≥1-δ}. Given δ>0 and n∈ℕ, X is said to be P(δ,n)-smooth if for each f1,f2,…,fn∈SX* there exist 1≤i,j≤n,i≠j, such that S(fi,-fj,δ)=∅. X is said to be P(n)-smooth if it is P(δ,n)-smooth for some δ>0, and X is said to be P-smooth if it is P(δ,n)-smooth for some δ>0 and some n∈ℕ.Proposition 2.21. LetX be a Banach space. Then (a) X is P(n)-convex if and only if X* is P(n)-smooth.(b) X is P(n)-smooth if and only if X* is P(n)-convex.Proof. (a) LetX be a P(ε,n)-convex space. Let x1**,…,xn**∈SX**. We will show that there exist 1≤i,j≤n,i≠j, such that S(xi**,-xj**,ε/4)=∅. Since X is P-convex, it is also reflexive. Therefore x1**=𝚥(x1),…,xn**=𝚥𝚥(xn) for some x1,…,xn∈SX, where 𝚥 is the canonical injection from X to X**. By hypothesis, there exist 1≤i,j≤n, i≠j, such that ∥xi-xj∥≤2-ε. Therefore it is enough to prove that (2.18){f∈BX*:f(xi)≥1-ε4,-f(xj)≥1-ε4}=∅. We proceed by contradiction supposing that there exists f∈BX* such that f(xi)≥1-ε/4 and -f(xj)≥1-ε/4. Then (2.19)2-ε≥∥xi-xj∥≥f(xi-xj)≥2-ε2, which is not possible; consequently X* is P(ε/4,n)-smooth. Now letX be a Banach space such that X* is P(ε,n)-smooth. Let x1,…,xn∈SX. By hypothesis, there exist 1≤i,j≤n, i≠j, such that S(𝚥(xi),-𝚥(xj),ε)=∅, that is, for each f∈BX* we have f(xi)<1-ε or -f(xj)<1-ε. We will see that ∥xi-xj∥≤2-ε. We again proceed by contradiction supposing that ∥xi-xj∥=∥𝚥(xi-xj)∥>2-ε. There exists f∈SX* such that 𝚥(xi-xj)(f)=f(xi)-f(xj)>2-ε. If f(xi)<1-ε, then(2.20)1=∥f∥∥xj∥≥-f(xj)>2-ε-f(xi)>1 which is not possible. Similarly if -f(xj)<1-ε, we obtain a contradiction. Thus ∥xi-xj∥≤2-ε, and consequently X is P(ε,n)-convex. The proof of (b) is analogous to the proof of (a).Therefore the conditionsX is P(n)-smooth and F(n,X)>0 must be equivalent. ## 3.p-Convex Banach Spaces In this section we introduce the nonuniform version ofP-convexity and we call it p-convexity.Definition 3.1. LetX be a Banach space and n∈ℕ. X is said to be p(n)-convex if for any x1,…,xn∈SX, there exist 1≤i,j≤n, i≠j, such that ∥xi-xj∥<2. X is said to be p-convex if is p(n)-convex for some n∈ℕ.Kottman defined the concept ofP-convexity in terms of the intersection of balls. We will do something similar to give an equivalent definition of p-convexity. It is easy to see that in a normed space any two closed balls of radius 1/2 contained in the unit ball have non empty intersection. If the radius is less than 1/2, for example, in l1 for every n and for every r<1/2, then there exist n closed balls of radius r so that no two of them intersect. In fact let {ei}i=1∞ be the canonical basis of l1. Then the closed balls of radius r<1/2 centered at the points (1/2)ei, i∈ℕ are disjoint and contained in the unit ball. However, if X is p(n)-convex, we will see that for any n points in the unit ball there exists r<1/2 so that if the n closed balls centered at these n points are contained in the unit ball, there are two different balls with non empty intersection. To prove this we need the following lemma, which was shown in [11].Lemma 3.2. LetX be a Banach space and x,y∈X, x,y≠0. Then (3.1)∥x∥x∥-y∥y∥∥≥1min{∥x∥,∥y∥}(∥x-y∥-|∥x∥-∥y∥|).Lemma 3.3. 
X is a p(n)-convex space if and only if for any y1,…,yn∈BX there exists r∈(0,1/2) such that, if B(yi,r)⊂BX for all i=1,…,n, then there are 1≤i,j≤n, i≠j, so that (3.2)B(yi,r)∩B(yj,r)≠∅.Proof. Assume thatX satisfies condition (3.2), and let x1,…,xn∈SX. Let r∈(0,1/2) be the number which satisfies condition (3.2) for x1/2,…,xn/2. It is easy to see that B(xi/2,r)⊂BX for each i=1,…,n. Therefore there exist 1≤i,j≤n, i≠j, such that (3.3)B(xi2,r)∩B(xj2,r)≠∅. Let (3.4)y∈B(xi2,r)∩B(xj2,r). We have (3.5)∥xi-xj2∥≤∥xi2-y∥+∥xj2-y∥<2r<1, and thus X is p(n)-convex. Now we suppose that there exist y1,…,yn∈BX such that for any ρ∈(0,1/2) we have (3.6)B(yi,12-ρ)⊂BX for all i=1,…,n, and (3.7)B(yi,12-ρ)∩B(yj,12-ρ)=∅, for all i,j=1,…,n, i≠j. We verify that X is not p(n)-convex in four steps.(a) Take∥yi-yj∥>1-2ρ for any i,j=1,…,n, i≠j.(b) Take1/2-3ρ<∥yi∥≤1/2+ρ, for all i=1,…,n. To verify this claim we note that ∥yi/∥yi∥-yi∥≥1/2-ρ for all i, because if ∥yi/∥yi∥-yi∥<1/2-ρ for some i, then yi/∥yi∥∈intB(yi,1/2-ρ)⊂intBX, which is not possible. Hence, as ∥yi/∥yi∥-yi∥=1-∥yi∥, it follows that ∥yi∥=1-∥yi/∥yi∥-yi∥≤1/2+ρ, for each i=1,…,n. Now, if ∥yi∥≤1/2-3ρ for some i, we have by (a) that for any j≠i, 1-2ρ<∥yi-yj∥≤∥yi∥+∥yj∥≤(1/2-3ρ)+(1/2+ρ)=1-2ρ which is not possible.(c) Take|∥yi∥-∥yj∥|<4ρ, for any i,j=1,…,n, i≠j. Indeed, by (b) we get -4ρ=(1/2-3ρ)-(1/2+ρ)<∥yi∥-∥yj∥<(1/2+ρ)-(1/2-3ρ)=4ρ.(d) From (a), (b), (c), and by Lemma3.2, we have(3.8)∥yi∥yi∥-yj∥yj∥∥≥1∥yi∥(∥yi-yj∥-|∥yi∥-∥yj∥|)>2-16ρ1+2ρ for any i,j=1,…,n, i≠j. Since ρ>0 is arbitrary, as ρ→0, we obtain ∥yi/∥yi∥-yj/∥yj∥∥=2, for all i,j=1,…,n, i≠j, and thus X is not p(n)-convex.Next we give some examples of spaces which are notp-convex. The first is not reflexive and the last one is superreflexive.Example 3.4. c0,and consequently,C[0,1] and l∞ are not p-convex spaces. Indeed, let {ei}i=1∞ be the canonical basis in c0.For each n∈ℕ we define ui=∑j=1nλi,jej, where λi,j=1 if j≠i,λi,i=-1, and i=1,…,n.Clearly u1,…,un∈Sc0, and for each i≠j we have ∥ui-uj∥∞=2.Example 3.5. LetX denote the space obtained by renorming l2 as follows.For x=(xi)i∈ℕ∈l2 set (3.9)∥|x|∥=max{supi,j|xi-xj|,(∑i=1∞xi2)1/2}. Then ∥x∥≤∥|x|∥≤2∥x∥, where ∥·∥ stands for the l2-norm and X is superreflexive. On the other hand, the canonical basis {en}n in l2 satisfies ∥ei-ej∥∞=2 for each i≠j. Thus X is not p-convex.Now we will mention several properties that implyp-convexity.Recall the following concepts. LetX be a Banach space. X is said to be a u-space if it satisfies the following implication: (3.10)x,y∈SX,∥x+y2∥=1⇒∇(x)=∇(y).X is said to be smooth if for any x∈SX, there exists a unique f∈SX* such that f(x)=1. That is, for each x∈SX, ∇(x) contains a single point. X is called strictly convex if the following implication holds: (3.11)∀x,y∈BX:x≠y⇒∥x+y2∥<1.Proposition 3.6. Every smooth space, every strictly convex space and every u-space are p(3)-convex space.Proof. Every smooth space and every strictly convex space areu-space. It suffices to show that p(3)-convexity follows from being u-space. If X is a u-space, then for any x,y∈SX the following inequality holds: min{∥x-y∥,∥x+y∥}<2. Indeed, if we suppose that there exist x,y∈SX such that ∥x+y∥=∥x-y∥=2, then ∇(x)=∇(y) and ∇(x)=∇(-y), which is not possible. Suppose that X is not p(3)-convex, and there exist x,y,z∈SX so that ∥x-y∥=∥y-z∥=∥z-x∥=2. Since (1/2)∥x-y∥=(1/2)∥y-z∥=1,we have ∇(x)=∇(-y)=∇(z). Let f∈∇(-y); then f(x+z)≤∥x+z∥<2, and (3.12)2=f(x)+f(-y)=f(x+z)-f(z)+f(-y)=f(x+z)<2. 
Thus X is p(3)-convex.ObviouslyP-convexity implies p-convexity; however, a p-convex space is not necessarily P-convex, even if the space is reflexive as the following example shows.Example 3.7. Let{rk}k=1∞ be a sequence of real numbers such that rk>1 for each k∈ℕ and rk↓1,when k→∞.Consider the space X=∑k=1∞⊕2lrk.It is known that this space is strictly convex, hence it is alsop(3)-convex.It is also known that X is reflexive.However X is notP-convex.Indeed, let ε>0.We choose k∈ℕ such that 2-ε<21/rk. If{ei}i=1∞ is the canonical basis of lrk, we have that ∥ei-ej∥rk=21/rk>2-ε for all i,j∈ℕ,i≠j, and hence X is not aP-convex space.We have obtained a result which shows a strong relation betweenP-convexity and p-convexity with respect to the ultrapower of Banach spaces. We recall the definition and some results regarding ultrapowers which can be found in [4].A filter𝔘 on I is called an ultrafilter on I if 𝔘 is a maximal element from 𝒫 with respect to the set inclusion. 𝔘 is an ultrafilter on I if and only if for all A⊂I either A∈𝔘 or I∖A∈𝔘. Let {Xi}i∈I be a family of Banach spaces, and let(3.13)l∞(Xi)={{xi}i∈I∈∏i∈IXi:sup{∥xi∥Xi:i∈I}<∞}. If we define ∥{xi}i∈I∥∞=sup{∥xi∥Xi:i∈I} for each {xi}i∈I∈l∞(Xi), then ∥·∥∞ defines a norm in ł∞(Xi), and (l∞(Xi),∥·∥∞) is a Banach space. If 𝔘 is a free ultrafilter on I, then for each {xi}i∈I∈l∞(Xi) we have lim𝔘xi always exists and is unique. Let 𝔘 be an ultrafilter on I, and define (3.14)𝒩𝔘={{xi}∈l∞(Xi):lim𝔘∥xi∥=0}.𝒩𝔘 is a closed subspace of l∞(Xi). Theultraproduct of {Xi}i∈I with respect to the ultrafilter 𝔘 on I is the quotient space l∞(Xi)/𝒩𝔘 equipped with the quotient norm, which is denoted by {Xi}𝔘 and its elements by {xi}𝔘. If Xi=X for all i∈I, then {X}𝔘={Xi}𝔘 is called the ultrapower of X. The quotient norm in {Xi}𝔘, (3.15)∥{xi}𝔘∥=inf{∥{xi+yi}i∥∞:{yi}i∈𝒩𝔘}, satisfies the equality (3.16)∥{xi}𝔘∥=lim𝔘∥xi∥Xi,foreach{xi}𝔘∈{Xi}𝔘. If 𝔘 is nontrivial, then X can be embedded into {X}𝔘 isometrically. We will write X̃i instead of {Xi}𝔘 and x̃ instead of {xi}𝔘 unless we need to specify the ultrafilter we are talking about.It is known thatX is uniformly convex if and only if X̃ is strictly convex, X is uniformly smooth if and only if X̃ is smooth, and X is a U-space if and only if X̃ is a u-space (see [12]). Similarly we obtain the following result.Theorem 3.8. LetX be a Banach space and m∈ℕ. The following are equivalent:(a) X̃ is P(m)-convex.(b) X is P(m)-convex,(c) X̃ is p(m)-convex,Proof. (a)⇒(b). Let {xi(n)}n∈x̃i, x̃i∈SX̃, i=1,…,m. Since lim𝔘∥xi(n)∥X=∥x̃i∥X̃=1 for all i, there exists a subsequence {xi(nk)}k of {xi(n)}n such that limk→∞∥xi(nk)∥X=1 and ∥xi(nk)∥X>0, for all k∈ℕ. Define (3.17)yi(nk)=xi(nk)∥xi(nk)∥X,Γi,j={k∈ℕ:∥yi(nk)-yj(nk)∥X≤2-ε}, for each i,j=1,…,m, i≠j. We verify that there exist 1≤i,j≤m, i≠j, such that Γi,j∈𝔘. We proceed by contradiction assuming that, Γi,j∉𝔘 for all i≠j. Hence ℕ∖Γi,j∈𝔘 for all i≠j, and consequently ℕ∖(⋃i≠jΓi,j)≠∅, therefore there exists k0∈ℕ∖(⋃i≠jΓi,j). Thus we have ∥yi(nk0)-yj(nk0)∥>2-ε for each i≠j, and X is not P(m)-convex, which is a contradiction. Therefore there exist 1≤i,j≤m, i≠j, such that Γi,j∈𝔘, and hence lim𝔘∥yi(nk)-yj(nk)∥X≤2-ε. Finally, note that (3.18)∥xi(nk)-xj(nk)∥X≤∥xi(nk)-yi(nk)∥X+∥xj(nk)-yj(nk)∥X+∥yi(nk)-yj(nk)∥X=|1-∥xi(nk)∥X|+|1-∥xi(nk)∥X|+∥yi(nk)-yj(nk)∥X,∥x̃i-x̃j∥X̃=lim𝔘∥xi(n)-xj(n)∥X=lim𝔘∥xi(nk)-xj(nk)∥X≤lim𝔘|1-∥xi(nk)∥X|+lim𝔘|1-∥xi(nk)∥X|+lim𝔘∥yi(nk)-yj(nk)∥X≤2-ε. Therefore X̃ is P(m)-convex. (b)⇒(c) is obvious. (c)⇒(a). Suppose that X is not P(m)-convex. 
Hence for any n∈ℕ there exist x1(n),…,xm(n)∈SX such that ∥xi(n)-xj(n)∥X>2-1/n for all i,j=1,…,m, i≠j. Define x̃i={xi(n)}𝔘 for each i=1,…,m. Clearly x̃i∈SX̃ for all i, because ∥x̃i∥X̃=lim𝔘∥xi(n)∥X=1, and also,(3.19)∥x̃i-x̃j∥X̃=lim𝔘∥xi(n)-xj(n)∥X=limn→∞∥xi(n)-xj(n)∥X=2, for each i≠j. Hence X̃ is not p(m)-convex.By the above theorem we can deduce the following known result.Corollary 3.9. IfX is P -convex, then X is superreflexive.Proof. IfX is P-convex, then X̃ is P-convex and therefore is reflexive. However in ultrapower reflexivity and superreflexivity are equivalent, hence X̃ is superreflexive, and consequently X is superreflexive.Now we turn our attention to some results regarding thep-convexity and the P-convexity of quotient spaces. To prove them we need the following concept.Definition 3.10. A subspaceY of a normed space X is said to be proximinal if for all x∈X there exists y∈Y such that d(x,Y)=∥x-y∥.It is easy to see that every proximinal subspaceY of a Banach space X is closed.Proposition 3.11. IfX is p(n)-convex and Y is a proximinal subspace of X, then X/Y is p (n)-convex.Proof. Letq:X→X/Y be the quotient function. By the proximinality of Y we have q(BX)=BX/Y. Let x̃1,…,x̃n∈SX/Y and x1,…,xn∈SX such that x̃i=q(xi). Since X is p(n)-convex, there exist 1≤i,j≤n, i≠j, such that ∥xi-xj∥<2, and consequently ∥x̃i-x̃j∥<2.Corollary 3.12. LetX be p(n)-convex and reflexive. If Y is a closed subspace of X, then X/Y is p(n)-convex.Proof. It is shown in [13] that a Banach space X is reflexive if and only if each closed subspace of X is proximinal, and thus the corollary is a consequence of Proposition 3.11.Similarly we can prove that ifX is P(ε,n)-convex and Y is a closed subspace of X, then X/Y is P(ε,n)-convex.We obtain two results involvingψ-direct sums of p-convex spaces. Next we will define these sums as in [14] by Saito, et al.Definition 3.13. SetΨ={ψ:[0,1]→ℝ∣ψisacontinuousconvexfunction,max{1-t,t}≤ψ(t)≤1,forall0≤t≤1.} Let(X,∥·∥X) and (Y,∥·∥Y) be Banach spaces. For each ψ∈Ψ, one defines the norm ∥·∥ψ in X⊕Y as ∥(0,0)∥ψ=0 and for each (x,y)≠(0,0)(3.20)∥(x,y)∥ψ=(∥x∥X+∥y∥Y)ψ(∥y∥Y∥x∥X+∥y∥Y).In [15] it is shown that (X⊕Y,∥·∥ψ) is a Banach space, denoted by X⊕ψY called the ψ-direct and sum of X and Y.The proof of the following theorem is similar to the proof of Theorem3.5 in [16], which shows the corresponding result for P-convex spaces.Theorem 3.14. LetX and Y be Banach spaces and ψ∈Ψ. Then X⊕ψY is p-convex if and only if X and Y are p-convex.In [17] there is a theorem stating several equivalent conditions for strict convexity. We prove a similar result for p-convexity.Lemma 3.15. LetX be a Banach space. The next assertions are equivalent.(a) X is p(n)-convex.(b) For anyq∈(1,∞) and for any x1,…,xn∈X, not all zero, there exist 1≤i,j≤n, i≠j, such that ∥xi-xj∥<2(q-1)/q(∥xi∥q+∥xj∥q)1/q.(c) For someq∈(1,∞) and for any x1,…,xn∈X, not all zero, there exist 1≤i,j≤n, i≠j, such that ∥xi-xj∥<2(q-1)/q(∥xi∥q+∥xj∥q)1/q.Proof. The implications(b)⇒(c)⇒(a) are immediate. We verify (a)⇒(b). Let q∈(1,∞) and x1,…,xn∈X, not all zero. If xj=0 and xi≠0 for some 1≤i,j≤n, then it is clear that ∥xi-xj∥<2(q-1)/q(∥xi∥q+∥xj∥q)1/q. Suppose that x1,…,xn∈X∖{0}. There exist 1≤i,j≤n, i≠j, such that (3.21)∥xi∥xi∥-xj∥xj∥∥<2. If ∥xj∥≤∥xi∥ by Lemma 3.2 we get (3.22)∥xi-xj∥≤∥xj∥∥xi∥xi∥-xj∥xj∥∥+∥xi∥+∥xj∥<∥xi∥+∥xj∥. As the function t↦tq is convex we obtain that (3.23)∥xi-xj2∥q<(∥xi∥+∥xj∥2)q≤12(∥xi∥q+∥xj∥q). Thus ∥xi-xj∥<2(q-1)/q(∥xi∥q+∥xj∥q)1/q.Proposition 3.16. 
Proposition 3.16. Let {X_i}_{i∈I} be a family of p(n)-convex spaces, where the index set I ≠ ∅ may have any cardinality. Then the space X = l_q(X_i) (1 < q < ∞) is p(n)-convex.

Proof. Let x^{(k)} = {x_i^{(k)}}_{i∈I} ∈ X, 1 ≤ k ≤ n, not all zero. Let i_0 ∈ I be such that x_{i_0}^{(k)} ≠ 0 for some k ∈ {1,…,n}. As X_{i_0} is a p(n)-convex space, we have by the preceding lemma that there exist 1 ≤ l, m ≤ n, l ≠ m, such that

(3.24) ∥x_{i_0}^{(l)} − x_{i_0}^{(m)}∥^q < 2^{q−1} (∥x_{i_0}^{(l)}∥^q + ∥x_{i_0}^{(m)}∥^q).

Since the convexity of t ↦ t^q gives ∥a − b∥^q ≤ 2^{q−1}(∥a∥^q + ∥b∥^q) in every coordinate, and the inequality is strict at i_0 by (3.24), we obtain

(3.25) ∥x^{(l)} − x^{(m)}∥_q^q = ∑_{i∈I} ∥x_i^{(l)} − x_i^{(m)}∥^q < ∑_{i∈I} 2^{q−1} (∥x_i^{(l)}∥^q + ∥x_i^{(m)}∥^q) = 2^{q−1} (∥x^{(l)}∥_q^q + ∥x^{(m)}∥_q^q).

Therefore, by the previous lemma, X is p(n)-convex.
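As a closing cross-check (an observation added here, not in the original), Proposition 3.16 is consistent with Example 3.7: each l_{r_k} with r_k > 1 is strictly convex, hence p(3)-convex, so the l_2-direct sum considered there is p(3)-convex by the proposition, even though, as shown above, it fails to be P-convex.

```latex
\[
X=\Bigl(\textstyle\sum_{k=1}^{\infty}\oplus\,\ell_{r_k}\Bigr)_{2},
\qquad
\ell_{r_k}\ p(3)\text{-convex for all }k
\;\Longrightarrow\;
X\ p(3)\text{-convex},\quad\text{yet }X\text{ not }P\text{-convex}.
\]
```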
--- *Source: 102462-2010-09-22.xml*
# Colquhounia Root Tablet Protects Rat Pulmonary Microvascular Endothelial Cells against TNF-α-Induced Injury by Upregulating the Expression of Tight Junction Proteins Claudin-5 and ZO-1

**Authors:** Wenjie Zhou; Guocui Shi; Jijia Bai; Shenmao Ma; Qinfu Liu; Xigang Ma
**Journal:** Evidence-Based Complementary and Alternative Medicine (2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1024634

---

## Abstract

Background. There are currently limited effective pharmacotherapy agents for acute lung injury (ALI). The inflammatory response in the lungs is the main pathophysiological process of ALI. Our preliminary data have shown that colquhounia root tablet (CRT), a natural herbal medicine, alleviates the pulmonary inflammatory responses and edema in a rat model of oleic acid-induced ALI. However, the potential molecular mechanisms underlying its protective effects against ALI are poorly understood. This study aimed to investigate the effects and mechanism of CRT in rat pulmonary microvascular endothelial cells (PMECs) with TNF-α-induced injury.

Methods. PMECs were divided into 6 groups: normal control, TNF-α (10 ng/mL TNF-α), Dex (1×10^−6 M Dex + 10 ng/mL TNF-α), CRT high (1000 ng/mL CRT + 10 ng/mL TNF-α), CRT medium (500 ng/mL CRT + 10 ng/mL TNF-α), and CRT low (250 ng/mL CRT + 10 ng/mL TNF-α). Cell proliferation and apoptosis were detected by MTT assay and flow cytometry. Cell micromorphology was observed under a transmission electron microscope. The localization and expression of the tight junction proteins Claudin-5 and ZO-1 were analyzed by immunofluorescence staining and Western blot, respectively.

Results. TNF-α successfully induced an acute endothelial cell injury model. Dex and CRT treatments significantly stimulated the growth and reduced the apoptosis of PMECs (all p < 0.05 or 0.01) and alleviated the TNF-α-induced cell injury. The expression of Claudin-5 and ZO-1 in the Dex and all 3 CRT groups was markedly increased compared with the TNF-α group (all p < 0.05 or 0.01).

Conclusion. CRT effectively protects PMECs from TNF-α-induced injury, which might be mediated via stabilizing the structure of the tight junction. CRT might be a promising, effective, and safe therapeutic agent for the treatment of ALI.

---

## Body

## 1. Introduction

Acute lung injury (ALI) is a major cause of acute respiratory failure with high morbidity and mortality in critical care medicine [1]. ALI is characterized by persistent pulmonary inflammation and increased microvascular permeability [1]. The inflammatory response in the lungs is the main pathophysiological process of ALI. TNF-α is an important factor mediating the inflammatory response during injury; it initiates the inflammatory cascade and destroys the alveolar capillary barrier [2], leading to increased permeability and pulmonary edema. The pulmonary microvascular endothelial cell (PMEC) barrier acts as the first line of defense against inflammatory factor attack and plays a key role in the development of lung injury. The tight junction among PMECs, composed of transmembrane proteins including Claudins, Occludin, and ZO-1, serves as an important structure controlling paracellular permeability and regulating PMEC barrier functions [3].

Although the pathophysiology and treatments of ALI have been investigated in numerous studies, there are currently limited effective pharmacotherapy agents. Glucocorticoids have been commonly used to treat lung injury induced by various causes [4–6].
Dexamethasone (Dex) is a widely used synthetic glucocorticoid with proven protective effects against ALI [7, 8]. Moreover, Dex has been frequently used as a positive control in several pharmacological studies on potential therapeutic agents for ALI [9]. However, these hormone therapies may trigger a variety of adverse reactions, such as infection, elevated blood glucose, peptic ulcer, and osteoporosis [10]. Therefore, it is urgent to search for novel therapeutic agents for ALI. The traditional Chinese medicine colquhounia root contains several alkaloids, terpenoids, lactones, and phenolic acids, such as triptolide and epicatechin [11]. Colquhounia root has several beneficial pharmacological properties, such as anti-inflammatory, immunosuppressive, antitumor, and analgesic activities [12–14]. Recent studies have found that colquhounia root attenuates allergic encephalomyelitis in rats by reducing capillary permeability during inflammation and decreasing inflammatory exudate [15]. Our preliminary data have shown that colquhounia root alleviates the pulmonary inflammatory responses and edema in a rat model of oleic acid-induced ALI [16]. However, the potential molecular mechanisms underlying its protective effects against ALI have not been investigated. In this study, we treated rat PMECs with different doses of colquhounia root and explored its protective effects and potential mechanism on the cells and intercellular tight junctions, using Dex as a treatment control. Our findings should shed light on potential novel therapeutic agents against ALI.

## 2. Materials and Methods

### 2.1. Reagents

Colquhounia root tablets (CRT, 0.18 g/tablet) were purchased from the Pharmaceutical Factory of the Chongqing Academy of Chinese Materia Medica (Chongqing, China). A tablet was ground into powder, dissolved in 1 mL of dimethyl sulfoxide (DMSO), and filtered through a 0.22-μm sterile filter. TNF-α (400-14, purity ≥ 98%, Peprotech, Rocky Hill, NJ, USA) was prepared as a 0.1 mg/mL stock. Dex (dexamethasone, D4902, purity ≥ 97%, Sigma, Shanghai, China) was prepared as a 0.03185 M stock. All stocks were stored at −20°C until use.

### 2.2. High-Performance Liquid Chromatography (HPLC)

The major active components of CRT were determined by HPLC. Triptolide was purchased from the Chinese National Institute for Food and Drug Control and dissolved in 60% methanol to prepare a 10 μg/mL standard solution for HPLC. Finely ground CRT powder (3.6 g) was mixed with 50 mL of methanol, processed using an ultrasonic cleaner (KQ-250DE, Kunshan Ultrasonic Instrument, China) at 250 W, 50 kHz for 1 h, and filtered. The residue was washed twice with 10 mL of methanol. The methanol filtrates were combined and loaded onto a neutral alumina column (length: 300 mm, inner diameter: 1.5 cm). The column was eluted with 60 mL of acetone. The eluate was collected, dried, dissolved in 5 mL of 60% methanol, and filtered through a 0.45-μm Millipore filter. Samples of the filtrate were injected into an InertSustain C18 chromatographic column (4.6 × 250 mm, 5 μm) in an LC-20AT HPLC system (Shimadzu, Tokyo, Japan). HPLC was performed using the following parameters: mobile phase: acetonitrile-water (25:75); detection wavelength: 220 nm; flow rate: 1 mL/min; column temperature: 35°C; injection volume: 10 μL. The number of theoretical plates of the triptolide peak should be no less than 2000.

Epicatechin was purchased from the Chinese National Institute for Food and Drug Control and dissolved in 60% methanol to prepare a 100 μg/mL standard solution for HPLC. CRT powder (0.9 g) was mixed with 40 mL of water-saturated ethyl acetate, processed by an ultrasonic cleaner (250 W, 50 kHz) for 30 min, filtered, and dried. The residue was dissolved in 10 mL of 60% methanol. Aliquots of samples and standard solution (20 μL) were injected onto an InertSustain C18 column and HPLC was performed using the following parameters: mobile phase: 0.63% glacial acetic acid solution-methanol-acetonitrile (82.5:2:15.5); detection wavelength: 280 nm; flow rate: 1 mL/min; column temperature: 35°C.
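To make the external-standard arithmetic behind these HPLC assays concrete, here is a minimal Python sketch. It is not the authors' code: the peak areas are hypothetical, chosen only so that the output lands near the magnitude reported in Section 3.1, and a single-point calibration is assumed, mirroring the standard-solution setup described above.

```python
# Minimal external-standard HPLC quantification sketch (hypothetical inputs).
def content_per_tablet_ug(area_sample: float, area_standard: float,
                          std_conc_ug_per_ml: float, extract_volume_ml: float,
                          sample_mass_g: float, tablet_mass_g: float = 0.18) -> float:
    """Estimate micrograms of analyte per 0.18 g tablet from HPLC peak areas."""
    conc_sample = area_sample / area_standard * std_conc_ug_per_ml  # ug/mL in extract
    total_ug = conc_sample * extract_volume_ml                      # ug in whole extract
    return total_ug / sample_mass_g * tablet_mass_g                 # scale to one tablet

# Hypothetical peak areas for the triptolide assay: 10 ug/mL standard,
# 5 mL final extract, 3.6 g of powdered tablets.
print(f"{content_per_tablet_ug(248000, 203000, 10.0, 5.0, 3.6):.2f} ug/tablet")
```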
### 2.3. Cells and Grouping

Rat PMECs were purchased from Bioleaf Biotech Inc. (Shanghai, China) and cultured in RPMI-1640 medium (Gibco, Rockville, MD, USA) supplemented with 5% fetal bovine serum (FBS, Gibco) and 100 μg/mL streptomycin and 100 U/mL penicillin (penicillin/streptomycin, Invitrogen, Shanghai, China) at 37°C and 5% CO2 in an incubator. Cells were seeded into 96-well plates at a density of 1×10^5 cells/well and incubated overnight. Cells were then divided into the following 6 groups and cultured for an additional 48 h for subsequent examinations: normal control group without any treatment, TNF-α group cultured in medium containing 10 ng/mL TNF-α, Dex group in 1×10^−6 M Dex and 10 ng/mL TNF-α, CRT high group in 1000 ng/mL CRT and 10 ng/mL TNF-α, CRT medium group in 500 ng/mL CRT and 10 ng/mL TNF-α, and CRT low group in 250 ng/mL CRT and 10 ng/mL TNF-α.

### 2.4. MTT Cell Proliferation Assay

Cell proliferation in all groups was analyzed using an MTT assay kit (KeyGen Biotech Inc., Nanjing, China). Briefly, the cell medium was discarded and cells were incubated with 90 μL of FBS-free medium and 20 μL of MTT at 37°C for 4 h. Cells were then treated with 150 μL of DMSO for 10 min. The optical density (OD) was detected with a microplate reader at a wavelength of 490 nm. Three wells were prepared for each group. The cell proliferation inhibition rate was calculated using the following formula: cell proliferation inhibition rate (%) = (OD value in control group − OD value in experimental group)/OD value in control group × 100%.

### 2.5. Cell Apoptosis Assay

Cell apoptosis in all groups was analyzed using an Annexin V-FITC apoptosis detection kit (Bestbio Biotech Inc., Shanghai, China). In brief, cells were collected upon the completion of treatment, washed twice with PBS, resuspended in 400 μL of 1× binding buffer, and stained with 5 μL of FITC-Annexin V in the dark at 4°C for 15 min. Cells were then incubated with 10 μL of PI in the dark for 5 more min. The fluorescence of cells was analyzed on a BD Accuri C6 flow cytometer (BD Biosciences, Franklin Lakes, NJ, USA) at 488 nm within 1 h.
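The quadrant logic behind the Annexin V-FITC/PI readout can be sketched as follows. This is an illustration only, with synthetic events and arbitrary gate values; the actual gating in the study was performed in the cytometer software and is not reported.

```python
# Sketch of Annexin V/PI quadrant classification (synthetic data, arbitrary gates).
import numpy as np

def classify_events(annexin: np.ndarray, pi: np.ndarray,
                    annexin_gate: float, pi_gate: float) -> dict:
    """Count events per quadrant: viable, early/late apoptotic, necrotic."""
    a_pos, p_pos = annexin > annexin_gate, pi > pi_gate
    return {
        "viable": int(np.sum(~a_pos & ~p_pos)),          # Annexin V- / PI-
        "early_apoptotic": int(np.sum(a_pos & ~p_pos)),  # Annexin V+ / PI-
        "late_apoptotic": int(np.sum(a_pos & p_pos)),    # Annexin V+ / PI+
        "necrotic": int(np.sum(~a_pos & p_pos)),         # Annexin V- / PI+
    }

rng = np.random.default_rng(1)
annexin = rng.lognormal(mean=2.0, sigma=1.0, size=10_000)  # synthetic FITC intensities
pi = rng.lognormal(mean=1.5, sigma=1.0, size=10_000)       # synthetic PI intensities
counts = classify_events(annexin, pi, annexin_gate=30.0, pi_gate=20.0)
rate = 100 * (counts["early_apoptotic"] + counts["late_apoptotic"]) / 10_000
print(counts, f"apoptosis rate = {rate:.1f}%")
```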
### 2.6. Transmission Electron Microscopic Observation

After treatment, cells in all groups were collected, washed twice with 0.1 M sodium cacodylate buffer, and fixed in 2% glutaraldehyde solution at 4°C for 5 min. Cells were collected and incubated in 2% glutaraldehyde solution at 4°C for 1 h, then washed 3 times in 0.1 M sodium cacodylate buffer at 4°C for 30 min each. Cells were then incubated in 1% osmic acid for 1 h, washed twice with 0.1 M sodium cacodylate buffer for 15 min each, dehydrated with 30%, 50%, 70%, 80%, 90%, and 100% ethanol, infiltrated with epoxypropane for 15 min, and treated with embedding solution at 35°C for 6 h. Embedded cells were cut into 50-nm slices using an ultrathin slicer, stained in 3% silver citrate solution, and observed under an H7650 transmission electron microscope (Hitachi, Japan).

### 2.7. Immunofluorescence Staining

Normal PMECs at exponential phase were inoculated into six-well plates with sterile slides and incubated at 37°C and 5% CO2 for 24 h. The cells were washed twice with prewarmed PBS, fixed with 4% paraformaldehyde solution for 20 min at room temperature, and washed 3 times with PBS. Cells were blocked with 5% BSA for 1 h at room temperature and incubated with primary antibody (mouse anti-rat Claudin-5, 1:50; rabbit anti-rat ZO-1, 1:50, Invitrogen, USA) overnight at 4°C. Cells were washed 3 times with PBS and incubated with FITC-labeled secondary antibody (goat anti-rabbit IgG, 1:100; goat anti-mouse IgG, 1:100, Zhongshan Golden Bridge Biotech Inc., Beijing, China) for 1 h at room temperature in the dark. Cells were stained with DAPI. The slides were sealed and observed under an Olympus FV1000 confocal microscope (Olympus, USA).

### 2.8. Quantitative Reverse Transcription PCR (qRT-PCR)

The expression of Claudin-5 and ZO-1 mRNA was determined by qRT-PCR. Briefly, total RNA was extracted from cells using TRIzol reagent (Invitrogen, USA). Reverse transcription was performed using a reverse transcription kit (Takara, Japan). The primers were designed and synthesized by Genscript Inc. (Nanjing, China): Claudin-5 forward 5'-CAGCGTTGGAAATTCTGGGTC-3', reverse 5'-ACACTTTGCATTGCATGTGCC-3'; ZO-1 forward 5'-TGGTGCTCCTAAACAATC-3', reverse 5'-TGCTATTACACGGTCCTC-3'; and β-actin forward 5'-CCCATCTATGAGGGTTACGC-3', reverse 5'-TTTAATGTCACGCACGATTTC-3'. The qRT-PCR reaction mixture was prepared using a real-time PCR kit (TaKaRa, Japan): SYBR Premix Ex Taq II 10 μL, forward primer 0.8 μL, reverse primer 0.8 μL, cDNA template 2 μL, and dH2O 6.4 μL. The reaction was performed on an ABI PRISM 7500 Fluorescent Quantitative PCR System (Applied Biosystems, Foster City, CA, USA) under the following conditions: 95°C for 5 s, followed by 45 cycles of 95°C for 5 s, 57°C (Claudin-5)/60°C (ZO-1) for 30 s, and 72°C for 40 s. The experiment was performed in triplicate and the expression level was calculated using the 2^−ΔΔCt method, with β-actin as the internal control.
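A minimal sketch of the 2^−ΔΔCt calculation named above, with hypothetical Ct values (the study's raw Ct data are not reported); β-actin serves as the reference gene and the normal control group as the calibrator.

```python
# 2^-ddCt relative expression from triplicate Ct values (hypothetical numbers).
from statistics import mean

def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Return relative expression 2 ** -((dCt treated) - (dCt control))."""
    dct_treated = mean(ct_target_treated) - mean(ct_ref_treated)
    dct_control = mean(ct_target_control) - mean(ct_ref_control)
    return 2 ** -(dct_treated - dct_control)

fold = ddct_fold_change(
    ct_target_treated=[26.1, 26.3, 26.0],  # Claudin-5, TNF-alpha group
    ct_ref_treated=[17.0, 17.1, 16.9],     # beta-actin, TNF-alpha group
    ct_target_control=[24.9, 25.0, 25.1],  # Claudin-5, normal control
    ct_ref_control=[17.0, 16.9, 17.1],     # beta-actin, normal control
)
print(f"Claudin-5 relative expression (TNF-alpha vs control): {fold:.2f}")
# A value below 1 indicates downregulation, as reported for the TNF-alpha group.
```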
### 2.9. Western Blot

Total protein was extracted from cells using a total protein extraction kit (Keygen Biotech, Nanjing, China) and quantified using a BCA protein assay kit (Keygen Biotech). Equal aliquots (20 μg) of protein were separated by SDS-PAGE and transferred to polyvinylidene difluoride membranes. The membranes were blocked with 5% skim milk for 2 h and incubated with primary antibodies (mouse anti-rat Claudin-5, 1:500, Invitrogen, USA; rabbit anti-rat ZO-1, 1:250, Invitrogen, USA; mouse anti-rat β-actin, 1:500, Zhongshan Golden Bridge Biotech.) overnight at 4°C. The membranes were washed 3 times with 1× TBST and incubated with HRP-conjugated secondary antibody (goat anti-rabbit IgG, 1:5000; goat anti-mouse IgG, 1:5000, Zhongshan Golden Bridge Biotech.) at room temperature for 2 h. Immunoreactivity was detected using the NCI 5079 ECL detection system (Thermo Fisher, USA). Images were analyzed with the ChemiGenius Bioimaging System (Syngene, MD, USA) and the relative expression of proteins was quantified using ImageJ (National Institutes of Health, Bethesda, USA) with β-actin as the internal reference.

### 2.10. Cell Transfection

siRNAs were designed and synthesized by Genscript Biotech (Nanjing, China): si-Claudin-5 forward 5'-GUCCGGGAGUUCUAUGAUCCA-3', reverse 5'-GATCATAGAACTCCCGGACTA-3'; si-ZO-1 forward 5'-UGUUGAACAUGCUUUUGCUGT-3', reverse 5'-AGCAAAAGCAUGUUCAACATT-3'; si-negative control (NC) forward 5'-UUCUCCGAACGUGUCACGUTT-3', reverse 5'-ACGUGACACGUUCGGAGAATT-3'. Normal PMECs at 70% confluence in 6-well plates were transfected with 30 nM of siRNAs or si-NC using Lipofectamine 3000 transfection reagent (Invitrogen, Shanghai, China) according to the manufacturer's instructions. Cells were then incubated in medium containing 500 ng/mL CRT and 10 ng/mL TNF-α. After 48 h, cells were collected and the expression of Claudin-5 and ZO-1 was determined by Western blot as described above. Cell proliferation and apoptosis were also measured as described above.

### 2.11. Statistical Analysis

Data were expressed as mean ± standard deviation. Data analysis was performed using SPSS 17.0 statistical software (IBM SPSS, Chicago, IL, USA). Differences among groups were analyzed by one-way analysis of variance (ANOVA) followed by the post hoc SNK-q test. Rates were compared by chi-square test. p < 0.05 was considered statistically significant.
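The quantitative pipeline of Sections 2.4 and 2.11 can be sketched with synthetic OD readings as follows. SciPy has no direct SNK-q implementation, so Tukey's HSD from statsmodels stands in here for the post hoc step; the inhibition-rate formula and the one-way ANOVA follow the text.

```python
# Inhibition-rate formula plus one-way ANOVA and a post hoc test (synthetic data).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

def inhibition_rate(od_control_mean: float, od_experimental: np.ndarray) -> np.ndarray:
    """Inhibition rate (%) = (OD_control - OD_experimental) / OD_control * 100."""
    return (od_control_mean - od_experimental) / od_control_mean * 100.0

# Hypothetical OD490 readings, three wells per group as in the paper.
od = {
    "control": rng.normal(1.00, 0.05, 3),
    "TNF-a":   rng.normal(0.55, 0.05, 3),
    "Dex":     rng.normal(0.80, 0.05, 3),
    "CRT-mid": rng.normal(0.90, 0.05, 3),
}
ctrl_mean = od["control"].mean()
rates = {g: inhibition_rate(ctrl_mean, v) for g, v in od.items() if g != "control"}

# One-way ANOVA across the treatment groups, then pairwise comparisons.
f_stat, p_val = stats.f_oneway(*rates.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

values = np.concatenate(list(rates.values()))
labels = np.repeat(list(rates.keys()), [len(v) for v in rates.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```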
## 3. Results

### 3.1. Major Components of CRT

As shown in Figure 1, the HPLC data suggest that each tablet (0.18 g) contains 3.04 μg of triptolide (C20H24O6) and 0.13 mg of epicatechin (C15H14O6).

Figure 1. HPLC analyses of the major active components of CRT: triptolide (a) and epicatechin (b), at the wavelengths of 220 nm and 280 nm, respectively.

### 3.2. CRT Stimulates the Growth of PMECs

The growth of PMECs was compared by MTT assay. As shown in Figure 2(a), TNF-α significantly increased the growth inhibition rate of PMECs compared with the normal control group (p = 0.016). The growth inhibition rate of PMECs in the Dex and all CRT groups was significantly lower than that in the TNF-α group (p = 0.033, 0.045, 0.020, and 0.039, respectively). The growth inhibition rate in the CRT medium group was significantly lower compared with the Dex group (p = 0.032), whereas the other two CRT groups had rates similar to that in the Dex group (p > 0.05).

Figure 2. CRT treatment reduced the cell proliferation inhibition rate and apoptosis rate induced by TNF-α. Rat PMEC monolayers were divided into 6 groups: normal control, TNF-α, Dex, CRT high, CRT medium, and CRT low. After 48 h incubation, cell proliferation (a) and apoptosis (b) were detected by MTT assay and flow cytometry, respectively. Cell proliferation inhibition rate (%) = (OD value in control group − OD value in experimental group)/OD value in control group × 100%. ∗, P < 0.05, TNF-α group vs. normal control group; #, P < 0.05, treatment groups vs. TNF-α group; △, P < 0.05, CRT groups vs. Dex group.

### 3.3. CRT Reduces the Apoptosis of PMECs

Further, the effects of CRT on the apoptosis of PMECs were evaluated by flow cytometry. As shown in Figure 2(b), the apoptosis rate in the TNF-α group was significantly higher than that in the normal control group (p = 0.026). The apoptosis rate of PMECs in all 4 treatment groups was significantly reduced compared with the TNF-α group (p = 0.023, 0.037, 0.019, and 0.042, respectively). The apoptosis rate in the CRT medium group was significantly lower compared with the Dex group (p = 0.028), but the other two CRT groups had rates similar to that in the Dex group (p > 0.05).

### 3.4. CRT Alleviates the TNF-α-Induced Cell Injury

The effect of CRT on the ultrastructure of PMECs was observed under a transmission electron microscope (Figure 3).
In the normal control group, cells showed abundant cytoplasm with numerous intact organelles (mitochondria, endoplasmic reticulum, and Golgi) and few pinocytotic vesicles. After TNF-α treatment, cells exhibited severe injuries: the number of organelles was markedly decreased, swelling of mitochondria and endoplasmic reticulum was clearly observed, mitochondrial cristae disappeared, and abundant vacuole-like structures formed. Nuclear fragmentation/lysis and apoptosis were observed in some cells. In the Dex and the 3 CRT groups, cells exhibited more normal morphology and less severe damage compared with the TNF-α group. Although mitochondrial swelling was still noticed, the mitochondrial structure was intact, and the number of vacuoles in the cytoplasm was greatly reduced. The CRT medium group had clearly more intracellular organelles and fewer vacuoles than the other two CRT groups, indicating milder cell injury.

Figure 3. CRT alleviated the TNF-α-induced cell injury. Rat PMEC monolayers were divided into 6 groups: normal control (a), TNF-α (b), Dex (c), CRT high (d), CRT medium (e), and CRT low (f). After 48 h incubation, intracellular microstructures were observed under an H7650 transmission electron microscope (20000×). Arrows and triangles mark the mitochondria and endoplasmic reticulum, respectively.

### 3.5. CRT Upregulates the Expression of Tight Junction Proteins Claudin-5 and ZO-1

The localization of the tight junction proteins Claudin-5 and ZO-1 was detected by immunofluorescence assay using a confocal microscope. As shown in Figure 4, linear fluorescence staining of Claudin-5 and ZO-1 was observed along the endothelial cell membrane, indicating that both proteins are localized at the edge of endothelial cells. Furthermore, abundant diffuse fluorescence was also detected among the cells, which suggested that both proteins coordinately form the intercellular tight junction structure. To clarify the action mechanism of CRT against TNF-α-induced cell injury, we further examined the expression of Claudin-5 and ZO-1 mRNA and protein in the different groups. The expression of ZO-1 and Claudin-5 mRNA in the TNF-α group was significantly lower compared with the normal control group (p = 0.034 and 0.008, respectively, Figure 5(a)). Claudin-5 and ZO-1 mRNA expression in the Dex and the 3 CRT groups was markedly increased compared with the TNF-α group (all p < 0.01 or 0.001). Claudin-5 and ZO-1 mRNA expression in the CRT medium group was significantly higher compared with the Dex group (p ≤ 0.001 and 0.004), whereas the other two CRT groups were similar to the Dex group (p > 0.05). Consistently, the expression of Claudin-5 and ZO-1 protein showed a pattern of change similar to that of the mRNA expression (Figure 5(b)). To further confirm that the therapeutic effects of CRT are mediated by upregulating Claudin-5 and ZO-1 expression, PMECs were transfected with siRNAs targeting Claudin-5 or ZO-1 and incubated in medium containing TNF-α and CRT. Western blot results showed that ZO-1 and Claudin-5 expression was successfully downregulated by the siRNAs (p = 0.032 and 0.015, respectively, Figure 6(a)). The cell proliferation inhibition and apoptosis rates in the TNF-α+CRT+si-ZO-1 and TNF-α+CRT+si-Claudin-5 groups were significantly higher compared with the TNF-α+CRT+si-NC group (all p < 0.05, Figures 6(b)-6(c)), suggesting that silencing ZO-1 and Claudin-5 expression had blocked the therapeutic effects of CRT.
In other words, the protective effects of CRT were mediated via modulating Claudin-5 and ZO-1 expression.

Figure 4. Analysis of the localization of the tight junction proteins ZO-1 (a) and Claudin-5 (b) in normal rat PMECs by immunofluorescence assay. Normal rat PMECs were subjected to DAPI and immunofluorescence staining and observed under a confocal microscope. Arrows indicate the linear fluorescence of Claudin-5 and ZO-1 along the endothelial cell membrane, suggesting that both proteins are localized at the edge of endothelial cells. Abundant diffuse fluorescence was also detected among the cells, suggesting that both proteins coordinately form the intercellular tight junction structure.

Figure 5. CRT upregulates the expression of the tight junction proteins Claudin-5 and ZO-1. Rat PMEC monolayers were divided into 6 groups: normal control, TNF-α, Dex, CRT high, CRT medium, and CRT low. After 48 h incubation, the expression of Claudin-5 and ZO-1 mRNA (a) and protein (b) was detected by qRT-PCR and Western blot, respectively. ∗, P < 0.05, ∗∗, P < 0.01, TNF-α group vs. normal control group; #, P < 0.05, ##, P < 0.01, treatment groups vs. TNF-α group; △, P < 0.05, △△, P < 0.01, CRT groups vs. Dex group.

Figure 6. Therapeutic effects of CRT were greatly reduced by siRNAs. Rat PMECs at 70% confluence were transfected with siRNAs or si-NC. Cells were incubated in medium containing 500 ng/mL CRT and 10 ng/mL TNF-α. After 48 h, cells were collected and the expression of Claudin-5 and ZO-1 was determined by Western blot (a). Cell proliferation (b) and apoptosis (c) were also measured. ∗, p < 0.05, TNF-α+CRT+si-ZO-1 or si-Claudin-5 vs. TNF-α+CRT+si-NC.
## 4. Discussion

TNF-α released early during ALI acts on PMECs through the blood circulation, damaging cells and the alveolar capillary barrier and thus leading to lung injury [17, 18]. In this study, TNF-α was added to the cell culture medium to mimic the biological condition under which ALI develops. Electron microscopy showed that TNF-α treatment damaged organelle structure and induced mitochondrial and endoplasmic reticulum swelling; mitochondrial cristae nearly disappeared, and abundant vacuole-like structures were observed. Moreover, TNF-α significantly inhibited the growth and stimulated the apoptosis of PMECs, indicating that TNF-α had successfully induced an acute endothelial cell injury model.

In the current study, the main components of colquhounia root tablets were identified by HPLC as triptolide and epicatechin. Extensive studies have reported the anti-inflammatory [19, 20], immunosuppressive [21, 22], and antitumor [23, 24] effects of triptolide and epicatechin, and studies have shown that triptolide and epicatechin have protective effects against lung injury [25, 26]. Although triptolide and epicatechin are the two major components of CRT, the herbal medicine contains several other active ingredients such as alkaloids, terpenoids, and lactones. Moreover, in clinical practice, CRT has been widely used to treat nephrotic syndrome and rheumatoid arthritis [27].
Our preliminary study found that CRT can effectively alleviate pulmonary edema [28]. Therefore, instead of focusing on the pharmacological effects of individual components, we investigated the role and mechanism of CRT in protecting against lung injury, aiming to provide a basis for its clinical application in ALI treatment. Our data showed that CRT-treated cells had markedly milder mitochondrial and endoplasmic reticulum swelling and fewer intracellular vacuoles, and intact organelle structures were observed under the electron microscope. Furthermore, cells in the CRT groups exhibited higher proliferation and lower apoptosis rates compared with the TNF-α group, suggesting that CRT effectively protected cells from TNF-α-induced injury.

The tight junction is an intercellular junction complex that is widely present in the blood-brain barrier, intestinal barrier, retinal barrier, glomerular basement membrane barrier, and alveolar capillary barrier [29–33]. It plays a key role in regulating the transport of water and solute molecules and maintaining tissue permeability [34, 35]. Claudins are important structural molecules in the tight junction. Claudin-5 is strongly expressed in PMECs and regulates paracellular permeability [36]. The overexpression of Claudin-5 in PMECs and cerebral vascular endothelial cells reduces the permeability of the tight junction and thus protects endothelial barrier function [36, 37]. Studies have found that reduced Claudin-5 expression in endothelial cells results in a rapid increase in the permeability of pulmonary blood vessels [36]. As a key structural component of the tight junction, ZO-1 directly affects pulmonary barrier permeability. When ZO-1 expression is inhibited, the transepithelial electrical resistance of mouse PMECs is markedly decreased, leading to increased pulmonary permeability and impaired lung barrier function [38]. The intracellular parts of Claudin-5 and ZO-1 interact with each other to maintain the stability of the tight junction structure. In this study, immunofluorescence assays demonstrated linear fluorescence staining of both Claudin-5 and ZO-1 along the endothelial cell membrane and abundant diffuse fluorescence among these cells, suggesting that both proteins form the intercellular tight junction structure and coordinately regulate paracellular permeability.

Studies have suggested that TNF-α can downregulate the expression of several tight junction proteins in the lungs of mice, including Claudin-2, -4, -5, and ZO-1, increasing lung barrier permeability [39]. Consistently, our results showed a significant decrease in the expression of Claudin-5 and ZO-1 mRNA and protein in the TNF-α group, indicating that TNF-α destroyed the integrity of the tight junction by inhibiting the expression of its structural proteins. In contrast, Claudin-5 and ZO-1 expression in the CRT groups was significantly enhanced compared with the TNF-α group, suggesting that the protective effect of CRT was mediated via stabilizing the structure of the tight junction and the endothelial barrier. It is worth noting that the CRT medium group showed stronger therapeutic effects than the Dex group. The action mechanism of Dex is mediated through the pituitary-adrenal system; Dex regulates the expression of anti-inflammatory genes by binding to the glucocorticoid receptor (GR) [40]. In contrast, CRT does not exert its anti-inflammatory effect via the pituitary-adrenal system [41], but instead reduces oxidative stress and inflammation by regulating the NF-κB signaling pathway [42].
The NF-κB pathway directly regulates tight junctions, and its activation increases paracellular permeability and impairs barrier function [43, 44]. TNF-α can drive the inflammatory response by activating the NF-κB pathway and thus increase paracellular permeability [45]. Therefore, we speculate that the stronger therapeutic effects of the CRT medium group compared with the Dex group may be associated with its modulation of the NF-κB pathway.

One may also notice that the CRT medium group exhibited significantly lower growth inhibition and apoptosis rates compared with the CRT high and low groups. Electron microscopic images also revealed much milder destruction of PMECs in the CRT medium group compared with the other two CRT groups. Furthermore, the highest Claudin-5 and ZO-1 mRNA and protein expression among the three CRT groups was observed in the CRT medium group. Altogether, our results suggest that the medium dose of CRT exerted the best therapeutic effects. Studies have shown that high-dose CRT exhibits cytotoxic effects and inhibits cell proliferation [13, 14]. In our preliminary test, we also found that high-dose CRT inhibited the proliferation of alveolar type II epithelial cells. This likely explains why the CRT high group in the current study had weaker therapeutic effects than the CRT medium group. It is thus important to determine the optimal CRT treatment dose in clinical practice.

## 5. Conclusion

In summary, this study found that CRT effectively reduces the TNF-α-induced growth inhibition and apoptosis of PMECs. The protective effects of CRT against TNF-α-induced injury might be mediated via stimulating the expression of Claudin-5 and ZO-1 in PMECs, which stabilizes the structure of the tight junction and the endothelial barrier. As a natural herbal medicine, CRT might be a promising, effective, and safe therapeutic agent to substitute for glucocorticoids in the treatment of ALI. Future studies are needed to investigate the signaling mechanisms involved in the regulation of tight junction proteins by CRT.

--- *Source: 1024634-2018-11-18.xml*
1024634-2018-11-18_1024634-2018-11-18.md
47,332
Colquhounia Root Tablet Protects Rat Pulmonary Microvascular Endothelial Cells against TNF-α-Induced Injury by Upregulating the Expression of Tight Junction Proteins Claudin-5 and ZO-1
Wenjie Zhou; Guocui Shi; Jijia Bai; Shenmao Ma; Qinfu Liu; Xigang Ma
Evidence-Based Complementary and Alternative Medicine (2018)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2018/1024634
1024634-2018-11-18.xml
--- ## Abstract Background. There are currently limited effective pharmacotherapy agents for acute lung injury (ALI). Inflammatory response in the lungs is the main pathophysiological process of ALI. Our preliminary data have shown that colquhounia root tablet (CRT), a natural herbal medicine, alleviates the pulmonary inflammatory responses and edema in a rat model with oleic acid-induced ALI. However, the potential molecular action mechanisms underlining its protective effects against ALI are poorly understood. This study aimed to investigate the effects and mechanism of CRT in rat pulmonary microvascular endothelial cells (PMEC) with TNF-α-induced injury.Methods. PMECs were divided into 6 groups: normal control, TNF-α (10 ng/mL TNF-α), Dex (1×10-6 M Dex + 10 ng/mL TNF-α), CRT high (1000 ng/mL CRT + 10 ng/mL TNF-α), CRT medium (500 ng/mL CRT + 10 ng/mL TNF-α), and CRT low group (250 ng/mL CRT + 10 ng/mL TNF-α). Cell proliferation and apoptosis were detected by MTT assay and flow cytometry. Cell micromorphology was observed under transmission electron microscope. The localization and expression of tight junction proteins Claudin-5 and ZO-1 were analyzed by immunofluorescence staining and Western blot, respectively.Results. TNF-a had successfully induced an acute endothelial cell injury model. Dex and CRT treatments had significantly stimulated the growth and reduced the apoptosis of PMECs (allp < 0.05 or 0.01) and alleviated the TNF-α-induced cell injury. The expression of Claudin-5 and ZO-1 in Dex and all 3 CRT groups was markedly increased compared with TNF-a group (allp < 0.05 or 0.01).Conclusion. CRT effectively protects PMECs from TNF-α-induced injury, which might be mediated via stabilizing the structure of tight junction. CRT might be a promising, effective, and safe therapeutic agent for the treatment of ALI. --- ## Body ## 1. Introduction Acute lung injury (ALI) is a major cause of acute respiratory failure with high morbidity and mortality in critical care medicine [1]. ALI is characterized with persistent pulmonary inflammation and increased microvascular permeability [1]. The inflammatory response in the lungs is the main pathophysiological process of ALI. TNF-α is an important factor mediating the inflammatory response during injury, which initiates inflammatory cascade and destroys the alveolar capillary barrier [2], leading to increased permeability and pulmonary edema. The pulmonary microvascular endothelial cell (PMEC) barrier acts as the first line of defense against inflammatory factor attack and plays a key role in the development of lung injury. The tight junction among PMECs, composed of transmembrane proteins including Claudins, Occludin, and ZO-1, serves as an important structure controlling pericellular permeability and regulating PMEC barrier functions [3].Although the pathophysiology and treatments of ALI have been investigated in numerous studies, there are currently limited effective pharmacotherapies agents. Glucocorticoids have been commonly used to treat lung injury induced by various causes [4–6]. Dexamethasone (Dex) is a widely used synthetic glucocorticoid compound with proven protection effects against ALI [7, 8]. Moreover, Dex has been frequently used as positive control in several pharmacological studies on potential therapeutic agents for ALI [9]. However, these hormone therapies may trigger a variety of adverse reactions, such as infection, elevated blood glucose, peptic ulcer, and osteoporosis [10]. 
Therefore, it is urgent to search for novel therapeutic reagents for ALI. The traditional Chinese medicine colquhounia root contains several alkaloids, terpenoids, lactones, and phenolic acids, such as triptolide and epicatechin [11]. Colquhounia root has several beneficial pharmacological properties, such as anti-inflammatory, immunosuppressive, antitumor, and analgesic activities [12–14]. Recent studies have found that colquhounia root attenuates allergic encephalomyelitis in rats by reducing capillary permeability during inflammation and decreasing inflammatory exudate [15]. Our preliminary data have shown that colquhounia root alleviates the pulmonary inflammatory responses and edema in rat model with oleic acid-induced ALI [16]. However, the potential molecular action mechanisms underlining its protective effects against ALI have not been investigated. In this study, we treated rat PMECs with different doses of colquhounia root and explored its protective effects and potential mechanism on the cells and intercellular tight junctions using Dex as a treatment control. Our findings shall shed insight on potential novel therapeutic reagents against ALI. ## 2. Material and Methods ### 2.1. Reagents Colquhounia root tablets (CRT, 0.18 g/tablet) were purchased from the Pharmaceutical Factory of the Chongqing Academy of Chinese Materia Medica (Chongqing, China). A CRT was ground into powders, dissolved in 1 mL of dimethyl sulfoxide (DMSO), and filtered through a 0.22-um sterile filter. TNF-α (400-14, purity ≥ 98%, Peprotech, Rocky Hill, NJ, USA) was prepared into 0.1 mg/mL stock. Dex (Dexamethasone, D4902, purity ≥ 97%, Sigma, Shanghai, China) was prepared into 0.03185 M stock. All stocks were stored at -20°C until use. ### 2.2. High-Performance Liquid Chromatography (HPLC) The major active components of CRT were determined by HPLC. Triptolide was purchased from the Chinese National Institute for Food and Drug Control and dissolved in 60% methanol for 10μg/mL standard solution for HPLC. Finely ground CRT powder (3.6 g) was mixed with 50 mL of methanol, processed using an ultrasonic cleaner (KQ-250DE, Kunshan Ultrasonic Instrument, China) at 250 w, 50 KHz for 1 h, and filtered. The residue was washed twice with 10 mL of methanol. The methanol solution was combined and loaded on neutral alumina column (length: 300 mm, inner diameter: 1.5 cm). The column was eluded with 60 mL of acetone. The eluate was collected, dried, dissolved in 5 mL of 60% methanol, and filtered through a 0.45-μm millipore filter. Samples of the filtrate were injected into an InertSustain C18 chromatographic column (4.6 x 250 mm, 5 μm) in a LC-20AT HPLC system (Shimadzu, Tokyo, Japan). HPLC was performed using the following parameters: mobile phase: acetonitrile-water (25:75); detection wavelength: 220 nm; flow rate: 1 mL/min; column temperature: 35°C; injection volume: 10 μL. The number of theoretical plates of the triptolide peak should be no less than 2000.Epicatechin was purchased from the Chinese National Institute for Food and Drug Control and dissolved in 60% methanol for 100μg/mL standard solution for HPLC. CRT powder (0.9 g) is mixed with 40 mL of water-saturated ethyl acetate, processed by an ultrasonic cleaner (250 w, 50 KHz) for 30 min, filtered, and dried. The residue was dissolved in 10 mL of 60% methanol. 
Aliquots of samples and standard solution (20 μL) were injected onto an InertSustain C18 column and HPLC was performed using the following parameters: mobile phase: 0.63% glacial acetic acid solution-methanol-acetonitrile (82.5:2:15.5); detection wavelength: 280 nm; flow rate: 1 mL/min; column temperature: 35°C. ### 2.3. Cells and Grouping Rat PMECs were purchased from the Bioleaf Biotech Inc. (Shanghai, China) and cultured in RPMI-1640 medium (Gibco, Rockville, MD, USA) supplemented with 5% fetal bovine serum (FBS, Gibco), 100μg/mL streptomycin and 100 U/mL penicillin penicillin/streptomycin (Invitrogen, Shanghai, China) at 37°C, 5% CO2 in an incubator. Cells were seeded into 96-well plates at a density of 1×105 cells/well and incubated overnight. Cells were then divided into the following 6 groups and cultured for an additional 48 h for subsequent examinations: normal control group without any treatment, TNF-α group cultured in medium containing 10 ng/mL TNF-α, Dex group in 1×10-6 M Dex and 10 ng/mL TNF-α, CRT high group in 1000 ng/mL CRT and 10 ng/mL TNF-α, CRT medium group in 500 ng/mL CRT and 10 ng/mL TNF-α, and CRT low group in 250 ng/mL CRT and 10 ng/mL TNF-α. ### 2.4. MTT Cell Proliferation Assay Cell proliferation in all groups was analyzed using MTT assay kit (KeyGen Biotech Inc., Nanjing, China). Briefly, cell medium was discarded and cells were incubated with 90μL of FBS-free medium and 20 μL of MTT at 37°C for 4 h. Cells were treated with 150 μL of DMSO for 10 min. The optical density (OD) was detected with a microplate reader under the wavelength of 490 nm. Three wells were prepared for each group. The cell proliferation inhibition rate was calculated using the following formula: cell proliferation inhibition rate (%) = (OD value in control group - OD value in experimental group)/OD value in control group × 100%. ### 2.5. Cell Apoptosis Assay Cell apoptosis in all groups was analyzed using Annexin V-FITC apoptosis detection kit (Bestbio Biotech Inc., Shanghai, China). In brief, cells were collected upon the completion of treatment. Cells were washed twice with PBS, resuspended in 400μL of 1× binding buffer, stained with 5 μL of FITC Annexin V in the dark at 4°C for 15 min. Cells were then incubated in 10 μL of PI in the dark for 5 more min. The fluorescence of cells was analyzed by a BD Accuri C6 flow cytometry (BD Biosciences, Franklin Lakes, NJ, USA) at 488 nm within 1 h. ### 2.6. Transmission Electron Microscopic Observation After treatment, cells in all groups were collected, washed twice with 0.1 M natrium cacodylate buffer solution, and fixed in 2% glutaraldehyde solution at 4°C for 5 min. Cells were collected and incubated in 2% glutaraldehyde solution at 4°C for 1 h. Cells were collected and incubated 3 times in 0.1 M natrium cacodylicum buffer solution at 4°C for 30 min each. Cells were then incubated in 1% osmic acid for 1 h, washed twice with 0.1 M natrium cacodylicum buffer solution for 15 min each, dehydrated with 30%, 50%, 70%, 80%, 90%, and 100% ethanol, infiltrated with epoxypropane for 15 min, and treated with embedding solution at 35°C for 6 h. Embedded cells were cut into 50-nm slices using an ultrathin slicer, stained in 3% silver citrate solution, and observed under an H7650 transmission electron microscope (Hitachi, Japan). ### 2.7. Immunofluorescence Staining Normal PMECs at exponential phase were inoculated into a six-well plate with sterile slides and incubated at 37°C, 5% CO2 for 24 h. 
### 2.5. Cell Apoptosis Assay

Cell apoptosis in all groups was analyzed using an Annexin V-FITC apoptosis detection kit (Bestbio Biotech Inc., Shanghai, China). In brief, cells were collected upon completion of treatment, washed twice with PBS, resuspended in 400 μL of 1× binding buffer, and stained with 5 μL of FITC Annexin V in the dark at 4°C for 15 min. Cells were then incubated with 10 μL of PI in the dark for 5 more min. The fluorescence of cells was analyzed on a BD Accuri C6 flow cytometer (BD Biosciences, Franklin Lakes, NJ, USA) at 488 nm within 1 h.

### 2.6. Transmission Electron Microscopic Observation

After treatment, cells in all groups were collected, washed twice with 0.1 M sodium cacodylate buffer, and fixed in 2% glutaraldehyde solution at 4°C for 5 min. Cells were collected, incubated in 2% glutaraldehyde solution at 4°C for 1 h, and then washed 3 times in 0.1 M sodium cacodylate buffer at 4°C for 30 min each. Cells were subsequently incubated in 1% osmium tetroxide for 1 h, washed twice with 0.1 M sodium cacodylate buffer for 15 min each, dehydrated with 30%, 50%, 70%, 80%, 90%, and 100% ethanol, infiltrated with epoxypropane for 15 min, and treated with embedding solution at 35°C for 6 h. Embedded cells were cut into 50-nm sections using an ultramicrotome, stained in 3% silver citrate solution, and observed under an H7650 transmission electron microscope (Hitachi, Japan).

### 2.7. Immunofluorescence Staining

Normal PMECs in exponential phase were inoculated into six-well plates with sterile slides and incubated at 37°C and 5% CO2 for 24 h. The cells were washed twice with prewarmed PBS, fixed with 4% paraformaldehyde solution for 20 min at room temperature, and washed 3 times with PBS. Cells were blocked with 5% BSA for 1 h at room temperature and incubated with primary antibodies (mouse anti-rat Claudin-5, 1:50; rabbit anti-rat ZO-1, 1:50, Invitrogen, USA) overnight at 4°C. Cells were washed 3 times with PBS and incubated with FITC-labeled secondary antibodies (goat anti-rabbit IgG, 1:100; goat anti-mouse IgG, 1:100, Zhongshan Golden Bridge Biotech Inc., Beijing, China) for 1 h at room temperature in the dark. Cells were counterstained with DAPI. The slides were sealed and observed under an Olympus FV1000 confocal microscope (Olympus, USA).

### 2.8. Quantitative Reverse Transcription PCR (qRT-PCR)

The expression of Claudin-5 and ZO-1 mRNA was determined by qRT-PCR. Briefly, total RNA was extracted from cells using TRIzol reagent (Invitrogen, USA). Reverse transcription was performed using a reverse transcription kit (Takara, Japan). The primers were designed and synthesized by Genscript Inc. (Nanjing, China): Claudin-5 forward 5’-CAGCGTTGGAAATTCTGGGTC-3’, reverse 5’-ACACTTTGCATTGCATGTGCC-3’; ZO-1 forward 5’-TGGTGCTCCTAAACAATC-3’, reverse 5’-TGCTATTACACGGTCCTC-3’; and β-actin forward 5’-CCCATCTATGAGGGTTACGC-3’, reverse 5’-TTTAATGTCACGCACGATTTC-3’. The qRT-PCR reaction mixture was prepared using a real-time PCR kit (TaKaRa, Japan): SYBR Premix Ex Taq II 10 μL, forward primer 0.8 μL, reverse primer 0.8 μL, cDNA template 2 μL, and dH2O 6.4 μL. Reactions were run on an ABI PRISM 7500 Fluorescent Quantitative PCR System (Applied Biosystems, Foster City, CA, USA) under the following conditions: 95°C for 5 s, followed by 45 cycles of 95°C for 5 s, 57°C (Claudin-5) or 60°C (ZO-1) for 30 s, and 72°C for 40 s. The experiment was performed in triplicate, and the expression level was calculated using the 2^-ΔΔCt method. β-actin was used as the internal control.
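The relative-expression calculation referenced above follows the standard 2^-ΔΔCt scheme. The short Python sketch below makes that arithmetic explicit; the Ct values are hypothetical placeholders, with β-actin as the internal control and the untreated normal-control group as the calibrator, as in the protocol.

```python
# Relative expression by the 2^-ddCt method, as used above. Ct values are
# hypothetical triplicates; beta-actin is the internal control and the
# normal-control group is the calibrator.
import numpy as np

def relative_expression(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
    """Fold change of a target gene versus the calibrator (control) group."""
    d_ct = np.mean(ct_target) - np.mean(ct_actin)            # normalize to beta-actin
    d_ct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_actin_ctrl)
    dd_ct = d_ct - d_ct_ctrl                                 # calibrate to control group
    return 2.0 ** (-dd_ct)

# Hypothetical example: Claudin-5 in the TNF-a group vs. normal control
fold = relative_expression(
    ct_target=[26.1, 26.3, 26.0], ct_actin=[17.2, 17.1, 17.3],
    ct_target_ctrl=[24.6, 24.5, 24.7], ct_actin_ctrl=[17.0, 17.2, 17.1],
)
print(f"fold change = {fold:.2f}")  # < 1 indicates downregulation vs. control
```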
### 2.9. Western Blot

Total protein was extracted from cells using a total protein extraction kit (Keygen Biotech, Nanjing, China) and quantified using a BCA protein assay kit (Keygen Biotech). Equal aliquots (20 μg) of protein were separated by SDS-PAGE and transferred to polyvinylidene difluoride membranes. The membranes were blocked with 5% skim milk for 2 h and incubated with primary antibodies (mouse anti-rat Claudin-5, 1:500, Invitrogen, USA; rabbit anti-rat ZO-1, 1:250, Invitrogen, USA; mouse anti-rat β-actin, 1:500, Zhongshan Golden Bridge Biotech.) overnight at 4°C. The membranes were washed 3 times with 1× TBST and incubated with HRP-conjugated secondary antibodies (goat anti-rabbit IgG, 1:5000; goat anti-mouse IgG, 1:5000, Zhongshan Golden Bridge Biotech.) at room temperature for 2 h. Immunoreactivity was detected using the NCI 5079 ECL detection system (Thermo Fisher, USA). Images were acquired with a ChemiGenius Bioimaging System (Syngene, MD, USA), and the relative expression of proteins was quantified using ImageJ (National Institutes of Health, Bethesda, USA) with β-actin as the internal reference.

### 2.10. Cell Transfection

siRNAs were designed and synthesized by Genscript Biotech. (Nanjing, China): si-Claudin-5 forward: 5’-GUCCGGGAGUUCUAUGAUCCA-3’, reverse: 5’-GATCATAGAACTCCCGGACTA-3’; si-ZO-1 forward: 5’-UGUUGAACAUGCUUUUGCUGT-3’, reverse: 5’-AGCAAAAGCAUGUUCAACATT-3’; si-negative control (NC) forward: 5’-UUCUCCGAACGUGUCACGUTT-3’, and reverse: 5’-ACGUGACACGUUCGGAGAATT-3’. Normal PMECs at 70% confluence in 6-well plates were transfected with 30 nM of siRNAs or si-NC using Lipofectamine 3000 transfection reagent (Invitrogen, Shanghai, China) according to the manufacturer’s instructions. Cells were then incubated in medium containing 500 ng/mL CRT and 10 ng/mL TNF-α. After 48 h, cells were collected, and the expression of Claudin-5 and ZO-1 was determined by Western blot as described above. Cell proliferation and apoptosis were also measured as described above.

### 2.11. Statistical Analysis

Data were expressed as mean ± standard deviation. Data analysis was performed using SPSS 17.0 statistical software (IBM SPSS, Chicago, IL, USA). Differences among groups were analyzed by one-way analysis of variance (ANOVA) followed by the post hoc SNK-q test. Rates were compared by the chi-square test. p < 0.05 was considered statistically significant.
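As a concrete illustration of this analysis pipeline, the Python sketch below runs a one-way ANOVA across groups, a post hoc comparison, and a chi-square test on rates. The SNK-q (Student-Newman-Keuls) test used in the paper is not shipped with SciPy or statsmodels, so Tukey's HSD is used here as a widely available stand-in; all input values are hypothetical.

```python
# Minimal sketch of the statistical workflow described above, using
# hypothetical OD readings (three wells per group, as in Section 2.4).
# SNK-q is not available in SciPy/statsmodels, so Tukey's HSD serves as
# a stand-in post hoc test here.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "control": [0.92, 0.95, 0.90],   # hypothetical OD values at 490 nm
    "TNF-a":   [0.55, 0.58, 0.52],
    "Dex":     [0.75, 0.72, 0.78],
    "CRT-mid": [0.82, 0.85, 0.80],
}

# One-way ANOVA across all groups
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post hoc pairwise comparisons (alpha = 0.05)
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Rates (e.g., apoptotic vs. viable cell counts) compared by chi-square test
counts = np.array([[12, 988], [86, 914]])  # [apoptotic, viable] per group
chi2, p, dof, _ = stats.chi2_contingency(counts)
print(f"Chi-square: chi2 = {chi2:.2f}, p = {p:.4f}")
```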
## 3. Results

### 3.1. Major Components of CRT

As shown in Figure 1, the HPLC data suggest that each tablet (0.18 g) contains 3.04 μg of triptolide (C20H24O6) and 0.13 mg of epicatechin (C15H14O6).

Figure 1: HPLC analyses of the major active components of CRT: triptolide (a) and epicatechin (b), detected at wavelengths of 220 nm and 280 nm, respectively.

### 3.2. CRT Stimulates the Growth of PMECs

The growth of PMECs was compared by MTT assay. As shown in Figure 2(a), TNF-α significantly increased the growth inhibition rate of PMECs compared with the normal control group (p = 0.016). The growth inhibition rate of PMECs in the Dex and all CRT groups was significantly lower than that in the TNF-α group (p = 0.033, 0.045, 0.020, and 0.039, respectively). The growth inhibition rate in the CRT medium group was significantly lower compared with the Dex group (p = 0.032), whereas the other two CRT groups showed rates similar to that in the Dex group (p > 0.05).

Figure 2: CRT treatment reduced the cell proliferation inhibition rate and apoptosis rate induced by TNF-α. Rat PMEC monolayers were divided into 6 groups: normal control, TNF-α, Dex, CRT high, CRT medium, and CRT low groups. After 48 h of incubation, cell proliferation (a) and apoptosis (b) were detected by MTT assay and flow cytometry, respectively. Cell proliferation inhibition rate (%) = (OD value in control group - OD value in experimental group)/OD value in control group × 100%. ∗, P<0.05, TNF-α group vs. normal control group; #, P<0.05, treatment groups vs. TNF-α group; △, P<0.05, CRT groups vs. Dex group.

### 3.3. CRT Reduces the Apoptosis of PMECs

Further, the effects of CRT on the apoptosis of PMECs were evaluated by flow cytometry. As shown in Figure 2(b), the apoptosis rate in the TNF-α group was significantly higher than that in the normal control group (p = 0.026). The apoptosis rate of PMECs in all 4 treatment groups was significantly reduced compared with the TNF-α group (p = 0.023, 0.037, 0.019, and 0.042, respectively). The apoptosis rate in the CRT medium group was significantly lower compared with the Dex group (p = 0.028), but the other two CRT groups showed rates similar to that in the Dex group (p > 0.05).

### 3.4. CRT Alleviates the TNF-α-Induced Cell Injury

The effect of CRT on the ultrastructure of PMECs was observed under a transmission electron microscope (Figure 3). In the normal control group, cells contained abundant cytoplasm with numerous intact organelles (mitochondria, endoplasmic reticulum, and Golgi) and few pinocytotic vesicles. After TNF-α treatment, cells exhibited severe injuries: the number of organelles was markedly decreased, swelling of mitochondria and endoplasmic reticulum was clearly observed, mitochondrial cristae disappeared, and abundant vacuole-like structures formed. Nuclear fragmentation/lysis and apoptosis were observed in some cells. In the Dex and the 3 CRT groups, cells exhibited more normal morphology and less severe damage compared with the TNF-α group. Although mitochondrial swelling was still noticeable, the mitochondrial structure was intact, and the number of vacuoles in the cytoplasm was greatly reduced.
The CRT medium group contained noticeably more intracellular organelles and fewer vacuoles than the other two CRT groups, indicating milder cell injury.

Figure 3: CRT alleviated the TNF-α-induced cell injury. Rat PMEC monolayers were divided into 6 groups: normal control (a), TNF-α (b), Dex (c), CRT high (d), CRT medium (e), and CRT low (f) groups. After 48 h of incubation, intracellular microstructures were observed under an H7650 transmission electron microscope (20000×). Arrows and triangles mark the mitochondria and endoplasmic reticulum, respectively.

### 3.5. CRT Upregulates the Expression of the Tight Junction Proteins Claudin-5 and ZO-1

The localization of the tight junction proteins Claudin-5 and ZO-1 was detected by immunofluorescence assay using a confocal microscope. As shown in Figure 4, linear fluorescence staining of Claudin-5 and ZO-1 was observed along the endothelial cell membrane, indicating that both proteins are localized at the edge of endothelial cells. Furthermore, abundant diffuse fluorescence was also detected among the cells, suggesting that both proteins coordinately form the intercellular tight junction structure. To clarify the mechanism of action of CRT against TNF-α-induced cell injury, we further examined the expression of Claudin-5 and ZO-1 mRNA and protein in the different groups. The expression of ZO-1 and Claudin-5 mRNA in the TNF-α group was significantly lower compared with the normal control group (p = 0.034 and 0.008, respectively, Figure 5(a)). Claudin-5 and ZO-1 mRNA expression in the Dex and the 3 CRT groups was remarkably increased compared with the TNF-α group (all p < 0.01 or p < 0.001). Claudin-5 and ZO-1 mRNA expression in the CRT medium group was significantly higher compared with the Dex group (p ≤ 0.001 and p = 0.004), whereas the other two CRT groups showed levels similar to those in the Dex group (p > 0.05). Consistently, the expression of Claudin-5 and ZO-1 protein showed a change pattern similar to that of the mRNA expression (Figure 5(b)). To further confirm that the therapeutic effects of CRT are mediated by upregulating Claudin-5 and ZO-1 expression, PMECs were transfected with siRNAs targeting Claudin-5 or ZO-1 and incubated in medium containing TNF-α and CRT. Western blot results showed that ZO-1 and Claudin-5 expression was successfully downregulated by the siRNAs (p = 0.032 and 0.015, respectively, Figure 6(a)). The cell proliferation inhibition and apoptosis rates in the TNF-α+CRT+si-ZO-1 and TNF-α+CRT+si-Claudin-5 groups were significantly higher compared with the TNF-α+CRT+si-NC group (all p < 0.05, Figures 6(b) and 6(c)), suggesting that silencing of ZO-1 and Claudin-5 expression blocked the therapeutic effects of CRT. In other words, the protective effects of CRT were mediated via modulation of Claudin-5 and ZO-1 expression.

Figure 4: Analysis of the localization of the tight junction proteins ZO-1 (a) and Claudin-5 (b) in normal rat PMECs by immunofluorescence assay. Normal rat PMECs were subjected to DAPI and immunofluorescence staining and observed under a confocal microscope. Arrows indicate the linear fluorescence of Claudin-5 and ZO-1 along the endothelial cell membrane, suggesting that both proteins are localized at the edge of endothelial cells.
Abundant diffuse fluorescence was also detected among the cells, suggesting that both proteins coordinately form the intercellular tight junction structure.

Figure 5: CRT upregulates the expression of the tight junction proteins Claudin-5 and ZO-1. Rat PMEC monolayers were divided into 6 groups: normal control, TNF-α, Dex, CRT high, CRT medium, and CRT low groups. After 48 h of incubation, the expression of Claudin-5 and ZO-1 mRNA (a) and protein (b) was detected by qRT-PCR and Western blot, respectively. ∗, P<0.05, ∗∗, P<0.01, TNF-α group vs. normal control group; #, P<0.05, ##, P<0.01, treatment groups vs. TNF-α group; △, P<0.05, △△, P<0.01, CRT groups vs. Dex group.

Figure 6: Therapeutic effects of CRT were greatly reduced by siRNAs. Rat PMECs at 70% confluence were transfected with siRNAs or si-NC. Cells were incubated in medium containing 500 ng/mL CRT and 10 ng/mL TNF-α. After 48 h, cells were collected, and the expression of Claudin-5 and ZO-1 was determined by Western blot (a). Cell proliferation (b) and apoptosis (c) were also measured. ∗, p < 0.05, TNF-α+CRT+si-ZO-1 or si-Claudin-5 vs. TNF-α+CRT+si-NC.
## 4. Discussion

TNF-α released early during ALI acts on PMECs through the blood circulation, damaging the cells and the alveolar-capillary barrier and thus leading to lung injury [17, 18]. In this study, TNF-α was added to the cell culture medium to mimic the biological conditions under which ALI develops. Electron microscopy showed that TNF-α treatment damaged organelle structure and induced mitochondrial and endoplasmic reticulum swelling; mitochondrial cristae nearly disappeared, and abundant vacuole-like structures were observed. Moreover, TNF-α significantly inhibited the growth and stimulated the apoptosis of PMECs, indicating that TNF-α successfully induced an acute endothelial cell injury model.

In the current study, the main components of colquhounia root tablets were identified by HPLC as triptolide and epicatechin. Extensive studies have reported the anti-inflammatory [19, 20], immunosuppressive [21, 22], and antitumor [23, 24] effects of triptolide and epicatechin. Studies have also shown that triptolide and epicatechin have protective effects against lung injury [25, 26]. Although triptolide and epicatechin are the two major components of CRT, the herbal medicine contains several other active ingredients such as alkaloids, terpenoids, and lactones. Moreover, in clinical practice, CRT has been widely used to treat nephrotic syndrome and rheumatoid arthritis [27]. Our preliminary study found that CRT can effectively alleviate pulmonary edema [28]. Therefore, instead of focusing on the pharmacological effects of individual components, we investigated the role and mechanism of CRT in protecting against lung injury, aiming to provide a basis for its clinical application in ALI treatment. Our data showed that CRT-treated cells had significantly milder mitochondrial and endoplasmic reticulum swelling and fewer intracellular vacuoles.
Intact organelle structures were observed under the electron microscope. Furthermore, cells in the CRT groups exhibited higher proliferation and lower apoptosis rates compared with the TNF-α group, suggesting that CRT effectively protected the cells from TNF-α-induced injury.

The tight junction is an intercellular junction complex that is widely present in the blood-brain barrier, intestinal barrier, retinal barrier, glomerular basement membrane barrier, and alveolar-capillary barrier [29–33]. It plays a key role in regulating the transport of water and solute molecules and maintaining tissue permeability [34, 35]. Claudins are important structural molecules of tight junctions. Claudin-5 is strongly expressed in PMECs and regulates paracellular permeability [36]. Overexpression of Claudin-5 in PMECs and cerebral vascular endothelial cells reduces the permeability of tight junctions and thus protects endothelial barrier function [36, 37]. Studies have found that reduced Claudin-5 expression in endothelial cells results in a rapid increase in the permeability of pulmonary blood vessels [36]. As a key structural component of the tight junction, ZO-1 directly affects pulmonary barrier permeability. When ZO-1 expression is inhibited, the transepithelial electrical resistance of mouse PMECs is markedly decreased, leading to increased pulmonary permeability and impaired lung barrier function [38]. The intracellular domains of Claudin-5 and ZO-1 interact with each other to maintain the stability of the tight junction structure. In this study, immunofluorescence assay demonstrated linear fluorescence staining of both Claudin-5 and ZO-1 along the endothelial cell membrane and abundant diffuse fluorescence among these cells, suggesting that both proteins form the intercellular tight junction structure and coordinately regulate paracellular permeability.

Studies have suggested that TNF-α can downregulate the expression of several tight junction proteins in the lungs of mice, including Claudin-2, -4, and -5 and ZO-1, increasing lung barrier permeability [39]. Consistently, our results showed a significant decrease in the expression of Claudin-5 and ZO-1 mRNA and protein in the TNF-α group, indicating that TNF-α destroyed the integrity of the tight junction by inhibiting the expression of its structural proteins. In contrast, Claudin-5 and ZO-1 expression in the CRT groups was significantly enhanced compared with the TNF-α group, suggesting that the protective effect of CRT was mediated by stabilizing the structure of the tight junction and the endothelial barrier. It is worth noting that the CRT medium group showed stronger therapeutic effects than the Dex group. The mechanism of action of Dex is mediated through the pituitary-adrenal system: Dex regulates the expression of anti-inflammatory genes by binding to the glucocorticoid receptor (GR) [40]. In contrast, CRT does not exert its anti-inflammatory effect via the pituitary-adrenal system [41], but instead reduces oxidative stress and inflammation by regulating the NF-κB signaling pathway [42]. The NF-κB pathway directly regulates tight junctions, and its activation increases paracellular permeability and impairs barrier function [43, 44]. TNF-α can promote the inflammatory response by activating the NF-κB pathway and thus increase paracellular permeability [45].
Therefore, we speculate that the stronger therapeutic effects of the CRT medium group compared with the Dex group may be associated with its modulation of the NF-κB pathway.

One may also notice that the CRT medium group exhibited significantly lower growth inhibition and apoptosis rates compared with the CRT high and low groups. Electron microscopic images also revealed much milder damage of PMECs in the CRT medium group compared with the other two CRT groups. Furthermore, the highest Claudin-5 and ZO-1 mRNA and protein expression among the three CRT groups was observed in the CRT medium group. Altogether, our results suggest that the medium dose of CRT exerted the best therapeutic effects. Studies have shown that high-dose CRT exhibits cytotoxic effects and inhibits cell proliferation [13, 14]. In our preliminary test, we also found that high-dose CRT inhibited the proliferation of alveolar type II epithelial cells. This may explain why the CRT high group in the current study showed weaker therapeutic effects than the CRT medium group. It is thus important to find the optimal CRT treatment dose in clinical practice.

## 5. Conclusion

In summary, this study found that CRT effectively reduces the TNF-α-induced growth inhibition and apoptosis rates of PMECs. The protective effects of CRT against TNF-α-induced injury might be mediated via stimulating the expression of Claudin-5 and ZO-1 in PMECs, which stabilizes the structure of the tight junction and the endothelial barrier. As a natural herbal medicine, CRT might be a promising, effective, and safe therapeutic agent to substitute for glucocorticoids in the treatment of ALI. Future studies are needed to investigate the signaling mechanisms by which CRT regulates tight junction proteins.

--- *Source: 1024634-2018-11-18.xml*
# A Rare Complication of Thymoma: Pure White Cell Aplasia in Good’s Syndrome

**Authors:** Kim Uy; Elizabeth Levin; Pawel Mroz; Faqian Li; Surbhi Shah
**Journal:** Case Reports in Hematology (2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1024670

---

## Abstract

Pure white cell aplasia (PWCA) is a rare manifestation of thymoma. It is characterized by agranulocytosis with absent myeloid precursors in the bone marrow and normal hematopoiesis for the other cell lines. Here we describe a 65-year-old female patient who presented with three days of fever and night sweats. Chest CT revealed an anterior mediastinal mass. A biopsy of the mass confirmed a diagnosis of thymoma, mixed type A and B2. The patient developed severe neutropenia, and her bone marrow revealed significantly decreased neutrophil-lineage cells, rare to absent B cells, and defective T cells, consistent with PWCA. Following thymectomy, complete resolution of PWCA was achieved via multimodality therapy of intravenous immunoglobulins, granulocyte colony-stimulating factor, and an immunosuppressant. This report highlights the complexity of care regarding treatment choices and the decision to perform thymectomy in patients presenting with PWCA.

---

## Body

## 1. Background

Primary tumors of the thymus gland are rare neoplasms [1]. When they occur, the most common type is benign thymoma. The thymus plays a central role in the development of the adaptive immune system, particularly in the maturation of T lymphocytes. Hence, patients with thymoma often present with various paraneoplastic syndromes [2, 3]. Good’s syndrome, or hypogammaglobulinemia, is a recognized paraneoplastic syndrome of thymoma, and it is commonly associated with pure red cell aplasia. In contrast, pure white cell aplasia (PWCA) is a rare manifestation, particularly in the setting of Good’s syndrome. PWCA is characterized by agranulocytosis with absent myeloid precursors in the bone marrow and preserved hematopoiesis for the other cell lines [4]. Here we report a thymoma patient presenting with Good’s syndrome and PWCA.

## 2. Case Presentation

A previously healthy 65-year-old Caucasian woman presented with three days of fever and night sweats. On exam, she had oral thrush and a truncal morbilliform rash. Laboratory workup showed severe neutropenia with WBC 1.5 K/μL and ANC 0 K/μL, and low serum immunoglobulins with IgA 54 mg/dL, IgM 83 mg/dL, and IgG 455 mg/dL. Other standard laboratory workup was unremarkable, including detailed infectious and rheumatologic evaluations.

A bone marrow biopsy revealed decreased mature granulocytes and granulocytic precursors and increased CD3-positive T lymphocytes in small lymphoid aggregates (Figure 1). Flow cytometry showed rare to absent B cells and no aberrant immunophenotype on T cells. The patient was started on intravenous immunoglobulin (IVIg) and filgrastim without improvement in WBC. A skin biopsy of the rash revealed a dermatitis suggestive of drug eruption, but a viral exanthem could not be ruled out.

Figure 1: Bone marrow biopsy: bone marrow aspiration of the posterior iliac crest revealed normocellular bone marrow (30–40%) with granulocytopenia and increased CD3+ T lymphocytes, some in aggregates (a). Flow cytometry showed rare to absent B cells and no aberrant immunophenotype on T cells (b).

Chest CT scan revealed an 8 cm circumscribed, heterogeneously enhancing solid mass in the anterior mediastinum, suspicious for a neoplastic process (Figure 2).
It also showed multiple indeterminate nodules throughout the bilateral lungs. Given the severe neutropenia of unknown duration, these nodules were concerning for invasive fungal infection. Bronchoalveolar lavage was performed. Fungal cultures were negative after four weeks. Serum beta-D-glucan, galactomannan antigen, Histoplasma antigen, and Cryptococcus antigen were also negative. The patient was empirically started on levofloxacin and voriconazole. A CT-guided mediastinal mass biopsy was pursued, revealing a network of cytokeratin AE1/AE3-positive epithelial cells mixed with CD3+, TdT+, and CD1a+ lymphocytes and rare CD20+ B cells, consistent with thymoma type B2. The biopsy also showed spindle cells consistent with thymoma type A (Figure 3). A diagnosis of thymoma, mixed type A and B2, was made.

Figure 2: Chest CT: a 4.1 × 8.3 × 7.9 cm mass in the anterior mediastinum with lobulations and dense enhancement (a). Multiple indeterminate nodules throughout the bilateral lungs (b).

Figure 3: Thymoma mixed type A and B2: grossly and microscopically encapsulated thymoma type A (a) showing spindle cells and type B2 (b) showing mixed epithelial cells and lymphocytes. The lymphocytes are positive for CD3, with rare CD20-positive B cells. These lymphocytes are also positive for TdT and CD1a.

After multidisciplinary discussions, thymectomy was pursued despite concerns about poor wound healing in the setting of severe neutropenia. Pathology of the surgical specimen revealed an encapsulated tumor consistent with thymoma, modified Masaoka stage I. The specimen had tumor-free margins, and the associated lymph nodes were benign. The patient recovered well following thymectomy, and filgrastim was discontinued because of its limited effect during the preoperative period. The patient was continued on prophylactic antibiotics as she remained neutropenic.

On postoperative follow-up, the patient had persistent neutropenia. Therefore, cyclosporine was initiated for immunomodulation, and filgrastim was reintroduced to boost myeloid stem cells. Within one week, the WBC increased from 1.0 to 6.7 K/μL, and the ANC increased from 0.0 to 2.8 K/μL. However, when granulocyte colony-stimulating factor (G-CSF) was discontinued, the WBC decreased to 2.2 K/μL and the ANC to 0.3 K/μL over a 3-day period. G-CSF was restarted for an additional month until cyclosporine reached maintenance levels. Over the next 4 months, the patient remained on a maintenance dose of cyclosporine with a target level of 200–400 ng/mL while being monitored for toxicity. Subsequently, she was successfully weaned off all immune modulators with a self-sustaining WBC after a total of 6 months of therapy. Notably, the patient suffered multiple recurrent respiratory infections, with CT showing infiltrates in different lung fields, and required short courses of antibiotics throughout her recovery. However, no major infectious complications were observed.

## 3. Discussion

In the United States, the overall incidence of thymoma is 0.13 per 100,000 person-years [1]. Thymoma affects males and females equally, with a rising incidence in the fourth and fifth decades and a peak in the seventh decade of life [2]. Thymomas are neoplasms of thymic epithelial cells, typically with mixed cortical and medullary properties. Their classification is based on the non-neoplastic lymphocyte content and epithelial cell features, categorized into types A, AB, B1, B2, and B3 [5].
Surgical removal of thymoma is the mainstay of treatment, particularly for stage I and II thymoma [6].

Approximately 6–11% of all thymoma cases present with hypogammaglobulinemia or Good’s syndrome [2]. Principal findings of Good’s syndrome include few or absent B cells, an inverted CD4+:CD8+ T-cell ratio, CD4 T-cell lymphopenia, and impaired mitogenic function of T cells. Despite low immunoglobulin synthesis, up to 30% of Good’s syndrome presentations are associated with hematologic disorders [2, 3]. The most common associations are pure red cell aplasia and myasthenia gravis [2]. Conversely, pure white cell aplasia (PWCA) in the setting of Good’s syndrome is a rare occurrence, accounting for 1.1% of patients with thymoma [2]. PWCA is a hematologic disorder characterized by agranulocytosis with absent neutrophil-lineage cells in the bone marrow and preserved erythropoiesis and megakaryocytopoiesis. Most PWCA cases are associated with type A and mixed type AB thymomas [7]. PWCA has also been found to be associated with drugs, viruses, and other autoimmune disorders [8–11].

Although the etiology of PWCA seen in thymoma remains elusive in the literature, there is consensus that PWCA is of autoimmune origin. Many speculate that the presence of thymoma leads to a loss of the autoimmune regulator, thus creating a “window of opportunity” for autoimmune disease to develop [3, 12]. Two mechanistic pathways have been proposed to support this theory. The first proposes that dysregulated production of cytokines, possibly mediated by neoplastic thymic stromal cells, influences the growth and differentiation of both the thymus and precursor B cells in the bone marrow [3]. For example, thymic stromal lymphopoietin (TSLP), an IL-7-like cytokine produced mainly by thymic epithelial cells, has wide-ranging impacts on the development of pre-B cells in the bone marrow, as well as of regulatory T and Th17 cells in the thymus. Wang et al. found that the levels of TSLP mRNA and protein expression were lower in patients with thymoma-associated myasthenia gravis than in thymoma patients without myasthenia gravis [13]. Other studies of patients with thymic neoplasia also detected autoantibodies against cytokines and a decrease in in vitro cytokine production in response to T-cell stimulation [14, 15]. These findings suggest that decreased cytokine expression contributes to T-cell imbalance in the thymus and affects precursor B-cell development in the bone marrow. The second pathway proposes that the abnormal environment of the thymoma prevents non-neoplastic T cells from undergoing appropriate thymic central tolerance and maturation, thus leading to activation of autoreactive T cells [16]. When these cells escape into the circulation, they can precipitate autoimmune reactions and contribute to abnormal activation of B cells, yielding lymphocyte- and antibody-mediated toxicity to myelomonocytic precursor cells [3]. Interestingly, our patient’s bone marrow biopsy showed rare to absent B cells, lack of an aberrant immunophenotype on T cells, and agranulocytosis in the absence of positive rheumatologic tests. These findings suggest that potential inhibitory factors diminish the growth of progenitor cells of the myelocytic and lymphoid lineages.

This case highlights the complexity of care regarding treatment choices in patients presenting with PWCA, particularly surgical intervention in the setting of neutropenia and concerns about wound healing.
While thymectomy does not necessarily reverse the immunological abnormalities in Good’s syndrome, some have observed clinical improvement following resection [17]. Our case demonstrated that while G-CSF and IVIg were completely ineffective prior to thymectomy, the bone marrow was responsive postoperatively. With regard to immunomodulatory agents, the repertoire is broad, as the literature describing the use of chemotherapeutic agents is limited [18]. In the presented case, cyclosporine treatment at minimal doses achieved complete resolution of PWCA. Patients with PWCA and Good’s syndrome are at increased risk of fatal infections due to concurrent leukopenia and immunodeficiency, underscoring the importance of supportive care with antibiotics and IVIg. Lastly, as with other paraneoplastic presentations of thymoma, these patients run a risk of recurrence and should therefore remain under surveillance.

--- *Source: 1024670-2019-10-13.xml*
1024670-2019-10-13_1024670-2019-10-13.md
11,453
A Rare Complication of Thymoma: Pure White Cell Aplasia in Good’s Syndrome
Kim Uy; Elizabeth Levin; Pawel Mroz; Faqian Li; Surbhi Shah
Case Reports in Hematology (2019)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2019/1024670
1024670-2019-10-13.xml
--- ## Abstract Pure white cell aplasia (PWCA) is a rare manifestation of thymoma. It is characterized by agranulocytosis with absent myeloid precursors in the bone marrow and normal hematopoiesis for other cell lines. Here we describe a 65-year-old female patient who presented with three days of fever and night sweat. Chest CT revealed an anterior mediastinal mass. A biopsy of the mass confirmed a diagnosis of thymoma mixed type A and B2. The patient developed a severe neutropenia, and her bone marrow revealed significantly decreased neutrophil-lineage cells, rare to absent B cells, and defective T cells, consistent with PWCA. Following thymectomy, a complete resolution of PWCA was achieved via multimodality therapy of intravenous immunoglobulins, granulocyte colony-stimulating factor, and immunosuppressant. This report highlights the care complexity regarding treatment choices and decision to perform thymectomy in patients presenting with PWCA. --- ## Body ## 1. Background Primary tumors of the thymus gland are rare neoplasm [1]. If occurs, its most common type is benign thymoma. Thymus plays a central role in the development of adaptive immune system, particularly in the maturation process of T lymphocytes. Hence, patients with thymoma often present with a varying type of paraneoplastic syndromes [2, 3]. Good’s syndrome or hypogammaglobulinemia is a recognized paraneoplastic syndrome of thymoma, and it is commonly associated with pure red cell aplasia. In contrast, pure white cell aplasia (PWCA) is a rare manifestation, particularly in setting of Good’s syndrome. PWCA is characterized by agranulocytosis with absent myeloid precursors in the bone marrow and preserved hematopoiesis for other cell lines [4]. Here we report a thymoma patient presenting with Good’s syndrome and PWCA. ## 2. Case Presentation A previously healthy 65-year-old Caucasian woman presented with three days of fever and night sweats. On exam, she had oral thrush and truncal morbilliform rash. Laboratory workup showed severe neutropenia with WBC 1.5 K/μL, ANC 0 K/μL, and low serum immunoglobulin with IgA 54 mg/dL, IgM 83 mg/dL, and IgG 455 mg/dL. Other standard laboratory workup was unremarkable including detailed infectious and rheumatologic evaluations.A bone marrow biopsy revealed decreased mature granulocytes and granulocytic precursors and increased CD3-positive T lymphocytes in small lymphoid aggregates (Figure1). Flow cytometry showed rare to absent B cells and no aberrant immunophenotype on T cells. The patient was started on intravenous immunoglobulin (IVIg) and filgrastim without improvement in WBC. A skin biopsy of the rash revealed a dermatitis, suggestive of drug eruption but could not rule out viral exanthem.Figure 1 Bone marrow biopsy: bone marrow aspiration of the posterior iliac crest revealed normocellular bone marrow 30–40% with granulocytopenia and increased CD3+ T lymphocytes some in aggregates (a). Flow cytometry showed rare to absent B cells and no aberrant immunophenotype on T cells (b). (a) (b)Chest CT scan revealed an 8 cm circumscribed heterogeneously enhancing solid mass in the anterior mediastinum, suspicious for neoplastic process (Figure2). It also showed multiple indeterminate nodules throughout the bilateral lungs. Given the severe neutropenia of unknown duration, these nodules were concerning for invasive fungal infection. Bronchoalveolar lavage was performed. Fungal cultures were negative after four weeks. 
Serum beta-D-glucan, galactomannan antigen, Histoplasma antigen, and Cryptococcus antigens were also negative. The patient was empirically started on levofloxacin and voriconazole. A CT-guided mediastinal mass biopsy was pursued, revealing a network of cytokeratin AE1/AE3-positive epithelial cells mixed with CD3+, TdT+, and CD1a+ lymphocytes and rare CD20+ B cells. These were consistent with thymoma type B2. The biopsy also showed spindle cells consistent with thymoma type A (Figure 3). A diagnosis of thymoma mixed type A and B2 was given.Figure 2 Chest CT: a 4.1 × 8.3 × 7.9 cm mass in the anterior mediastinum with lobulations and dense enhancement (a). Multiple indeterminate nodules throughout the bilateral lungs (b). (a) (b)Figure 3 Thymoma mixed type A and B2: grossly and microscopically encapsulated thymoma type A (a) showing spindle cells and type B2 (b) showing mixed epithelial cells and lymphocytes. The lymphocytes are positive for CD3 with rare CD20-positive B cells. These lymphocytes are also positive for TdT and CD1a. (a) (b)After multidisciplinary discussions, thymectomy was pursued despite concerns of poor wound healing in the setting of severe neutropenia. Pathology on the surgical specimen revealed encapsulated tumor consistent with thymoma, modified Masaoka stage I. Specimen had free tumor margins, and associated lymph nodes were benign. The patient recovered well following thymectomy, and filgrastim was discontinued due to its limited effect during the preoperative period. The patient was continued on prophylactic antibiotics as she remained neutropenic.On postop follow-up, the patient had persistent neutropenia. Therefore, cyclosporine was initiated for immunomodulation, and filgrastim was reintroduced to boost myeloid stem cells. In one week, the WBC increased to 6.7 from 1.0 K/μL and ANC increased to 2.8 from 0.0 K/μL. However, when granulocyte colony-stimulating factor (G-CSF) was discontinued, WBC decreased to 2.2 K/μL and ANC decreased to 0.3 K/μL over a 3-day period. G-CSF was restarted for an additional one month until cyclosporine reached maintenance levels. In the next 4 months, the patient remained on a maintenance dose of cyclosporine with a target level of 200–400 ng/mL while monitoring toxicity. Subsequently, she was successfully weaned off all immune modulators with self-sustaining WBC after a total of 6-month therapy. Notably, the patient suffered multiple recurrent respiratory infections with CT showing an infiltrate in different lung fields and required short courses of antibiotics throughout her recovery. However, no major infectious complications were observed. ## 3. Discussion In the United States, the overall incidence of thymoma is 0.13 per 100,000 person-years [1]. Thymoma equally affects males and females with a rising incident in the fourth or fifth decade and peak in the seventh decade of life [2]. Thymomas are neoplasms of thymic epithelial cells typically with mixed cortical and medullary properties. Its classifications are based on the non-neoplastic lymphocyte content and epithelial cell features, which are categorized into type A, AB, B1, B2, and B3 [5]. Surgical removal of thymoma is the mainstay of treatment, particularly for stage I and II thymoma [6].Approximately, 6–11% of all thymoma cases present with hypogammaglobulinemia or Good’s syndrome [2]. Principal findings of Good’s syndrome include few or absent B cells, inverted CD4+ : CD8+ T cell ratio, CD4 T-cell lymphopenia, and impaired mitogenic function of T cells. 
Despite low immunoglobulin synthesis, up to 30% of Good’s syndrome presentations are associated with hematologic disorders [2, 3]. The most common associations are pure red cell aplasia and myasthenia gravis [2]. Conversely, pure white cell aplasia (PWCA) in setting of Good’s syndrome is a rare occurrence, accounting for 1.1% of patients with thymoma [2]. PWCA is a hematologic disorder characterized by agranulocytosis with absent neutrophil-lineage cells in the bone marrow and preserved erythropoiesis and megakaryocytopoiesis. Most PWCA are associated with type A and mixed type AB thymomas [7]. PWCA has also been found to be associated with drugs, viruses, and other autoimmune disorders [8–11].Although the etiology of PWCA seen in thymoma remains elusive in the literature, it is consensus that PWCA is of autoimmune origin. Many speculate that presence of thymoma leads to a loss of autoimmune regulator, thus creating a “window of opportunity” for autoimmune disease to develop [3, 12]. Two mechanistic pathways had been proposed to support this theory. The first pathway proposes that dysregulated production of cytokines, possibly mediated by neoplastic thymic stromal cells, influences the growth and differentiation of both the thymus and precursor B cells in the bone marrow [3]. For example, thymic stromal lymphopoietin (TSLP), an IL-7-like cytokine produced mainly by thymic epithelial cells, has wide-ranging impacts on the development of pre-B cells in the bone marrow, as well as regulatory T and Th17 cells in the thymus. Wang et al. found that the levels of TSLP mRNA and protein expression were lower in patients with thymoma-associated myasthenia gravis than in thymoma patients without myasthenia gravis [13]. Other studies of patients with thymic neoplasia also detect autoantibodies against cytokines and a decrease in in vitro cytokine production in response to T-cell simulation [14, 15]. These findings suggested that decrease in the expression of cytokines contribute to T-cell imbalance in the thymus and affect precursor B-cell development in the bone marrow. The second pathway proposes that the abnormal environment of thymoma prevents non-neoplastic T cell to undergo appropriate thymic central tolerance and maturation, thus leading to activation of autoreactive T cells [16]. When these cells escape into the circulation, they could precipitate autoimmune reactions and contribute to abnormal activation of B cells, yielding lymphocytic- and antibody-mediated toxicities to myelomonocytic precursor cells [3]. Interestingly, our patient’s bone marrow biopsy showed rare to absent B cells, lack of aberrant immunophenotype on T cells, and agranulocytosis in absence of positive rheumatological tests. These findings suggest that potential inhibitory factors diminish the growth of progenitor cells involving myelocyte and lymphoid lineages.This case highlights the care complexity regarding treatment choices in patients presenting with PWCA, particularly surgical intervention in the setting of neutropenia and concerns for wound healing. While thymectomy does not necessarily reverse the immunological abnormalities in Good’s syndrome, some have observed clinical improvement following resection [17]. Our case demonstrated that while G-CSF and IVIg were completely ineffective prior to thymectomy, the bone marrow was responsive postoperatively. With regards to immunomodulatory agents, the repertoire is broad due to limited literature describing the use of chemotherapeutic agents [18]. 
In the present case, cyclosporine treatment at minimal doses was used with complete resolution of PWCA. Patients with PWCA and Good’s syndrome are at increased risk of fatal infections due to concurrent leukopenia and immunodeficiency, underscoring the importance of supportive care with antibiotics and IVIg. Lastly, as with other paraneoplastic presentations of thymoma, these patients run a risk of recurrence and should therefore remain under surveillance. --- *Source: 1024670-2019-10-13.xml*
2019
# New Advances in Distributed Control of Large-Scale Systems **Authors:** Dan Zhang; Wen-An Zhang; Zheng-Guang Wu; Kun Liu; Hui Zhang; Yun-Bo Zhao **Journal:** Mathematical Problems in Engineering (2015) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2015/102469 --- ## Body --- *Source: 102469-2015-10-05.xml*
2015
# Antidepressant Active Components of Bupleurum chinense DC-Paeonia lactiflora Pall Herb Pair: Pharmacological Mechanisms **Authors:** Shimeng Lv; Yifan Zhao; Le Wang; Yihong Yu; Jiaxin Li; Yufei Huang; Wenhua Xu; Geqin Sun; Weibo Dai; Tingting Zhao; Dezhong Bi; Yuexiang Ma; Peng Sun **Journal:** BioMed Research International (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1024693 --- ## Abstract Depression is a serious psychological disorder with a rapidly increasing incidence in recent years. Clinically, selective serotonin reuptake inhibitors are the main therapy. These drugs have serious adverse reactions, however. Traditional Chinese medicine has the characteristics of multiple components, targets, and pathways, which offer huge potential advantages for the treatment of depression. The antidepressant potential of the herbal combination of Bupleurum chinense DC (Chaihu) and Paeonia lactiflora Pall (Baishao) has been extensively studied previously. In this review, we summarize the antidepressant active components and mechanisms of the Chaihu-Baishao herb pair. We found that it works mainly through relieving oxidative stress, regulating the HPA axis, and protecting neurons. Nevertheless, current research on this combined preparation still faces many challenges. On one hand, most current studies remain at the level of animal models and lack sufficient double-blind controlled clinical trials for further verification; in addition, studies on the synergistic effect between different targets and signaling pathways are scarce. On the other hand, this preparation has numerous defects such as poor stability, low solubility, and difficulty in crossing the blood-brain barrier. --- ## Body ## 1. Introduction Major depressive disorder, also known as depression, is a chronic, recurrent, and potentially life-threatening severe mental disorder [1]. According to the World Health Organization, depression is the main cause of disability in the world; more than 350 million people worldwide suffer from depression, which increases the risk of death at any age, reduces the quality of life of depressed patients, and creates a burden on families and society. Clinically, Western medicine treatments such as selective serotonin reuptake inhibitors are the main treatment for depression, but most of these compounds have issues that include delayed effects, high nonresponse rates, nausea, headaches, chronic sexual dysfunction, and weight gain [1–3]. Therefore, it is important to develop more beneficial drugs for the treatment of depression. Traditional Chinese medicine (TCM) has great potential in the treatment of depression because of its multiple components that can act on multiple targets and pathways [4]. Some Chinese medicine formulas have significant effects on the treatment of depression with low toxicity and little to no side effects [5]. Therefore, in order to avoid the side effects and adverse reactions caused by Western medical treatments, scientific researchers have turned to TCM and its active ingredients as a way to treat depression [1]. Bupleurum chinense DC (Chaihu) is made from the dried roots of the Bupleurum plant. It is an herbal medicine that regulates and relieves liver qi. Modern research has found that Bupleurum chinense and its active ingredients have immunomodulatory, antiviral, hepatoprotective, antipyretic, and other pharmacological effects [6].
Paeonia lactiflora Pall (Baishao) is a traditional herb with a bitter, sour, and slightly cold taste. It can be used alone or as part of a drug combination. Modern pharmacological research shows that Baishao has anti-inflammatory, antiviral, antioxidant, and immunoregulatory properties as well as other functions [7]. The Chaihu-Baishao herbal combination is used in many antidepressant TCM compounds, such as Xiaoyao San, Chaihu Shugan San, and Sini San [8]. In this review, the research on the treatment of depression with the main active ingredients of the Chaihu-Baishao herb pair is summarized to provide a reference for future basic research and clinical applications. ## 2. Antidepressant Mechanisms of Chaihu-Baishao ### 2.1. Active Compounds in Chaihu-Baishao Inhibit Inflammation and Relieve Oxidative Stress There is a large amount of evidence supporting the link between depression and inflammatory processes. It has been shown that inflammation increases an individual’s susceptibility to depression: proinflammatory markers are increased in patients with depression, and the use of proinflammatory drugs can increase the risk of depression [9]. Studies have found that oxidative stress may play an important role in the pathophysiology of depression. Patients with depression have elevated levels of malondialdehyde and of the antioxidant enzymes superoxide dismutase and catalase, along with decreased levels of glutathione peroxidase [10]. Meta-analyses have also found elevated levels of 8-hydroxy-2′-deoxyguanosine and F2-isoprostanes in patients with depression, indicating that depression is accompanied by increased oxidative damage [11]. There is much evidence that inflammation-related diseases can be treated with various traditional medicines containing a variety of active natural compounds [12]. When antidepressants are used, the levels of peripheral inflammatory cytokines in patients with depression are reduced [13]. Therefore, inhibiting inflammation and relieving oxidative stress are important for the treatment of depression. Several of the previously studied active compounds of traditional medicines are reviewed below. Saikosaponin D is a triterpene saponin compound extracted from Chaihu and has various pharmacological effects such as counteracting inflammation [14] and oxidative stress [15]. Tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), and interleukin-6 (IL-6) are proinflammatory cytokines that regulate oxidative stress, apoptosis, and metabolism and cause damage to the branching processes of neurons, which in turn affects neuronal function [1]. Microglia are the innate immune cells of the central nervous system and play a vital role in neuroinflammation; proinflammatory factors released by activated microglia are found in abundance in the cerebrospinal fluid of depressed patients. In a lipopolysaccharide (LPS)-induced depression model, Su et al. gave mice an intraperitoneal injection of Saikosaponin D and found that it can improve the depression-like behavior of the mice and inhibit both the overexpression of the proinflammatory cytokines TNF-α, IL-6, and IL-1β and the activation of microglia induced by LPS in the mouse hippocampus [16]. The observed anti-inflammatory effect was also correlated with inhibition of the high mobility group box 1 (HMGB1)/Toll-like receptor 4 (TLR4)/nuclear transcription factor-κB (NF-κB) signaling pathway.
The NF-κB pathway is a typical inflammatory pathway that regulates the production of proinflammatory cytokines, leukocyte recruitment, and cell survival, causing the body to mount an inflammatory response [17]. It has been reported that Saikosaponin D can downregulate the expression of NF-κB in rat hippocampal neurons and improve depression-like behavior induced by chronic unpredictable mild stress (CUMS) by downregulating miR-155 expression and upregulating fibroblast growth factor 2 (FGF2) expression [18]. Quercetin is a flavonoid compound widely found in fruits and vegetables that has anti-inflammatory, antioxidant, antiviral, and anticancer effects [19]. It has been reported that quercetin can reverse the increase in hippocampal lipid hydroperoxide content induced by olfactory bulbectomy and improve the depression-like behavior of mice via a mechanism correlated with enhanced N-methyl-D-aspartic acid receptor expression [20]. Kaempferol is a main component of a variety of fruits and vegetables. Using a mouse depression model based on chronic social defeat stress, Gao et al. found that after intraperitoneal injection of kaempferol, the inflammatory response and oxidative stress in the prefrontal cortex of these mice were alleviated [21]. Kaempferol was also found to increase the activity of phosphorylated serine/threonine protein kinase (p-AKT) and β-catenin, but after using PI3-K inhibitors, the overall protective effect mediated by kaempferol was partially inhibited, indicating that kaempferol can enhance antioxidant capacity and anti-inflammatory effects by enhancing the activity of the AKT/β-catenin cascade, thereby treating depression [21]. Additionally, caffeic acid, a catechol compound, is widely distributed in fruits, tea, and wine. In the LPS-induced depression model in mice, caffeic acid was found to reverse both the reduction of brain glutathione levels and the increase of malondialdehyde and proinflammatory cytokines [22]. Ferulic acid is a phenolic compound widely found in a variety of herbal medicines. There is evidence that in mouse models of cortisol (CORT)-induced depression, ferulic acid can improve the behavioral performance of depressed mice and simultaneously reduce malondialdehyde, nitrite, and protein carbonylation levels in the brain and increase nonprotein sulfhydryl levels [23]. Liu et al. found that administration of ferulic acid can reverse the CUMS-induced upregulation of the proinflammatory cytokines IL-1β, IL-6, and TNF-α in the prefrontal cortex of mice, as well as the activation of microglia, NF-κB signaling, and the nucleotide-binding oligomerization domain-like receptor protein 3 (NLRP3) inflammasome [24]. Gallic acid is a secondary plant metabolite, commonly found in many plant-based foods and beverages, that has antioxidant activity. In a mouse model of poststroke depression (PSD), Nabavi et al. found that oxidative stress is closely related to the pathological process of stroke and PSD and that gallic acid can exert an antidepressant effect by inhibiting oxidative stress [25]. Paeoniflorin is one of the main biologically active ingredients extracted from the root of Paeonia lactiflora and has antioxidative, anti-inflammatory, and other pharmacological effects [26]. Gasdermin D (GSDMD) is a member of the Gasdermin conserved protein family. When pyroptosis occurs, Caspase-1 is activated, which directly leads to GSDMD cleavage.
GSDMD is cleaved into C-terminal and N-terminal domains; the N-terminal domain binds to phospholipids on the cell membrane, forming membrane pores that cause cell rupture, outflow of cell contents, and a massive release of inflammatory factors. GSDMD cleavage can also follow NLRP3-induced Caspase-1 activation [27]. Reports have shown that paeoniflorin can inhibit the expression of GSDMD, Caspase-11, Caspase-1, NLRP3, IL-1β, and other proteins involved in pyroptosis signal transduction in microglia, as well as reduce inflammation and relieve symptoms of depression [28]. FGF2 is a neurotrophic and anti-inflammatory factor involved in regulating the proliferation, differentiation, and apoptosis of neurons in the brain. In a mouse depression model induced by LPS, Cheng et al. found that paeoniflorin can inhibit LPS-induced TLR4/NF-κB/NLRP3 signaling in the hippocampus of mice, reduce the level of proinflammatory cytokines and microglia activation, and at the same time increase neuronal FGF2 levels and dendritic spine density [29]. Neuropathic pain is a clinical problem that causes comorbid pain and depression. In a neuropathic pain mouse model, Bai et al. found that paeoniflorin can significantly improve the hyperalgesia and depression-like behavior of mice, reduce the level of proinflammatory cytokines, inhibit the excessive activation of microglia, and reduce the pathological damage of hippocampal cells, alongside inhibition of the expression of TLR4/NF-κB pathway-related proteins [30]. Interferon-α (IFN-α) is a pleiotropic cytokine with antiviral and antiproliferative effects that is widely used in cancer therapy. However, studies have found that about 30%–50% of patients have symptoms of depression after receiving IFN-α treatment. Li et al. found that administration of paeoniflorin can improve IFN-α-induced depression-like behavior in mice, while reducing inflammation levels in the serum, medial prefrontal cortex, ventral hippocampus, and amygdala [31]. Systemic lupus erythematosus is a chronic inflammatory autoimmune disease, and depression is one of its common complications. It was found that administration of paeoniflorin can inhibit the activity of the high-mobility group box 1 protein/TLR4/NF-κB pathway and alleviate the level of inflammation in the serum and hippocampus, thereby treating lupus-induced depression [32]. Through component identification combined with network pharmacology and metabolomics, Li found that Chaihu-Baishao mainly contains saikosaponin A, saikosaponin D, saikosaponin C, saikosaponin B2, paeoniflorin, albiflorin, and oxypaeoniflorin and that it exerts an antidepressant effect by regulating arachidonic acid metabolism and the expression of the targets Prostaglandin G/H synthase 1 (PTGS1) and Prostaglandin G/H synthase 2 (PTGS2) [8]. He et al. analyzed the changes in chemical constituents before and after the combination of Chaihu and Baishao by UPLC-MS background subtraction and metabolomics, finding that saikosaponin A and its 3′-O- and 4′-O-acetylated derivatives decreased after combination, while paeoniflorin, galloylpaeoniflorin, or their isomers increased [33]. Sini Powder is a commonly used traditional Chinese medicine in which Chaihu plays the major role and Baishao an auxiliary role; it has the effect of soothing the liver and regulating the spleen [34].
It has been reported that Sini Powder can improve depression-like behavior and alleviate neuronal pathological damage in the hippocampus of CUMS rats, and its mechanism may be related to the inflammatory response mediated by the NLRP3 inflammasome signaling pathway [35]. In clinical studies, Sini Powder effectively improved depression in patients with cerebral infarction, type 2 diabetes, and functional dyspepsia and improved clinical efficacy [36–38] (Figure 1 and Table 1). Figure 1 Active compounds in Chaihu-Baishao inhibit inflammation and relieve oxidative stress. TLR4: Toll-like receptor 4; NLRP3: nucleotide-binding oligomerization domain-like receptor protein 3; NF-κB: nuclear transcription factor-κB; TNF-α: tumor necrosis factor-α; IL-6: interleukin-6; IL-1β: interleukin-1β.

Table 1 Active compounds in Bupleurum chinense DC-Paeonia lactiflora Pall herb pair inhibit inflammation and relieve oxidative stress.

| Compound | Model | Animal species | Dosage | Behavioural tests | Mechanism of action/main indicators | Reference |
|---|---|---|---|---|---|---|
| Saikosaponin D | LPS | Male ICR mice | 1 mg/kg | OFT, TST, FST, SPT | Microglia activation ↓, TLR4/NF-κB pathway activity ↓, HMGB1 translocation ↓, proinflammatory cytokines ↓ | [16] |
| Saikosaponin D | CUMS | Male SD rats | 0.75, 1.5 mg/kg | OFT, TST, FST, SPT | NF-κB↓, FGF2↑, miR-155↓ | [18] |
| Quercetin | OB | Female Swiss mice | 10, 25, 50 mg/kg | FST, TST, OFT, ST | LOOH↓ | [20] |
| Kaempferol | CSDS | Male CD1 and C57 mice | 10, 20 mg/kg | SPT, SIT, TST | SOD↑, MDA↓, CAT↑, GPx↑, GST↑, p-AKT↑, β-catenin↑ | [21] |
| Caffeic acid | LPS | Male Swiss albino mice | 30 mg/kg | OFT, FST, TST | IL-6↓, MDA↓, GSH↑, TNF-α↓ | [22] |
| Ferulic acid | CORT | Male Swiss mice | 1 mg/kg | TST, OFT, ST | MDA↓, nitrite↓, protein carbonylation↓ | [23] |
| Ferulic acid | CUMS | Male ICR mice | 20, 40, 80 mg/kg | SPT, TST | IL-1β, IL-6, TNF-α mRNA↓; NF-κB↓; p-NF-κB/NF-κB↓; PFC microglia activation ↓ | [24] |
| Gallic acid | BCCAO | Male BALB/c mice | 25, 50 mg/kg | TST, SPT | Relieves oxidative stress | [25] |
| Paeoniflorin | RESP/LPS/ATP | Male C57BL/6 mice | 10, 20, 40 mg/kg | OFT, TST, FST | Dendritic spines in hippocampal CA1 ↑; NLRP3↓, CASP-11↓, CASP-1↓, GSDMD↓, IL-1β↓; microglia activation ↓ | [28] |
| Paeoniflorin | LPS | Male ICR mice | 20, 40, 80 mg/kg | SPT, FST | FGF2↑, IL-6↓, TNF-α↓, TLR4↓, p-NF-κB↓, NLRP3↓, Cox-2↓; dendritic spine density in hippocampal CA3 ↑ | [29] |
| Paeoniflorin | Cuff | SPF male BALB/c mice | 50, 100 mg/kg | SPT, FST, TST | Inflammation in hippocampal CA3 ↓; IL-6↓, TNF-α↓, IL-1↓; microglia activation ↓; TLR4/NF-κB pathway ↓ | [30] |
| Paeoniflorin | IFN-α | Male C57BL/6J mice | 10, 20, 40 mg/kg | SPT, OFT, TST, FST | Serum, mPFC, vHi, and amygdala inflammation levels ↓ | [31] |
| Paeoniflorin | SLE | Wild-type and MRL/MpJ-Faslpr/2J (MRL/lpr) mice | 20 mg/kg | SPT, TST, FST | HMGB1/TLR4/NF-κB pathway activity ↓; serum and hippocampal TNF-α, IL-1β, IL-6 ↓ | [32] |

Notes: OFT: open field test; TST: tail suspension test; FST: forced swimming test; SPT: sucrose preference test; TLR4: Toll-like receptor 4; NF-κB: nuclear transcription factor-κB; HMGB1: high mobility group box 1; FGF2: fibroblast growth factor 2; LOOH: lipid hydroperoxides; SOD: superoxide dismutase; MDA: malondialdehyde; CAT: catalase; GST: glutathione S-transferase; AKT: serine/threonine protein kinase; IL-6: interleukin-6; TNF-α: tumor necrosis factor-α; NLRP3: nucleotide-binding oligomerization domain-like receptor protein 3; CASP-11: Caspase-11; CASP-1: Caspase-1; GSDMD: Gasdermin D; IL-1β: interleukin-1β; LPS: lipopolysaccharide; CUMS: chronic unpredictable mild stress; OB: olfactory bulbectomy; CSDS: chronic social defeat stress;
CORT: cortisol; BCCAO: bilateral common carotid artery occlusion (brain ischemia-reperfusion); ATP: adenosine triphosphate; IFN-α: interferon-α; SLE: systemic lupus erythematosus. ### 2.2. Active Compounds in Chaihu-Baishao Regulate the Hypothalamic-Pituitary-Adrenal (HPA) Axis The HPA axis is an important part of the neuroendocrine system and helps control the response to stress. When the HPA axis is activated, the paraventricular nucleus of the hypothalamus releases corticotropin-releasing hormone (CRH), which signals the anterior pituitary gland to secrete adrenocorticotropic hormone (ACTH) into the bloodstream; in turn, ACTH acts on the adrenal cortex to stimulate the secretion of CORT [39]. HPA axis hyperfunction is an important factor in the onset of depression, as evidenced by the increases in CRH, ACTH, and glucocorticoids, the imbalance in HPA axis negative feedback, the enlargement of the pituitary and adrenal glands, and the hypercortisolemia seen in some depression patients [40]. Saikosaponin A, a triterpene saponin extracted from Bupleurum, has anti-inflammatory and antioxidant effects [41]. In a perimenopausal depression model of female rats induced by CUMS, Saikosaponin A improved the behavioral performance of the rats and reduced hypothalamic CRH mRNA and protein and serum CORT levels, while inhibiting the CUMS-induced overexpression of hippocampal proinflammatory cytokines [42]. Li et al. found that Saikosaponin D promotes hippocampal neurogenesis, alleviates CUMS-induced weight loss, and increases the sucrose consumption of rats and the distance moved in the open field test (OFT). They also found that it reduces immobility time in the forced swimming test (FST) and serum CORT levels and reverses the CUMS-induced inhibition of glucocorticoid receptor expression and nuclear translocation [43]. Adriamycin is a commonly used antitumor drug that has many adverse reactions. A previous study exploring the effect of quercetin on Adriamycin-induced anxiety and depression-like behavior showed that quercetin can improve the anxiety and depression-like behavior of rats, reduce serum CORT levels, inhibit the hyperactivity of the HPA axis, relieve brain oxidative damage, and regulate rat immune function [44]. Studies have also found that the combination of ferulic acid and levetiracetam improves epilepsy complicated by depression, restores serum CORT levels, and reduces the activity of proinflammatory cytokines and indoleamine 2,3-dioxygenase in the mouse brain [45]. Prenatal stress (PS) can increase the occurrence of depression, anxiety, attention deficit hyperactivity disorder, and other negative emotions and behaviors in offspring. After giving ferulic acid to rats in a prenatal stress model, Zheng et al. found that ferulic acid can exert an antidepressant effect by inhibiting HPA axis activity and hippocampal inflammation in offspring rats [46]. Similarly, paeoniflorin can treat prenatal stress-induced depression-like behavior in rat offspring by promoting the nuclear translocation of glucocorticoid receptors, inhibiting the expression of a series of proteins and the formation of complexes (such as SNAP25), and inhibiting stress-induced HPA axis hyperactivity [47].
In FST-induced depression in rats, paeoniflorin can alleviate both the hyperactivity of the HPA axis and oxidative damage, increase plasma and hippocampal monoamine neurotransmitter and brain-derived neurotrophic factor (BDNF) levels, and promote gastrointestinal movement [48]. In a rat model of menopausal depression induced by CUMS combined with ovariectomy, Huang et al. found that paeoniflorin can improve the depression-like behavior of rats, while inhibiting both the overactivity of the HPA axis and the overexpression of the serotonin 5-HT2A receptor and upregulating the expression of the 5-HT1A receptor in the brain [49]. Protocatechuic acid ethyl ester (PCA) is a phenolic compound with neuroprotective effects. It has been reported that acute restraint stress can induce depression-like behavior in mice through neuronal oxidative damage, upregulation of HPA axis activity, and increases in serum CORT levels, all of which were reversed after PCA was administered [50] (Figure 2 and Table 2). Figure 2 Active compounds in Chaihu-Baishao regulate the HPA axis. HPA: hypothalamic-pituitary-adrenal; CORT: cortisol; ACTH: adrenocorticotropic hormone; CRH: corticotropin-releasing hormone.

Table 2 Active compounds in Bupleurum chinense DC-Paeonia lactiflora Pall herb pair regulate the HPA axis.

| Compound | Model | Animal species | Dosage | Behavioural tests | Mechanism of action/main indicators | Reference |
|---|---|---|---|---|---|---|
| Saikosaponin A | CUMS | Female Wistar rats | 25, 50, 100 mg/kg | SPT, FST, NSFT | Hypothalamic CRH mRNA and protein ↓, hippocampal proinflammatory cytokines ↓, CORT↓ | [42] |
| Saikosaponin D | CUMS | Male SD rats | 0.75, 1.5 mg/kg | SPT, OFT, FST | CORT↓, GR↑, weight ↑, p-CREB↑, BDNF↑; promotes generation of hippocampal neurons | [43] |
| Quercetin | Adriamycin | Male Wistar rats | 60 mg/kg | FST, OFT, EPM | CORT↓; relieves brain oxidative stress damage; regulates immune function | [44] |
| Ferulic acid | Pentylenetetrazole-kindled epilepsy | Male Swiss albino mice | 40, 80 mg/kg | TST, SPT | CORT↓, TNF-α↓, IL-1β↓ | [45] |
| Ferulic acid | PS | Female and male SD rats | 12.5, 25, 50 mg/kg | SPT, OFT, FST | Serum ACTH↓, CORT↓; hippocampal GR↑; inhibits hippocampal inflammation; improves neuronal damage in hippocampal CA3 area | [46] |
| Paeoniflorin | PS | Female and male SD rats | 15, 30, 60 mg/kg | SPT, FST, OFT | Serum CRH, ACTH, CORT↓; relieves neuronal damage in hippocampal CA3 area | [47] |
| Paeoniflorin | FST | Male SD rats | 10 mg/kg | FST, OFT | Promotes gastrointestinal motility; plasma motilin↑, CRH↓, ACTH↓, CORT↓, BDNF↑, norepinephrine↑, oxidative stress↓ | [48] |
| Paeoniflorin | CUMS | Female SD rats | 45 mg/kg | OFT, SPT | HPA axis activity↓, brain 5-HT2A receptor↓, brain 5-HT1A receptor↑ | [49] |
| Protocatechuic acid | ARS | Male and female Swiss albino mice | 100, 200 mg/kg | FST, OFT | Serum CORT↓, hippocampal oxidative stress↓ | [50] |

Note: CUMS: chronic unpredictable mild stress; PS: prenatal stress; FST: forced swimming test; ARS: acute restraint stress; SPT: sucrose preference test; NSFT: novelty suppressed feeding test; OFT: open field test; EPM: elevated plus maze; CRH: corticotropin-releasing hormone; GR: glucocorticoid receptor; BDNF: brain-derived neurotrophic factor; ACTH: adrenocorticotropic hormone; HPA: hypothalamic-pituitary-adrenal; 5-HT: serotonin; MDA: malondialdehyde; CORT: cortisol. ### 2.3. Active Compounds in Chaihu-Baishao Regulate Monoamine Neurotransmitters Monoamine neurotransmitters are central neurotransmitters, mainly catecholamines, such as dopamine, norepinephrine, and epinephrine, and indoleamines, such as 5-HT. Dopamine (DA) is an important regulator of learning and motivation [51].
5-HT and norepinephrine are mainly involved in the regulation of emotional cognition and sleep, and disordered monoamine neurotransmission can cause various emotional changes [40]. The classic monoamine hypothesis of depression holds that the underlying pathophysiological basis of depression is the depletion of monoamine neurotransmitters in the central nervous system. The serum levels of monoamine neurotransmitters and their metabolites can be used as important biomarkers for the diagnosis of depression, and drugs that increase the synaptic concentration of monoamines can improve the symptoms of depression [1]. It has been reported that Saikosaponin A can improve CUMS-induced depression-like behaviors in rats, and its antidepressant effect is believed to be related to the increase of dopamine content in the hippocampus of rats and the upregulation of hippocampal proline-rich transmembrane protein 2 (PRRT2) expression [52]. Similarly, in the same animal model of depression, Khan et al. found that quercetin can improve the behavioral performance of depressed mice, increase brain 5-HT levels, alleviate CUMS-induced brain inflammation and oxidative damage, and reduce brain glutamate levels [53]. There is evidence that in restraint stress-induced depression and anxiety models in mice, quercetin can treat anxiety and depression by regulating 5-HT and cholinergic neurotransmission, counteracting oxidative damage, and enhancing memory after restraint stress [54]. Acetylcholinesterase terminates cholinergic neurotransmission [55]. Arsenic is an element that naturally exists in food, soil, and water. Exposure to arsenic can cause memory disorders, anxiety, depression, and other neurological perturbations and diseases. Samad et al. found that gallic acid can reverse the excessive increase in acetylcholinesterase activity induced by arsenic, alleviate the brain oxidative stress damage caused by arsenic, and protect memory function [56]. Additionally, gallic acid can also exert an antidepressant effect by increasing 5-HT and catecholamine levels in synaptic clefts in the central nervous system [57]. There are also reports showing that in olfactory bulbectomy-induced animal depression models, PCA can shorten the immobility time of rats in the FST, increase the distance explored in the OFT, increase rat hippocampal monoamine neurotransmitter (5-HT, DA, and norepinephrine) and BDNF levels, reduce hippocampal CORT levels, and alleviate hippocampal neuroinflammation and oxidative damage [58]. It has been reported that Chaihu-Baishao can effectively improve postoperative depression and diarrhea in patients with colorectal cancer by regulating the imbalance of the monoamine neurotransmitters DA and 5-HT [59] (Figure 3 and Table 3). Figure 3 Active compounds in Chaihu-Baishao regulate monoamine neurotransmitters. 5-HT: serotonin; DA: dopamine; NE: norepinephrine.

Table 3 Active compounds in Bupleurum chinense DC-Paeonia lactiflora Pall herb pair regulate monoamine neurotransmitters.
| Compound | Model | Animal species | Dosage | Behavioural tests | Mechanism of action/main indicators | Reference |
|---|---|---|---|---|---|---|
| Saikosaponin A | CUMS | Male SD rats | 50 mg/kg | OFT, SPT | Weight↑, hippocampal DA↑, hippocampal PRRT2↑ | [52] |
| Quercetin | CUMS | Male Swiss albino mice | 25 mg/kg | MFST, TST, OFT | Brain 5-HT↑; brain glutamate, IL-6, TNF-α↓; relieves brain oxidative stress | [53] |
| Quercetin | Restraint stress | Male albino Wistar mice | 20 mg/kg/ml | FST, LDA, EPM, MWM | Prevents oxidative damage; regulates 5-HT and cholinergic neurotransmission | [54] |
| Gallic acid | iAs | Male SD rats | 50, 100 mg/kg | EPM, LDA, FST, MWM | Relieves brain oxidative stress; brain AChE↓ | [56] |
| Protocatechuic acid | OB | Male Wistar rats | 100, 200 mg/kg | FST, OFT | Hippocampal 5-HT, norepinephrine, DA↑; hippocampal BDNF↑; hippocampal TNF-α, IL-6, CORT↓; hippocampal oxidative damage↓ | [58] |

Note: LDA: light-dark activity box; iAs: arsenic; OB: olfactory bulbectomy; MWM: Morris water maze; FST: forced swimming test; EPM: elevated plus maze; OFT: open field test; 5-HT: serotonin; CORT: cortisol; IL-6: interleukin-6; TNF-α: tumor necrosis factor-α; AChE: acetylcholinesterase; DA: dopamine; BDNF: brain-derived neurotrophic factor; MFST: modified forced swim test; TST: tail suspension test; CUMS: chronic unpredictable mild stress. ### 2.4. Active Compounds in Chaihu-Baishao Promote Hippocampal Neurogenesis and Regulate BDNF Levels BDNF is a widely studied growth factor that plays an important role in neuronal maturation, synapse formation, and synaptic plasticity in the brain [60]. The “neurotrophic theory” posits that neurons lose access to trophic factors as the expression level of BDNF decreases, leading to neuronal atrophy, decreased synaptic plasticity, and the onset of depression [61]. The optimization of BDNF levels supports synaptic plasticity and remodeling, improves neuronal damage, and relieves depression [62]. There is evidence that increasing the expression of BDNF in hippocampal astrocytes can treat depression and anxiety and stimulate hippocampal neurogenesis [63]. Adult hippocampal neurogenesis involves the proliferation, survival, differentiation, and integration of newborn neurons into preexisting neuronal networks [64]. There is evidence that hippocampal volume, the number of granule cells in the anterior and middle dentate gyrus, and the volume of the granule cell layer decrease in patients with depression [65], whereas increasing adult hippocampal neurogenesis can improve depression and anxiety [66]. Depression is also one of the common complications after stroke, though its pathogenesis has not been fully elucidated, and there is a lack of clinically effective treatments for stroke-induced depression. The cAMP response element-binding protein (CREB) is a protein that regulates gene transcription and acts as a chief transcription factor in the regulation of genes that encode proteins, such as BDNF, involved in synaptic plasticity and memory [67]. Studies have found that Saikosaponin A can significantly improve the depression-like behavior of PSD model rats, inhibit the apoptosis of hippocampal neurons, and increase the levels of phosphorylated CREB and BDNF in the hippocampus [68]. Neuronal cell death occurs extensively during development and pathology, where it is especially important because of the limited capacity of adult neurons to proliferate or be replaced [69].
There are at least three known types of neuronal death, namely apoptosis, autophagy, and necrosis, and there is evidence that apoptosis is closely related to bipolar disorder (including depression) [70]. Inhibiting neuronal apoptosis can improve depression-like behavior [71, 72]. In LPS-induced depression in mice, Saikosaponin D can inhibit hippocampal neuronal apoptosis and inflammation through the lysophosphatidic acid receptor 1 (LPA1)/Ras homolog family member A (RhoA)/Rho-associated kinase 2 (ROCK2) pathway [73]. Rutin is a flavonoid abundantly present in a variety of fruits and vegetables. It functions in cardiovascular protection and as an antiviral agent. In a recent study, rutin was shown to improve CUMS-induced weight loss in mice and protect mouse hippocampal neurons [74]. Activation of tyrosine kinase B (TrkB) promotes neuronal survival, differentiation, and synapse formation [75]. Estrogen receptor alpha (ERα) is the main receptor mediating estrogen action and plays an important role in preventing depression and cardiovascular diseases. It has been reported that knocking out ERα induces depression-like behavior in mice and reduces hippocampal BDNF and the phosphorylation of its downstream targets TrkB, AKT, and extracellular regulatory protein kinase 1/2 (ERK1/2). Quercetin can reverse these changes, alleviate cell apoptosis, and reverse the depression-like symptoms induced by ERα knockout [76]. Similarly, quercetin can also exert an antidepressant effect by regulating the levels of BDNF in the hippocampus and prefrontal cortex, as well as the levels of the related proteins Copine 6 and the triggering receptors expressed on myeloid cells TREM-1 and TREM-2 [77]. Quercetin combined with exercise therapy can increase the expression of BDNF protein in 1,2-dimethylhydrazine-induced depression model rats with colorectal cancer and can act on the TrkB/β-catenin axis to treat depression [78]. Liu et al. found that in the mouse depression model induced by CUMS, ferulic acid can significantly improve behavioral performance in both the sucrose preference test and FST, upregulate BDNF and synapsin I levels in the prefrontal cortex and hippocampus, and increase hippocampal PSD95 protein expression [79]. In a recent report, paeoniflorin improved the depression-like behavior of PSD model mice and increased the expression of BDNF and phosphorylated CREB in the CA1 region of the hippocampus [80]. Zhong et al. found that paeoniflorin can protect nerves by upregulating the activity of the ERK-CREB signaling pathway and treat CUMS-induced depression-like behavior in rats [81]. The alteration of synaptic plasticity, and specifically hippocampal long-term potentiation (LTP), also plays a role in the onset of depression [82]. There is evidence that hippocampal LTP is impaired in CUMS-induced animal depression models, and Liu et al. found that administration of paeoniflorin can alleviate LTP injury in the hippocampal CA1 area and increase both the density of hippocampal dendritic spines and the expression levels of BDNF and PSD95 [83]. There are also reports that paeoniflorin can enhance the expression and gene transcription of BDNF, activate the expression of TrkB, and promote the proliferation of neural stem cells, their differentiation into astrocytes, and neurogenesis in the rat hippocampal dentate gyrus [84] (Figure 4 and Table 4). Figure 4 Active compounds in Chaihu-Baishao promote hippocampal neurogenesis and regulate BDNF levels.
BDNF: brain-derived neurotrophic factor; AKT: serine/threonine protein kinase; TrkB: tyrosine kinase B; CREB: cAMP response element-binding protein.

Table 4 Active compounds in Bupleurum chinense DC-Paeonia lactiflora Pall herb pair promote hippocampal neurogenesis and regulate BDNF levels.

| Compound | Model | Animal species | Dosage | Behavioural tests | Mechanism of action/main indicators | Reference |
|---|---|---|---|---|---|---|
| Saikosaponin A | MCAO + CUMS + isolation | Male SD rats | 5 mg/kg | SPT, OFT, BWT, FST | Weight↑; hippocampal BDNF, p-CREB↑; inhibits neuronal apoptosis; hippocampal Bax, caspase-3↓ | [68] |
| Saikosaponin D | LPS | Male ICR mice | 0.5, 1 mg/kg | TST, OFT | Proinflammatory cytokines↓; inhibits LPA1/RhoA/ROCK2 signaling; relieves apoptosis of hippocampal neurons | [73] |
| Rutin | CUMS | Adult Swiss albino mice | 100 mg/kg | OFT, SPT, EPM, NOR, BWT | Relieves hippocampal damage in CA3 area | [74] |
| Quercetin | ERα knockout | Female C57BL/6 and ERα-KO mice | 100 mg/kg | OFT, TST, FST | Number of hippocampal neurons↑; relieves hippocampal cell apoptosis; hippocampal BDNF, p-TrkB, p-AKT, p-ERK1/2↑ | [76] |
| Quercetin | DMH | Male Wistar rats | 50 mg/kg | OFT, FST | Proinflammatory cytokines↓; BDNF↑; number of neurons↑; TrkB↑; β-catenin↑; incidence of rectal cancer↓ | [78] |
| Ferulic acid | CUMS | Male ICR mice | 20, 40 mg/kg | SPT, FST | PFC and hippocampal BDNF, synapsin I↑; hippocampal PSD95↑ | [79] |
| Paeoniflorin | MCAO + CUMS | Male SD rats | 5 mg/kg | BBT, SPT, OFT | Weight↑; hippocampal CA1 BDNF, p-CREB↑ | [80] |
| Paeoniflorin | CUMS | Male SD rats | 30, 60 mg/kg | SPT, FST, LAT | Number of neurons in hippocampal CA3 area↑; upregulates ERK-CREB signaling pathway | [81] |
| Paeoniflorin | CUMS | C57BL/6 wild-type male mice | 20 mg/kg | SPT, FST, TST | Relieves hippocampal CA1 LTP damage; hippocampal dendritic spine density, BDNF, PSD95↑ | [83] |
| Paeoniflorin | CUMS | Male SD rats | 60 mg/kg | SPT | Promotes neurogenesis in the hippocampal dentate gyrus | [84] |

Note: CUMS: chronic unpredictable mild stress; MCAO: middle cerebral artery occlusion; SPT: sucrose preference test; OFT: open field test; FST: forced swimming test; BWT: beam walking test; BDNF: brain-derived neurotrophic factor; CREB: cAMP response element-binding protein; LPS: lipopolysaccharide; LPA1: lysophosphatidic acid receptor 1; ROCK: Rho-associated kinase; NOR: novel object recognition test; TST: tail suspension test; TrkB: tyrosine kinase B; AKT: serine/threonine protein kinase; ERK: extracellular regulatory protein kinase; ERα: estrogen receptor alpha; DMH: 1,2-dimethylhydrazine; BBT: beam balance test; LAT: locomotor activity test; LTP: long-term potentiation.
There is much evidence that inflammation-related diseases can be treated with various traditional medicines containing a variety of active natural compounds [12]. When antidepressants are used, the levels of peripheral inflammatory cytokines in patients with depression are reduced [13]. Therefore, inhibiting inflammation and relieving oxidative stress are important for the treatment of depression. Several of the previously studied active compounds of traditional medicines are reviewed below.Saikosaponin D is a triterpene saponin compound extracted from Chaihu and has various pharmacological effects such as counteracting inflammation [14] and oxidative stress [15]. Tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), and interleukin-6 (IL-6) are proinflammatory cytokines that regulate oxidative stress, apoptosis, and metabolism and cause damage to the branching process of neurons, which will in turn affect neuronal function [1]. Microglia’s are the innate immune cells of the central nervous system and play a vital role in the process of neuroinflammation; they are found in abundance in the cerebrospinal fluid of depressed patients. A series of proinflammatory factors released by activated microglia were detected. In a lipopolysaccharide (LPS)-induced depression model, Su et al. gave mice an intraperitoneal injection of Saikosaponin D and found that it can improve the depression-like behavior of the mice and inhibit both the overexpression of the proinflammatory cytokines, TNF-α, IL-6, and 1 L-1β, and the activation of microglia induced by LPS in the mouse hippocampus [16]. The observed anti-inflammatory effect was also correlated with the inhibition of the high mobility group protein 1/Toll-like receptor 4 (TLR4)/nuclear transcription factor-κB (NF-κB) signaling pathway. The NF-κB pathway is a typical inflammatory pathway, which can regulate the production of proinflammatory cytokines, leukocyte recruitment, or cell survival and cause the body to produce an inflammatory response [17]. It has been reported that Saponin D can downregulate the expression of NF-κB in rat hippocampal neurons and improve the depression-like behavior induced by chronic unpredictable mild stress (CUMS) by downregulating Mir-155 expression and upregulating fibroblast growth factor 2 (FGF2) expression [18]. Quercetin is a flavonoid compound widely found in fruits and vegetables that has anti-inflammatory, antioxidant, antiviral, and anticancer effects [19]. It has been reported that quercetin can reverse the increase in lipid hydroperoxide content, induced by olfactory bulbectomy, in the hippocampus and improve the depression-like behavior of mice via a mechanism that is correlated with the enhancement of N-methyl-D-aspartic acid receptor expression [20].Kaempferol is the main component of a variety of fruits and vegetables. Using a mouse depression model based on chronic social defeat stress, Gao et al. found that after intraperitoneal injection of kaempferol, the inflammatory response and oxidative stress in the prefrontal cortex of these mice was alleviated [21]. Kaempferol was also found to increase the activity of P-serine/threonine protein kinase (AKT) and β-catenin, but after using PI3-K inhibitors, the overall protective effect mediated by kaempferol was partially inhibited, indicating that kaempferol can enhance the antioxidant capacity and anti-inflammatory effects by enhancing the activity of the AKT/β-catenin cascade, thereby treating depression [21]. 
Additionally, caffeic acid, a catechol compound, is widely distributed in fruits, tea, and wine. In the LPS-induced depression model in mice, caffeic acid was found to reverse both the reduction of brain glutathione levels and the increase of malondialdehyde and proinflammatory cytokines [22]. Ferulic acid is a phenolic compound widely found in a variety of herbal medicines. There is evidence that in mouse models of cortisol (CORT)-induced depression, ferulic acid can improve the behavioral performance of depressed mice and simultaneously reduce malondialdehyde, nitrite, and protein carbonylation levels in the brain and increase nonprotein sulfhydryl levels [23]. Liu et al. found that administration of ferulic acid can reverse the CUMS-induced upregulation of the proinflammatory cytokines, IL-1β, IL-6, and TNF-α, in the prefrontal cortex of mice and the activation of microglia, NF-κB signaling, and the nucleotide-binding oligomerization domain-like receptor protein 3 (NLRP3) inflammasome [24]. Gallic acid is a secondary plant metabolite, which is commonly found in many plant-based foods and beverages and has antioxidant activity. In a mouse model of poststroke depression (PSD), Nabavi et al. found that oxidative stress is closely related to the pathological process of stroke and PSD, and gallic acid can exert an antidepressant effect by inhibiting oxidative stress [25]. Paeoniflorin is one of the main biologically active ingredients extracted from the root of Paeonia lactiflora, which has antioxidation, anti-inflammatory, and other pharmacological effects [26]. Gasdermin D (GSDMD) is a member of the Gasdermin conserved protein family. When cell pyrolysis occurs, Caspase-1 is activated, which directly leads to GSDMD cleavage. GSDMD is cleaved out of the C-terminal and N-terminal domains that binds to the phospholipid protein on the cell membrane, resulting in the formation of cell membrane holes that cause cell rupture, cell content outflow, and a massive release of inflammatory factors. GSDMD can also activate Caspase-1 activation induced by NLRP3 [27]. Reports have shown that paeoniflorin can inhibit the expression of GSDMD, caspase-11, Caspase-1, NLRP3, IL-1β, and other proteins involved in pyroptosis signal transduction in microglia, as well as reduce inflammation and relieve symptoms of depression [28]. FGF2 is a neurotrophic and anti-inflammatory factor involved in regulating the proliferation, differentiation, and apoptosis of neurons in the brain. In a mouse depression model induced by LPS, Cheng et al. found that paeoniflorin can inhibit LPS-induced TLR4/NF-κB/NLRP3 signaling in the hippocampus of mice, reduce the level of proinflammatory cytokines and microglia activation, and at the same time, increase neuronal FGF2 levels and dendritic spine density [29]. Neuropathic pain is a clinical problem that causes comorbid pain and depression. In a neuropathic pain mouse model, Bai et al. found that paeoniflorin can significantly improve the hyperalgesia and depression-like behavior of mice, reduce the level of proinflammatory cytokines, inhibit the excessive activation of microglia, and reduce the pathological damage of hippocampal cells in a way that is similar to the inhibition of the expression of TLR4/NF-κB pathway related proteins [30]. Interferon-α (IFN-α) is a pleiotropic cytokine with antiviral and antiproliferative effects. It is widely used in cancer. However, studies have found that about 30%–50% of patients have symptoms of depression after receiving IFN-α treatment. 
Li et al. found that administration of paeoniflorin can improve the IFN-α-induced depression-like behavior in mice, while reducing inflammation levels in the serum, medial prefrontal cortex, ventral hippocampus, and amygdala [31]. Systemic lupus erythematosus is a chronic inflammatory autoimmune disease, and depression is one of its common complications. It was found that administration of paeoniflorin can inhibit the activity of the high-mobility group box 1 protein/TLR4/NF-κB pathway and alleviate the level of inflammation in the serum and hippocampus, thereby treating lupus-induced depression [32]. Through component identification, Li found that Chaihu-Baishao mainly contain saikosaponin A, saikosaponin D, saikosaponin C, saikosaponin B2, paeoniflorin, albiflorin, and oxypaeoniflorin, and then combined with network pharmacology and metabolomics. Li found that Chaihu-Baishao plays an antidepressant effect by regulating arachidonic acid metabolism, the expression of Prostaglandin G/H synthase 1(PTGS1) and Prostaglandin G/H synthase 2(PTGS2) targets [8]. He et al. analyzed the changes of chemical constituents before and after the combination of Chaihu-Baishao by UPLC-MS background subtraction and metabolomics, founding that saikosaponin A, 3′-O-acetylation of saikosaponin A, and 4′-O-acetylation of saikosaponin A decreased after compatibility with Chaihu, and paeoniflorin, gallic acidpaeoniflorin or their isomers increased [33]. Sini Powder is a commonly used traditional Chinese medicine. In the prescription, Chaihu plays a major role, and Baishao plays an auxiliary role. It has the effect of soothing the liver and regulating the spleen [34]. It has been reported that Sini Powder can improve depression-like behavior and neuronal pathological damage in the hippocampus in CUMS rats, and its mechanism may be related to the inflammatory response mediated by the NLRP3 inflammasome signaling pathway [35]. In clinical studies, Sini Powder can effectively improve depression in cerebral infarction, type 2 diabetes and functional dyspepsia, and improve clinical efficacy [36–38] (Figure 1 and Table 1).Figure 1 Active compounds in Chaihu-Baishao inhibit inflammation and relieve oxidative stress. TLR4: Toll-like receptor 4; NLRP3: nucleotide-binding oligomerization domain-like receptor protein 3; NF-κB: nuclear transcription factor-κB; TNF-α: Tumor necrosis factor-α; IL-6: interleukin-6; IL-1β: interleukin-1β.Table 1 Active compounds in Bupleurum chinense DC-Paeonia lactiflora Pall herb pair inhibit inflammation and relieve oxidative stress. 
Compound Model Animal species Dosage Behaviour involved Mechanism of action/main indicators References Saikosaponin D LPS Male ICR mice 1 mg/kg OFT, TST, FST, and SPT Inhibition of microglia activation, the activity of TLR4/NF-κB pathway HMGB1 translocation, and proinflammatory cytokines ↓ [16] Quercetin CUMS Male SD rat 0.75, 1.5 mg/kg OFT, TST, and FST, SPT NF-κB↓, FGF2↑, and miR-155↓ [18] OB Female SWISS mice 10, 25, 50 mg/kg FST, TST, OFT, and ST LOOH↓ [20] Kaempferol CSDS Male CD1 and C57 mice 10, 20 mg/kg SPT, SIT, and TST SOD↑, MDA↓, CAT↑, GPx↑, GST↑, P-AKT↑, andβ-catenin↑ [21] Caffeic acid LPS Male Swiss albino mice 30 mg/kg OFT, FST, and TST IL-6↓, MDA↓, GSH↑, and TNF-α↓ [22] Ferulic acid CORT Male Swiss mice 1 mg/kg TST, OFT, and ST MDA↓, nitrite↓, and protein carbonylation↓ [23] CUMS Male ICR mice 20,40, 80 mg/kg SPT and TST IL-1β,I L-6, TNF-α mRNA↓,NF-κB↓P-NF-κB/NF-κB↓, and inhibition of PFC microglia activation [24] Gallic acid BCCAO Male balb/c mice 25, 50 mg/kg TST and SPT Antioxidant stress [25] Paeoniflorin RESP/LSP/ATP Male C57BL/6Mice 10, 20, 40 mg/kg OFT, TST, and FST Dendritic spines in hippocampal CA1 region↑, NLRP3↓, CASP-11↓, CASP-1↓, GSDMD↓, IL-1β, and microglia activation ↓ [28] LPS Male ICR mice 20, 40, 80 mg/kg SPT and FST FGF2↑, IL-6↓, TNF-α↓, TLR4↓, P-NF-κB↓, NLRP3↓, Cox-2↓, and dendritic spine density in CA3 area of hippocampus ↑ [29] Cuff SPF male Balb/c mice 50, 100 mg/kg SPT, FST, and TST Inflammation in hippocampal CA3 area ↓, IL-6↓, TNF-α↓, IL-1↓, microglia activation ↓, and TLR4/NF-κB pathway ↓ [30] IFN-α Male C57BL/6Jmice 10, 20, 40 mg/kg SPT, OFT, TST, and FST Serum, mPFC, vHi, and amygdala inflammation levels↓ [31] SLE Wild type mice and MRL/MpJ-Faslpr/2 J (MRL/lpr) mice 20 mg/kg SPT, TST, and FST Activity of HMGB1/TLR4/NF-κB pathway↓, serum, and hippocampus contents of TNF-α, IL-1β, and IL-6↓ [32] Notes: OFT: open field test; TST: tail suspension test; FST: forced swimming test; SPT: sucrose preference test; TLR4: Toll-like receptor 4; NF-κB: nuclear transcription factor-κB; HMGB1: High mobility group box 1; FGF2: fibroblast growth factor 2; LOOH: lipid hydroperoxides; SOD: Superoxide dismutase; MDA: malondialdehyde; CAT: catalase; GST: Glutathione S-transferase; AKT: serine/threonine protein kinase; IL-6: interleukin-6; TNF-α: Tumor necrosis factor-α; NLRP3: nucleotide-binding oligomerization domain-like receptor protein 3; CASP-11: Caspase-11; CASP-1: Caspase-1; GSDMD: Gasdermin D; IL-1β: interleukin-1β; TLR4: Toll-like receptor 4; LPS: lipopolysaccharide; CUMS: chronic unpredictable mild stress; OB: olfactory bulbectomy; CSDS: Chronic social stress defeat; CORT: cortisol; BCCAO: brain ischemia-reperfusion; ATP: adenosine triphosphate; IFN-α: Interferon-α; SLE: Systemic lupus erythematosus. ## 2.2. Active Compounds in Chaihu-Baishao Regulate the Hypothalamic-Pituitary-Adrenal (HPA) Axis The HPA axis is an important part of the neuroendocrine system and helps control the response to stress. When the HPA axis is activated, the paraventricular nucleus of the hypothalamus releases corticotropin-releasing hormone (CRH), which sends a signal to the anterior pituitary gland to secrete adrenocorticotropic hormone (ACTH) into the bloodstream, and in turn, ACTH acts on the adrenal cortex to stimulate the secretion of CORT [39]. 
HPA axis hyperfunction is an important factor in the onset of depression as evidenced by increases in corticotropin-releasing hormone, ACTH, and glucocorticoid, an imbalance in the HPA axis negative feedback, an enlargement of the pituitary and adrenal glands, and the onset of hypercortisolemia that are seen in some depression patients [40].Saikosaponin A, a triterpene saponin extracted from Bupleurum, has anti-inflammatory and antioxidant effects [41]. In the perimenopausal depression model of female rats induced by CUMS, Saikosaponin A can improve the behavioral performance of these rats and reduce CRH mRNA, CRH protein, and serum CORT levels in the rat hypothalamus, while inhibiting the overexpression of hippocampal proinflammatory cytokines induced by CUMS [42]. Li et al. found that Saikosaponin D promotes hippocampal neurogenesis, alleviates the weight loss of rats induced by CUMS, and increases the sucrose consumption of rats and the movement distance in the open field test (OFT). They also found that it reduces the immobility time in the forced swimming test (FST), serum CORT levels, the inhibition of glucocorticoid receptor expression, and nuclear translocation induced by CUMS [43]. Adriamycin is a commonly used antitumor drug that has many adverse reactions. A previous study exploring the effect of quercetin on anxiety and depression-like behavior caused by Adriamycin showed that quercetin can improve the anxiety and depression-like behavior of rats, reduce serum CORT levels, inhibit the hyperactivity of the HPA axis, relieve brain oxidative damage, and regulate rat immune function [44]. Studies have also found that the combination of ferulic acid and levetiracetam improves epilepsy complicated by depression, restores serum CORT levels, and reduces the activity of proinflammatory cytokines and indoleamine 2,3-dioxygenase in the mouse brain [45]. Prenatal stress (PS) can increase the generation of depression, anxiety, attention deficit hyperactivity disorder, and other negative emotions and behaviors in offspring. After giving ferulic acid to rats in a prenatal stress model, Zheng et al. found that ferulic acid can exert an antidepressant effect by inhibiting the HPA axis activity and hippocampal inflammation in offspring rats [46]. Similarly, paeoniflorin can treat prenatal stress-induced depression-like behavior in rat offspring by promoting the nuclear translocation of glucocorticoid receptors, inhibiting the expression of a series of proteins and the formation of complexes (such as SNAP25), and inhibiting stress-induced HPA axis hyperactivity [47]. In FST-induced depression in rats, paeoniflorin can alleviate both the hyperactivity of the HPA axis and oxidative damage, increase plasma and hippocampal monoamine neurotransmitters and brain-derived neurotrophic factor (BDNF) levels, and promote gastrointestinal movement [48]. In a rat model of menopausal depression induced by CUMS combined with ovariectomy, Huang et al. found that paeoniflorin can improve the depression-like behavior of rats, while inhibiting both the overactivity of the HPA axis and the overexpression of the serotonin (5-HT2A) receptor and upregulating the expression of the 5-HT1A receptor in the brain [49]. Protocatechuic acid ethyl ester (PCA) is a phenolic compound with neuroprotective effects. 
It has been reported that acute restraint stress can induce depression-like behavior in mice through neuronal oxidative damage, upregulation of HPA axis activity, and increases in serum CORT levels, all of which were reversed after PCA was administered [50] (Figure 2 and Table 2).Figure 2 Active compounds in Chaihu-Baishao regulate the HPA axis. HPA: hypothalamic-pituitary-adrenal; CORT: cortisol; ACTH: adrenocorticotropic; CRH: corticotropin-releasing.Table 2 Active compounds in Bupleurum chinense DC-Paeonia lactiflora Pall herb pair regulate the HPA axis. Compound Model Animal species Dosage Behaviour involved Mechanism of action/main indicators References Saikosaponin A CUMS Female Wistar rat 25, 50, 100 mg/kg SPT, FST, and NSFT Hypothalamus CRH mRNA, CRH protein↓, hippocampal proinflammatory cytokine↓, and CORT↓ [42] Saikosaponin D CUMS Male SD rat 0.75, 1.5 mg/kg SPT, OFT, and FST CORT↓, GR↑, weight ↑, p-CREB↑, BDNF↑, and promoting the generation of hippocampal neurons [43] Quercetin Adriamycin Male Wistar rat 60 mg/kg FST, OFT, and EPM CORT↓, relieving brain oxidative stress damage, and regulating immune function [44] Ferulic acid Pentylenetetrazole kindling epilepsy Male Swiss albino mice 40, 80 mg/kg TST, SPT CORT↓, TNF-α↓, and IL-1β↓ [45] PS Female, male SD rats 12.5, 25, 50 mg/kg SPT, OFT, and FST Serum ACTH↓, CORT↓, hippocampus GR↑, inhibiting hippocampal inflammation, and improving neuronal damage in hippocampal CA3 area [46] Paeoniflorin PS Female, male SD rats 15, 30, 60 mg/kg SPT, FST, and OFT Serum CRH, ACTH, CORT↓, and relieving neuronal damage in hippocampal CA3 area [47] FST Male SD rat 10 mg/kg FST and OFT Promote gastrointestinal motility, plasma motilin↑, CRH↓, ACTH↓, CORT↓, BDNF↓, norepinephrine↑, and oxidative stress↓ [48] CUMS SD female rats 45 mg/kg OFT and SPT HPA axis activity↓, brain 5-HT2AR↓, and brain 5-HT1AR↑ [49] Protocatechuic acid ARS Male, female, Swiss albino mice 100, 200 mg/kg FST and OFT Serum CORT↓ and hippocampal oxidative stress↓, [50] Note: CUMS: chronic unpredictable mild stress; PS: Prenatal stress; FST: forced swimming test; ARS: Acute restraint stress; SPT: sucrose preference test; NSFT: novelty suppressed feeding test; OFT: open field test; EPM: Elevated Plus Maze; CRH: corticotropin-releasing hormone; GR: Glucocorticoid Receptor; BDNF: brain-derived neurotrophic factor; ACTH: adrenocorticotropic hormone; HPA: hypothalamic-pituitary-adrenal; 5-HT: serotonin; MDA: malondialdehyde; CORT: cortisol. ## 2.3. Active Compounds in Chaihu-Baishao Regulate Monoamine Neurotransmitters Monoamine neurotransmitters are central neurotransmitters that are mainly catecholamines, such as dopamine, norepinephrine, and epinephrine, and indoleamines, such as 5-HT. Dopamine (DA) is an important regulator of learning and motivation [51]. 5-HT and norepinephrine are mainly involved in the regulation of emotional cognition and sleep, and when monoamine neurotransmitters are disordered, they can cause various emotional changes [40]. The classic monoamine hypothesis in depression predicts that the underlying pathophysiological basis of depression is the depletion of monoamine neurotransmitters in the central nervous system. 
The serum levels of monoamine neurotransmitters and their metabolites can be used as important biomarkers for the diagnosis of depression, and drugs that increase the synaptic concentration of monoamines can improve the symptoms of depression [1].

It has been reported that Saikosaponin A can improve CUMS-induced depression-like behaviors in rats, and its antidepressant effect is believed to be related to an increase in dopamine content in the hippocampus and the upregulation of hippocampal proline-rich transmembrane protein 2 (PRRT2) expression [52]. Similarly, in the same animal model of depression, Khan et al. found that quercetin can improve the behavioral performance of depressed mice, increase brain 5-HT levels, alleviate CUMS-induced brain inflammation and oxidative damage, and reduce brain glutamate levels [53]. There is evidence that in restraint stress-induced depression and anxiety models in mice, quercetin can treat anxiety and depression by regulating 5-HT and cholinergic neurotransmission, counteracting oxidative damage, and enhancing memory after restraint stress [54]. Acetylcholinesterase terminates cholinergic neurotransmission [55]. Arsenic is an element that occurs naturally in food, soil, and water; exposure to arsenic can cause memory disorders, anxiety, depression, and other neurological perturbations and diseases. Samad et al. found that gallic acid can reverse the excessive increase in acetylcholinesterase activity induced by arsenic, alleviate the brain oxidative stress damage caused by arsenic, and protect memory function [56]. Additionally, gallic acid can also exert an antidepressant effect by increasing 5-HT and catecholamine levels in synaptic clefts in the central nervous system [57]. There are also reports showing that in olfactory bulbectomy-induced animal depression models, PCA can shorten the immobility time of rats in the FST, increase the distance explored in the OFT, increase hippocampal monoamine neurotransmitter (5-HT, DA, and norepinephrine) and BDNF levels, reduce hippocampal CORT levels, and alleviate hippocampal neuroinflammation and oxidative damage [58]. It has been reported that Chaihu-Baishao can effectively improve postoperative depression and diarrhea in patients with colorectal cancer by regulating the imbalance of monoamine neurotransmitters (DA and 5-HT) [59] (Figure 3 and Table 3).

Figure 3 Active compounds in Chaihu-Baishao regulate monoamine neurotransmitters. 5-HT: serotonin; DA: dopamine; NE: norepinephrine.

Table 3 Active compounds in Bupleurum chinense DC-Paeonia lactiflora Pall herb pair regulate monoamine neurotransmitters.
| Compound | Model | Animal species | Dosage | Behavioural tests | Mechanism of action/main indicators | References |
| --- | --- | --- | --- | --- | --- | --- |
| Saikosaponin A | CUMS | Male SD rats | 50 mg/kg | OFT, SPT | Weight↑, hippocampal DA↑, hippocampal PRRT2↑ | [52] |
| Quercetin | CUMS | Male Swiss albino mice | 25 mg/kg | MFST, TST, OFT | Brain 5-HT↑; brain glutamate, IL-6, TNF-α↓; relieves brain oxidative stress | [53] |
| Quercetin | Restraint stress | Male albino Wistar mice | 20 mg/kg/ml | FST, LDA, EPM, MWM | Prevents oxidative damage, regulates 5-HT and cholinergic neurotransmission | [54] |
| Gallic acid | iAS | Male SD rats | 50, 100 mg/kg | EPM, LDA, FST, MWM | Relieves brain oxidative stress, brain AChE↓ | [56] |
| Protocatechuic acid | OB | Male Wistar rats | 100, 200 mg/kg | FST, OFT | Hippocampal 5-HT, norepinephrine, DA↑; hippocampal BDNF↑; hippocampal TNF-α, IL-6, CORT, and oxidative damage↓ | [58] |

Note: LDA: light-dark activity box; iAS: arsenic; OB: olfactory bulbectomy; MWM: Morris water maze; FST: forced swimming test; EPM: elevated plus maze; OFT: open field test; SPT: sucrose preference test; PRRT2: proline-rich transmembrane protein 2; 5-HT: serotonin; CORT: cortisol; IL-6: interleukin-6; TNF-α: tumor necrosis factor-α; AChE: acetylcholinesterase; DA: dopamine; BDNF: brain-derived neurotrophic factor; MFST: modified forced swim test; TST: tail suspension test; CUMS: chronic unpredictable mild stress.

## 2.4. Active Compounds in Chaihu-Baishao Promote Hippocampal Neurogenesis and Regulate BDNF Levels

BDNF is a widely studied growth factor that plays an important role in neuronal maturation, synapse formation, and synaptic plasticity in the brain [60]. The "neurotrophic theory" posits that neurons lose access to trophic factors as the expression level of BDNF decreases, leading to neuronal atrophy, decreased synaptic plasticity, and the onset of depression [61]. The optimization of BDNF levels supports synaptic plasticity and remodeling, improves neuronal damage, and relieves depression [62]. There is evidence that increasing the expression of BDNF in hippocampal astrocytes can treat depression and anxiety and stimulate hippocampal neurogenesis [63]. Adult hippocampal neurogenesis involves the proliferation, survival, differentiation, and integration of newborn neurons into preexisting neuronal networks [64]. There is evidence that hippocampal volume, the number of granule cells in the anterior and middle dentate gyrus, and the volume of the granule cell layer decrease in patients with depression [65], whereas increasing adult hippocampal neurogenesis can improve depression and anxiety [66].

Depression is also one of the common complications after stroke, though its pathogenesis has not been fully elucidated, and there is a lack of clinically effective treatments for stroke-induced depression. The cAMP response element-binding protein (CREB) regulates gene transcription and acts as a chief transcription factor for genes that encode proteins, such as BDNF, that are involved in synaptic plasticity and memory [67]. Studies have found that Saikosaponin A can significantly improve the depression-like behavior of PSD model rats, inhibit hippocampal neuronal apoptosis, and increase the levels of phosphorylated CREB and BDNF in the hippocampus [68]. Neuronal cell death occurs extensively during development and pathology, where it is especially consequential because of the limited capacity of adult neurons to proliferate or be replaced [69].
There are at least three known types of neuronal death, namely apoptosis, autophagy, and necrosis, and there is evidence that apoptosis is closely related to bipolar disorder (including depression) [70]. Inhibiting neuronal apoptosis can improve depression-like behavior [71, 72]. In LPS-induced depression in mice, Saikosaponin D can inhibit hippocampal neuronal apoptosis and inflammation through the lysophosphatidic acid receptor 1 (LPA1)/Ras homolog family member A (RhoA)/Rho-associated protein kinase 2 (ROCK2) pathway [73]. Rutin is a flavonoid that is abundantly present in a variety of fruits and vegetables; it functions in cardiovascular protection and as an antiviral agent. In a recent study, rutin was shown to improve CUMS-induced weight loss in mice and protect mouse hippocampal neurons [74]. Activation of tyrosine kinase B (TrkB) promotes neuronal survival, differentiation, and synapse formation [75].

Estrogen receptor alpha (ERα) is the main receptor mediating estrogen signaling and plays an important role in preventing depression and cardiovascular diseases. It has been reported that knocking out ERα induces depression in mice and reduces hippocampal BDNF and the phosphorylation of its downstream targets TrkB, AKT, and extracellular signal-regulated kinase 1/2 (ERK1/2). Quercetin can reverse these changes, alleviate cell apoptosis, and reverse the depression-like symptoms induced by ERα knockout [76]. Similarly, quercetin can also exert an antidepressant effect by regulating the levels of BDNF in the hippocampus and prefrontal cortex, as well as the levels of the related protein Copine 6 and the triggering receptors expressed on myeloid cells (TREM)-1 and TREM-2 [77]. Quercetin combined with exercise therapy can increase the expression of BDNF protein in 1,2-dimethylhydrazine-induced depression model rats with colorectal cancer and can act on the TrkB/β-catenin axis to treat depression [78]. Liu et al. found that in the CUMS-induced mouse depression model, ferulic acid can significantly improve behavioral performance in both the sucrose preference test and the FST, upregulate BDNF and synapsin I levels in the prefrontal cortex and hippocampus, and increase hippocampal PSD95 protein expression [79]. A recent report showed that paeoniflorin can improve the depression-like behavior of PSD model mice and increase the expression of BDNF and phosphorylated CREB in the CA1 region of the hippocampus [80]. Zhong et al. found that paeoniflorin can protect neurons by upregulating the activity of the ERK-CREB signaling pathway and thereby treat CUMS-induced depression-like behavior in rats [81]. The alteration of synaptic plasticity, and specifically of hippocampal long-term potentiation (LTP), also plays a role in the onset of depression [82]. There is evidence that hippocampal LTP is impaired in CUMS-induced animal depression models, and Liu et al. found that administration of paeoniflorin can alleviate LTP injury in the hippocampal CA1 area and increase both the density of hippocampal dendritic spines and the expression levels of BDNF and PSD95 [83]. There are also reports that paeoniflorin can enhance the expression and gene transcription of BDNF, activate the expression of TrkB, and promote the proliferation, differentiation into astrocytes, and neurogenesis of neural stem cells in the rat hippocampal dentate gyrus [84] (Figure 4 and Table 4).

Figure 4 Active compounds in Chaihu-Baishao promote hippocampal neurogenesis and regulate BDNF levels.
BDNF: brain-derived neurotrophic factor; AKT: serine/threonine protein kinase; TrkB: tyrosine kinase B; CREB: cAMP response element-binding protein.

Table 4 Active compounds in Bupleurum chinense DC-Paeonia lactiflora Pall herb pair promote hippocampal neurogenesis and regulate BDNF levels.

| Compound | Model | Animal species | Dosage | Behavioural tests | Mechanism of action/main indicators | References |
| --- | --- | --- | --- | --- | --- | --- |
| Saikosaponin A | MCAO+CUMS+isolation | Male SD rats | 5 mg/kg | SPT, OFT, BWT, FST | Weight↑; hippocampal BDNF, p-CREB↑; inhibits neuronal apoptosis; hippocampal Bax, caspase-3↓ | [68] |
| Saikosaponin D | LPS | Male ICR mice | 0.5, 1 mg/kg | TST, OFT | Proinflammatory cytokines↓, inhibits the LPA1/RhoA/ROCK2 signaling pathway, relieves apoptosis of hippocampal neurons | [73] |
| Rutin | CUMS | Adult Swiss albino mice | 100 mg/kg | OFT, SPT, EPM, NOR, BWT | Relieves hippocampal damage in CA3 area | [74] |
| Quercetin | ERα knockout | Female C57BL/6 and ERα-KO mice | 100 mg/kg | OFT, TST, FST | Number of hippocampal neurons↑; relieves hippocampal cell apoptosis; hippocampal BDNF, p-TrkB, p-AKT, p-ERK1/2↑ | [76] |
| Quercetin | DMH | Male Wistar rats | 50 mg/kg | OFT, FST | Proinflammatory cytokines↓, BDNF↑, number of neurons↑, TrkB↑, β-catenin↑, incidence of rectal cancer↓ | [78] |
| Ferulic acid | CUMS | Male ICR mice | 20, 40 mg/kg | SPT, FST | PFC and hippocampal BDNF, synapsin I↑; hippocampal PSD95↑ | [79] |
| Paeoniflorin | MCAO+CUMS | Male SD rats | 5 mg/kg | BBT, SPT, OFT | Weight↑; hippocampal CA1 area BDNF, p-CREB↑ | [80] |
| Paeoniflorin | CUMS | Male SD rats | 30, 60 mg/kg | SPT, FST, LAT | Number of neurons in hippocampal CA3 area↑, upregulates the ERK-CREB signaling pathway | [81] |
| Paeoniflorin | CUMS | Male C57BL/6 wild-type mice | 20 mg/kg | SPT, FST, TST | Relieves hippocampal CA1 LTP damage; hippocampal dendritic spine density, BDNF, PSD95↑ | [83] |
| Paeoniflorin | CUMS | Male SD rats | 60 mg/kg | SPT | Promotes neurogenesis in the hippocampal dentate gyrus | [84] |

Note: CUMS: chronic unpredictable mild stress; MCAO: middle cerebral artery occlusion; SPT: sucrose preference test; OFT: open field test; FST: forced swimming test; BWT: beam walking test; BDNF: brain-derived neurotrophic factor; CREB: cAMP response element-binding protein; LPS: lipopolysaccharide; LPA1: lysophosphatidic acid receptor 1; RhoA: Ras homolog family member A; ROCK2: Rho-associated protein kinase 2; NOR: novel object recognition test; EPM: elevated plus maze; TST: tail suspension test; TrkB: tyrosine kinase B; AKT: serine/threonine protein kinase; ERK: extracellular signal-regulated kinase; ERα: estrogen receptor alpha; DMH: 1,2-dimethylhydrazine; BBT: beam balance test; LAT: locomotor activity test; LTP: long-term potentiation; PFC: prefrontal cortex; PSD95: postsynaptic density protein 95.

## 3. Discussion

Depression is a common neuropsychiatric disorder. Its symptoms and signs include lack of motivation, inability to feel pleasure, social withdrawal, cognitive difficulties, and changes in appetite. It belongs to the category of "mood disorder" in Chinese medicine [5]. In TCM theory, Chaihu can soothe the liver, and Baishao (white peony root) can restrain qi; in antidepressant Chinese medicine compounds such as Xiaoyao Powder and Sini Powder, Chaihu and Baishao are often paired in a "monarch and minister" relationship [8]. In this review, we present the results of studies showing that the active ingredients in Chaihu and Baishao can function as antidepressants through mechanisms that include the reduction of inflammation and oxidative stress, neuronal protection, and the regulation of the HPA axis, neurotransmitters, and BDNF levels.
However, the current research in these areas has been conducted only at the cellular or animal level, and there have not been enough clinical trials to investigate whether these compounds can effectively improve the symptoms of depressed patients. Furthermore, some active ingredients in Chaihu and Baishao, such as paeoniflorin, cross the blood-brain barrier poorly, which limits their efficacy. In recent years, the two-way signaling pathway between the brain and the gut microbiota has received much attention, and there is evidence that dysfunction of the gut-brain axis plays an important role in the pathogenesis of depression [85]. However, there is still insufficient research showing that the Chaihu-Baishao herb pair can treat depression by regulating the brain-gut axis. At the same time, most of the current research is limited to a single target or a single signaling pathway, and the interactions between targets and pathways lack in-depth exploration. Clinically, Western medicine remains the mainstay of treatment for depression, but it can cause serious adverse reactions. TCM has significant potential advantages in the treatment of depression that are worthy of clinical investigation. However, the current syndrome differentiation and classification of depression is complex and variable, the standards are not uniform, and a complete system for diagnosis, treatment, and efficacy evaluation has not yet been established. There are few case analyses in clinical research, and their quality is uneven.

In future research, therefore, a large number of randomized double-blind controlled trials conducted in accordance with standardized syndrome differentiation are needed to investigate whether Chaihu-Baishao can effectively improve depression. To further improve the efficacy of TCM, it will be important to strengthen research on targeted delivery systems, prolong the duration of action, and increase drug concentrations in the central nervous system. Meanwhile, multiomics technologies can help explore the connections between targets or signaling pathways and the overall mechanism of drug intervention, which could enrich our understanding of the mechanisms of action of TCM and provide guidance for industrial translation and clinical application.
--- ## Abstract Depression is a serious psychological disorder with a rapidly increasing incidence in recent years. Clinically, selective serotonin reuptake inhibitors are the main therapy. These drugs, have serious adverse reactions, however. Traditional Chinese medicine has the characteristics of multiple components, targets, and pathways, which has huge potential advantages for the treatment of depression. The antidepressant potential of the herbal combination of Bupleurum chinense DC (Chaihu) and Paeonia lactiflora Pall (Baishao) has been extensively studied previously. In this review, we summarized the antidepressant active components and mechanism of Chaihu-Baishao herb pair. We found that it works mainly through relieving oxidative stress, regulating HPA axis, and protecting neurons. Nevertheless, current research of this combined preparation still faces many challenges. On one hand, most of the current studies only stay at the level of animal models, lacking of sufficient clinical double-blind controlled trials for further verification. In addition, studies on the synergistic effect between different targets and signaling pathways are scarce. On the other hand, this preparation has numerous defects such as poor stability, low solubility, and difficulty in crossing the blood-brain barrier. --- ## Body ## 1. Introduction Major depressive disorder, also known as depression, is a chronic, recurrent, and potentially life-threatening severe mental disorder [1]. According to the World Health Organization, depression is the main cause of disability in the world; more than 350 million people worldwide are suffering from depression, which increases the risk of death at any age, reduces the quality of life of depressed patients, and creates a burden on families and society. Clinically, Western medicine treatments such as the use of selective serotonin reuptake inhibitors are the main treatment for depression, but most of these compounds have issues that include delayed effects, high nonresponse rates, nausea, headaches, chronic sexual dysfunction, and weight gain [1–3].Therefore, it is important to develop more beneficial drugs for the treatment of depression. Traditional Chinese medicine (TCM) has great potential in the treatment of depression because of its multiple components that can act on multiple targets and pathways [4]. Some Chinese medicine formulas have significant effects on the treatment of depression with low toxicity and little to no side effects [5]. Therefore, in order to avoid the side effects and adverse reactions caused by Western medical treatments, scientific researchers have turned to TCM and its active ingredients as a way to treat depression [1].Bupleurum chinense DC (Chaihu) is made from the dried roots of the Bupleurum plant. It is an herbal medicine that regulates and relieves liver qi. Modern research has found that Bupleurum chinense and its active ingredients have immunomodulatory, antiviral, hepatoprotective, antipyretic, and other pharmacological effects [6]. Paeonia lactiflora Pall (Baishao) is a traditional herb with a bitter, sour, and slightly cold taste. It can be used alone or as part of a drug combination. Modern pharmacological research shows that Baishao has anti-inflammatory, antiviral, antioxidant, and immunoregulatory properties as well as other functions [7]. The Chaihu-Baishao herbal combination is used in many antidepressant TCM compounds, such as Xiaoyao San, Chaihu Shugan San, and Sini San [8]. 
In this review, the research on the treatment of depression with the main active ingredients of the Chaihu-Baishao herb pair is summarized to provide a reference for future basic research and clinical applications. ## 2. Antidepressant mMchanisms of Chaihu-Baishao ### 2.1. Active Compounds in Chaihu-Baishao Inhibit Inflammation and Relieve Oxidative Stress There is a large amount of evidence supporting the link between depression and inflammatory processes. It has been shown that inflammation increases an individual’s susceptibility to depression. There is an increase in proinflammatory markers in patients with depression, and the use of proinflammatory drugs can increase the risk of depression [9]. Studies have found that oxidative stress may play an important role in the pathophysiology of depression. Patients with depression have elevated levels of malondialdehyde and the antioxidant enzymes, superoxide dismutase, and catalase, along with decreased levels of glutathione peroxidase [10]. Some scholars have found that 8-hydroxydeoxyglucose and F2 isoprostaglandin are found in patients with depression through meta-analysis. It is also believed that depression is accompanied by increased oxidative damage [11]. There is much evidence that inflammation-related diseases can be treated with various traditional medicines containing a variety of active natural compounds [12]. When antidepressants are used, the levels of peripheral inflammatory cytokines in patients with depression are reduced [13]. Therefore, inhibiting inflammation and relieving oxidative stress are important for the treatment of depression. Several of the previously studied active compounds of traditional medicines are reviewed below.Saikosaponin D is a triterpene saponin compound extracted from Chaihu and has various pharmacological effects such as counteracting inflammation [14] and oxidative stress [15]. Tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), and interleukin-6 (IL-6) are proinflammatory cytokines that regulate oxidative stress, apoptosis, and metabolism and cause damage to the branching process of neurons, which will in turn affect neuronal function [1]. Microglia’s are the innate immune cells of the central nervous system and play a vital role in the process of neuroinflammation; they are found in abundance in the cerebrospinal fluid of depressed patients. A series of proinflammatory factors released by activated microglia were detected. In a lipopolysaccharide (LPS)-induced depression model, Su et al. gave mice an intraperitoneal injection of Saikosaponin D and found that it can improve the depression-like behavior of the mice and inhibit both the overexpression of the proinflammatory cytokines, TNF-α, IL-6, and 1 L-1β, and the activation of microglia induced by LPS in the mouse hippocampus [16]. The observed anti-inflammatory effect was also correlated with the inhibition of the high mobility group protein 1/Toll-like receptor 4 (TLR4)/nuclear transcription factor-κB (NF-κB) signaling pathway. The NF-κB pathway is a typical inflammatory pathway, which can regulate the production of proinflammatory cytokines, leukocyte recruitment, or cell survival and cause the body to produce an inflammatory response [17]. It has been reported that Saponin D can downregulate the expression of NF-κB in rat hippocampal neurons and improve the depression-like behavior induced by chronic unpredictable mild stress (CUMS) by downregulating Mir-155 expression and upregulating fibroblast growth factor 2 (FGF2) expression [18]. 
Quercetin is a flavonoid compound widely found in fruits and vegetables that has anti-inflammatory, antioxidant, antiviral, and anticancer effects [19]. It has been reported that quercetin can reverse the increase in lipid hydroperoxide content, induced by olfactory bulbectomy, in the hippocampus and improve the depression-like behavior of mice via a mechanism that is correlated with the enhancement of N-methyl-D-aspartic acid receptor expression [20].Kaempferol is the main component of a variety of fruits and vegetables. Using a mouse depression model based on chronic social defeat stress, Gao et al. found that after intraperitoneal injection of kaempferol, the inflammatory response and oxidative stress in the prefrontal cortex of these mice was alleviated [21]. Kaempferol was also found to increase the activity of P-serine/threonine protein kinase (AKT) and β-catenin, but after using PI3-K inhibitors, the overall protective effect mediated by kaempferol was partially inhibited, indicating that kaempferol can enhance the antioxidant capacity and anti-inflammatory effects by enhancing the activity of the AKT/β-catenin cascade, thereby treating depression [21]. Additionally, caffeic acid, a catechol compound, is widely distributed in fruits, tea, and wine. In the LPS-induced depression model in mice, caffeic acid was found to reverse both the reduction of brain glutathione levels and the increase of malondialdehyde and proinflammatory cytokines [22]. Ferulic acid is a phenolic compound widely found in a variety of herbal medicines. There is evidence that in mouse models of cortisol (CORT)-induced depression, ferulic acid can improve the behavioral performance of depressed mice and simultaneously reduce malondialdehyde, nitrite, and protein carbonylation levels in the brain and increase nonprotein sulfhydryl levels [23]. Liu et al. found that administration of ferulic acid can reverse the CUMS-induced upregulation of the proinflammatory cytokines, IL-1β, IL-6, and TNF-α, in the prefrontal cortex of mice and the activation of microglia, NF-κB signaling, and the nucleotide-binding oligomerization domain-like receptor protein 3 (NLRP3) inflammasome [24]. Gallic acid is a secondary plant metabolite, which is commonly found in many plant-based foods and beverages and has antioxidant activity. In a mouse model of poststroke depression (PSD), Nabavi et al. found that oxidative stress is closely related to the pathological process of stroke and PSD, and gallic acid can exert an antidepressant effect by inhibiting oxidative stress [25]. Paeoniflorin is one of the main biologically active ingredients extracted from the root of Paeonia lactiflora, which has antioxidation, anti-inflammatory, and other pharmacological effects [26]. Gasdermin D (GSDMD) is a member of the Gasdermin conserved protein family. When cell pyrolysis occurs, Caspase-1 is activated, which directly leads to GSDMD cleavage. GSDMD is cleaved out of the C-terminal and N-terminal domains that binds to the phospholipid protein on the cell membrane, resulting in the formation of cell membrane holes that cause cell rupture, cell content outflow, and a massive release of inflammatory factors. GSDMD can also activate Caspase-1 activation induced by NLRP3 [27]. Reports have shown that paeoniflorin can inhibit the expression of GSDMD, caspase-11, Caspase-1, NLRP3, IL-1β, and other proteins involved in pyroptosis signal transduction in microglia, as well as reduce inflammation and relieve symptoms of depression [28]. 
FGF2 is a neurotrophic and anti-inflammatory factor involved in regulating the proliferation, differentiation, and apoptosis of neurons in the brain. In a mouse depression model induced by LPS, Cheng et al. found that paeoniflorin can inhibit LPS-induced TLR4/NF-κB/NLRP3 signaling in the hippocampus of mice, reduce the level of proinflammatory cytokines and microglia activation, and at the same time, increase neuronal FGF2 levels and dendritic spine density [29]. Neuropathic pain is a clinical problem that causes comorbid pain and depression. In a neuropathic pain mouse model, Bai et al. found that paeoniflorin can significantly improve the hyperalgesia and depression-like behavior of mice, reduce the level of proinflammatory cytokines, inhibit the excessive activation of microglia, and reduce the pathological damage of hippocampal cells in a way that is similar to the inhibition of the expression of TLR4/NF-κB pathway related proteins [30]. Interferon-α (IFN-α) is a pleiotropic cytokine with antiviral and antiproliferative effects. It is widely used in cancer. However, studies have found that about 30%–50% of patients have symptoms of depression after receiving IFN-α treatment. Li et al. found that administration of paeoniflorin can improve the IFN-α-induced depression-like behavior in mice, while reducing inflammation levels in the serum, medial prefrontal cortex, ventral hippocampus, and amygdala [31]. Systemic lupus erythematosus is a chronic inflammatory autoimmune disease, and depression is one of its common complications. It was found that administration of paeoniflorin can inhibit the activity of the high-mobility group box 1 protein/TLR4/NF-κB pathway and alleviate the level of inflammation in the serum and hippocampus, thereby treating lupus-induced depression [32]. Through component identification, Li found that Chaihu-Baishao mainly contain saikosaponin A, saikosaponin D, saikosaponin C, saikosaponin B2, paeoniflorin, albiflorin, and oxypaeoniflorin, and then combined with network pharmacology and metabolomics. Li found that Chaihu-Baishao plays an antidepressant effect by regulating arachidonic acid metabolism, the expression of Prostaglandin G/H synthase 1(PTGS1) and Prostaglandin G/H synthase 2(PTGS2) targets [8]. He et al. analyzed the changes of chemical constituents before and after the combination of Chaihu-Baishao by UPLC-MS background subtraction and metabolomics, founding that saikosaponin A, 3′-O-acetylation of saikosaponin A, and 4′-O-acetylation of saikosaponin A decreased after compatibility with Chaihu, and paeoniflorin, gallic acidpaeoniflorin or their isomers increased [33]. Sini Powder is a commonly used traditional Chinese medicine. In the prescription, Chaihu plays a major role, and Baishao plays an auxiliary role. It has the effect of soothing the liver and regulating the spleen [34]. It has been reported that Sini Powder can improve depression-like behavior and neuronal pathological damage in the hippocampus in CUMS rats, and its mechanism may be related to the inflammatory response mediated by the NLRP3 inflammasome signaling pathway [35]. In clinical studies, Sini Powder can effectively improve depression in cerebral infarction, type 2 diabetes and functional dyspepsia, and improve clinical efficacy [36–38] (Figure 1 and Table 1).Figure 1 Active compounds in Chaihu-Baishao inhibit inflammation and relieve oxidative stress. 
TLR4: Toll-like receptor 4; NLRP3: nucleotide-binding oligomerization domain-like receptor protein 3; NF-κB: nuclear transcription factor-κB; TNF-α: Tumor necrosis factor-α; IL-6: interleukin-6; IL-1β: interleukin-1β.Table 1 Active compounds in Bupleurum chinense DC-Paeonia lactiflora Pall herb pair inhibit inflammation and relieve oxidative stress. Compound Model Animal species Dosage Behaviour involved Mechanism of action/main indicators References Saikosaponin D LPS Male ICR mice 1 mg/kg OFT, TST, FST, and SPT Inhibition of microglia activation, the activity of TLR4/NF-κB pathway HMGB1 translocation, and proinflammatory cytokines ↓ [16] Quercetin CUMS Male SD rat 0.75, 1.5 mg/kg OFT, TST, and FST, SPT NF-κB↓, FGF2↑, and miR-155↓ [18] OB Female SWISS mice 10, 25, 50 mg/kg FST, TST, OFT, and ST LOOH↓ [20] Kaempferol CSDS Male CD1 and C57 mice 10, 20 mg/kg SPT, SIT, and TST SOD↑, MDA↓, CAT↑, GPx↑, GST↑, P-AKT↑, andβ-catenin↑ [21] Caffeic acid LPS Male Swiss albino mice 30 mg/kg OFT, FST, and TST IL-6↓, MDA↓, GSH↑, and TNF-α↓ [22] Ferulic acid CORT Male Swiss mice 1 mg/kg TST, OFT, and ST MDA↓, nitrite↓, and protein carbonylation↓ [23] CUMS Male ICR mice 20,40, 80 mg/kg SPT and TST IL-1β,I L-6, TNF-α mRNA↓,NF-κB↓P-NF-κB/NF-κB↓, and inhibition of PFC microglia activation [24] Gallic acid BCCAO Male balb/c mice 25, 50 mg/kg TST and SPT Antioxidant stress [25] Paeoniflorin RESP/LSP/ATP Male C57BL/6Mice 10, 20, 40 mg/kg OFT, TST, and FST Dendritic spines in hippocampal CA1 region↑, NLRP3↓, CASP-11↓, CASP-1↓, GSDMD↓, IL-1β, and microglia activation ↓ [28] LPS Male ICR mice 20, 40, 80 mg/kg SPT and FST FGF2↑, IL-6↓, TNF-α↓, TLR4↓, P-NF-κB↓, NLRP3↓, Cox-2↓, and dendritic spine density in CA3 area of hippocampus ↑ [29] Cuff SPF male Balb/c mice 50, 100 mg/kg SPT, FST, and TST Inflammation in hippocampal CA3 area ↓, IL-6↓, TNF-α↓, IL-1↓, microglia activation ↓, and TLR4/NF-κB pathway ↓ [30] IFN-α Male C57BL/6Jmice 10, 20, 40 mg/kg SPT, OFT, TST, and FST Serum, mPFC, vHi, and amygdala inflammation levels↓ [31] SLE Wild type mice and MRL/MpJ-Faslpr/2 J (MRL/lpr) mice 20 mg/kg SPT, TST, and FST Activity of HMGB1/TLR4/NF-κB pathway↓, serum, and hippocampus contents of TNF-α, IL-1β, and IL-6↓ [32] Notes: OFT: open field test; TST: tail suspension test; FST: forced swimming test; SPT: sucrose preference test; TLR4: Toll-like receptor 4; NF-κB: nuclear transcription factor-κB; HMGB1: High mobility group box 1; FGF2: fibroblast growth factor 2; LOOH: lipid hydroperoxides; SOD: Superoxide dismutase; MDA: malondialdehyde; CAT: catalase; GST: Glutathione S-transferase; AKT: serine/threonine protein kinase; IL-6: interleukin-6; TNF-α: Tumor necrosis factor-α; NLRP3: nucleotide-binding oligomerization domain-like receptor protein 3; CASP-11: Caspase-11; CASP-1: Caspase-1; GSDMD: Gasdermin D; IL-1β: interleukin-1β; TLR4: Toll-like receptor 4; LPS: lipopolysaccharide; CUMS: chronic unpredictable mild stress; OB: olfactory bulbectomy; CSDS: Chronic social stress defeat; CORT: cortisol; BCCAO: brain ischemia-reperfusion; ATP: adenosine triphosphate; IFN-α: Interferon-α; SLE: Systemic lupus erythematosus. ### 2.2. Active Compounds in Chaihu-Baishao Regulate the Hypothalamic-Pituitary-Adrenal (HPA) Axis The HPA axis is an important part of the neuroendocrine system and helps control the response to stress. 
When the HPA axis is activated, the paraventricular nucleus of the hypothalamus releases corticotropin-releasing hormone (CRH), which sends a signal to the anterior pituitary gland to secrete adrenocorticotropic hormone (ACTH) into the bloodstream, and in turn, ACTH acts on the adrenal cortex to stimulate the secretion of CORT [39]. HPA axis hyperfunction is an important factor in the onset of depression as evidenced by increases in corticotropin-releasing hormone, ACTH, and glucocorticoid, an imbalance in the HPA axis negative feedback, an enlargement of the pituitary and adrenal glands, and the onset of hypercortisolemia that are seen in some depression patients [40].Saikosaponin A, a triterpene saponin extracted from Bupleurum, has anti-inflammatory and antioxidant effects [41]. In the perimenopausal depression model of female rats induced by CUMS, Saikosaponin A can improve the behavioral performance of these rats and reduce CRH mRNA, CRH protein, and serum CORT levels in the rat hypothalamus, while inhibiting the overexpression of hippocampal proinflammatory cytokines induced by CUMS [42]. Li et al. found that Saikosaponin D promotes hippocampal neurogenesis, alleviates the weight loss of rats induced by CUMS, and increases the sucrose consumption of rats and the movement distance in the open field test (OFT). They also found that it reduces the immobility time in the forced swimming test (FST), serum CORT levels, the inhibition of glucocorticoid receptor expression, and nuclear translocation induced by CUMS [43]. Adriamycin is a commonly used antitumor drug that has many adverse reactions. A previous study exploring the effect of quercetin on anxiety and depression-like behavior caused by Adriamycin showed that quercetin can improve the anxiety and depression-like behavior of rats, reduce serum CORT levels, inhibit the hyperactivity of the HPA axis, relieve brain oxidative damage, and regulate rat immune function [44]. Studies have also found that the combination of ferulic acid and levetiracetam improves epilepsy complicated by depression, restores serum CORT levels, and reduces the activity of proinflammatory cytokines and indoleamine 2,3-dioxygenase in the mouse brain [45]. Prenatal stress (PS) can increase the generation of depression, anxiety, attention deficit hyperactivity disorder, and other negative emotions and behaviors in offspring. After giving ferulic acid to rats in a prenatal stress model, Zheng et al. found that ferulic acid can exert an antidepressant effect by inhibiting the HPA axis activity and hippocampal inflammation in offspring rats [46]. Similarly, paeoniflorin can treat prenatal stress-induced depression-like behavior in rat offspring by promoting the nuclear translocation of glucocorticoid receptors, inhibiting the expression of a series of proteins and the formation of complexes (such as SNAP25), and inhibiting stress-induced HPA axis hyperactivity [47]. In FST-induced depression in rats, paeoniflorin can alleviate both the hyperactivity of the HPA axis and oxidative damage, increase plasma and hippocampal monoamine neurotransmitters and brain-derived neurotrophic factor (BDNF) levels, and promote gastrointestinal movement [48]. In a rat model of menopausal depression induced by CUMS combined with ovariectomy, Huang et al. 
found that paeoniflorin can improve the depression-like behavior of rats, while inhibiting both the overactivity of the HPA axis and the overexpression of the serotonin (5-HT2A) receptor and upregulating the expression of the 5-HT1A receptor in the brain [49]. Protocatechuic acid ethyl ester (PCA) is a phenolic compound with neuroprotective effects. It has been reported that acute restraint stress can induce depression-like behavior in mice through neuronal oxidative damage, upregulation of HPA axis activity, and increases in serum CORT levels, all of which were reversed after PCA was administered [50] (Figure 2 and Table 2).Figure 2 Active compounds in Chaihu-Baishao regulate the HPA axis. HPA: hypothalamic-pituitary-adrenal; CORT: cortisol; ACTH: adrenocorticotropic; CRH: corticotropin-releasing.Table 2 Active compounds in Bupleurum chinense DC-Paeonia lactiflora Pall herb pair regulate the HPA axis. Compound Model Animal species Dosage Behaviour involved Mechanism of action/main indicators References Saikosaponin A CUMS Female Wistar rat 25, 50, 100 mg/kg SPT, FST, and NSFT Hypothalamus CRH mRNA, CRH protein↓, hippocampal proinflammatory cytokine↓, and CORT↓ [42] Saikosaponin D CUMS Male SD rat 0.75, 1.5 mg/kg SPT, OFT, and FST CORT↓, GR↑, weight ↑, p-CREB↑, BDNF↑, and promoting the generation of hippocampal neurons [43] Quercetin Adriamycin Male Wistar rat 60 mg/kg FST, OFT, and EPM CORT↓, relieving brain oxidative stress damage, and regulating immune function [44] Ferulic acid Pentylenetetrazole kindling epilepsy Male Swiss albino mice 40, 80 mg/kg TST, SPT CORT↓, TNF-α↓, and IL-1β↓ [45] PS Female, male SD rats 12.5, 25, 50 mg/kg SPT, OFT, and FST Serum ACTH↓, CORT↓, hippocampus GR↑, inhibiting hippocampal inflammation, and improving neuronal damage in hippocampal CA3 area [46] Paeoniflorin PS Female, male SD rats 15, 30, 60 mg/kg SPT, FST, and OFT Serum CRH, ACTH, CORT↓, and relieving neuronal damage in hippocampal CA3 area [47] FST Male SD rat 10 mg/kg FST and OFT Promote gastrointestinal motility, plasma motilin↑, CRH↓, ACTH↓, CORT↓, BDNF↓, norepinephrine↑, and oxidative stress↓ [48] CUMS SD female rats 45 mg/kg OFT and SPT HPA axis activity↓, brain 5-HT2AR↓, and brain 5-HT1AR↑ [49] Protocatechuic acid ARS Male, female, Swiss albino mice 100, 200 mg/kg FST and OFT Serum CORT↓ and hippocampal oxidative stress↓, [50] Note: CUMS: chronic unpredictable mild stress; PS: Prenatal stress; FST: forced swimming test; ARS: Acute restraint stress; SPT: sucrose preference test; NSFT: novelty suppressed feeding test; OFT: open field test; EPM: Elevated Plus Maze; CRH: corticotropin-releasing hormone; GR: Glucocorticoid Receptor; BDNF: brain-derived neurotrophic factor; ACTH: adrenocorticotropic hormone; HPA: hypothalamic-pituitary-adrenal; 5-HT: serotonin; MDA: malondialdehyde; CORT: cortisol. ### 2.3. Active Compounds in Chaihu-Baishao Regulate Monoamine Neurotransmitters Monoamine neurotransmitters are central neurotransmitters that are mainly catecholamines, such as dopamine, norepinephrine, and epinephrine, and indoleamines, such as 5-HT. Dopamine (DA) is an important regulator of learning and motivation [51]. 5-HT and norepinephrine are mainly involved in the regulation of emotional cognition and sleep, and when monoamine neurotransmitters are disordered, they can cause various emotional changes [40]. 
The classic monoamine hypothesis in depression predicts that the underlying pathophysiological basis of depression is the depletion of monoamine neurotransmitters in the central nervous system. The serum levels of monoamine neurotransmitters and their metabolites can be used as important biomarkers for the diagnosis of depression, and drugs that increase the synaptic concentration of monoamines can improve the symptoms of depression [1].It has been reported that Saikosaponin A can improve CUMS-induced depression-like behaviors in rats, and its antidepressant effect is believed to be related to the increase of dopamine content in the hippocampus of rats and the upregulation of hippocampal Proline-rich transmembrane protein 2 expression [52]. Similarly, in the same animal model of depression, Khan et al. found that quercetin can improve the behavioral performance of depressed mice, increase brain 5-HT levels, alleviate CUMS-induced brain inflammation and oxidative damage, and reduce brain glutamate levels [53]. There is evidence that in restraint stress-induced depression and anxiety models in mice, quercetin can treat anxiety and depression by regulating 5-HT and cholinergic neurotransmission and antioxidative damage as well as enhancing memory after restraint stress [54]. Acetylcholinesterase has the effect of terminating cholinergic neurotransmission [55]. Arsenic is an element that naturally exists in food, soil, and water. Exposure to arsenic can cause memory disorders, anxiety, depression, and other neurological perturbations and diseases. Samad et al. found that gallic acid can reverse the excessive increase in acetylcholinesterase activity induced by arsenic, alleviate the brain oxidative stress damage caused by arsenic, and protect memory function [56]. Additionally, gallic acid can also exert an antidepressant effect by increasing 5-HT and catecholamine levels in synaptic clefts in the central nervous system [57]. There are also reports showing that in olfactory bulbectomy-induced animal depression models, PCA can shorten the immobility time of rats in the OFT, increase the distance explored in the OFT, increase rat hippocampal monoamine neurotransmitters (5-HT, DA, and norepinephrine) and BDNF levels, reduce hippocampal CORT levels, and alleviate hippocampal neuroinflammation and oxidative damage [58]. It has been reported that Chaihu-Baishao can effectively improve postoperative depression and diarrhea in patients with colorectal cancer by regulating the imbalance of monoamine neurotransmitters (DA and 5-HT) [59] (Figure 3 and Table 3).Figure 3 Active compounds in Chaihu-Baishao regulate monoamine neurotransmitters. 5-HT: serotonin; DA: dopamine; NE: norepinephrine.Table 3 Active compounds in Bupleurum chinense DC-Paeonia lactiflora Pall herb pair regulate monoamine neurotransmitters. 
Compound Model Animal species Dosage Behaviour involved Mechanism of action/main indicators References Saikosaponin A CUMS Male SD rat 50 mg/kg OFT and SPT Weight↑, hippocampus DA↑, and hippocampus PRRT2↑ [52] Quercetin CUMS Male Swiss albino mice 25 mg/kg MFST, TST, and OFT Brain5-HT↑, brain glutamate, IL-6, TNF-α↓, and relieving brain oxidative stress [53] Restraint stress Male Albaino wistar mice 20 mg/kg/ml FST, LDA, EPM, and MWM Prevent oxidase damage, regulate 5-HT, and cholinergic [54] Gallic acid iAS SD male rat 50, 100 mg/kg EPM, LDA, FST, and MWM Relieve brain oxidative stress and brain AChE↓ [56] Protocatechuic acid OB Male Wistar rat 100, 200 mg/kg FST and OFT Hippocampus 5-HT, norepinephrine, DA↑, hippocampus BNDF↑, hippocampus TNF-α, IL-6, CORT, and hippocampus oxidase damage↓ [58] Note: LDA: Light-dark activity box; iAS: Arsenic; OB: olfactory bulbectomy; MWM: Morris water maze; FST: forced swimming test; EPM: Elevated Plus Maze; OFT: open field test; 5-HT: serotonin; CORT: cortisol; IL-6: interleukin-6; TNF-α: Tumor necrosis factor-α; AChE: acetylcholinesterase; DA: dopamine; BDNF: brain-derived neurotrophic factor; MFST: Modified Forced Swim Test; OB: olfactory bulbectomy; TST: tail suspension test; CUMS: chronic unpredictable mild stress. ### 2.4. Active Compounds in Chaihu-Baishao Promote Hippocampal Neurogenesis and Regulate BDNF Levels BDNF is a widely studied growth factor that plays an important role in neuronal maturation, synapse formation, and synaptic plasticity in the brain [60]. The “neurotrophic theory” posits that neurons will lose access to trophic factors as the expression level of BDNF decreases, leading to neuronal atrophy, decreased synaptic plasticity, and the onset of depression [61]. The optimization of BDNF levels helps synaptic plasticity and remodeling, improves neuronal damage, and relieves depression [62]. There is evidence that increasing the expression of BDNF in hippocampal astrocytes can treat depression and anxiety and stimulate hippocampal neurogenesis [63]. Adult hippocampal neurogenesis involves the proliferation, survival, differentiation, and integration of newborn neurons into preexisting neuronal networks [64]. There is evidence that hippocampal volume, the number of granular cells in the anterior and middle dentate gyrus, and the volume of the granular cell layer decrease in patients with depression [65], whereas increasing adult hippocampal neurogenesis can improve depression and anxiety [66].Depression is also one of the common complications after stroke, though this pathogenesis has not been fully elucidated, and there is a lack of clinically effective treatments for stroke-induced depression. The cAMP response element-binding protein (CREB) is a protein that regulates gene transcription and acts as a chief transcription factor in the regulation of genes that encode proteins, such as BDNF, that are involved in synaptic plasticity and memory [67]. Studies have found that Saikosaponin A can significantly improve the depression-like behavior of PSD model rats, inhibit the apoptosis of the hippocampal meridian, and increase the levels of phosphorylated CREB and BDNF in the hippocampus [68]. Neuronal cell death occurs extensively during development and pathology, where it is especially important because of the limited capacity of adult neurons to proliferate or be replaced [69]. 
There are at least three known types of neuronal death, namely apoptosis, autophagy, and necrosis, and there is evidence that apoptosis is closely related to bipolar disorder (including depression) [70]. Inhibiting neuronal apoptosis can improve depression-like behavior [71, 72]. In LPS-induced depression in mice, Saikosaponin D can inhibit hippocampal neuronal apoptosis and inflammation through the lysophosphatidic acid 1/Ras homologous family member A (RhoA)/Rho protein kinase 2 pathway [73]. Rutin is a flavonoid, which is abundantly present in a variety of fruits and vegetables. It functions in cardiovascular protection and as an antiviral agent. In a recent study, rutin was shown to improve the CUMS-induced weight loss in mice and protect mouse hippocampal neurons [74]. Activation of tyrosine kinase B (TrkB) promotes neuronal survival, differentiation, and synapse formation [75].Estrogen receptor alpha (ERα) is the main regulator mediated by estrogen and plays an important role in preventing depression and cardiovascular diseases. It has been reported that knocking out ERα induces depression in mice and reduces hippocampal BDNF and the phosphorylation of its downstream targets TrkB, AKT, and extracellular regulatory protein kinase 1/2 (ERK1/2). Quercetin can reverse the above phenomenon, alleviate cell apoptosis, and reverse the depression-like symptoms induced by ERα knockout [76]. Similarly, quercetin can also exert an antidepressant effect by regulating the levels of BDNF in the hippocampus and prefrontal cortex, as well as the levels of the related Copine 6 and the triggering receptor expressed on myeloid cells (TREM)-1 and TREM-2 [77]. Quercetin combined with exercise therapy can increase the expression of BDNF protein in 1,2-dimethylhydrazine-induced depression model rats with colorectal cancer and can act on the TrKβ/β-chain protein axis to treat depression [78]. Liu et al. found that in the mouse depression model induced by CUMS, ferulic acid can significantly improve the behavioral performance in both the sucrose performance test and FST, upregulate BDNF and synapsin I levels in the prefrontal cortex and hippocampus, and increase hippocampal PSD95 protein expression [79]. In a recent report, paeoniflorin can improve the depression-like behavior of PSD model mice and increase the expression of BDNF and phosphorylated CREB in the CA1 region of the hippocampus [80]. Zhong et al. found that paeoniflorin can protect nerves by upregulating the activity of the ERK-CREB signaling pathway and treat CUMS-induced depression-like behavior in rats [81]. The alteration of synaptic plasticity, and specifically hippocampal long-term potentiation (LTP), also plays a role in the onset of depression [82]. There is evidence that in CUMS-induced animal depression models, hippocampal LTP is impaired, and Liu et al. found that administration of paeoniflorin can alleviate LTP injury in the hippocampal CAI area and increase both the density of hippocampal dendritic spines and the expression levels of BDNF and PSD95 [83]. There are also reports that paeoniflorin can enhance the expression and gene transcription of BDNF, activate the expression of TrkB, and promote proliferation and differentiation into astrocytes and the neurogenesis of neural stem cells in the rat hippocampal dentate gyrus [84] (Figure 4 and Table 4).Figure 4 Active compounds in Chaihu-Baishao promote hippocampal neurogenesis and regulate BDNF levels. 
BDNF: brain-derived neurotrophic factor; AKT: serine/threonine protein kinase; TrKB: tyrosine kinase B; CREB: cAMP response element-binding protein.Table 4 Active compounds in Bupleurum chinense DC-Paeonia lactiflora Pall herb pair promote hippocampal neurogenesis and regulate BDNF levels. Compound Model Animal species Dosage Behaviour involved Mechanism of action/Main indicators References Saikosaponin A MCAO+CUMS+isolation Male SD rat 5 mg/kg SPT, OFT, BWT, FST Weight↑, hippocampus BDNF, P-CREB↑, inhibiting neuronal apoptosis, hippocampus Bax, caspase-3↓, [68] Saikosaponin D LPS Male ICR mice 0.5, 1 mg/kg TST, OFT Proinflammatory cytokines↓, inhibiting the activity of LPA1/RhoA/ROCK2 signaling pathway, relieving apoptosis of hippocampal neurons [73] Rutin CUMS Adult Swiss albino mice 100 mg/kg OFT, SPT, EPM, NOR, BWT Relieving hippocampal damage in CA3 area [74] Quercetin ERα receptor knockout Female C57bl/6 and ERα-KO mice 100 mg/kg OFT, TST, FST Number of hippocampal neurons↑, relieve hippocampal cell apoptosis, Hippocampus BDNF,P-TrKB,P-AKT,P-ERK1/2↑ [76] DMH Male Wistar rat 50 mg/kg OFT, FST Proinflammatory cytokines↓, BDNF↑, number of neurons↑, TrKB↑,β-catenin↑, incidence of rectal cancer↓, [78] Ferulic acid CUMS Male ICR mice 20, 40 mg/kg SPT, FST PFC and hippocampus BDNF, Synapsin I↑, hippocampus PSD95↑ [79] Paeoniflorin MCAO+CUMS Male SD rat 5 mg/kg BBT, SPT, OFT Weight↑, hippocampus CA1 area BDNF, P-CREB↑ [80] CUMS Male SD rat 30, 60 mg/kg SPT, FST, LAT Number of neurons in hippocampal CA3 area↑, upregulate the ERK-CREB signaling pathway [81] CUMS C57BL/6Wild-type male mice 20 mg/kg SPT, FST, TST Relieve hippocampal CA1 LTP damage, hippocampus dendritic spine density, BDNF, PSD95↑ [82] CUMS Male SD rats 60 mg/kg SPT Promote neurogenesis in the hippocampus dentate gyrus [84] Note: CUMS: chronic unpredictable mild stress; MCAO: middle cerebral artery occlusion; SPT: sucrose preference test; OFT: open field test; FST: forced swimming test; BWT: Beam walking test; BDNF: brain-derived neurotrophic factor; CREB: cAMP response element-binding protein; LPS: lipopolysaccharide; LPA-1: specific cell surface G protein–coupled receptors; ROCK: Rho-kinase; NOR: Novel object recognition test; TST: tail suspension test; TrKB: tyrosine kinase B; AKT:serine/threonine protein kinase; ERK: extracellular regulatory protein kinase; ERα: Estrogen receptor alpha; DMH: 1,2-dimethylhydrazine; BBT: Beam balance test; LAT: Locomotor Activity Test; LTP: long-term potentiation. ## 2.1. Active Compounds in Chaihu-Baishao Inhibit Inflammation and Relieve Oxidative Stress There is a large amount of evidence supporting the link between depression and inflammatory processes. It has been shown that inflammation increases an individual’s susceptibility to depression. There is an increase in proinflammatory markers in patients with depression, and the use of proinflammatory drugs can increase the risk of depression [9]. Studies have found that oxidative stress may play an important role in the pathophysiology of depression. Patients with depression have elevated levels of malondialdehyde and the antioxidant enzymes, superoxide dismutase, and catalase, along with decreased levels of glutathione peroxidase [10]. Some scholars have found that 8-hydroxydeoxyglucose and F2 isoprostaglandin are found in patients with depression through meta-analysis. It is also believed that depression is accompanied by increased oxidative damage [11]. 
There is much evidence that inflammation-related diseases can be treated with various traditional medicines containing a variety of active natural compounds [12]. When antidepressants are used, the levels of peripheral inflammatory cytokines in patients with depression are reduced [13]. Therefore, inhibiting inflammation and relieving oxidative stress are important for the treatment of depression. Several of the previously studied active compounds of traditional medicines are reviewed below.Saikosaponin D is a triterpene saponin compound extracted from Chaihu and has various pharmacological effects such as counteracting inflammation [14] and oxidative stress [15]. Tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), and interleukin-6 (IL-6) are proinflammatory cytokines that regulate oxidative stress, apoptosis, and metabolism and cause damage to the branching process of neurons, which will in turn affect neuronal function [1]. Microglia’s are the innate immune cells of the central nervous system and play a vital role in the process of neuroinflammation; they are found in abundance in the cerebrospinal fluid of depressed patients. A series of proinflammatory factors released by activated microglia were detected. In a lipopolysaccharide (LPS)-induced depression model, Su et al. gave mice an intraperitoneal injection of Saikosaponin D and found that it can improve the depression-like behavior of the mice and inhibit both the overexpression of the proinflammatory cytokines, TNF-α, IL-6, and 1 L-1β, and the activation of microglia induced by LPS in the mouse hippocampus [16]. The observed anti-inflammatory effect was also correlated with the inhibition of the high mobility group protein 1/Toll-like receptor 4 (TLR4)/nuclear transcription factor-κB (NF-κB) signaling pathway. The NF-κB pathway is a typical inflammatory pathway, which can regulate the production of proinflammatory cytokines, leukocyte recruitment, or cell survival and cause the body to produce an inflammatory response [17]. It has been reported that Saponin D can downregulate the expression of NF-κB in rat hippocampal neurons and improve the depression-like behavior induced by chronic unpredictable mild stress (CUMS) by downregulating Mir-155 expression and upregulating fibroblast growth factor 2 (FGF2) expression [18]. Quercetin is a flavonoid compound widely found in fruits and vegetables that has anti-inflammatory, antioxidant, antiviral, and anticancer effects [19]. It has been reported that quercetin can reverse the increase in lipid hydroperoxide content, induced by olfactory bulbectomy, in the hippocampus and improve the depression-like behavior of mice via a mechanism that is correlated with the enhancement of N-methyl-D-aspartic acid receptor expression [20].Kaempferol is the main component of a variety of fruits and vegetables. Using a mouse depression model based on chronic social defeat stress, Gao et al. found that after intraperitoneal injection of kaempferol, the inflammatory response and oxidative stress in the prefrontal cortex of these mice was alleviated [21]. Kaempferol was also found to increase the activity of P-serine/threonine protein kinase (AKT) and β-catenin, but after using PI3-K inhibitors, the overall protective effect mediated by kaempferol was partially inhibited, indicating that kaempferol can enhance the antioxidant capacity and anti-inflammatory effects by enhancing the activity of the AKT/β-catenin cascade, thereby treating depression [21]. 
Additionally, caffeic acid, a catechol compound, is widely distributed in fruits, tea, and wine. In the LPS-induced depression model in mice, caffeic acid was found to reverse both the reduction of brain glutathione levels and the increase of malondialdehyde and proinflammatory cytokines [22]. Ferulic acid is a phenolic compound widely found in a variety of herbal medicines. There is evidence that in mouse models of cortisol (CORT)-induced depression, ferulic acid can improve the behavioral performance of depressed mice and simultaneously reduce malondialdehyde, nitrite, and protein carbonylation levels in the brain and increase nonprotein sulfhydryl levels [23]. Liu et al. found that administration of ferulic acid can reverse the CUMS-induced upregulation of the proinflammatory cytokines, IL-1β, IL-6, and TNF-α, in the prefrontal cortex of mice and the activation of microglia, NF-κB signaling, and the nucleotide-binding oligomerization domain-like receptor protein 3 (NLRP3) inflammasome [24]. Gallic acid is a secondary plant metabolite, which is commonly found in many plant-based foods and beverages and has antioxidant activity. In a mouse model of poststroke depression (PSD), Nabavi et al. found that oxidative stress is closely related to the pathological process of stroke and PSD, and gallic acid can exert an antidepressant effect by inhibiting oxidative stress [25]. Paeoniflorin is one of the main biologically active ingredients extracted from the root of Paeonia lactiflora, which has antioxidation, anti-inflammatory, and other pharmacological effects [26]. Gasdermin D (GSDMD) is a member of the Gasdermin conserved protein family. When cell pyrolysis occurs, Caspase-1 is activated, which directly leads to GSDMD cleavage. GSDMD is cleaved out of the C-terminal and N-terminal domains that binds to the phospholipid protein on the cell membrane, resulting in the formation of cell membrane holes that cause cell rupture, cell content outflow, and a massive release of inflammatory factors. GSDMD can also activate Caspase-1 activation induced by NLRP3 [27]. Reports have shown that paeoniflorin can inhibit the expression of GSDMD, caspase-11, Caspase-1, NLRP3, IL-1β, and other proteins involved in pyroptosis signal transduction in microglia, as well as reduce inflammation and relieve symptoms of depression [28]. FGF2 is a neurotrophic and anti-inflammatory factor involved in regulating the proliferation, differentiation, and apoptosis of neurons in the brain. In a mouse depression model induced by LPS, Cheng et al. found that paeoniflorin can inhibit LPS-induced TLR4/NF-κB/NLRP3 signaling in the hippocampus of mice, reduce the level of proinflammatory cytokines and microglia activation, and at the same time, increase neuronal FGF2 levels and dendritic spine density [29]. Neuropathic pain is a clinical problem that causes comorbid pain and depression. In a neuropathic pain mouse model, Bai et al. found that paeoniflorin can significantly improve the hyperalgesia and depression-like behavior of mice, reduce the level of proinflammatory cytokines, inhibit the excessive activation of microglia, and reduce the pathological damage of hippocampal cells in a way that is similar to the inhibition of the expression of TLR4/NF-κB pathway related proteins [30]. Interferon-α (IFN-α) is a pleiotropic cytokine with antiviral and antiproliferative effects. It is widely used in cancer. However, studies have found that about 30%–50% of patients have symptoms of depression after receiving IFN-α treatment. 
Li et al. found that administration of paeoniflorin can improve IFN-α-induced depression-like behavior in mice while reducing inflammation levels in the serum, medial prefrontal cortex, ventral hippocampus, and amygdala [31]. Systemic lupus erythematosus is a chronic inflammatory autoimmune disease, and depression is one of its common complications. It was found that administration of paeoniflorin can inhibit the activity of the high-mobility group box 1 protein/TLR4/NF-κB pathway and reduce the level of inflammation in the serum and hippocampus, thereby treating lupus-induced depression [32]. Through component identification, Li found that Chaihu-Baishao mainly contains saikosaponin A, saikosaponin D, saikosaponin C, saikosaponin B2, paeoniflorin, albiflorin, and oxypaeoniflorin; combining network pharmacology and metabolomics, Li further found that Chaihu-Baishao exerts an antidepressant effect by regulating arachidonic acid metabolism and the expression of the targets prostaglandin G/H synthase 1 (PTGS1) and prostaglandin G/H synthase 2 (PTGS2) [8]. He et al. analyzed the changes in chemical constituents before and after the combination of Chaihu and Baishao by UPLC-MS background subtraction and metabolomics, finding that saikosaponin A and its 3′-O- and 4′-O-acetylated forms decreased after compatibility with Chaihu, whereas paeoniflorin, galloylpaeoniflorin, or their isomers increased [33].

Sini Powder is a commonly used traditional Chinese medicine in which Chaihu plays the major role and Baishao an auxiliary role; it has the effect of soothing the liver and regulating the spleen [34]. It has been reported that Sini Powder can improve depression-like behavior and neuronal pathological damage in the hippocampus of CUMS rats, and its mechanism may be related to the inflammatory response mediated by the NLRP3 inflammasome signaling pathway [35]. In clinical studies, Sini Powder effectively improved depression accompanying cerebral infarction, type 2 diabetes, and functional dyspepsia and improved clinical efficacy [36–38] (Figure 1 and Table 1).

Figure 1 Active compounds in Chaihu-Baishao inhibit inflammation and relieve oxidative stress. TLR4: Toll-like receptor 4; NLRP3: nucleotide-binding oligomerization domain-like receptor protein 3; NF-κB: nuclear transcription factor-κB; TNF-α: tumor necrosis factor-α; IL-6: interleukin-6; IL-1β: interleukin-1β.

Table 1 Active compounds in the Bupleurum chinense DC-Paeonia lactiflora Pall herb pair inhibit inflammation and relieve oxidative stress.
| Compound | Model | Animal species | Dosage | Behaviour involved | Mechanism of action/main indicators | Ref. |
|---|---|---|---|---|---|---|
| Saikosaponin D | LPS | Male ICR mice | 1 mg/kg | OFT, TST, FST, SPT | Inhibition of microglia activation, TLR4/NF-κB pathway activity, and HMGB1 translocation; proinflammatory cytokines↓ | [16] |
| Saikosaponin D | CUMS | Male SD rats | 0.75, 1.5 mg/kg | OFT, TST, FST, SPT | NF-κB↓, FGF2↑, miR-155↓ | [18] |
| Quercetin | OB | Female Swiss mice | 10, 25, 50 mg/kg | FST, TST, OFT, ST | LOOH↓ | [20] |
| Kaempferol | CSDS | Male CD1 and C57 mice | 10, 20 mg/kg | SPT, SIT, TST | SOD↑, MDA↓, CAT↑, GPx↑, GST↑, p-AKT↑, β-catenin↑ | [21] |
| Caffeic acid | LPS | Male Swiss albino mice | 30 mg/kg | OFT, FST, TST | IL-6↓, MDA↓, GSH↑, TNF-α↓ | [22] |
| Ferulic acid | CORT | Male Swiss mice | 1 mg/kg | TST, OFT, ST | MDA↓, nitrite↓, protein carbonylation↓ | [23] |
| Ferulic acid | CUMS | Male ICR mice | 20, 40, 80 mg/kg | SPT, TST | IL-1β, IL-6, TNF-α mRNA↓; NF-κB↓, p-NF-κB/NF-κB↓; inhibition of PFC microglia activation | [24] |
| Gallic acid | BCCAO | Male BALB/c mice | 25, 50 mg/kg | TST, SPT | Relief of oxidative stress (antioxidant) | [25] |
| Paeoniflorin | RESP/LPS/ATP | Male C57BL/6 mice | 10, 20, 40 mg/kg | OFT, TST, FST | Dendritic spines in hippocampal CA1↑; NLRP3↓, CASP-11↓, CASP-1↓, GSDMD↓, IL-1β↓; microglia activation↓ | [28] |
| Paeoniflorin | LPS | Male ICR mice | 20, 40, 80 mg/kg | SPT, FST | FGF2↑, IL-6↓, TNF-α↓, TLR4↓, p-NF-κB↓, NLRP3↓, COX-2↓; dendritic spine density in hippocampal CA3↑ | [29] |
| Paeoniflorin | Cuff (neuropathic pain) | SPF male BALB/c mice | 50, 100 mg/kg | SPT, FST, TST | Inflammation in hippocampal CA3↓; IL-6↓, TNF-α↓, IL-1↓; microglia activation↓; TLR4/NF-κB pathway↓ | [30] |
| Paeoniflorin | IFN-α | Male C57BL/6J mice | 10, 20, 40 mg/kg | SPT, OFT, TST, FST | Inflammation levels in serum, mPFC, vHi, and amygdala↓ | [31] |
| Paeoniflorin | SLE | Wild-type and MRL/MpJ-Faslpr/2J (MRL/lpr) mice | 20 mg/kg | SPT, TST, FST | HMGB1/TLR4/NF-κB pathway activity↓; serum and hippocampal TNF-α, IL-1β, IL-6↓ | [32] |

Notes: OFT: open field test; TST: tail suspension test; FST: forced swimming test; SPT: sucrose preference test; TLR4: Toll-like receptor 4; NF-κB: nuclear transcription factor-κB; HMGB1: high mobility group box 1; FGF2: fibroblast growth factor 2; LOOH: lipid hydroperoxides; SOD: superoxide dismutase; MDA: malondialdehyde; CAT: catalase; GST: glutathione S-transferase; AKT: serine/threonine protein kinase; IL-6: interleukin-6; TNF-α: tumor necrosis factor-α; NLRP3: nucleotide-binding oligomerization domain-like receptor protein 3; CASP-11: Caspase-11; CASP-1: Caspase-1; GSDMD: Gasdermin D; IL-1β: interleukin-1β; LPS: lipopolysaccharide; CUMS: chronic unpredictable mild stress; OB: olfactory bulbectomy; CSDS: chronic social defeat stress; CORT: cortisol; BCCAO: bilateral common carotid artery occlusion (brain ischemia-reperfusion); ATP: adenosine triphosphate; IFN-α: interferon-α; SLE: systemic lupus erythematosus.

## 2.2. Active Compounds in Chaihu-Baishao Regulate the Hypothalamic-Pituitary-Adrenal (HPA) Axis

The HPA axis is an important part of the neuroendocrine system and helps control the response to stress. When the HPA axis is activated, the paraventricular nucleus of the hypothalamus releases corticotropin-releasing hormone (CRH), which signals the anterior pituitary gland to secrete adrenocorticotropic hormone (ACTH) into the bloodstream; in turn, ACTH acts on the adrenal cortex to stimulate the secretion of CORT [39].
HPA axis hyperfunction is an important factor in the onset of depression, as evidenced by the increases in corticotropin-releasing hormone, ACTH, and glucocorticoids, the imbalance in HPA axis negative feedback, the enlargement of the pituitary and adrenal glands, and the hypercortisolemia seen in some depression patients [40].

Saikosaponin A, a triterpene saponin extracted from Bupleurum, has anti-inflammatory and antioxidant effects [41]. In a perimenopausal depression model of female rats induced by CUMS, Saikosaponin A improved the behavioral performance of these rats and reduced hypothalamic CRH mRNA and protein and serum CORT levels, while inhibiting the CUMS-induced overexpression of hippocampal proinflammatory cytokines [42]. Li et al. found that Saikosaponin D promotes hippocampal neurogenesis, alleviates CUMS-induced weight loss in rats, and increases sucrose consumption and the distance moved in the open field test (OFT). They also found that it reduces the immobility time in the forced swimming test (FST) and serum CORT levels and reverses the CUMS-induced inhibition of glucocorticoid receptor expression and nuclear translocation [43]. Adriamycin is a commonly used antitumor drug with many adverse reactions. A previous study exploring the effect of quercetin on Adriamycin-induced anxiety and depression-like behavior showed that quercetin can improve the anxiety and depression-like behavior of rats, reduce serum CORT levels, inhibit the hyperactivity of the HPA axis, relieve brain oxidative damage, and regulate rat immune function [44]. Studies have also found that the combination of ferulic acid and levetiracetam improves epilepsy complicated by depression, restores serum CORT levels, and reduces the activity of proinflammatory cytokines and indoleamine 2,3-dioxygenase in the mouse brain [45]. Prenatal stress (PS) can increase the occurrence of depression, anxiety, attention deficit hyperactivity disorder, and other negative emotions and behaviors in offspring. After giving ferulic acid to rats in a prenatal stress model, Zheng et al. found that ferulic acid can exert an antidepressant effect by inhibiting HPA axis activity and hippocampal inflammation in offspring rats [46]. Similarly, paeoniflorin can treat prenatal stress-induced depression-like behavior in rat offspring by promoting the nuclear translocation of glucocorticoid receptors, inhibiting the expression of a series of proteins and the formation of complexes (such as SNAP25), and inhibiting stress-induced HPA axis hyperactivity [47]. In FST-induced depression in rats, paeoniflorin alleviated both the hyperactivity of the HPA axis and oxidative damage, increased plasma and hippocampal monoamine neurotransmitter and brain-derived neurotrophic factor (BDNF) levels, and promoted gastrointestinal movement [48]. In a rat model of menopausal depression induced by CUMS combined with ovariectomy, Huang et al. found that paeoniflorin improved the depression-like behavior of rats while inhibiting both the overactivity of the HPA axis and the overexpression of the serotonin 5-HT2A receptor and upregulating the expression of the 5-HT1A receptor in the brain [49]. Protocatechuic acid ethyl ester (PCA) is a phenolic compound with neuroprotective effects.
It has been reported that acute restraint stress can induce depression-like behavior in mice through neuronal oxidative damage, upregulation of HPA axis activity, and increases in serum CORT levels, all of which were reversed after PCA was administered [50] (Figure 2 and Table 2).

Figure 2 Active compounds in Chaihu-Baishao regulate the HPA axis. HPA: hypothalamic-pituitary-adrenal; CORT: cortisol; ACTH: adrenocorticotropic hormone; CRH: corticotropin-releasing hormone.

Table 2 Active compounds in the Bupleurum chinense DC-Paeonia lactiflora Pall herb pair regulate the HPA axis.

| Compound | Model | Animal species | Dosage | Behaviour involved | Mechanism of action/main indicators | Ref. |
|---|---|---|---|---|---|---|
| Saikosaponin A | CUMS | Female Wistar rats | 25, 50, 100 mg/kg | SPT, FST, NSFT | Hypothalamic CRH mRNA and protein↓; hippocampal proinflammatory cytokines↓; CORT↓ | [42] |
| Saikosaponin D | CUMS | Male SD rats | 0.75, 1.5 mg/kg | SPT, OFT, FST | CORT↓, GR↑, weight↑, p-CREB↑, BDNF↑; promotes hippocampal neurogenesis | [43] |
| Quercetin | Adriamycin | Male Wistar rats | 60 mg/kg | FST, OFT, EPM | CORT↓; relief of brain oxidative stress damage; regulation of immune function | [44] |
| Ferulic acid | Pentylenetetrazole-kindled epilepsy | Male Swiss albino mice | 40, 80 mg/kg | TST, SPT | CORT↓, TNF-α↓, IL-1β↓ | [45] |
| Ferulic acid | PS | Female and male SD rats | 12.5, 25, 50 mg/kg | SPT, OFT, FST | Serum ACTH↓, CORT↓; hippocampal GR↑; inhibition of hippocampal inflammation; improved neuronal damage in hippocampal CA3 | [46] |
| Paeoniflorin | PS | Female and male SD rats | 15, 30, 60 mg/kg | SPT, FST, OFT | Serum CRH, ACTH, CORT↓; relief of neuronal damage in hippocampal CA3 | [47] |
| Paeoniflorin | FST | Male SD rats | 10 mg/kg | FST, OFT | Promotes gastrointestinal motility; plasma motilin↑; CRH↓, ACTH↓, CORT↓; BDNF↑; norepinephrine↑; oxidative stress↓ | [48] |
| Paeoniflorin | CUMS + ovariectomy | Female SD rats | 45 mg/kg | OFT, SPT | HPA axis activity↓; brain 5-HT2AR↓; brain 5-HT1AR↑ | [49] |
| Protocatechuic acid | ARS | Male and female Swiss albino mice | 100, 200 mg/kg | FST, OFT | Serum CORT↓; hippocampal oxidative stress↓ | [50] |

Note: CUMS: chronic unpredictable mild stress; PS: prenatal stress; FST: forced swimming test; ARS: acute restraint stress; SPT: sucrose preference test; NSFT: novelty suppressed feeding test; OFT: open field test; EPM: elevated plus maze; CRH: corticotropin-releasing hormone; GR: glucocorticoid receptor; BDNF: brain-derived neurotrophic factor; ACTH: adrenocorticotropic hormone; HPA: hypothalamic-pituitary-adrenal; 5-HT: serotonin; CORT: cortisol.

## 2.3. Active Compounds in Chaihu-Baishao Regulate Monoamine Neurotransmitters

Monoamine neurotransmitters are central neurotransmitters, mainly catecholamines, such as dopamine, norepinephrine, and epinephrine, and indoleamines, such as 5-HT. Dopamine (DA) is an important regulator of learning and motivation [51]. 5-HT and norepinephrine are mainly involved in the regulation of emotional cognition and sleep, and disturbances of monoamine neurotransmitters can cause various emotional changes [40]. The classic monoamine hypothesis of depression posits that the underlying pathophysiological basis of depression is the depletion of monoamine neurotransmitters in the central nervous system.
The serum levels of monoamine neurotransmitters and their metabolites can be used as important biomarkers for the diagnosis of depression, and drugs that increase the synaptic concentration of monoamines can improve the symptoms of depression [1].

It has been reported that Saikosaponin A can improve CUMS-induced depression-like behaviors in rats; its antidepressant effect is believed to be related to the increase of dopamine content in the hippocampus and the upregulation of hippocampal proline-rich transmembrane protein 2 (PRRT2) expression [52]. Similarly, in the same animal model of depression, Khan et al. found that quercetin can improve the behavioral performance of depressed mice, increase brain 5-HT levels, alleviate CUMS-induced brain inflammation and oxidative damage, and reduce brain glutamate levels [53]. There is evidence that in restraint stress-induced depression and anxiety models in mice, quercetin can treat anxiety and depression by regulating serotonergic and cholinergic neurotransmission, counteracting oxidative damage, and enhancing memory after restraint stress [54]. Acetylcholinesterase terminates cholinergic neurotransmission [55]. Arsenic is an element that occurs naturally in food, soil, and water. Exposure to arsenic can cause memory disorders, anxiety, depression, and other neurological perturbations and diseases. Samad et al. found that gallic acid can reverse the arsenic-induced excessive increase in acetylcholinesterase activity, alleviate the brain oxidative stress damage caused by arsenic, and protect memory function [56]. Additionally, gallic acid can also exert an antidepressant effect by increasing 5-HT and catecholamine levels in synaptic clefts in the central nervous system [57]. There are also reports showing that in olfactory bulbectomy-induced animal depression models, PCA can shorten the immobility time of rats in the FST, increase the distance explored in the OFT, increase hippocampal monoamine neurotransmitter (5-HT, DA, and norepinephrine) and BDNF levels, reduce hippocampal CORT levels, and alleviate hippocampal neuroinflammation and oxidative damage [58]. It has been reported that Chaihu-Baishao can effectively improve postoperative depression and diarrhea in patients with colorectal cancer by regulating the imbalance of the monoamine neurotransmitters DA and 5-HT [59] (Figure 3 and Table 3).

Figure 3 Active compounds in Chaihu-Baishao regulate monoamine neurotransmitters. 5-HT: serotonin; DA: dopamine; NE: norepinephrine.

Table 3 Active compounds in the Bupleurum chinense DC-Paeonia lactiflora Pall herb pair regulate monoamine neurotransmitters.
| Compound | Model | Animal species | Dosage | Behaviour involved | Mechanism of action/main indicators | Ref. |
|---|---|---|---|---|---|---|
| Saikosaponin A | CUMS | Male SD rats | 50 mg/kg | OFT, SPT | Weight↑; hippocampal DA↑; hippocampal PRRT2↑ | [52] |
| Quercetin | CUMS | Male Swiss albino mice | 25 mg/kg | MFST, TST, OFT | Brain 5-HT↑; brain glutamate, IL-6, TNF-α↓; relief of brain oxidative stress | [53] |
| Quercetin | Restraint stress | Male albino Wistar mice | 20 mg/kg | FST, LDA, EPM, MWM | Prevention of oxidative damage; regulation of 5-HT and cholinergic neurotransmission | [54] |
| Gallic acid | iAs | Male SD rats | 50, 100 mg/kg | EPM, LDA, FST, MWM | Relief of brain oxidative stress; brain AChE↓ | [56] |
| Protocatechuic acid | OB | Male Wistar rats | 100, 200 mg/kg | FST, OFT | Hippocampal 5-HT, norepinephrine, DA↑; hippocampal BDNF↑; hippocampal TNF-α, IL-6, CORT↓; hippocampal oxidative damage↓ | [58] |

Note: LDA: light-dark activity box; iAs: arsenic; OB: olfactory bulbectomy; MWM: Morris water maze; FST: forced swimming test; EPM: elevated plus maze; OFT: open field test; 5-HT: serotonin; CORT: cortisol; IL-6: interleukin-6; TNF-α: tumor necrosis factor-α; AChE: acetylcholinesterase; DA: dopamine; BDNF: brain-derived neurotrophic factor; MFST: modified forced swim test; TST: tail suspension test; CUMS: chronic unpredictable mild stress; PRRT2: proline-rich transmembrane protein 2.

## 2.4. Active Compounds in Chaihu-Baishao Promote Hippocampal Neurogenesis and Regulate BDNF Levels

BDNF is a widely studied growth factor that plays an important role in neuronal maturation, synapse formation, and synaptic plasticity in the brain [60]. The "neurotrophic theory" posits that, as BDNF expression decreases, neurons lose access to trophic factors, leading to neuronal atrophy, decreased synaptic plasticity, and the onset of depression [61]. The optimization of BDNF levels supports synaptic plasticity and remodeling, improves neuronal damage, and relieves depression [62]. There is evidence that increasing the expression of BDNF in hippocampal astrocytes can treat depression and anxiety and stimulate hippocampal neurogenesis [63]. Adult hippocampal neurogenesis involves the proliferation, survival, differentiation, and integration of newborn neurons into preexisting neuronal networks [64]. There is evidence that hippocampal volume, the number of granule cells in the anterior and middle dentate gyrus, and the volume of the granule cell layer are decreased in patients with depression [65], whereas increasing adult hippocampal neurogenesis can improve depression and anxiety [66].

Depression is also one of the common complications after stroke, though its pathogenesis has not been fully elucidated, and there is a lack of clinically effective treatments for stroke-induced depression. The cAMP response element-binding protein (CREB) regulates gene transcription and acts as a chief transcription factor for genes encoding proteins, such as BDNF, that are involved in synaptic plasticity and memory [67]. Studies have found that Saikosaponin A can significantly improve the depression-like behavior of PSD model rats, inhibit hippocampal neuronal apoptosis, and increase the levels of phosphorylated CREB and BDNF in the hippocampus [68]. Neuronal cell death occurs extensively during development and pathology, where it is especially important because of the limited capacity of adult neurons to proliferate or be replaced [69].
There are at least three known types of neuronal death, namely apoptosis, autophagy, and necrosis, and there is evidence that apoptosis is closely related to bipolar disorder (including depression) [70]. Inhibiting neuronal apoptosis can improve depression-like behavior [71, 72]. In LPS-induced depression in mice, Saikosaponin D can inhibit hippocampal neuronal apoptosis and inflammation through the lysophosphatidic acid 1 (LPA1)/Ras homologous family member A (RhoA)/Rho protein kinase 2 (ROCK2) pathway [73]. Rutin is a flavonoid abundantly present in a variety of fruits and vegetables; it functions in cardiovascular protection and as an antiviral agent. In a recent study, rutin was shown to alleviate CUMS-induced weight loss in mice and protect mouse hippocampal neurons [74]. Activation of tyrosine kinase B (TrkB) promotes neuronal survival, differentiation, and synapse formation [75].

Estrogen receptor alpha (ERα) is the main mediator of estrogen signaling and plays an important role in preventing depression and cardiovascular diseases. It has been reported that knocking out ERα induces depression in mice and reduces hippocampal BDNF and the phosphorylation of its downstream targets TrkB, AKT, and extracellular regulatory protein kinase 1/2 (ERK1/2); quercetin can reverse these changes, alleviate cell apoptosis, and reverse the depression-like symptoms induced by ERα knockout [76]. Similarly, quercetin can also exert an antidepressant effect by regulating the levels of BDNF in the hippocampus and prefrontal cortex, as well as the levels of the related protein Copine 6 and the triggering receptors expressed on myeloid cells TREM-1 and TREM-2 [77]. Quercetin combined with exercise therapy can increase the expression of BDNF protein in 1,2-dimethylhydrazine-induced depression model rats with colorectal cancer and can act on the TrkB/β-catenin axis to treat depression [78]. Liu et al. found that in the CUMS-induced mouse depression model, ferulic acid can significantly improve behavioral performance in both the sucrose preference test and the FST, upregulate BDNF and synapsin I levels in the prefrontal cortex and hippocampus, and increase hippocampal PSD95 protein expression [79]. In a recent report, paeoniflorin improved the depression-like behavior of PSD model mice and increased the expression of BDNF and phosphorylated CREB in the CA1 region of the hippocampus [80]. Zhong et al. found that paeoniflorin can protect neurons by upregulating the activity of the ERK-CREB signaling pathway and treat CUMS-induced depression-like behavior in rats [81]. Alterations of synaptic plasticity, and specifically hippocampal long-term potentiation (LTP), also play a role in the onset of depression [82]. There is evidence that hippocampal LTP is impaired in CUMS-induced animal depression models, and Liu et al. found that administration of paeoniflorin can alleviate LTP injury in the hippocampal CA1 area and increase both the density of hippocampal dendritic spines and the expression levels of BDNF and PSD95 [83]. There are also reports that paeoniflorin can enhance the expression and gene transcription of BDNF, activate the expression of TrkB, and promote the proliferation of neural stem cells, their differentiation into astrocytes, and neurogenesis in the rat hippocampal dentate gyrus [84] (Figure 4 and Table 4).

Figure 4 Active compounds in Chaihu-Baishao promote hippocampal neurogenesis and regulate BDNF levels.
BDNF: brain-derived neurotrophic factor; AKT: serine/threonine protein kinase; TrkB: tyrosine kinase B; CREB: cAMP response element-binding protein.

Table 4 Active compounds in the Bupleurum chinense DC-Paeonia lactiflora Pall herb pair promote hippocampal neurogenesis and regulate BDNF levels.

| Compound | Model | Animal species | Dosage | Behaviour involved | Mechanism of action/main indicators | Ref. |
|---|---|---|---|---|---|---|
| Saikosaponin A | MCAO + CUMS + isolation | Male SD rats | 5 mg/kg | SPT, OFT, BWT, FST | Weight↑; hippocampal BDNF, p-CREB↑; inhibition of neuronal apoptosis; hippocampal Bax, caspase-3↓ | [68] |
| Saikosaponin D | LPS | Male ICR mice | 0.5, 1 mg/kg | TST, OFT | Proinflammatory cytokines↓; inhibition of LPA1/RhoA/ROCK2 signaling; relief of hippocampal neuronal apoptosis | [73] |
| Rutin | CUMS | Adult Swiss albino mice | 100 mg/kg | OFT, SPT, EPM, NOR, BWT | Relief of hippocampal damage in CA3 | [74] |
| Quercetin | ERα knockout | Female C57BL/6 and ERα-KO mice | 100 mg/kg | OFT, TST, FST | Number of hippocampal neurons↑; relief of hippocampal cell apoptosis; hippocampal BDNF, p-TrkB, p-AKT, p-ERK1/2↑ | [76] |
| Quercetin | DMH | Male Wistar rats | 50 mg/kg | OFT, FST | Proinflammatory cytokines↓; BDNF↑; number of neurons↑; TrkB↑; β-catenin↑; incidence of rectal cancer↓ | [78] |
| Ferulic acid | CUMS | Male ICR mice | 20, 40 mg/kg | SPT, FST | PFC and hippocampal BDNF, synapsin I↑; hippocampal PSD95↑ | [79] |
| Paeoniflorin | MCAO + CUMS | Male SD rats | 5 mg/kg | BBT, SPT, OFT | Weight↑; hippocampal CA1 BDNF, p-CREB↑ | [80] |
| Paeoniflorin | CUMS | Male SD rats | 30, 60 mg/kg | SPT, FST, LAT | Number of neurons in hippocampal CA3↑; upregulation of the ERK-CREB signaling pathway | [81] |
| Paeoniflorin | CUMS | Wild-type male C57BL/6 mice | 20 mg/kg | SPT, FST, TST | Relief of hippocampal CA1 LTP damage; hippocampal dendritic spine density, BDNF, PSD95↑ | [83] |
| Paeoniflorin | CUMS | Male SD rats | 60 mg/kg | SPT | Promotes neurogenesis in the hippocampal dentate gyrus | [84] |

Note: CUMS: chronic unpredictable mild stress; MCAO: middle cerebral artery occlusion; SPT: sucrose preference test; OFT: open field test; FST: forced swimming test; BWT: beam walking test; BDNF: brain-derived neurotrophic factor; CREB: cAMP response element-binding protein; LPS: lipopolysaccharide; LPA1: lysophosphatidic acid receptor 1 (a G protein-coupled receptor); ROCK: Rho-kinase; NOR: novel object recognition test; TST: tail suspension test; TrkB: tyrosine kinase B; AKT: serine/threonine protein kinase; ERK: extracellular regulatory protein kinase; ERα: estrogen receptor alpha; DMH: 1,2-dimethylhydrazine; BBT: beam balance test; LAT: locomotor activity test; LTP: long-term potentiation.

## 3. Discussion

Depression is a common neuropsychiatric disorder. Its symptoms and signs include lack of motivation, inability to feel pleasure, social withdrawal, cognitive difficulties, and changes in appetite. It belongs to the category of "mood disorder" in Chinese medicine [5]. In TCM theory, Chaihu can soothe the liver, and white peony root can restrain the qi. In antidepressant Chinese medicine compounds, such as Xiaoyao Powder and Sini Powder, Chaihu and Baishao are often paired in a "monarch and minister" relationship [8]. In this review, we present the results of studies showing that the active ingredients in Chaihu and Baishao can act as antidepressants through mechanisms that include reduction of inflammation and oxidative stress, neuronal protection, and the regulation of the HPA axis, neurotransmitters, and BDNF levels.
However, the research in these areas has so far been done only at the cellular or animal level, and there have not been enough clinical trials to investigate whether these compounds can effectively improve the symptoms of depressed patients. Furthermore, some active ingredients in Chaihu and Baishao, such as paeoniflorin, cross the blood-brain barrier poorly, which limits their efficacy. In recent years, the two-way signaling pathway between the brain and the gut microbiota has received much attention, and there is evidence that dysfunction of the gut-brain axis plays an important role in the pathogenesis of depression [85]. However, there is still a lack of sufficient research showing that the Chaihu-Baishao herb pair can treat depression by regulating the brain-gut axis. At the same time, most current research is limited to a single target or single signaling pathway, and the interactions between targets and pathways lack in-depth exploration. Clinically, Western medicine remains the main treatment for depression, but it can cause serious adverse reactions. TCM has significant advantages in the treatment of depression and is worthy of clinical investigation. However, the current syndrome differentiation and classification of depression is complex and changeable, the standards are not uniform, and a complete system for diagnosis, treatment, and efficacy evaluation has not been formed. There are few case analyses in clinical research, and their quality is uneven.

In future research, therefore, a large number of randomized double-blind controlled trials should be carried out, in accordance with standardized syndrome differentiation, to investigate whether Chaihu-Baishao can effectively improve depression. To improve the efficacy of TCM, it would be worthwhile to strengthen research on targeted delivery systems, prolong the duration of action, and increase drug concentrations in the central nervous system. Meanwhile, multiomics technology is useful for exploring the connections between targets or signaling pathways and the overall mechanism of drug intervention, which could enrich the understanding of the mechanisms of action of TCM and provide guidance for industrial translation and clinical application.

---

*Source: 1024693-2022-11-09.xml*
2022
# Minimally Invasive Methods for Staging in Lung Cancer: Systematic Review and Meta-Analysis

**Authors:** Gonzalo Labarca; Carlos Aravena; Francisco Ortega; Alex Arenas; Adnan Majid; Erik Folch; Hiren J. Mehta; Michael A. Jantz; Sebastian Fernandez-Bussy

**Journal:** Pulmonary Medicine (2016)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2016/1024709

---

## Abstract

Introduction. Endobronchial ultrasound (EBUS) is a procedure that provides access to mediastinal staging; however, EBUS cannot be used to stage all of the nodal stations in the mediastinum. In these cases, endoscopic ultrasound (EUS) is used for complete staging.

Objective. To provide a synthesis of the evidence on the diagnostic performance of EBUS + EUS in patients undergoing mediastinal staging.

Methods. Systematic review and meta-analysis to evaluate the diagnostic yield of EBUS + EUS compared with surgical staging. Two researchers performed the literature search, quality assessments, data extractions, and analyses. We produced a meta-analysis including sensitivity, specificity, and likelihood ratio analysis.

Results. Twelve primary studies (1515 patients) were included; two were randomized controlled trials (RCTs) and ten were prospective trials. The pooled sensitivity for combined EBUS + EUS was 87% (CI 84–89%) and the specificity was 99% (CI 98–100%). For EBUS + EUS performed with a single bronchoscope, the sensitivity improved to 88% (CI 83.1–91.4%) and the specificity improved to 100% (CI 99-100%).

Conclusion. EBUS + EUS is a highly accurate and safe procedure. The combined procedure should be considered in selected patients with lymphadenopathy noted at stations that are not traditionally accessible with conventional EBUS.

---

## Body

## 1. Introduction

In recent years, the approach to patients with suspected non-small-cell lung cancer (NSCLC) has changed [1]. Several diagnostic and staging methods have been developed to avoid the use of more invasive techniques [2]. Surgical methods, such as mediastinoscopy, video-assisted thoracoscopy (VATS), mediastinal dissection, and lymph node resection, are the reference standard for lung cancer lymph node staging. However, minimally invasive methods, including computed tomography (CT), magnetic resonance imaging (MRI), or positron emission tomography (PET), as well as bronchoscopic methods, are alternatives with low complication rates, and these methods are often used as the first approach for confirming or excluding metastatic disease [2, 3]. One of the limitations of radiological studies is the number of false positive and false negative cases; for this reason, tissue samples are needed for cytopathology or histopathology [2–4].

Over the last decade, bronchoscopic modalities such as endobronchial ultrasound with transbronchial needle aspiration (EBUS-TBNA) have emerged as safe methods to obtain tissue from mediastinal lymph nodes or lesions in close proximity to central airways, with an accuracy of 80–90% and an incidence of complications of less than 1% [4, 5]. This minimally invasive approach is limited to certain mediastinal lymph nodes; one of the weaknesses of EBUS-TBNA is its inability to obtain tissue from stations 5, 6, 8, and 9 of the IASLC mediastinal lymph node map [6].
In such cases, a complementary approach with endoscopic ultrasound (EUS) is a safe alternative for obtaining tissue from all of the mediastinal lymph nodes except stations 5 and 6 [4].

Studies of combined EBUS + EUS have included retrospective, prospective, and randomized controlled trials (RCTs). Two systematic reviews and meta-analyses have previously been published [7, 8]. However, most of the evidence from primary studies has been published in the past five years. The purpose of this systematic review and meta-analysis is to evaluate the utility of EBUS + EUS for staging or diagnosis in patients with suspected NSCLC.

## 2. Materials and Methods

### 2.1. Literature Search and Clinical Eligibility Criteria

Previous descriptions and the protocol for this systematic review and meta-analysis are available in the PROSPERO registry (ID: CRD42015017199) [9]. In this systematic review, two independent reviewers searched the following databases: PubMed (Medline), OVID, Lilacs, Clinical Trials (https://clinicaltrials.gov/), and the Cochrane database. In addition, two metasearches of the TRIP database and Epistemonikos were included up to April 2015 [10]. For maximum sensitivity, meeting abstracts were searched from the European Respiratory Society (ERS; 2008 to 2014), American Thoracic Society (ATS; 2008 to 2014), and American College of Chest Physicians (2008 to 2014).

The search criteria were as follows. In the PubMed database, we searched using the following MeSH terms: ((“Carcinoma, Non-Small-Cell Lung” [majr]) AND “Bronchoscopy” [majr]) AND “Ultrasonography” [MeSH]. The search also included the following non-MeSH terms: “endobronchial”, “endobronchial ultrasound” (EBUS) alone or in combination with “non-small cell lung carcinoma”, and “neoplasm staging”. The literature search and inclusion criteria were in accordance with the Cochrane handbook, and this systematic review was performed in accordance with the PRISMA statement [11].

The inclusion criteria for this systematic review and meta-analysis were the following: (1) patients older than 18 years; (2) confirmed lung cancer with an indication for mediastinal staging (based on enlarged and/or PET-positive lymph nodes); (3) an available index test, defined as EBUS + EUS with different endoscopes or with the same bronchoscope (EBUS + EUS-B-FNA), and a reference standard (surgical methods or clinical followup); and (4) two-by-two diagnostic yield results of specificity, sensitivity, and positive likelihood ratios (LRs).

Two independent authors (GL and CA) performed the literature search, and disagreements concerning study inclusion were resolved by discussion. The full-text versions of the included studies were retrieved, and a manual cross-reference search of the articles was performed with no language restrictions.

### 2.2. Quality Assessment of the Retrieved Articles

A methodological assessment and quality analysis were performed by two independent reviewers (GL and CA) using the diagnostic test accuracy approach from the Cochrane Handbook for Systematic Reviews [12]; disagreements concerning study inclusion were resolved by discussion.

### 2.3. Outcomes Measured

We included the diagnostic yield results (specificity, sensitivity, and LR) from all of the included articles in this systematic review and meta-analysis, and a secondary analysis of only EBUS + EUS with fine needle aspiration performed with the same bronchoscope (EBUS + EUS-B-FNA) was performed.
In addition, we analysed adverse events related to EBUS + EUS. For this study we defined sensitivity = true positives/(true positives + false negatives) and specificity = true negatives/(true negatives + false positives), with surgical staging (or clinical followup) as the reference standard; patients who were positive on endosonography were considered true positives.

### 2.4. Data Extraction and Analysis

Data extraction was performed by two independent reviewers (GL and FO). Two-by-two tables were generated that included true positives, false positives, true negatives, and false negatives. Primary study descriptions, population descriptions, types of studies, and reported adverse events were also extracted, and a new summary table was created.

The extracted data were imported into Microsoft Excel 2010 (Redmond, WA, USA). For the qualitative and quantitative analyses, we used Cochrane Review Manager (RevMan) software, version 5.3, and a random effects model was used for the quantitative meta-analysis of diagnostic yield. We defined significant heterogeneity as I² > 50% and created a symmetrical summary receiver operating characteristic (SROC) curve; the area under the curve was analysed using MetaDisc, version 1.4. Statistical significance was defined by a p value less than 0.05. We created a summary table of the findings based on the GRADE approach using GRADEpro software.

## 3. Results

Our search identified 820 records after removing duplicates. 775 references were excluded, and a total of 41 potentially eligible primary studies were evaluated in full-text format. After the full-text screening, 29 primary studies were excluded for various reasons [25–54], and 12 primary studies (1515 patients) were included in the qualitative and quantitative analyses and the meta-analysis (Figure 1) [13–24]. No additional studies were identified from conference abstracts. The characteristics of the included and excluded studies are reported in Tables 1 and 5.

Table 1 Primary studies included and their characteristics.

| Author | Year | N | Patients | Imaging | Index test | Outcome | Reference standard | Comments |
|---|---|---|---|---|---|---|---|---|
| Vilmann et al. [13] | 2005 | 31 | Lung cancer staging or suspected lung cancer | CT scan with suspected mass or lymph node | EBUS-TBNA + EUS-FNA | Lung cancer staging or diagnosis | Thoracotomy or clinical followup | Prospective trial, non-RCT. 9 patients underwent thoracotomy and 19 had clinical followup. |
| Wallace et al. [14] | 2008 | 138 | Lung cancer staging or suspected lung cancer | CT and PET-CT with enlarged and/or PET-positive lymph nodes | EBUS-TBNA + EUS-FNA | Lung cancer staging or diagnosis | Thoracotomy, mediastinoscopy, lobectomy, and thoracoscopy | Prospective trial, non-RCT. 33 patients underwent thoracotomy, 4 mediastinoscopy, 4 lobectomy, and 1 thoracoscopy; the rest had 6–12-month clinical followup. |
| Annema et al. [15] | 2010 | 241 | Lung cancer staging, resectable | CT and PET-CT with enlarged and/or PET-positive lymph nodes | EBUS-TBNA + EUS-FNA | Lung cancer staging | Mediastinoscopy and/or thoracotomy | RCT, 1:1; one arm endoscopic staging, one arm surgical staging. The reference standard included thoracotomy in patients without positive endosonography. |
| Herth et al. [16] | 2010 | 139 | Lung cancer staging or suspected lung cancer | CT scan, PET-CT in some patients | EBUS-TBNA + EUS-B-FNA | Lung cancer staging | Thoracoscopy, thoracotomy, or clinical followup to 12 months | Prospective trial, non-RCT. Follow-up since inclusion was 6–12 months. |
| Hwangbo et al. [17] | 2010 | 150 | Lung cancer staging or suspected lung cancer | CT and PET-CT with enlarged and/or PET-positive lymph nodes | EBUS-TBNA + EUS-B-FNA | Lung cancer staging | Surgery, lymph node dissection | Prospective trial, non-RCT. |
| Szlubowski et al. [18] | 2010 | 120 | Lung cancer staging, stage IA-IIB | CT scan with normal-size lymph nodes | EBUS-TBNA + EUS-FNA | Lung cancer staging | Bilateral transcervical extended mediastinal lymphadenectomy | Prospective trial, non-RCT. Patients with negative EBUS/EUS underwent bilateral transcervical extended mediastinal lymphadenectomy. |
| Ohnishi et al. [19] | 2011 | 110 | Staging for suspected resectable lung cancer | CT and PET-CT with enlarged and/or PET-positive lymph nodes | EBUS-TBNA + EUS-FNA | Lung cancer staging | Surgery without any specification | Prospective trial, non-RCT. |
| Szlubowski et al. [20] | 2012 | 214 | Lung cancer staging, stage IA-IIIB | CT scan | EBUS-TBNA + EUS-B-FNA | Lung cancer staging | Systematic lymph node dissection | Prospective trial, non-RCT. 110 EBUS + EUS and 104 EBUS + EUS-B-FNA. |
| Kang et al. [21] | 2014 | 148 | Staging for confirmed or suspected resectable lung cancer | CT and PET-CT with enlarged and/or PET-positive lymph nodes | EBUS-TBNA + EUS-B-FNA | Lung cancer staging | Surgery without any specification | RCT, 1:1; EBUS-centered arm versus EUS-centered arm using the same bronchoscope. Patients without definitive data were excluded from the sensitivity analysis. |
| Lee et al. [22] | 2014 | 44 | Staging for confirmed or suspected lung cancer | PET-CT without M1 disease | EBUS-TBNA + EUS-B-FNA | Lung cancer staging | Mediastinoscopy or lymph node resection | Retrospective analysis. 4 patients underwent mediastinoscopy and 4 underwent lymph node resection. |
| Liberman et al. [23] | 2014 | 144 | Staging for confirmed or suspected resectable lung cancer | CT and PET-CT with enlarged and/or PET-positive lymph nodes | EBUS-TBNA + EUS-FNA | Lung cancer staging | Mediastinoscopy or lymph node dissection | Prospective trial, non-RCT. As per protocol, patients underwent surgical staging following endosonographic staging. |
| Oki et al. [24] | 2014 | 150 | Staging for confirmed or suspected resectable lung cancer | CT and PET-CT | EBUS-TBNA + EUS-B-FNA | Lung cancer staging | Surgical resection with lymph node dissection or clinical followup | Prospective trial, non-RCT. 5 patients without clinical followup were excluded from analysis; clinical followup was 6 months after the procedure. |
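To make the Section 2.3 definitions concrete, the following is a minimal Python sketch that computes sensitivity, specificity, and the positive likelihood ratio from a single study's 2x2 table, with Wilson 95% intervals. The counts used here are hypothetical and are not taken from any included trial.

```python
import math

def wilson_ci(successes, total, z=1.96):
    """95% Wilson score interval for a proportion."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return round(centre - half, 3), round(centre + half, 3)

def diagnostic_yield(tp, fp, fn, tn):
    """Sensitivity, specificity, and LR+ from a 2x2 table, with
    surgical staging (or followup) as the reference standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec) if spec < 1 else math.inf
    return {
        "sensitivity": (round(sens, 3), wilson_ci(tp, tp + fn)),
        "specificity": (round(spec, 3), wilson_ci(tn, tn + fp)),
        "LR+": round(lr_pos, 1),
    }

# Hypothetical counts for a single primary study
print(diagnostic_yield(tp=52, fp=1, fn=8, tn=79))
```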
Figure 1 Study flow diagram following the PRISMA statement.

Two of the included primary studies were RCTs. Annema et al. allocated patients 1:1 to an endoscopic staging arm and a surgical arm. In the other RCT, Kang et al. allocated patients 1:1 to an EBUS-followed-by-EUS arm and an EUS-followed-by-EBUS arm. These trials were evaluated by two independent researchers and were judged the best evidence available. The remaining ten studies included in the quantitative and qualitative analyses were observational: nine were prospective trials, and one was retrospective in nature.

### 3.1. Risk of Bias in the Included Reviews

A quality assessment of the primary studies was performed using the Cochrane assessment tool. Most of the primary studies reported and addressed a specific question (diagnostic yield of EBUS + EUS for mediastinal staging) without any concern regarding the index test or reference standard. The limitations of the included studies were the following: (1) no data on the interval between the index test and the reference standard; (2) a risk of bias in the results because some patients in the prospective trials were excluded from the data analysis; (3) some studies reporting "surgical methods" as the reference test without specifying between mediastinoscopy, thoracotomy, and others; and (4) the study type (10 of the 12 trials were prospective). The results are presented in Figures 2 and 5.

Figure 2 Risk of bias and applicability concerns graph: review authors' judgments about each domain presented as percentages across included studies.

### 3.2. Diagnostic Accuracy

The pooled data from the primary studies that evaluated the diagnostic yield of EBUS + EUS versus surgical methods or clinical followup are shown in Figure 3.

Figure 3 Comparison 1. Forest plot of diagnostic yield from all included studies.

The sensitivity across all the primary studies was 85% (CI 80–89%) and the specificity was 99% (CI 98–100%). The meta-analysis of sensitivity, specificity, and positive likelihood ratio for all the studies and for the subgroups of EBUS + EUS-B-FNA and EBUS + EUS using different endoscopes is reported in Table 3. A subgroup analysis of EBUS + EUS using different endoscopes revealed a sensitivity of 85% and a specificity of 99.6%, compared with a sensitivity of 88% and a specificity of 100% for EBUS + EUS-B-FNA. Finally, SROC analysis revealed an AUC of 0.98 for all of the included primary studies and 0.99 for EBUS + EUS-B-FNA only. Summaries of these results are presented in Figure 4.

Figure 4 SROC from all included studies.

Figure 5 Risk of bias and applicability concerns summary: review authors' judgments about each domain for each included study.

Adverse events related to the endoscopic procedures were reported in all 12 primary studies. The most common adverse event was minor bleeding. Table 2 shows the adverse events reported in each trial.

Table 2 EBUS + EUS adverse events reported in primary studies.

| Author | EBUS + EUS adverse events |
|---|---|
| Vilmann et al. [13] | No complications |
| Wallace et al. [14] | No complications |
| Annema et al. [15] | One case of pneumothorax and 5 minor complications |
| Herth et al. [16] | No complications |
| Hwangbo et al. [17] | One case of lymph node abscess |
| Szlubowski et al. [18] | No complications |
| Ohnishi et al. [19] | No complications |
| Szlubowski et al. [20] | Two cases of nausea and 3 cases of self-limiting abdominal pain |
| Kang et al. [21] | 12 cases of minor bleeding and 1 case of pneumomediastinum |
| Lee et al. [22] | No complications |
| Liberman et al. [23] | One case of bronchial laceration and 1 case of major bleeding |
| Oki et al. [24] | Two cases of severe cough |

Table 3 Summary of the meta-analysis of all included studies and subgroup analysis.

| Comparison | Sensitivity | Specificity | LR (+) |
|---|---|---|---|
| All included studies | 87.3% (CI 80–89%; I² = 22.86%) | 99% (CI 99-100%; I² = 6.69%) | 60.66 (CI 25.27–145.60; I² = 3.42%) |
| EBUS + EUS | 85% (CI 80–89%; I² = 22.86%) | 99.6% (CI 98.5–100%; I² = 6.69%) | 60.66 (CI 25.27–145.6; I² = 3.42%) |
| EBUS + EUS-B-FNA | 88% (CI 83.1–91.4%; I² = 25.64%) | 100% (CI 99-100%; I² = 0%) | 87.67 (CI 28.35–271.07; I² = 1.85%) |

LR (+): positive likelihood ratio.

Finally, the quality of the evidence for the EBUS + EUS and EBUS + EUS-B-FNA methods was LOW for sensitivity and MODERATE for specificity based on the GRADE approach. A summary of the findings is provided in Tables 4(a) and 4(b).

Table 4 Summary of findings using the GRADE approach. (a) shows pooled data from all included primary studies; (b) shows EBUS + EUS-B-FNA only.

(a) EBUS + EUS. Pooled sensitivity: 0.87 (95% CI: 0.83 to 0.89) | pooled specificity: 0.99 (95% CI: 0.99 to 1.00). Prevalence: 40.2%.

| Test result | Number of results per 1000 patients tested (95% CI) | Number of participants (studies) | Quality of the evidence (GRADE) |
|---|---|---|---|
| True positives (patients with positive staging) | 350 (334 to 358) | 609 (12) | ⨁⨁◯◯ LOW¹,² |
| False negatives (patients incorrectly classified as negative) | 52 (68 to 44) | | |
| True negatives (patients without positive staging) | 592 (592 to 598) | 906 (12) | ⨁⨁⨁◯ MODERATE³ |
| False positives (patients incorrectly classified as positive) | 6 (6 to 0) | | |

(b) EBUS + EUS-B-FNA. Pooled sensitivity: 0.88 (95% CI: 0.83 to 0.91) | pooled specificity: 1.00 (95% CI: 0.99 to 1.00). Prevalence: 40.8%.

| Test result | Number of results per 1000 patients tested (95% CI) | Number of participants (studies) | Quality of the evidence (GRADE) |
|---|---|---|---|
| True positives (patients with positive staging) | 359 (339 to 371) | 297 (6) | ⨁⨁◯◯ LOW¹,² |
| False negatives (patients incorrectly classified as negative) | 49 (69 to 37) | | |
| True negatives (patients without positive staging) | 592 (586 to 592) | 431 (6) | ⨁⨁⨁◯ MODERATE³ |
| False positives (patients incorrectly classified as positive) | 0 (6 to 0) | | |

¹Low-quality studies. ²Imprecision between different studies. ³Different reference standards.

Table 5 Excluded studies and reason for exclusion.

| Author | Year | Reason for exclusion |
|---|---|---|
| Sánchez-Font et al. [25] | 2014 | Did not meet inclusion criteria |
| Schuhmann et al. [26] | 2014 | Did not meet inclusion criteria |
| Yarmus et al. [27] | 2013 | Did not meet inclusion criteria |
| Yarmus et al. [28] | 2013 | Did not meet inclusion criteria |
| Navani et al. [29] | 2012 | Did not meet inclusion criteria |
| Fielding et al. [30] | 2012 | Did not meet inclusion criteria |
| Oki et al. [31] | 2012 | Did not meet inclusion criteria |
| Casal et al. [32] | 2012 | Did not meet inclusion criteria |
| Steinfort et al. [33] | 2011 | Did not meet inclusion criteria |
| Ishida et al. [34] | 2011 | Did not meet inclusion criteria |
| Rintoul et al. [35] | 2009 | Did not meet inclusion criteria |
| Chao et al. [36] | 2009 | Did not meet inclusion criteria |
| Lee et al. [37] | 2008 | Did not meet inclusion criteria |
| Yoshikawa et al. [38] | 2007 | Did not meet inclusion criteria |
| Chung et al. [39] | 2007 | Did not meet inclusion criteria |
| Yasufuku et al. [40] | 2006 | Did not meet inclusion criteria |
| Herth et al. [41] | 2006 | Did not meet inclusion criteria |
| Herth et al. [42] | 2006 | Did not meet inclusion criteria |
| Herth et al. [43] | 2003 | Did not meet inclusion criteria |
| Herth et al. [44] | 2002 | Did not meet inclusion criteria |
| Verhagen et al. [45] | 2013 | Did not meet inclusion criteria |
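The pooled estimates and I² values in Table 3 come from a random-effects model (Section 2.4). As an illustration of how such pooling works, the sketch below applies a DerSimonian-Laird estimator to logit-transformed per-study sensitivities. The per-study counts are made up for demonstration, and logit pooling is one common choice rather than necessarily the exact model the RevMan/MetaDiSc analyses applied.

```python
import math

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling with I² heterogeneity."""
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, se, i2

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

# Made-up (true positive, false negative) counts per study
studies = [(52, 8), (40, 5), (33, 6), (60, 9)]
effects, variances = [], []
for tp, fn in studies:
    p = tp / (tp + fn)
    effects.append(math.log(p / (1 - p)))      # logit(sensitivity)
    variances.append(1 / tp + 1 / fn)          # delta-method variance of the logit
pooled, se, i2 = dersimonian_laird(effects, variances)
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled sensitivity {inv_logit(pooled):.3f} "
      f"(95% CI {inv_logit(lo):.3f}-{inv_logit(hi):.3f}), I² = {i2:.1f}%")
```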
## 4. Discussion

EBUS is a minimally invasive procedure with good accuracy in confirming or excluding lung cancer or mediastinal lung cancer metastasis. The addition of EUS to EBUS mediastinal staging, however, has improved the sensitivity and accuracy of this method, thereby decreasing the number of unnecessary thoracotomies and surgical procedures [3, 29, 57].
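For context, the per-1000-patient counts in the GRADE summary (Table 4) follow arithmetically from the pooled sensitivity, specificity, and prevalence, and they also determine the implied negative predictive value. The short sketch below reproduces the Table 4(a) figures using the pooled estimates of 0.87/0.99 and the 40.2% prevalence reported there; the helper function itself is illustrative, not part of the original analysis.

```python
def per_1000(sens, spec, prev, n=1000):
    """Expected 2x2 counts per n tested patients, as in a GRADE
    summary-of-findings table, plus the implied NPV."""
    tp = n * prev * sens
    fn = n * prev * (1 - sens)
    tn = n * (1 - prev) * spec
    fp = n * (1 - prev) * (1 - spec)
    npv = tn / (tn + fn)   # negative predictive value at this prevalence
    return round(tp), round(fn), round(tn), round(fp), round(npv, 3)

# Pooled estimates for all included studies (Table 4(a))
print(per_1000(sens=0.87, spec=0.99, prev=0.402))
# -> (350, 52, 592, 6, 0.919): matches Table 4(a); implied NPV of about 92%
```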
## 4. Discussion

EBUS is a minimally invasive procedure with good accuracy for confirming or excluding lung cancer or mediastinal lung cancer metastasis. The addition of EUS to EBUS mediastinal staging, however, has improved the sensitivity and accuracy of this method, thereby decreasing the number of unnecessary thoracotomies and surgical procedures [3, 29, 57]. Based on the published data, we recommend EBUS + EUS as the first step for evaluating patients with suspected or known operable lung cancer who require mediastinal staging. In the subgroup analysis of EBUS + EUS-B-FNA data only, the sensitivity improved to 88% and the specificity increased to 100%. In addition, heterogeneity decreased significantly in the EBUS + EUS-B-FNA group; these data were consistent with the evidence quality of the primary studies. In a systematic review of surgical mediastinal staging published by Silvestri et al. in 2013, traditional mediastinoscopy had a pooled median sensitivity of 78% and an NPV of 91%, and video-assisted mediastinoscopy had a median sensitivity of 89% and an NPV of 92% [57].

EBUS + EUS is a safe procedure. In our review, major procedure-related adverse events were reported for less than 1% of procedures; pneumothorax was the most commonly reported major complication. These data confirm the safety of this method across multiple studies in different settings [5].

The evidence quality of the included studies was assessed using Cochrane tools, which cover the most influential aspects of diagnostic test methods (patient selection, index test, reference standard, flow, and timing) [12]. We considered the overall quality of the included primary studies to be low.

The studies included patients with known lung cancer who required mediastinal staging or with lesions suspected to be NSCLC. In most cases, a positive pathologic diagnosis with EBUS + EUS was considered a true positive; in patients with negative results, a second test was performed (surgical methods or clinical followup). This approach is common in clinical practice, and we considered it to be of little concern for the applicability of our results.

A previous systematic review and meta-analysis was published by Zhang et al. [8]. In that meta-analysis, the risk of bias was evaluated with the STARD and QUADAS tools, and several trials reported low-quality data. In their study, a positive biopsy obtained by an EBUS + EUS procedure was sufficient to confirm the pathological diagnosis, and surgery was not required to confirm the disease [58].

According to ACCP guidelines and a previous systematic review and meta-analysis, EBUS alone yields a higher diagnostic yield than EBUS/EUS: Dong et al. report a sensitivity of 90% (CI 84.4–95.7%) and a specificity of 98.4% [55, 57]. However, no head-to-head trials comparing EBUS, EBUS/EUS, and mediastinoscopy were identified; we believe a network meta-analysis that includes the different methods of mediastinal staging in NSCLC is needed.

Our results suggest that EBUS/EUS performed with a single bronchoscope has a higher yield than EBUS/EUS performed with a separate bronchoscope and endoscope; however, because these are not head-to-head comparisons, these findings, although intriguing, should be interpreted with caution.

Finally, training is an issue that merits further research. There appears to be a paucity of training in these combined techniques; EUS performed by an interventional pulmonologist (rather than by a gastroenterologist trained in EUS) is not taught in most interventional pulmonary fellowships, and this gap should be considered when designing interventional pulmonary fellowship programs.

Our study has several potential sources of bias. In several studies the reference standards were suboptimal. Mediastinoscopy has an accuracy that is comparable to endosonography.
Mediastinoscopy is therefore potentially a suboptimal reference standard and, if used alone, will lead to overestimation of the accuracy of endosonography. We considered this a major source of bias, and we determined that the protocol of another systematic review was incomplete (PROSPERO ID: CRD42014009792) [56]; we considered that systematic review and meta-analysis part of the current body of evidence for our study.

Limitations of this approach to integrating evidence include the following. First, several nonrandomized studies had differing inclusion criteria, such as mediastinal masses or lung masses suspected of cancer, and included patients with potentially benign lesions. Second, the preprocedure evaluation was not reported in several studies, although some did include PET-CT as part of the preprocedure evaluation. Third, for negative endoscopic procedures, we had concerns about the reference standard: some patients were excluded from the data analysis in some studies, and in others the final diagnosis was established using methods other than mediastinoscopy or surgical procedures. All of these varying criteria may have introduced heterogeneity across the trials, and the RCT data were limited to two trials. More primary studies and RCTs are needed to improve the body of evidence.

## 5. Conclusion

Based on the primary studies published to date, this systematic review is the most complete evidence synthesis currently available, including all diagnostic test accuracy studies and the best-quality studies (RCTs), appraised using Cochrane criteria. Based on this analysis, we believe that EBUS + EUS is a minimally invasive and highly accurate method for mediastinal staging that is associated with a low incidence of complications. Although the diagnostic yield is not superior to that of EBUS alone, the combined procedure should be considered in selected patients with lymphadenopathy at stations that are not traditionally accessible with conventional EBUS. However, most of the available data come from high-quality observational studies, and additional studies are necessary to improve the quality of the evidence.

---

*Source: 1024709-2016-10-13.xml*
# Quantification of Photocyanine in Human Serum by High-Performance Liquid Chromatography-Tandem Mass Spectrometry and Its Application in a Pharmacokinetic Study

**Authors:** Bing-Tian Bi; Ben-Yan Zou; Li-Ting Deng; Jing Zhan; Hai Liao; Kun-Yao Feng; Su Li
**Journal:** Journal of Analytical Methods in Chemistry (2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/102474

---

## Abstract

Photocyanine is a novel anticancer drug, and its pharmacokinetic study in cancer patients is therefore very important for choosing doses and dosing intervals in clinical application. A rapid, selective, and sensitive high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method was developed and validated for the determination of photocyanine in patient serum. Sample preparation involved one-step protein precipitation by adding methanol and N,N-dimethylformamide to 0.1 mL of serum. Detection was performed on a triple quadrupole tandem mass spectrometer operating in multiple reaction monitoring (MRM) mode. Each sample was chromatographed within 7 min. Linear calibration curves were obtained for photocyanine over a concentration range of 20–2000 ng/mL (r > 0.995), with a lower limit of quantification (LLOQ) of 20 ng/mL. The intrabatch accuracy ranged from 101.98% to 107.54%, and the interbatch accuracy varied from 100.52% to 105.62%. Stability tests showed that photocyanine was stable throughout the analytical procedure. This study is the first to apply an HPLC-MS/MS method to the pharmacokinetics of photocyanine, here in six cancer patients who had received a single intravenous dose of photocyanine (0.1 mg/kg).

---

## Body

## 1. Introduction

Photodynamic therapy (PDT) is a promising modality for cancer therapy that has been used to treat or relieve the symptoms of skin cancer, esophageal cancer, prostate cancer, and non-small-cell lung cancer [1–3]. During the PDT procedure, the photosensitizer, excited by visible light of an appropriate wavelength, generates highly reactive oxygen species, resulting in oxidative damage to cellular membranes and membranous organelles [2, 4–7].

As one type of photosensitizer, porphyrins have been approved for PDT in the USA, Europe, Canada, and Japan, but their weak absorption limits their optimal application in PDT [8]. Phthalocyanines and their derivatives are also widely used photosensitizers for the PDT of cancer, displaying high absorption of visible light, mainly in the phototherapeutic wavelength window (600–800 nm) [9, 10]. Photosense, Pc4, and CGP55847 are second-generation photosensitizers that have been used in clinical practice [11–14].

Photocyanine (ZnPcS2P2), a new second-generation PDT drug, was approved for clinical trials as a new medicine in 2008 by the State Food and Drug Administration in China. It is an isomeric mixture of di-(potassium sulfonate)-di-phthalimidomethyl phthalocyanine zinc (Figure 1) [15–17]. The presence of both hydrophobic and hydrophilic groups in photocyanine should improve its tumor selectivity. Clinical trials of a newly synthesized drug require a reliable method for measuring levels of the drug in biological samples. Because photocyanine is a novel photodynamic drug, few methods have been developed for its quantification. Li et al. [18] reported an HPLC method to separate the four isomers of a photocyanine mixture in human serum.
However, the detection signals from that method display insufficient specificity in biological samples, because interference from the matrix or other components is difficult to avoid with an ultraviolet detector. Therefore, a method with higher specificity needed to be established to ensure the validity of the determination.

Figure 1: Chemical structures of photocyanine ((a)–(d), the four isomers).

No evidence indicates that the individual isomers of photocyanine differ pharmacodynamically. Moreover, no standard substances for any of the isomers can be obtained from Fujian Longhua Pharmaceutical Co. Therefore, we developed a new high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method to determine the total concentration of photocyanine in cancer patient serum in a pharmacokinetic study. The method was validated for its specificity, sensitivity, linearity, accuracy, precision, matrix effect, dilution integrity, and stability, and the data established the method as a high-throughput and reliable bioanalytical assay.

## 2. Materials and Methods

### 2.1. Experimental Chemicals

Photocyanine (purity > 95%) and the internal standard (IS) mono-β-sulfonated zinc phthalocyanine potassium (purity > 95%) were provided by Fujian Longhua Pharmaceutical Co. (Fujian, China). HPLC-grade N,N-dimethylformamide (DMF) and methanol were purchased from Tedia Company, Inc. (Fairfield, OH, USA). Aqueous ammonia was obtained from Guangzhou Chemical Reagent Factory (Guangzhou, Guangdong, China). Deionized water was obtained from a Milli-Q analytical deionization system (Millipore, Bedford, MA, USA). Freshly obtained, drug-free human serum was collected from healthy individuals and stored at −80°C before use.

### 2.2. Chromatographic Conditions

The HPLC system consisted of an LC-20AD solvent delivery system, an SIL-20AC autosampler, a CTO-20AC column oven, and a CBM-20A controller from Shimadzu (Kyoto, Japan). Chromatographic separation of photocyanine and mono-β-sulfonated zinc phthalocyanine potassium was achieved on an XBridge C18 column (50 mm × 4.6 mm, 5 μm) from Waters (Milford, MA, USA). For method validation and sample analysis, chromatographic separation was conducted by gradient elution using deionized water (adjusted to pH 10.0 with aqueous ammonia) as mobile phase A (MPA) and methanol as mobile phase B (MPB). The gradient program was as follows: 20% MPB (0–0.2 min), 20% to 95% MPB (0.2–1.3 min), 95% MPB (1.3–4.0 min), 95% to 20% MPB (4.0–4.1 min), and 20% MPB (4.1–7.0 min). The separation was performed at a flow rate of 0.6 mL/min, and the column temperature was maintained at 60°C.
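Because the gradient program above is piecewise linear, it can be encoded as (time, %MPB) breakpoints. The sketch below is our illustration (not part of the published method) and simply interpolates the methanol fraction at any time in the 7 min run.

```python
# Gradient from Section 2.2 as (time in min, %MPB) breakpoints:
# hold 20% to 0.2 min, ramp to 95% by 1.3 min, hold to 4.0 min,
# return to 20% by 4.1 min, then re-equilibrate until 7.0 min.
GRADIENT = [(0.0, 20.0), (0.2, 20.0), (1.3, 95.0), (4.0, 95.0),
            (4.1, 20.0), (7.0, 20.0)]

def percent_mpb(t: float) -> float:
    """Linearly interpolate the methanol fraction (%MPB) at time t (min)."""
    if not GRADIENT[0][0] <= t <= GRADIENT[-1][0]:
        raise ValueError("time outside the 7 min run")
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    raise AssertionError("unreachable")

# Both analytes elute on the 95% methanol plateau (see Section 3.1):
print(percent_mpb(2.33), percent_mpb(2.59))  # 95.0 95.0
```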
### 2.3. Mass Spectrometric Conditions

An API 4000 QTRAP system (AB SCIEX, MA, USA) was operated in negative ionization mode with multiple reaction monitoring (MRM) for the HPLC-MS/MS analysis. The mass spectrometric parameters were optimized to improve the MRM sensitivity. The instrument parameters for monitoring photocyanine and the IS were as follows: vaporizer temperature, 650°C; ion spray voltage, −4,500 V; curtain gas (CUR), nitrogen, 25; nebulizing gas (GS1), 65; heated gas (GS2), 65; declustering potential (DP), −140 V for photocyanine and −135 V for the IS; collision energy (CE), −50 eV for photocyanine and −64.4 eV for the IS; entrance potential (EP), −10 V; collision cell exit potential (CXP), −10 V. The precursor-to-product ion transitions used for the MRM of photocyanine and the IS were m/z 526.0 → 146.0 and m/z 655.1 → 591.8, respectively. The mass spectrometer was operated at unit mass resolution for both the first and third quadrupoles.

### 2.4. Sample Preparation

A 100 μL aliquot of blank human serum, spiked serum, or pharmacokinetic study serum was transferred to a 1.5 mL Eppendorf tube. Then, 200 μL of DMF was added to each tube, and the mixture was vortexed for 1 min. The mixture was then spiked with 300 μL of methanol containing 450 ng/mL IS, vortexed, and centrifuged for 10 min at 15,000 rpm at 4°C. The supernatant was collected and filtered, and a 10 μL aliquot was injected into the LC-MS/MS system for analysis.

### 2.5. Method Validation

The HPLC-MS/MS assay for photocyanine was validated for specificity, lower limit of quantification (LLOQ), linearity, accuracy, precision, extraction recovery, matrix effect, and stability. Specificity was assessed by testing six independent aliquots of blank serum to exclude any endogenous interference at the peak regions of photocyanine or the IS (Figure 2). The LLOQ was defined as the lowest concentration on the standard calibration curve from six different batches at which both precision and accuracy were ≤20% with a signal-to-noise ratio (S/N) > 10. The linearity of the calibration curve was evaluated over the range of 20–2000 ng/mL. Calibration curves were constructed via linear least-squares regression analysis by plotting the peak-area ratios (photocyanine/IS) versus the drug concentrations in serum, and a correlation coefficient of r > 0.99 was considered satisfactory. Precision and accuracy were assessed with analytes covering the range of the calibration curve; the acceptance criteria were an accuracy within ±15% of the nominal values and a precision of ≤15% relative standard deviation (RSD). Intrabatch accuracy and precision were evaluated by analyzing quality control (QC) samples at concentrations of 60, 1000, and 1600 ng/mL with six replicates per concentration on the same day; interbatch accuracy and precision were assessed over three days. The extraction recovery of photocyanine and the IS was determined by calculating the ratio of the peak areas of photocyanine and the IS spiked into serum before extraction against those spiked after extraction at the same concentration. The matrix effect was determined by calculating the matrix factor, obtained as the ratio of the analyte peak response in the presence of matrix ions to that in the absence of matrix ions, by spiking analytes into blank serum extracts and blank water extracts. The stability of photocyanine was assessed by comparing measured concentrations with nominal levels in postpreparative, freeze-thaw, and long-term stability tests; if the calculated concentration deviated from the nominal concentration by more than 15%, the analyte was considered unstable. Low, medium, and high serum QC samples were analyzed in six replicates. The stability of the extracts was evaluated by keeping them at room temperature for 24 h and then subjecting them to the analytical procedure. Long-term stability of photocyanine at −80°C for 30 days was evaluated by comparing the postfreeze measured concentration with the initial concentration added to the sample. The freeze-thaw stability of the samples was assessed over three freeze-thaw cycles by thawing samples at room temperature, refreezing them for 24 h at −80°C, and then analyzing them.

Figure 2: Representative HPLC-MS/MS chromatograms of extracts from (a) blank serum, (b) a human serum sample spiked with mono-β-sulfonated zinc phthalocyanine potassium (IS), (c) a human serum sample spiked with 1000 ng/mL photocyanine and the IS, and (d) a serum sample from a patient after i.v. administration of photocyanine, spiked with the IS.
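The linearity and precision criteria above reduce to a least-squares fit and simple summary statistics. The following Python sketch shows both calculations; every number in it is invented for illustration and is not the study's raw data.

```python
import statistics

def fit_calibration(conc, ratio):
    """Unweighted linear least-squares of peak-area ratio (analyte/IS)
    versus nominal concentration; returns slope, intercept, and r."""
    mx, my = statistics.fmean(conc), statistics.fmean(ratio)
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, ratio))
    syy = sum((y - my) ** 2 for y in ratio)
    slope = sxy / sxx
    return slope, my - slope * mx, sxy / (sxx * syy) ** 0.5

def qc_accuracy_precision(measured, nominal):
    """Accuracy (% of nominal) and precision (%RSD) for one QC level."""
    mean = statistics.fmean(measured)
    return 100 * mean / nominal, 100 * statistics.stdev(measured) / mean

# Invented calibrators (ng/mL) and peak-area ratios:
conc = [20, 50, 100, 500, 1000, 2000]
ratio = [0.021, 0.052, 0.105, 0.490, 1.010, 1.980]
slope, intercept, r = fit_calibration(conc, ratio)
print(f"r = {r:.4f}")  # acceptance criterion from Section 2.5: r > 0.99

# Invented low-QC replicates (ng/mL) against the 60 ng/mL nominal level:
print(qc_accuracy_precision([61.2, 63.5, 58.9, 62.1, 60.4, 64.0], 60))
```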
### 2.6. Application

Six patients with cancer were enrolled at the Cancer Center, Sun Yat-sen University: five males and one female, ranging in age from 37 to 69 years, who had been diagnosed with a primary or metastatic malignancy. All patients provided written informed consent prior to participation. Patients received an intravenous infusion of photocyanine (0.1 mg/kg) over 60 min. Blood samples were obtained at 0, 0.5, 1, 2, 4, 6, 8, 12, 24, 72, 120, and 168 h after administration and then placed on ice and kept away from light. The blood samples were centrifuged at 3000 rpm for 10 min, and the serum was stored at −80°C until analysis. The present study was approved by the Human Subjects Review Committee of Sun Yat-sen University Cancer Center and conducted according to the Declaration of Helsinki.

Noncompartmental pharmacokinetic parameters were calculated using WinNonlin (Version 5.0, PUMCH Clinical Pharmacology Research Center). The maximum serum concentration (Cmax) and the time to reach it (Tmax) were determined directly from the data. The terminal-phase rate constant (λz) was calculated as the negative slope of the log-linear terminal portion of the serum concentration-time curve, using linear regression with at least the last four concentration-time points. The terminal-phase half-life (t1/2) was calculated as 0.693/λz. The area under the curve from time zero to the last observed time (AUC0–t) was calculated by the linear trapezoidal rule for ascending data points. The total area under the curve (AUC0–∞) was calculated as AUC0–t + Ct/λz, where Ct is the last measurable concentration. The apparent total body clearance (CL) was calculated as CL = dose/AUC0–∞, and the apparent volume of distribution associated with the terminal phase (Vz) was calculated as Vz = CL/λz.
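The noncompartmental definitions above amount to a log-linear regression for λz plus trapezoidal integration for the AUC. The Python sketch below is a simplified stand-in for the WinNonlin workflow; the concentration-time profile, body weight, and dose are all invented for illustration.

```python
import math

def nca(times, conc, dose, n_terminal=4):
    """Noncompartmental parameters per Section 2.6: Cmax/Tmax read off the
    data, lambda_z from a log-linear fit of the last points, AUC by the
    linear trapezoidal rule, then t1/2, CL, and Vz."""
    cmax = max(conc)
    tmax = times[conc.index(cmax)]
    # lambda_z: negative slope of ln(C) versus t over the terminal points.
    tt, lc = times[-n_terminal:], [math.log(c) for c in conc[-n_terminal:]]
    mt, ml = sum(tt) / len(tt), sum(lc) / len(lc)
    slope = (sum((t - mt) * (l - ml) for t, l in zip(tt, lc))
             / sum((t - mt) ** 2 for t in tt))
    lam_z = -slope
    auc_t = sum((t1 - t0) * (c0 + c1) / 2
                for t0, t1, c0, c1 in zip(times, times[1:], conc, conc[1:]))
    auc_inf = auc_t + conc[-1] / lam_z          # extrapolated tail
    cl = dose / auc_inf                         # apparent total clearance
    return {"Cmax": cmax, "Tmax": tmax, "t1/2": math.log(2) / lam_z,
            "AUC0-t": auc_t, "AUC0-inf": auc_inf, "CL": cl, "Vz": cl / lam_z}

# Invented profile (h, ng/mL) for a 60 kg patient given 0.1 mg/kg = 6 mg;
# the predose (0 h) sample is omitted. Dose is expressed in ng so that
# CL comes out in mL/h and Vz in mL.
times = [0.5, 1, 2, 4, 6, 8, 12, 24, 72, 120, 168]
conc = [900, 1800, 1500, 1200, 1000, 850, 600, 300, 60, 14, 3.5]
print(nca(times, conc, dose=6_000_000))
```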
## 3. Results and Discussion

### 3.1. HPLC-MS/MS Condition Optimization

The HPLC-MS/MS operating parameters were carefully optimized for the determination of photocyanine. Photocyanine is a typical sulfonate compound and is usually ionized in negative mode. We tuned the mass spectrometer with ESI in both positive and negative ionization modes and found that both the signal intensity and the signal-to-noise ratio were much greater in negative mode, consistent with a previous study on another sulfonate compound [19]. In the precursor-ion full-scan spectra, the most abundant ions were the deprotonated molecules at m/z 526.0 and 655.1 for photocyanine and IS, respectively. Parameters such as the desolvation temperature, ESI source temperature, capillary voltage, and the flow rates of the desolvation and cone gases were optimized to maximize the intensity of these analyte ions. The product-ion scan spectra showed abundant fragment ions at m/z 146.0 and 591.8 for photocyanine and IS, respectively. Multiple reaction monitoring (MRM) of the precursor→product transitions m/z 526.0 → 146.0 and m/z 655.1 → 591.8 was therefore used for the quantification of photocyanine and IS, respectively.

The chromatographic separation of photocyanine and IS was evaluated on different columns and with different mobile phases. Photocyanine is amphoteric and relatively hydrophobic: it displayed very strong retention on BDS and XDB C18 columns and little retention on C8 and SCX columns, resulting in broad peaks or substantial carryover, so these columns were rejected. Good chromatographic profiles of photocyanine and IS were obtained on a Waters XBridge C18 column (50 mm × 4.6 mm, 5 μm), with retention times of 2.33 and 2.59 min, respectively. The total analysis time is 7 min per sample, much shorter than the 45 min required by a previous method [18]. The column temperature was optimized by monitoring peak shape and resolution, and the XBridge C18 column performed well at 60°C. Optimization of the mobile phase is important for improving peak shape and detection sensitivity and for shortening the run time. We tested methanol, acetonitrile, and mixtures of the two as the organic modifier and found that the peaks were more symmetric with methanol. The chromatographic behavior of photocyanine at different mobile-phase pH values was also investigated; deionized water adjusted to pH 10.0 with aqueous ammonia improved the peak shape and markedly increased the signal intensity of the analyte. To refine the conditions further, the peak symmetry factor of photocyanine was calculated at different percentages of MPB during the 1.3–4.0 min elution period. The symmetry factor of a peak is W0.05 divided by 2f, where W0.05 is the width of the peak at 5% of its height and f is the distance from the leading edge of the peak to the peak maximum, measured at 5% of the peak height. As shown in Table 1, 95% MPB gave the symmetry factor closest to 1.
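In compact form, with the same definitions of W0.05 and f as above, the symmetry factor is

```latex
A_s = \frac{W_{0.05}}{2f}
```

A perfectly symmetric peak has f = W0.05/2 and hence As = 1. As an illustration with hypothetical values not taken from the paper, a peak with W0.05 = 0.20 min and f = 0.095 min gives As = 0.20/(2 × 0.095) ≈ 1.05, i.e., the slight tailing observed at 95% MPB in Table 1.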
Finally, gradient elution with deionized water (adjusted to pH 10.0 with aqueous ammonia) and methanol at a flow rate of 0.6 mL/min was adopted for this study.

Table 1: Peak symmetry factor of photocyanine at different percentages of mobile phase B.

| MPB (%) | Symmetry factor (n = 5) | RSD (%) |
|---------|-------------------------|---------|
| 95      | 1.052                   | 4.67    |
| 90      | 1.103                   | 9.00    |
| 80      | 1.198                   | 7.45    |
| 70      | 1.354                   | 7.67    |

### 3.2. Sample Preparation Procedure

Sample preparation is important for an HPLC-MS/MS assay. Liquid-liquid extraction (LLE) and solid-phase extraction (SPE) are often used for biological samples because they can improve assay sensitivity [20, 21]. SPE columns, including Strata, Strata-X, and Strata C18-E from Phenomenex (Torrance, CA, USA) and the Oasis WAX cartridge and Sep-Pak C18 from Waters (Milford, MA, USA), were tested for sample preparation in this study. However, photocyanine could not be eluted, owing to its strong adsorption to the SPE sorbents. We also attempted LLE with ethyl acetate, n-butyl alcohol, and mixtures of these solvents with n-hexane, but recovery and reproducibility were low. Because HPLC-MS/MS quantification is highly specific and sensitive, protein precipitation (PPT) was tried instead. PPT proved not only simple and efficient but also applicable to pharmacokinetic studies in which only 100 μL of serum is available. In addition, the linearity of photocyanine in human serum was markedly better with DMF than without it, suggesting that DMF maximizes the release of photocyanine during PPT by inhibiting the binding of the drug to serum proteins, although no report on the mechanism of DMF in PPT or drug release is available so far. We therefore added 200 μL of DMF to each sample; the effect of DMF here is consistent with that in a previous study [18].

### 3.3. Specificity

Specificity was determined by comparing the chromatograms of six different batches of blank human serum with the corresponding spiked serum. No interference from endogenous substances was observed at the retention times of photocyanine or IS (data not shown).

### 3.4. Linearity and LLOQ

Calibration curves were constructed from the peak-area ratios (analyte/IS) versus concentration in human serum using a 1/x² weighting factor and were linear over the concentration range tested (20–2000 ng/mL). As shown in Figure 3, a typical calibration equation for photocyanine was y = 0.421x + 0.00439 (r = 0.9965). The slopes were consistent across calibration curves prepared on three separate days. The LLOQ was 20 ng/mL, at which the S/N was >10 and the precision of repeated injections was 8.26%. This is slightly more sensitive than the previous method, whose LLOQ was 30 ng/mL [18].

Figure 3: Linear calibration curve of photocyanine (peak-area ratio analyte/IS versus concentration in human serum, 1/x² weighting) over the range 20–2000 ng/mL; y = 0.421x + 0.00439 (r = 0.9965).
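The 1/x² weighting used for the calibration curves can be reproduced with ordinary weighted least squares. The sketch below uses hypothetical standard concentrations and peak-area ratios (not the study's raw data) to fit y = ax + b with 1/x² weights and back-calculate each standard against its nominal value:

```python
import numpy as np

# Hypothetical standards spanning the validated 20-2000 ng/mL range
# and hypothetical analyte/IS peak-area ratios (not the study's data).
conc  = np.array([20, 50, 100, 200, 500, 1000, 2000], dtype=float)  # ng/mL
ratio = np.array([0.0095, 0.023, 0.044, 0.086, 0.214, 0.428, 0.861])

# 1/x^2 weighting of squared residuals, as used for the calibration
# curves in this study. np.polyfit expects sqrt-weights because it
# minimizes sum((w_i * residual_i)**2), so pass 1/conc here.
w = 1.0 / conc**2
a, b = np.polyfit(conc, ratio, 1, w=np.sqrt(w))

# Back-calculate each standard and report its accuracy (%) vs nominal.
back = (ratio - b) / a
accuracy = 100.0 * back / conc
print(f"slope={a:.6f}, intercept={b:.5f}")
print(np.round(accuracy, 1))
```

The 1/x² weights keep the low end of the curve (near the 20 ng/mL LLOQ) from being dominated by the much larger absolute residuals at 2000 ng/mL.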
### 3.5. Accuracy and Precision

The accuracy and precision of the method were determined by analyzing QC samples at three concentrations in six replicates. The intrabatch accuracy ranged from 101.98% to 107.54%, with precision (RSD) between 1.29% and 4.91%. The interbatch accuracy ranged from 100.52% to 105.62%, with precision between 4.72% and 8.53% (Table 2). The method therefore has satisfactory accuracy, precision, and reproducibility.

Table 2: Precision and accuracy of photocyanine.

| Concentration (ng/mL) | Intrabatch accuracy (%) (n = 6) | Intrabatch RSD (%) | Interbatch accuracy (%) (n = 3) | Interbatch RSD (%) |
|------|--------|------|--------|------|
| 60   | 103.72 | 4.91 | 105.62 | 4.72 |
| 1000 | 101.98 | 1.29 | 100.52 | 5.93 |
| 1600 | 107.54 | 3.31 | 101.18 | 8.53 |

### 3.6. Extraction Recovery and Matrix Effect

The extraction recoveries from QC samples at the low, intermediate, and high concentrations ranged from 31.64% to 43.53%. The analyte was extracted from serum by protein precipitation with DMF, providing a simple and rapid bioanalytical method for photocyanine. The matrix factor ranged from 61.51% to 77.03% at the three concentrations tested (Table 3), indicating that coeluting substances only slightly influenced the ionization of the analyte.

Table 3: Extraction recovery and matrix effects of photocyanine.

| Concentration (ng/mL) | Extraction recovery (%) (n = 5) | Matrix effect (%) (n = 6) |
|------|-------|-------|
| 60   | 37.13 | 61.51 |
| 1000 | 43.53 | 71.24 |
| 1600 | 31.64 | 77.03 |

### 3.7. Stability

The stability results are presented in Table 4 and demonstrate that photocyanine is stable throughout the steps of the determination. The method is therefore applicable to routine analysis.

Table 4: Stability of photocyanine under different storage conditions.

| Storage condition | Concentration (ng/mL) | Accuracy (%) (n = 6) | RSD (%) |
|------------------------------------------|------|--------|-------|
| Three freeze-thaw cycles                 | 60   | 107.08 | 6.09  |
|                                          | 1000 | 101.20 | 4.14  |
|                                          | 1600 | 93.92  | 1.76  |
| −80°C for 30 days                        | 60   | 95.14  | 10.01 |
|                                          | 1000 | 89.56  | 1.85  |
|                                          | 1600 | 85.31  | 3.26  |
| Postpreparative (room temperature, 24 h) | 60   | 102.52 | 12.12 |
|                                          | 1000 | 96.16  | 3.55  |
|                                          | 1600 | 102.15 | 2.00  |
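The accuracy and RSD figures reported in Tables 2–4 follow directly from the QC replicates: accuracy is the mean measured concentration as a percentage of nominal, and RSD is the sample standard deviation as a percentage of the mean. A minimal sketch, with hypothetical replicate values rather than the study's raw measurements:

```python
import numpy as np

def qc_accuracy_rsd(measured, nominal):
    """Accuracy (%) = mean/nominal * 100; RSD (%) = SD/mean * 100."""
    m = np.asarray(measured, dtype=float)
    mean = m.mean()
    accuracy = 100.0 * mean / nominal
    rsd = 100.0 * m.std(ddof=1) / mean   # sample SD (n - 1 denominator)
    return accuracy, rsd

# Hypothetical six replicates of the 60 ng/mL QC level.
acc, rsd = qc_accuracy_rsd([62.4, 61.0, 64.1, 60.7, 63.5, 61.8], nominal=60)
print(f"accuracy = {acc:.2f}%, RSD = {rsd:.2f}%")
```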
### 3.8. Analysis of Patient Samples

The validated HPLC-MS/MS method was successfully applied to a pharmacokinetic study in six cancer patients following i.v. administration of 0.1 mg/kg photocyanine. The mean serum concentration-time curve after a single dose is shown in Figure 4, confirming that the method is sufficiently sensitive to determine photocyanine concentrations in patient serum. The pharmacokinetic parameters are summarized in Table 5: Tmax was 1.83 ± 0.41 h, Cmax was 2465.3 ± 723.0 ng/mL, the terminal-phase elimination half-life (t1/2) was 74.62 ± 13.29 h, AUC0–t was 53137.2 ± 20210.6 ng·h/mL, AUC0–∞ was 62634.6 ± 25398.6 ng·h/mL, the volume of distribution (Vd) was 11.29 ± 4.12 L, the total clearance (CL) was 0.11 ± 0.04 L/h, and the mean residence time (MRT) was 40.16 ± 4.32 h.

Table 5: Noncompartmental pharmacokinetic parameters of photocyanine in cancer patients after a single i.v. dose of 0.1 mg/kg (n = 6).

| Parameter         | Mean ± SD           |
|-------------------|---------------------|
| Tmax (h)          | 1.83 ± 0.41         |
| Cmax (ng/mL)      | 2465.3 ± 723.0      |
| t1/2 (h)          | 74.62 ± 13.29       |
| AUC0–t (ng·h/mL)  | 53137.2 ± 20210.6   |
| AUC0–∞ (ng·h/mL)  | 62634.6 ± 25398.6   |
| Vd (L)            | 11.29 ± 4.12        |
| CL (L/h)          | 0.11 ± 0.04         |
| MRT (h)           | 40.16 ± 4.32        |

Tmax: time to maximum concentration; Cmax: maximum serum concentration; t1/2: terminal-phase elimination half-life; AUC0–t: area under the concentration-time curve from time zero to the last quantifiable measurement; AUC0–∞: area under the concentration-time curve extrapolated to infinity; Vd: volume of distribution; CL: total clearance; MRT: mean residence time.

Figure 4: Mean serum concentration-time curve of photocyanine after a single i.v. administration of 0.1 mg/kg to cancer patients (n = 6).
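As a rough internal consistency check on the tabulated means (rough because a mean of per-patient ratios is not the ratio of means), Vd, CL, and t1/2 should approximately satisfy

```latex
V_d \approx \frac{CL \cdot t_{1/2}}{0.693}
    = \frac{0.11\ \mathrm{L/h} \times 74.62\ \mathrm{h}}{0.693}
    \approx 11.8\ \mathrm{L},
```

which is in reasonable agreement with the reported 11.29 ± 4.12 L.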
## 4. Conclusion

A selective, sensitive, and rapid HPLC-MS/MS method for measuring photocyanine in human serum has been described. Compared with previously reported analytical methods [14, 18], the present method offers simpler sample preparation, higher selectivity and sensitivity, and a shorter chromatographic analysis time. This is the first report of an HPLC-MS/MS method applied to a pharmacokinetic study of photocyanine administered intravenously to cancer patients. Only 100 μL of human serum is required to obtain results, indicating that the method is suitable for human phase I clinical trials.

---
*Source: 102474-2014-06-24.xml*
--- ## Abstract Photocyanine is a novel anticancer drug. Its pharmacokinetic study in cancer patients is therefore very important for choosing doses, and dosing intervals in clinical application. A rapid, selective and sensitive high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method was developed and validated for the determination of photocyanine in patient serum. Sample preparation involved one-step protein precipitation by adding methanol and N,N-dimethyl formamide to 0.1 mL serum. The detection was performed on a triple quadrupole tandem mass spectrometer operating in multiple reaction-monitoring (MRM) mode. Each sample was chromatographed within 7 min. Linear calibration curves were obtained for photocyanine at a concentration range of 20–2000 ng/mL (r > 0.99 5), with the lower limit of quantification (LLOQ) being 20 ng/mL. The intrabatch accuracy ranged from 101.98% to 107.54%, and the interbatch accuracy varied from 100.52% to 105.62%. Stability tests showed that photocyanine was stable throughout the analytical procedure. This study is the first to utilize the HPLC-MS/MS method for the pharmacokinetic study of photocyanine in six cancer patients who had received a single dose of photocyanine (0.1 mg/kg) administered intravenously. --- ## Body ## 1. Introduction Photodynamic therapy (PDT) is a potential model for cancer therapy, which has been used to treat or relieve the symptoms of skin cancer, esophageal cancer, prostate cancer, and non-small cell lung cancer [1–3]. During the PDT procedure, the excited photosensitizer forms highly reactive oxygen species using visible light of an appropriate wavelength, resulting in oxidative damage to cellular membranes and membranous organelles [2, 4–7].As one type of the photosensitizer, porphyrins have been approved for PDT in the USA, Europe, Canada, and Japan, but their weak absorption attenuates their optimal application in PDT [8]. Phthalocyanines and their derivatives are also widely used photosensitizers for the PDT of cancer, displaying high absorption of visible light, mainly in the phototherapeutic wavelength window (600–800 nm) [9, 10]. Photosense, Pc4, and CGP55847 are second-generation photosensitizers that have been used in clinical practice [11–14].Photocyanine (ZnPcS2P2), a new second-generation PDT drug, was approved for clinical trials as a new medicine in 2008 by the State Food and Drug Administration in China. It is an isomeric mixture of di-(potassium sulfonate)-di-phthalimidomethyl phthalocyanine zinc (Figure1) [15–17]. The presence of both hydrophobic and hydrophilic groups in photocyanine would improve its tumor selectivity. Clinical trial of a novel synthesized drug requires a reliable method for measuring levels of the drug in biological samples. Because photocyanine is a novel photodynamic drug, few methods have been developed for its quantification. Li et al. [18] also reported an HPLC method to separate the four isomers of a photocyanine mixture from human serum. However, the detection signals from this method display insufficient specificity in the biological samples, because it is difficult to avoid the interference from the matrix or other interferents by ultraviolet detector. Therefore, a method with higher specificity should be established to ensure the validity of the determination.Chemical structures of photocyanine. (a) (b) (c) (d)No evidence verifies the difference of each isomer of photocyanine in pharmacodynamic studies. 
Moreover, no standard substances for any of the isomers can be obtained from Fujian Longhua Pharmaceutical Co. Therefore, we developed a new high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method to determine the total concentration of photocyanine in cancer patient serum in a pharmacokinetic study. The method was validated for its specificity, sensitivity, linearity, accuracy, precision, matrix effect, dilution integrity, and stability, and the data established the method as a high-throughput and reliable bioanalytical assay. ## 2. Materials and Methods ### 2.1. Experimental Chemicals Photocyanine (purity > 95%) and the internal standard (IS) mono-β-sulfonated zinc phthalocyanine potassium (purity > 95%) were provided by Fujian Longhua Pharmaceutical Co. (Fujian, China). HPLC-grade N, N-dimethyl formamide (DMF) and methanol were purchased from Tedia Company, Inc., (Fairfield, OH, USA). Aqueous ammonia was obtained from Guangzhou Chemical Reagent Factory (Guangzhou, Guangdong, China). Deionized water was obtained from a Milli-Q analytical deionization system (Millipore, Bedford, MA, USA). Freshly obtained, drug-free human serum was collected from healthy individuals and stored at −80°C before use. ### 2.2. Chromatographic Conditions The HPLC system consisted of an LC-20AD solvent delivery system, an SIL-20AC autosampler, a CTO-20AC column oven, and a CBM-20A controller from Shimadzu (Kyoto, Japan). Chromatographic separation of photocyanine and mono-β-sulfonated zinc phthalocyanine potassium was evaluated on an XBridge C18 column (50 mm × 4.6 mm, 5 μm) from Waters (Milford, MA, USA). For method validation and sample analysis, chromatographic separation was conducted by gradient elution using deionized water (adjusted to pH 10.0 with aqueous ammonia) as mobile phase A (MPA) and methanol as mobile phase B (MPB). The HPLC program for gradient elution was as follows: 20% of MPB (0–0.2 min), from 20% to 95% of MPB (0.2–1.3 min), 95% of MPB (1.3–4.0 min), from 95% to 20% of MPB (4.0–4.1 min), and 20% of MPB (4.1–7.0 min). The separation was performed using a flow rate of 0.6 mL/min. The column temperature was maintained at 60°C. ### 2.3. Mass Spectrometric Conditions An API 4000 QTRAP system (AB SCIEX, MA, USA) was operated in negative ionization mode with multiple reaction monitoring (MRM) for HPLC-MS/MS analysis. The mass spectrometric parameters were optimized to improve the MRM sensitivity. The instrument parameters for monitoring photocyanine and IS were as follows: vaporizer temperature, 650°C; ion spray voltage, −4,500 V; curtain gas (CUR), nitrogen, 25; nebulizing gas (GS1), 65; heated gas (GS2), 65; declustering potential (DP), photocyanine −140 V, IS −135 V; collision energy (CE), photocyanine −50 eV, IS −64.4 eV; entrance potential (EP), −10 V; collision cell exit potential (CXP), −10 V. The precursor-to-product ion transitions used for the MRM of photocyanine and IS werem/z 526.0 → 146.0 andm/z 655.1 → 591.8, respectively. The mass spectrometer was operated at unit mass resolution for both the first and third quadrupoles. ### 2.4. Sample Preparation A 100μL aliquot of blank human serum, spiked serum, or pharmacokinetic study serum was transferred to a 1.5 mL Eppendorf tube. Then, 200 μL of DMF was added to each tube of serum, and the mixture was vortexed for 1 min. The mixture was then spiked with 300 μL methanol containing 450 ng/mL IS, vortexed, and centrifuged for 10 min at 15,000 rpm at 4°C. The supernatant was collected and filtered. 
10 μL of supernatant was injected into the LC-MS/MS system for analysis. ### 2.5. Method Validation Photocyanine was validated for an HPLC-MS/MS assay. Specificity, the lower limits of quantification (LLOQ), linearity, accuracy, precision, extraction recovery, matrix effect, and stability were evaluated during method validation. The specificity was assessed by testing six independent aliquots of blank serum for exclusion of any endogenous interference at the peak region of photocyanine or IS (Figure2). LLOQ was defined as the lowest concentration on the standard calibration curve from six different batches, in which both precision and accuracy were ≤20% with a signal-to-noise ratio (S/N) > 10. The linearity of the calibration curve was evaluated over the range of 20 ng/mL and 2000 ng/mL. Calibration curves were constructed via linear least-squares regression analysis by plotting the peak-area ratios (photocyanine/IS) versus the drug concentrations in the serum, and the resulting correlation coefficient (r > 0.99) was considered satisfactory. Precision and accuracy were assessed by the analytes covering the range of the calibration curve, in which the criteria for acceptability are defined as an accuracy ±15% standard deviation (SD) from the nominal values and a precision of ±15% relative standard deviation (RSD). Intrabatch accuracy and precision were evaluated by analyzing the quality control (QC) samples at concentrations of 60, 1000, and 1600 ng/mL with six duplicated levels per concentration on the same day. The interbatch accuracy and precision were assessed over three days. The extraction recovery of photocyanine and IS was determined by calculating the ratio of the peak area of photocyanine and IS spiked in serum before extraction against postextraction spiked photocyanine and IS at the same concentration. The matrix effect was determined by calculating the matrix factor, which was obtained as the ratio of the analyte peak response in the presence of matrix ions to the analyte peak response in the absence of matrix ions by spiking analytes into blank serum extracts and blank water extracts. The stability of photocyanine was compared to the nominal level of photocyanine to determine whether photocyanine was stable in the experiments, including postpreparative stability test, freeze-thaw cycle test, and long-term stability test. If the calculated concentration of photocyanine was less than the nominal concentration by >15%, the analyte was considered to be unstable. Low, medium, and high serum QC samples were determined in six duplicated levels. The stability of the extracts was evaluated by putting them at room temperature for 24 h and then subjecting them to the analytical procedure. Photocyanine maintained at −80°C for 30 days was evaluated by comparing the postfreeze measured concentration with the initial concentration added to the sample. The freeze-thaw stability of the samples was assessed over three freeze-thaw cycles by thawing samples at room temperature, refreezing them for 24 h at −80°C and then analyzing them.Representative HPLC-MS/MS chromatograms of extract from (a) blank serum; (b) a human serum sample spiked with mono-β-sulfonated zinc phthalocyanine potassium (IS), (c) a human serum sample spiked with 1000 ng/mL photocyanine and IS, and (d) a patient serum sample after i.v. administration of photocyanine spiked with IS. (a) (b) (c) (d) ### 2.6. Application Six patients with cancer were enrolled at the Cancer Center, Sun Yat-sen University. 
The patients were five males and one female, ranging in age from 37 to 69 years, who had been diagnosed with a primary or metastatic malignancy. All patients provided written informed consent prior to participation. The patients were infused (i.v. administration) with photocyanine (0.1 mg/kg) for 60 min. Blood samples were obtained at 0, 0.5, 1, 2, 4, 6, 8, 12, 24, 72, 120, and 168 h after administration and then placed on ice and kept away from light. The blood samples were centrifuged at 3000 rpm/min for 10 min, and the serum was stored at −80°C until the analysis was conducted. The present study was approved by the Human Subjects Review Committees of the University of Sun Yat-sen University Cancer Center and conducted according to the Declaration of Helsinki.Noncompartmental pharmacokinetic parameters were calculated using WinNonlin (Version 5.0, PUMCH Clinical Pharmacology Research Center). The maximum serum concentration (C max ⁡) and time to reach it (T max ⁡) were determined directly from the data. The terminal-phase rate constant (λ z) was calculated as the negative slope of the log-linear terminal portion of the serum concentration-time curve using linear regression with at least four last concentration-time points. The terminal-phase half-life (t 1/2) was calculated as 0.693/λ z. The area under the curve from time zero to the last observed time (AUC0-t) was calculated by the linear trapezoidal rule for ascending data points. The total area under the curve (AUC0-∞) was calculated as AUC0-t + Ct/λ z, where Ct is the last measurable concentration. The apparent volume of distribution associated with the terminal phase (Vz) was calculated asVz = CL/λ z, and the apparent total body clearance (CL) was calculated as CL = dose/AUC. ## 2.1. Experimental Chemicals Photocyanine (purity > 95%) and the internal standard (IS) mono-β-sulfonated zinc phthalocyanine potassium (purity > 95%) were provided by Fujian Longhua Pharmaceutical Co. (Fujian, China). HPLC-grade N, N-dimethyl formamide (DMF) and methanol were purchased from Tedia Company, Inc., (Fairfield, OH, USA). Aqueous ammonia was obtained from Guangzhou Chemical Reagent Factory (Guangzhou, Guangdong, China). Deionized water was obtained from a Milli-Q analytical deionization system (Millipore, Bedford, MA, USA). Freshly obtained, drug-free human serum was collected from healthy individuals and stored at −80°C before use. ## 2.2. Chromatographic Conditions The HPLC system consisted of an LC-20AD solvent delivery system, an SIL-20AC autosampler, a CTO-20AC column oven, and a CBM-20A controller from Shimadzu (Kyoto, Japan). Chromatographic separation of photocyanine and mono-β-sulfonated zinc phthalocyanine potassium was evaluated on an XBridge C18 column (50 mm × 4.6 mm, 5 μm) from Waters (Milford, MA, USA). For method validation and sample analysis, chromatographic separation was conducted by gradient elution using deionized water (adjusted to pH 10.0 with aqueous ammonia) as mobile phase A (MPA) and methanol as mobile phase B (MPB). The HPLC program for gradient elution was as follows: 20% of MPB (0–0.2 min), from 20% to 95% of MPB (0.2–1.3 min), 95% of MPB (1.3–4.0 min), from 95% to 20% of MPB (4.0–4.1 min), and 20% of MPB (4.1–7.0 min). The separation was performed using a flow rate of 0.6 mL/min. The column temperature was maintained at 60°C. ## 2.3. Mass Spectrometric Conditions An API 4000 QTRAP system (AB SCIEX, MA, USA) was operated in negative ionization mode with multiple reaction monitoring (MRM) for HPLC-MS/MS analysis. 
The mass spectrometric parameters were optimized to improve the MRM sensitivity. The instrument parameters for monitoring photocyanine and IS were as follows: vaporizer temperature, 650°C; ion spray voltage, −4,500 V; curtain gas (CUR), nitrogen, 25; nebulizing gas (GS1), 65; heated gas (GS2), 65; declustering potential (DP), photocyanine −140 V, IS −135 V; collision energy (CE), photocyanine −50 eV, IS −64.4 eV; entrance potential (EP), −10 V; collision cell exit potential (CXP), −10 V. The precursor-to-product ion transitions used for the MRM of photocyanine and IS werem/z 526.0 → 146.0 andm/z 655.1 → 591.8, respectively. The mass spectrometer was operated at unit mass resolution for both the first and third quadrupoles. ## 2.4. Sample Preparation A 100μL aliquot of blank human serum, spiked serum, or pharmacokinetic study serum was transferred to a 1.5 mL Eppendorf tube. Then, 200 μL of DMF was added to each tube of serum, and the mixture was vortexed for 1 min. The mixture was then spiked with 300 μL methanol containing 450 ng/mL IS, vortexed, and centrifuged for 10 min at 15,000 rpm at 4°C. The supernatant was collected and filtered. 10 μL of supernatant was injected into the LC-MS/MS system for analysis. ## 2.5. Method Validation Photocyanine was validated for an HPLC-MS/MS assay. Specificity, the lower limits of quantification (LLOQ), linearity, accuracy, precision, extraction recovery, matrix effect, and stability were evaluated during method validation. The specificity was assessed by testing six independent aliquots of blank serum for exclusion of any endogenous interference at the peak region of photocyanine or IS (Figure2). LLOQ was defined as the lowest concentration on the standard calibration curve from six different batches, in which both precision and accuracy were ≤20% with a signal-to-noise ratio (S/N) > 10. The linearity of the calibration curve was evaluated over the range of 20 ng/mL and 2000 ng/mL. Calibration curves were constructed via linear least-squares regression analysis by plotting the peak-area ratios (photocyanine/IS) versus the drug concentrations in the serum, and the resulting correlation coefficient (r > 0.99) was considered satisfactory. Precision and accuracy were assessed by the analytes covering the range of the calibration curve, in which the criteria for acceptability are defined as an accuracy ±15% standard deviation (SD) from the nominal values and a precision of ±15% relative standard deviation (RSD). Intrabatch accuracy and precision were evaluated by analyzing the quality control (QC) samples at concentrations of 60, 1000, and 1600 ng/mL with six duplicated levels per concentration on the same day. The interbatch accuracy and precision were assessed over three days. The extraction recovery of photocyanine and IS was determined by calculating the ratio of the peak area of photocyanine and IS spiked in serum before extraction against postextraction spiked photocyanine and IS at the same concentration. The matrix effect was determined by calculating the matrix factor, which was obtained as the ratio of the analyte peak response in the presence of matrix ions to the analyte peak response in the absence of matrix ions by spiking analytes into blank serum extracts and blank water extracts. The stability of photocyanine was compared to the nominal level of photocyanine to determine whether photocyanine was stable in the experiments, including postpreparative stability test, freeze-thaw cycle test, and long-term stability test. 
If the calculated concentration of photocyanine was less than the nominal concentration by >15%, the analyte was considered to be unstable. Low, medium, and high serum QC samples were determined in six duplicated levels. The stability of the extracts was evaluated by putting them at room temperature for 24 h and then subjecting them to the analytical procedure. Photocyanine maintained at −80°C for 30 days was evaluated by comparing the postfreeze measured concentration with the initial concentration added to the sample. The freeze-thaw stability of the samples was assessed over three freeze-thaw cycles by thawing samples at room temperature, refreezing them for 24 h at −80°C and then analyzing them.Representative HPLC-MS/MS chromatograms of extract from (a) blank serum; (b) a human serum sample spiked with mono-β-sulfonated zinc phthalocyanine potassium (IS), (c) a human serum sample spiked with 1000 ng/mL photocyanine and IS, and (d) a patient serum sample after i.v. administration of photocyanine spiked with IS. (a) (b) (c) (d) ## 2.6. Application Six patients with cancer were enrolled at the Cancer Center, Sun Yat-sen University. The patients were five males and one female, ranging in age from 37 to 69 years, who had been diagnosed with a primary or metastatic malignancy. All patients provided written informed consent prior to participation. The patients were infused (i.v. administration) with photocyanine (0.1 mg/kg) for 60 min. Blood samples were obtained at 0, 0.5, 1, 2, 4, 6, 8, 12, 24, 72, 120, and 168 h after administration and then placed on ice and kept away from light. The blood samples were centrifuged at 3000 rpm/min for 10 min, and the serum was stored at −80°C until the analysis was conducted. The present study was approved by the Human Subjects Review Committees of the University of Sun Yat-sen University Cancer Center and conducted according to the Declaration of Helsinki.Noncompartmental pharmacokinetic parameters were calculated using WinNonlin (Version 5.0, PUMCH Clinical Pharmacology Research Center). The maximum serum concentration (C max ⁡) and time to reach it (T max ⁡) were determined directly from the data. The terminal-phase rate constant (λ z) was calculated as the negative slope of the log-linear terminal portion of the serum concentration-time curve using linear regression with at least four last concentration-time points. The terminal-phase half-life (t 1/2) was calculated as 0.693/λ z. The area under the curve from time zero to the last observed time (AUC0-t) was calculated by the linear trapezoidal rule for ascending data points. The total area under the curve (AUC0-∞) was calculated as AUC0-t + Ct/λ z, where Ct is the last measurable concentration. The apparent volume of distribution associated with the terminal phase (Vz) was calculated asVz = CL/λ z, and the apparent total body clearance (CL) was calculated as CL = dose/AUC. ## 3. Results and Discussion ### 3.1. HPLC-MS/MS Condition Optimization HPLC-MS/MS operation parameters were carefully optimized for the determination of photocyanine. Photocyanine is a typical sulfonate compound, which is usually ionized in negative mode. We tuned the mass spectrometer in both positive and negative ionization modes with ESI for photocyanine and found that both the signal intensity and ratio of signal to noise obtained in negative ionization mode were much greater than those in positive ionization mode, which was consistent with previous study on another sulfonate compound [19]. 
In the precursor ion full-scan spectra, the most abundant ions were protonated molecules withm/zratios of 526.0 and 655.1 for photocyanine and IS, respectively. Parameters such as desolvation temperature, ESI source temperature, capillary voltage, and flow rate of desolvation gas and cone gas were all optimized to obtain the highest intensity of protonated analyte molecules. The product ion scan spectra showed high abundance fragment ions atm/zvalues of 146.0 and 591.8 for photocyanine and IS, respectively. Multiple reaction monitoring (MRM) using the precursor→product ion transition ofm/z526.0→ m/z146.0 andm/z655.1→ m/z591.8 was used for the quantification of photocyanine and IS, respectively.The efficiency of the chromatographic separation of photocyanine and IS was evaluated using tests with different chromatographic columns and mobile phases. Photocyanine is amphoteric and relatively hydrophobic. We found that photocyanine displayed a very strong retention on BDS or XDB C18 columns, and little retention on C8 or SCX columns, which resulted in broad peak or substantial carry over. These columns have been tested without success. It has been shown that good chromatographic profiles of photocyanine and IS are obtained using a Waters XBridge C18 column (50 mm × 4.6 mm, 5μm), with retention times of 2.33 and 2.59 min, respectively. The total analysis time per sample is 7 min, which is much shorter than that of 45 min in previous study [18]. We also optimized the column temperature by observing the chromatographic peak and resolution and found that XBridge C18 column displayed well column performance at 60°C. Optimization of the mobile phase is important for improving peak shape and detection sensitivity and for shortening the run time. We tested methanol, acetonitrile, and a mixture of the two as the organic modifier of the mobile phase, and we found that the peaks were more symmetric when methanol was used. Moreover, the chromatographic behavior of photocyanine subjected to mobile phases of different pH values was investigated, and we observed that deionized water (adjusted to pH 10.0 with aqueous ammonia) improved the peak shape and significantly increased the signal intensity of the analyte. In order to further optimize the chromatographic condition, the peak symmetry factor of photocyanine was calculated in different percentage of MPB at the elution period 1.3–4.0 min. Symmetry factor of a peak is calculated by dividingW 0.05 by two-foldf, whereW 0.05 is the width of the peak at 5% height andf is the distance from the peak maximum to the leading edge of the peak, the distance being measured at a point 5% of the peak height from the baseline. As shown in Table 1, the optimal value was 95%. Finally, the optimized gradient elution with deionized water (adjusted to pH 10.0 with aqueous ammonia) and methanol at a flow rate of 0.6 mL/min was established in this study.Table 1 Peak symmetry factor of photocyanine in different percentage of mobile phase B. MPB (%) Symmetry factor (n = 5) RSD (%) 95 1.052 4.67 90 1.103 9.00 80 1.198 7.45 70 1.354 7.67 ### 3.2. Sample Preparation Procedure Sample preparation is important for the HPLC-MS/MS assay. Liquid-liquid extraction (LLE) and solid-phase extraction (SPE) are techniques often used in the preparation of biological samples due to their ability to improve assay sensitivity [20, 21]. 
SPE columns, including Strata, Strata-X, and Strata C18-E from Phenomenex (Torrence, CA, USA), OASIS WAX Cartridge, and Sep-pak C18 from Waters (Milford, MA, USA), were used for sample preparation in this study. However, photocyanine exhibited no elution due to its strong adsorption to the SPE columns. We also carried out LLE with ethyl acetate, n-butyl alcohol, and mixtures of these organic solvents with n-hexane; however, we obtained low recovery and reproducibility using this procedure. Because HPLC-MS/MS quantification is highly specific and sensitive, protein precipitation (PPT) from the sample preparations was tried in the present study. We found that PPT was not only simple and efficient but also applicable to pharmacokinetic studies in which only 100 μL of serum was used to obtain bioanalytical results. In addition, we observed that the linearity of photocyanine concentration in human serum with DMF was significantly improved than that of without DMF, indicating that DMF is conductive to maximize the release of photocyanine in PPT by inhibiting the binding of the drug to serum proteins, but there is no related report so far about the mechanism of DMF in PPT or drug release. Finally, we added 200 μL DMF to the sample, and the effect of DMF here is consistent to that in previous study [18]. ### 3.3. Specificity Specificity was determined by comparing the chromatograms of six different batches of blank human serum with the corresponding spiked serum. No interference from endogenous substances was observed at the respective retention times of photocyanine and IS (data not shown). ### 3.4. Linearity and LLOQ The linear calibration curves were determined from the peak-area ratios (peak-area analyte/peak-area IS) versus concentration in human serum using a weighting factor (1/x 2), varying linearly over the concentration range tested (20–2000 ng/mL). As shown in Figure 3, the typical equation for the calibration curve for photocyanine was y = 0.421 x + 0.00439 (r = 0.99 65). The slopes of the equations were consistent with the calibration curves prepared on three separate days. The LLOQ in this study was 20 ng/mL for photocyanine, in which the S/N was >10, and the precision of repeat injections was 8.26%. Our method displayed a little higher sensitivity than the previous method, in which the LLOQ was 30 ng/mL [18].Figure 3 Linear calibration curve of photocyanine from the peak-area analyte/peak-area IS ratios versus concentrations in human serum using a weighting factor (1/x 2), varying linearly over the concentration range of 20–2000 ng/mL. The equation for the curve was y = 0.421 x + 0.00439 (r = 0.99 65). ### 3.5. Accuracy and Precision The accuracy and precision of the method were determined by analyzing QC samples at three concentrations in six replicates. The intrabatch accuracy ranged from 101.98% to 107.54% at three concentrations, with precisions between 1.29% and 4.91%. The interbatch accuracy varied from 100.52% to 105.62%, with precisions ranging from 4.72% to 8.53% (Table2). Thus, the present method has satisfactory accuracy, precision, and reproducibility.Table 2 Precision and accuracy of photocyanine. Concentration(ng/mL) Intrabatch (n = 6) Interbatch (n = 3) Accuracy (%) RSD (%) Accuracy (%) RSD (%) 60 103.72 4.91 105.62 4.72 1000 101.98 1.29 100.52 5.93 1600 107.54 3.31 101.18 8.53 ### 3.6. 
Extraction Recovery and Matrix Effect The extraction recoveries from QC samples at low, intermediate, and high concentrations ranged from 31.64% to 43.53% at three tested concentrations. We extracted the analyte from serum using protein precipitation and DMF in the present study, providing a simple and rapid method for the bioanalysis of photocyanine. In terms of matrix effects, the MF ranged from 61.51% to 77.03% at the three concentrations tested (Table3), indicating that the coeluting substance only slightly influenced the ionization of the analyte.Table 3 Extraction recovery and matrix effects of photocyanine. Concentration(ng/mL) Extraction recovery (%)(n = 5) Matrix effect (%)(n = 6) 60 37.13 61.51 1000 43.53 71.24 1600 31.64 77.03 ### 3.7. Stability The results from the stability tests are presented in Table4, and the data demonstrate a good stability of photocyanine throughout the steps of the determination. The method is therefore applicable to routine analysis.Table 4 Stability of photocyanine under different storage conditions. Storage conditions Concentration (ng/mL) Accuracy (%) (n = 6) RSD (%) Freeze-thaw three cycles 60 107.08 6.09 1000 101.20 4.14 1600 93.92 1.76 −80°C for 30 days 60 95.14 10.01 1000 89.56 1.85 1600 85.31 3.26 Post-preparative(room temperature for 24 h) 60 102.52 12.12 1000 96.16 3.55 1600 102.15 2.00 ### 3.8. Analysis of Patient Samples The validated HPLC-MS/MS method described here was successfully applied to a pharmacokinetic study in 6 cancer patients following i.v. administration of 0.1 mg/kg photocyanine. A mean plasma concentration-time curve of a single dose of photocyanine is shown in Figure4. This result revealed that our method was sufficiently sensitive to determine the photocyanine concentration in the serum of patients. The parameters of the pharmacokinetic analysis are shown in Table 5. The time of maximum plasma concentration (T max ⁡) was 1.83 ± 0.41 h, the maximum plasma concentration (C max ⁡) was 2465.3 ± 723.0 ng/mL, the half-life of drug elimination at the terminal phase (t 1/2) was 74.62 ± 13.29 h, the area under the plasma concentration-time curve from 0 h to the time of the last detectable concentration (AUC0-t) was 53137.2 ± 20210.6 ng·mL−1·h, the area under the plasma concentration-time curve from 0 h to infinity (AUC0-∞) was 62634.6 ± 25398.6 ng·mL−1·h, the volume of distribution (V d) of photocyanine was 11.29 ± 4.12 L, the total clearance (CL) was 0.11 ± 0.04 L/h, and the mean residence time (MRT) was 40.16 ± 4.32 h.Table 5 Noncompartmental pharmacokinetic parameters of photocyanine in cancer patients after a single i.v. dose of 0.1 mg/kg photocyanine (n = 6). Parameters Mean ± SD T max ⁡ (h)a 1.83 ± 0.41 C max ⁡ (ng/mL)b 2465.3 ± 723.0 t 1 / 2 (h)c 74.62 ± 13.29 AU C 0 – t d (ng·mL−1 ·h) 53137.2 ± 20210.6 AU C 0 – ∞ e (ng·mL−1 ·h) 62634.6 ± 25398.6 V d (L)f 11.29 ± 4.12 CL (L/h)g 0.11 ± 0.04 MRT (h)h 40.16 ± 4.32 a T max ⁡: time to maximum concentration; b C max ⁡: maximum plasma concentration; c t 1 / 2: half-life of elimination; d AUC 0 – t: area under the concentration-time curve from time zero to the last quantifiable measurement; e AUC 0 – ∞: area under the concentration-time curve extrapolated to infinity; f V d: volume of distribution; gCL: total clearance; hMRT: mean residence time.Figure 4 Mean serum concentration-time curve of photocyanine after 0.1 mg/kg single i.v. administration to cancer patients (n = 6). ## 3.1. 
### 3.7. Stability

The results of the stability tests are presented in Table 4, and the data demonstrate good stability of photocyanine throughout the steps of the determination. The method is therefore applicable to routine analysis.

Table 4: Stability of photocyanine under different storage conditions.

| Storage condition | Concentration (ng/mL) | Accuracy (%) (n = 6) | RSD (%) |
|---|---|---|---|
| Three freeze-thaw cycles | 60 | 107.08 | 6.09 |
| | 1000 | 101.20 | 4.14 |
| | 1600 | 93.92 | 1.76 |
| −80°C for 30 days | 60 | 95.14 | 10.01 |
| | 1000 | 89.56 | 1.85 |
| | 1600 | 85.31 | 3.26 |
| Postpreparative (room temperature for 24 h) | 60 | 102.52 | 12.12 |
| | 1000 | 96.16 | 3.55 |
| | 1600 | 102.15 | 2.00 |

### 3.8. Analysis of Patient Samples

The validated HPLC-MS/MS method described here was successfully applied to a pharmacokinetic study in 6 cancer patients following i.v. administration of 0.1 mg/kg photocyanine. The mean serum concentration-time curve after a single dose of photocyanine is shown in Figure 4, demonstrating that our method is sufficiently sensitive to determine photocyanine concentrations in patient serum. The pharmacokinetic parameters are shown in Table 5: the time of maximum concentration (Tmax) was 1.83 ± 0.41 h, the maximum concentration (Cmax) was 2465.3 ± 723.0 ng/mL, the terminal elimination half-life (t1/2) was 74.62 ± 13.29 h, the area under the concentration-time curve from 0 h to the last detectable concentration (AUC0–t) was 53137.2 ± 20210.6 ng·mL−1·h, the area under the concentration-time curve from 0 h to infinity (AUC0–∞) was 62634.6 ± 25398.6 ng·mL−1·h, the volume of distribution (Vd) was 11.29 ± 4.12 L, the total clearance (CL) was 0.11 ± 0.04 L/h, and the mean residence time (MRT) was 40.16 ± 4.32 h.

Table 5: Noncompartmental pharmacokinetic parameters of photocyanine in cancer patients after a single i.v. dose of 0.1 mg/kg photocyanine (n = 6).

| Parameter | Mean ± SD |
|---|---|
| Tmax (h) | 1.83 ± 0.41 |
| Cmax (ng/mL) | 2465.3 ± 723.0 |
| t1/2 (h) | 74.62 ± 13.29 |
| AUC0–t (ng·mL−1·h) | 53137.2 ± 20210.6 |
| AUC0–∞ (ng·mL−1·h) | 62634.6 ± 25398.6 |
| Vd (L) | 11.29 ± 4.12 |
| CL (L/h) | 0.11 ± 0.04 |
| MRT (h) | 40.16 ± 4.32 |

Tmax: time to maximum concentration; Cmax: maximum plasma concentration; t1/2: elimination half-life; AUC0–t: area under the concentration-time curve from time zero to the last quantifiable measurement; AUC0–∞: area under the concentration-time curve extrapolated to infinity; Vd: volume of distribution; CL: total clearance; MRT: mean residence time.

Figure 4: Mean serum concentration-time curve of photocyanine after a single 0.1 mg/kg i.v. administration to cancer patients (n = 6).
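The parameters in Table 5 follow from standard noncompartmental formulas: trapezoidal AUC and AUMC, a log-linear fit of the terminal phase for λz with t1/2 = ln 2/λz, AUC0–∞ = AUC0–t + Clast/λz, CL = Dose/AUC0–∞, Vz = CL/λz, and MRT = AUMC0–∞/AUC0–∞. A minimal sketch of such an analysis, assuming NumPy and a wholly invented single-subject profile (the 60 kg body weight and all concentrations are illustrative only):

```python
import numpy as np

# Invented serum profile (h, ng/mL) for one hypothetical 60 kg patient
# given 0.1 mg/kg i.v.; these numbers are illustrative, not study data.
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 24.0, 48.0, 96.0, 168.0])
c = np.array([1500.0, 2100.0, 2400.0, 2200.0, 1800.0, 1200.0, 800.0, 400.0, 180.0])
dose_ng = 0.1 * 60 * 1e6  # 0.1 mg/kg x 60 kg, expressed in ng

def trapz(y, x):
    """Linear trapezoidal rule."""
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))

cmax, tmax = float(c.max()), float(t[np.argmax(c)])
auc_0t, aumc_0t = trapz(c, t), trapz(c * t, t)

# Terminal rate constant from a log-linear fit of the last four points
lam_z = -np.polyfit(t[-4:], np.log(c[-4:]), 1)[0]
t_half = np.log(2.0) / lam_z

# Extrapolate to infinity, then derive CL, Vz, and MRT
auc_inf = auc_0t + c[-1] / lam_z
aumc_inf = aumc_0t + c[-1] * t[-1] / lam_z + c[-1] / lam_z**2
cl_ml_h = dose_ng / auc_inf   # ng / (ng/mL*h) = mL/h
vz_ml = cl_ml_h / lam_z       # mL
mrt_h = aumc_inf / auc_inf    # h

print(f"Cmax {cmax:.0f} ng/mL at Tmax {tmax} h; t1/2 {t_half:.1f} h")
print(f"AUC0-inf {auc_inf:.0f} ng/mL*h; CL {cl_ml_h/1000:.2f} L/h; "
      f"Vz {vz_ml/1000:.1f} L; MRT {mrt_h:.1f} h")
```

Note that the Vd reported in Table 5 is assumed here to be the terminal-phase volume Vz = CL/λz; the paper does not state which volume term was used.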
## 4. Conclusion

A selective, sensitive, and rapid HPLC-MS/MS method for measuring photocyanine in human serum is described. Compared with the previously reported analytical methods [14, 18], the present method is simpler in sample preparation, more selective and sensitive, and faster in chromatographic analysis. This is the first report of an HPLC-MS/MS method applied to the pharmacokinetic study of photocyanine given by injection to cancer patients. Only 100 μL of human serum was needed to obtain results in our pharmacokinetic study, indicating that the present method is applicable to human phase I clinical trials.

---

*Source: 102474-2014-06-24.xml*
2014
# MPNs as Inflammatory Diseases: The Evidence, Consequences, and Perspectives

**Authors:** Hans Carl Hasselbalch; Mads Emil Bjørn
**Journal:** Mediators of Inflammation (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/102476

---

## Abstract

In recent years evidence has been increasing that chronic inflammation may be an important driving force for clonal evolution and disease progression in the Philadelphia-negative myeloproliferative neoplasms (MPNs), essential thrombocythemia (ET), polycythemia vera (PV), and myelofibrosis (MF). Abnormal expression and activity of a number of proinflammatory cytokines are associated with MPNs, in particular MF, in which immune dysregulation is pronounced as evidenced by dysregulation of several immune and inflammation genes. In addition, chronic inflammation has been suggested to contribute to the development of premature atherosclerosis and may drive the development of other cancers in MPNs, both nonhematologic and hematologic. The MPN population has a substantial inflammation-mediated comorbidity burden. This review describes the evidence for considering the MPNs as inflammatory diseases, a “Human Inflammation Model of Cancer Development,” and the role of cytokines in disease initiation and progression. The consequences of this model are discussed, including the increased risk of second cancers and other inflammation-mediated diseases, emphasizing the urgent need for rethinking our therapeutic approach. Early intervention with interferon-alpha2, which as monotherapy has been shown to be able to induce minimal residual disease, in combination with potent anti-inflammatory agents such as JAK-inhibitors is foreseen as the most promising new treatment modality in the years to come.

---

## Body

## 1. Introduction

Recent studies have provided evidence that the chronic myeloproliferative neoplasms (MPNs), essential thrombocythemia (ET), polycythemia vera (PV), and myelofibrosis (MF), may be preceded or accompanied by chronic inflammation and may also imply an increased risk for the development of other cancers [1–3]. In these neoplasms morbidity and mortality are massively influenced by cardiovascular and thromboembolic complications [1, 4, 5]. The advanced myelofibrotic stage is typically characterized by transfusion-dependent anemia, large spleen, severe bone marrow fibrosis, and steadily increasing white blood cell counts or severe pancytopenia, and end-stage development of acute leukemia, seen in up to 20% of patients with MF [1, 5]. The incidence of MPNs is low, but the prevalence is high and comparable with lung cancer. In 2005, a unique breakthrough came with the identification of the JAK2V617F mutation in almost all patients with PV and about half of patients with ET and MF [1]. It is possible to monitor the “tumor burden” by analyzing the JAK2 allelic burden with qPCR. In 2013 the calreticulin mutations were described in a large proportion of the JAK2V617F negative ET and MF patients [6, 7]. The clinical implications of these mutations are described elsewhere in this Theme Issue.

Chronic inflammation is an important risk factor for the development of atherosclerosis, which occurs prematurely in patients with chronic inflammatory diseases, including rheumatoid arthritis, systemic lupus erythematosus, psoriasis, and type II diabetes mellitus.
In these diseases, in vivo activation of leukocytes, platelets, and endothelial cells contributes significantly to the increased risk of thrombosis. The same thrombophilia-generating mechanisms are operative in ET, PV, and MF, in which chronic inflammation has recently been described as a potentially very important facilitator not only of premature atherosclerosis but also of clonal evolution and second cancer [8]. Thus, the chronic MPNs are both “model diseases” for studies of the relationship between chronic inflammation and premature atherosclerosis development in the biological continuum from ET over PV to myelofibrosis and “model diseases” for cancer development from the early cancer stage (ET, PV) to the advanced metastatic cancer stage (MF with myeloid metaplasia) [9–13].

Based upon experimental, clinical, and epidemiological studies we herein argue for the MPNs as inflammatory diseases in accordance with the “Human Inflammation Model for Cancer Development.” In the following we describe the evidence for MPNs as chronic inflammatory diseases and discuss the consequences of chronic inflammation in MPNs in terms of disease progression due to inflammation-mediated clonal expansion and defective tumor immune surveillance. In this context we argue for dampening chronic inflammation at the earliest disease stage (ET/PV), when the tumor burden is minimal, the clone is homogeneous (prior to subclone formation and/or acquisition of additional driver mutations), and accordingly the outcome of treatment is logically most favorable (Figure 1).

Figure 1: Vicious cycle of inflammation in the biological continuum of ET, PV, and MF. Chronic inflammation is proposed as the trigger and driver of clonal evolution in the biologic continuum from the early disease state (ET/PV) to a more advanced disease state (MF). It is possible that combination therapy, using low doses of agents such as interferon-alpha, Janus kinase inhibitors, and statins at the early disease stage, will positively influence the vicious cycle of disease progression. HGF: hepatocyte growth factor; IL: interleukin; MPN: myeloproliferative neoplasm; TNF: tumor necrosis factor.

## 2. The Evidence of a Link between Chronic Inflammation and Cancer

About 30 years ago Dvorak described cancers as “wounds that do not heal,” a concept formulated in 1986 that has since become increasingly recognized and was most recently updated [14, 15]. In their seminal contribution from 2000 Hanahan and Weinberg identified the six hallmarks of cancer, and chronic inflammation has recently been added as a seventh hallmark, emphasizing the huge impact of chronic inflammation on cancer development and progression (“oncoinflammation”) [16, 17]. Accordingly, chronic inflammation is today considered of major importance in the development of cancer, and several molecular and cellular signaling circuits linking inflammation and cancer have been identified [18–22]. Indeed, this concept was already described by Virchow in the 19th century, when he suggested that chronic inflammation might give rise to malignancy [21]. Nevertheless, only more recently has the link between inflammation and cancer been widely acknowledged, partly owing to epidemiologic studies that have identified chronic infections and inflammation as major risk factors for various types of cancer.
In hematological malignancies, a link between chronic inflammation and malignant lymphomas has been well described, whereas chronic inflammation as a potential initiating event and a driver of clonal evolution in myeloid cancers, including MPNs, has come into focus only very recently [8, 9, 11–13, 23–25].

## 3. The Evidence of MPNs as Inflammatory and Immune Deregulated Diseases

### 3.1. What Is the Epidemiological Evidence?

An increased risk of autoimmune and/or inflammatory conditions was documented several years ago in patients with myeloid malignancies, and recently a large Swedish epidemiologic study concluded that chronic immune stimulation might act as a trigger for the development of the myelodysplastic syndrome (MDS) and acute myelogenous leukemia (AML) [26, 27]. In regard to MPNs, another Swedish study has shown that inflammatory diseases may precede or develop during the course of ET, PV, and MF. In this Swedish study, a prior history of any autoimmune disease was associated with a significantly increased risk of a myeloproliferative neoplasm. The “inflammatory” diseases included, among others, Crohn’s disease, polymyalgia rheumatica, and giant cell arteritis, and the “autoimmune” diseases included immune thrombocytopenic purpura and aplastic anemia [2]. The 46/1 haplotype is present in 45% of the general population and is associated with a predisposition to acquire the JAK2V617F mutation and accordingly MPNs, but it also predisposes to MPNs with no mutation of JAK2 and to MPNs with mutation in MPL [28–31]. Importantly, epidemiological studies have shown that the frequency of the JAK2 46/1 haplotype is increased in inflammatory diseases, including Crohn’s disease [32, 33].

Risk factors for developing atherosclerosis, a chronic inflammatory disease, have been investigated in a large Danish epidemiological study of 49,488 individuals from the Copenhagen General Population Study. Those harboring the JAK2V617F mutation had a 2.2-/1.2-fold risk (prevalent/incident) of ischemic heart disease [34].

### 3.2. What Is the Histomorphological Evidence?

Already about 40 years ago it was speculated whether autoimmune bone marrow damage might be implicated in the pathogenesis of “idiopathic myelofibrosis” (IMF). Several observations support the participation of immune mechanisms in the development of bone marrow fibrosis. Thus, histopathological findings of “Fibrin-Faser-Stern” figures, increased numbers of plasma cells and lymphocytes with plasmacytoid appearance, the demonstration of a parallel increase in interstitial deposits of immunoglobulins and the extent of bone marrow fibrosis, and the development of bone marrow fibrosis after repeated antigen injections in animal models all render immune-mediated bone marrow fibrosis possible [35–41]. Importantly, the findings of “Fibrin-Faser-Stern” figures and lymphoid aggregates in bone marrows from MPN patients have been variably interpreted as evidence of immune activity in the marrow with deposition of immune complexes [35–38]. Immune activity in the bone marrow with an increase of lymphoid nodules has been found to be most prominent in the early stage of IMF [37, 38]. A recent study investigated the mechanism of bone marrow fibrosis in patients with MF by comparing TGF-β1 signaling of marrow and spleen cells from patients with MF and from nondiseased individuals. The expression of several TGF-β1 signaling genes was altered in the marrow and spleen of MF patients.
Abnormalities included genes of TGF-β1 signaling, cell cycling, and Hedgehog and p53 signaling. Pathway analysis of these alterations predicted increased osteoblast differentiation, ineffective hematopoiesis, and fibrosis driven by noncanonical TGF-β1 signaling in the marrow, and increased proliferation and defective DNA repair in the spleen. The hypothesis that fibrosis in MF might result from an autoimmune process, triggered by dead megakaryocytes, was supported by the findings of increased plasma levels of mitochondrial DNA and anti-mitochondrial antibodies in MF patients. It was concluded that autoimmunity might be a plausible cause of marrow fibrosis in MF [42]. Finally, the clinical observations of a favorable outcome of immunosuppressive therapy in some MF patients with evidence of autoimmune activity support the concept that autoimmunity, immune dysfunction, and chronic inflammation may be important factors in pathogenesis [43–48].

### 3.3. What Is the Clinical Evidence?

#### 3.3.1. The Inflammation-Mediated Cardiovascular and Thromboembolic Disease Burden

Patients with MPNs have a massive cardiovascular disease burden with a high risk of thrombosis (Figure 2), which is partly explained by excessive aggregation of circulating leukocytes and platelets due to in vivo leukocyte-platelet and endothelial activation in combination with a thrombogenic endothelium [1, 4]. In addition, MPNs are associated with a procoagulant state, which has recently been elegantly reviewed by Barbui et al. [49]. The hyperactivation of circulating cells in MPNs has been attributed to the clonal myeloproliferation. Thus, the JAK2V617F mutation per se has been shown to induce leukocyte and platelet activation, and several clinical studies have demonstrated that JAK2V617F positivity is a thrombogenic factor in MPNs [49–52]. Of note, Barbui et al. have recently shown that the level of C-reactive protein (CRP) is elevated in patients with ET and PV and correlates significantly with the JAK2V617F allele burden [53]. Furthermore, elevated CRP levels have also been associated with shortened leukemia-free survival in myelofibrosis [54]. It has been speculated whether sustained inflammation might elicit the stem cell insult by inducing a state of chronic oxidative stress with elevated levels of reactive oxygen species (ROS) in the bone marrow, thereby creating a high-risk microenvironment for induction of mutations in hematopoietic cells [9]. Being a sensitive marker of inflammation and influencing, for example, endothelial function, coagulation, fibrinolysis, and plaque stability, CRP is considered to be a mediator of vascular disease and accordingly a major vascular risk factor as well [55–57]. This association has recently been demonstrated in a meta-analysis showing continuous associations between the CRP concentration and the risk of coronary heart disease, ischemic stroke, and vascular mortality [58]. For decades it has been known that atherosclerosis and atherothrombosis are chronic inflammatory diseases [59, 60]. Several studies have reported that chronic inflammatory diseases (e.g., rheumatoid arthritis, psoriasis, systemic lupus erythematosus, and diabetes mellitus) are associated with accelerated atherosclerosis and accordingly development of premature atherosclerosis (early ageing?) [61–65]. In addition, considering the association between atherosclerosis and venous thrombosis, chronic inflammation indirectly predisposes to venous thrombosis and pulmonary thromboembolism as well [66].
In the context of the associations between inflammation and CRP in ET and PV, inflammation might be considered a secondary event elicited by clonal cells [53]. However, elevated leukocyte and platelet counts in MPNs may not only reflect clonal myeloproliferation but also reflect the impact of chronic inflammation per se on the clonal cells. This interpretation is particularly intriguing when considering that one of the hallmarks of MPNs is inherent hypersensitivity to growth factor and cytokine stimulation [8]. In this perspective, chronic inflammation in MPNs may also have a key role in promoting premature atherosclerosis and all its debilitating cardiovascular and thromboembolic complications, the common denominators for their development being elevated leukocyte and platelet counts, elevated CRP levels, and in vivo leukocyte-platelet and endothelial cell activation, taking into account that platelet-leukocyte interactions link inflammatory and thromboembolic events in several other inflammation-mediated diseases [67].

Figure 2: Patients with MPNs have a massive cardiovascular and thromboembolic disease burden.

#### 3.3.2. Inflammation-Mediated Chronic Kidney Disease

Uncontrolled chronic inflammation is associated with organ dysfunction, organ fibrosis, and ultimately organ failure [68]. This development is classically depicted in patients with the metabolic syndrome progressing to type II diabetes mellitus (DM), in whom, without adequate treatment to normalize elevated blood glucose levels, organ failure may rapidly develop due to accelerated atherosclerosis (e.g., hypertension, ischemic heart disease, stroke, dementia, peripheral arterial insufficiency, venous thromboembolism, and chronic kidney disease). The progressive deterioration of multiple organs in uncontrolled DM, consequent to elevated blood glucose levels with in vivo leukocyte-platelet and endothelial activation and development of premature atherosclerosis, is in several respects comparable to the multitude of systemic manifestations in patients with uncontrolled MPNs, the common denominators being a huge cardiovascular disease burden and thromboembolic complications [10]. Importantly, similar to patients with type II DM, it has been demonstrated that patients with MPNs have an increased risk of developing chronic kidney disease [69]. It was concluded that progressive renal impairment may be an important factor in MPNs contributing to the comorbidity burden and likely to overall survival. In addition, it was speculated whether chronic inflammation with accumulation of ROS might be a driving force for impairment of renal function, which would support early intervention in order to normalize elevated cell counts and reduce the chronic inflammatory drive elicited by the malignant clone itself [69].

#### 3.3.3. Inflammation-Mediated “Autoinflammatory” Diseases

As outlined above, patients with MPNs may have an increased risk of various autoimmune, “autoinflammatory,” or inflammatory diseases. Thus, associations have been reported with systemic lupus erythematosus, progressive systemic sclerosis, primary biliary cirrhosis, ulcerative colitis, Crohn’s disease, nephrotic syndrome, polyarteritis nodosa, Sjögren syndrome, juvenile rheumatoid arthritis, polymyalgia rheumatica/arteritis temporalis, immune thrombocytopenic purpura (ITP), and aplastic anemia. In large epidemiological studies these associations have only been significant for Crohn’s disease, polymyalgia rheumatica/arteritis temporalis, and ITP [2].
Interestingly, a particular subtype of myelofibrosis, “primary autoimmune myelofibrosis,” has been described. This subtype has been considered a nonclonal and nonneoplastic disease, featured by anemia/cytopenias and autoantibodies suggesting systemic autoimmunity. Most patients have no or only mild splenomegaly, and the bone marrow biopsy exhibits MPN-like histology with fibrosis, hypercellularity, and megakaryocyte clusters. In addition, bone marrow lymphoid aggregates are prominent [46]. It remains to be established whether this subset of MF actually exists or whether these patients should indeed be categorized within the MPN disease entity, taking into account that autoimmunity and chronic inflammation are today considered to have a major role in MPN pathogenesis.

#### 3.3.4. Inflammation-Mediated Osteopenia

A recent Danish registry study has shown that patients with ET and PV have an increased incidence of fractures compared with the general population [70]. Taking into account that chronic inflammation has been suggested to explain the initiation of clonal development and progression in chronic myeloproliferative neoplasms, and that other chronic inflammatory diseases are associated with an increased risk of osteopenia, it has been speculated whether chronic inflammation might induce osteopenia in MPNs and by this mechanism also predispose to the increased risk of fractures [12, 70–72].

#### 3.3.5. Inflammation-Mediated Second Cancers

As noted previously, patients with MPNs have been shown to have an increased risk of second cancers [3, 5]. In the perspective that chronic inflammation may be a driving force for clonal evolution in MPNs, it is intriguing to consider whether chronic inflammation may contribute to the development of second cancers in MPNs as well, taking into account the close association between inflammation and cancer [8, 9, 11–13, 17–22]. In this regard a defective “tumor immune surveillance” consequent to immune deregulation, which has been demonstrated in MPNs in several recent studies and most recently comprehensively reviewed, might be of importance [42, 73–75]. Of note, the increased risk of second cancers has also been recorded prior to the MPN diagnosis, emphasizing that the MPNs may have a long prediagnosis phase (5–10–15 years) with a chronic inflammatory state promoting mutagenesis, defective tumor immune surveillance, and immune deregulation [9, 76–78]. This concept is compatible with the most recent observations of additional mutations that are already present at the time of diagnosis, likely induced by a sustained inflammatory drive on the malignant clone several years before diagnosis [7, 9, 78, 79] (Figure 4).

### 3.4. What Is the Biochemical Evidence?

As outlined above, MPNs are associated with a low-grade inflammatory state as assessed by slightly elevated CRP in a large proportion of patients with ET and PV [53]. CRP levels increase steadily as patients enter the accelerated phase towards leukemic transformation [54]. Considering the close association between CRP and other inflammatory markers, the leukocyte and platelet counts, it is highly relevant to ask whether leukocytosis and thrombocytosis in MPNs are also attributable to the chronic inflammatory drive per se, with sustained generation of inflammatory products that fuel the malignant clone in a vicious self-perpetuating circle [8, 11].
Similar to CRP, plasma fibrinogen and D-dimer levels are slightly elevated in many patients and may indeed be more sensitive inflammatory markers than CRP (unpublished observations). Proinflammatory cytokines are elevated in a substantial proportion of patients with MPNs, a topic which has recently been reviewed and thoroughly described by Fleischman and others in this Theme Issue [11].

The concept that MPNs and the advanced MF stage are elicited and perpetuated by autoimmune/inflammatory mechanisms was already intensely investigated and discussed 30 years ago. Some of the clinical and histomorphological issues concerning associations between MPNs and autoimmune/inflammatory states have been addressed above. In addition, several studies from that period reported biochemical evidence of autoimmunity/inflammation in MPNs, such as elevated levels of antibodies to RBCs, antibodies to platelets, anti-nuclear and anti-mitochondrial antibodies (ANA and AMA), rheumatoid factor, lupus-like anticoagulant, low levels of complement, complement activation, increased levels of immune complexes (ICs), and increased levels of soluble interleukin-2 receptors (s-IL2R) [38, 43, 80–85]. It was debated whether deposition of immune complexes in the bone marrow, either formed in situ or trapped from the circulation, might be followed by complement activation with a subsequent local inflammatory reaction, an interpretation fitting very well with the findings of complement activation in MF patients [80, 81]. Of note, circulating immune complexes were predominantly found in the early disease stage. Since circulating ICs were in some studies mainly found in MF patients with a short duration of disease from diagnosis, it was hypothesized that immune-mediated bone marrow damage might indeed occur in the early phase of the disease, with the late, fibrotic stage with undetectable ICs representing the “burnt out” phase of the disease [81, 84]. Today, 30 years after the detection of ICs in MPNs, their significance in MF and related neoplasms remains unsettled. With the renaissance of the concept of autoimmune bone marrow damage and chronic inflammation as driving forces for disease evolution and progression, further studies on circulating ICs and their pathogenetic and clinical relevance are highly relevant and timely. Indeed, their detection may reflect ongoing inflammatory immune reactions in the circulation and in the bone marrow, likely most pronounced in the initial disease phase and possibly related to a more acute course of the disease [81]. Most recently, a comprehensive study of autoimmune phenomena and cytokines in 100 patients with MF, including early stage MF, has added further proof of the concept that autoimmune and inflammatory mechanisms may be highly important in the pathogenesis of MPNs [86]. Importantly, organ/non-organ-specific autoantibodies were found in 57% of cases, without clinically overt autoimmune disease, and mostly in low-risk/intermediate-risk-1 and MF-0/MF-1 cases. Furthermore, TGF-β and IL-8 were increased in MS-DAT-positive cases, and TGF-β and IL-17 were elevated in early clinical and morphological stages, while IL-8 increased in advanced stages.
It was concluded that autoimmune phenomena and cytokine dysregulation may be particularly relevant in early MF [86].

Several studies have shown that circulating YKL-40 levels are elevated in a number of different diseases, including cancer, diabetes mellitus, and cardiovascular diseases, in which YKL-40 serves as an excellent marker of the disease burden. Importantly, a state of chronic inflammation is shared by them all, and YKL-40 also has a major impact upon the severity of chronic endothelial inflammation, which today is considered of crucial importance for the development of atherosclerosis. Considering the MPNs as chronic inflammatory diseases, and accordingly with an increased risk of development of premature atherosclerosis, we hypothesized that circulating YKL-40 might be an ideal marker of the integrated impact of chronic inflammation in MPNs and accordingly might display correlations with conventional markers of inflammation and disease burden in MPNs. Indeed, we have recently shown that circulating YKL-40 is a potential novel biomarker of disease activity and the inflammatory state in myelofibrosis and related neoplasms [87, 88]. These studies have demonstrated a steady increase in YKL-40 from the early cancer stage (ET) over PV to the advanced cancer stage with myelofibrosis, which exhibited the highest YKL-40 levels of them all. Of particular interest, we also found a significant correlation between YKL-40 and several markers of inflammation and disease burden, including neutrophils, platelets, CRP, LDH, and the JAK2V617F allele burden. Accordingly, circulating YKL-40 may be a novel marker of inflammation, disease burden, and progression in MPNs [87, 88].

### 3.5. What Is the Molecular Evidence?

The concept of chronic inflammation leading to clonal evolution in MPNs is also supported by gene expression profiling studies (Figure 3), which have unraveled deregulation of several genes that might be implicated in the development and phenotype of the MPNs [89–92]. Using whole-blood transcriptional profiling, and accordingly obtaining an integrated signature of genes expressed in several immune cells (granulocytes, monocytes, B cells, T cells, and platelets), we have shown that the MPNs exhibit a massive upregulation of IFN-related genes, particularly the interferon-inducible (IFI) gene IFI27, and severe deregulation of other inflammation and immune genes as well. Indeed, several genes (e.g., IFI27) displayed a stepwise upregulation in patients with ET, PV, and PMF, with fold changes from 8 to 16 to 30, respectively. The striking deregulation of IFI genes likely reflects a hyperstimulated but incompetent immune system, being most enhanced in patients with advanced MF. In this context, the massive upregulation of the IFI27 gene may also reflect an exaggerated antitumor response as part of a highly activated IFN system, including enhanced IFN-gamma expression, which might also imply activation of dendritic cells. IFI27 is also upregulated during wound repair processes, which may be of particular relevance when considering the Dvorak thesis on “Tumors: wounds that do not heal” [14, 15]. Thus, it is tempting to argue that MPNs are “wounds in the bone marrow that will not heal,” owing to the continuous release from clonal cells of growth factors and matrix proteases with ensuing extracellular remodeling of the bone marrow stroma.
In this scenario, one might speculate whether the high expression of IFI27 reflects these processes as well, IFI27 cooperating with distinct genes of potential importance for egress of CD34+ cells from the bone marrow niches into the circulation [93]. In the context of matrix remodeling during cancer metastasis (which in MPNs consists of egress of CD34+ cells from the bone marrow niches into the circulation), it is of particular interest that IFN-inducible genes, including IFI27, have been shown to be associated with the so-called metagenes in patients with breast cancer, accurately identifying those patients with lymph node metastasis and accordingly serving as predictors of outcome in individual patients [94]. Thus, the highly upregulated IFI27 gene in MPNs may reflect progressive clonal evolution with “metastasis” (extramedullary hematopoiesis) despite an exaggerated yet incompetent IFN-mediated antitumor response by activated dendritic cells and T cells. In this regard a hyperstimulated immune system might also contribute to the increased risk of autoimmune diseases in MPNs. Accordingly, the interferon signature may reflect MF as the terminal stage of chronic inflammation with a huge burden of oxidative stress, genomic instability, and accumulation of additional inflammation-induced mutations, the ultimate outcome being leukemic transformation [8–12]. During this evolution from the early cancer stage to the metastatic stage with MF, the interferons are important cytokines for immunity and cancer immunoediting [95]. For this and several other reasons IFN is, today and in the future, considered the cornerstone in the treatment of MPNs, which, when instituted in the very early disease stage, may be able to quell the fire and accordingly induce “minimal residual disease” and, in some patients, likely cure, as will be discussed below [96–103]. Chronic inflammation as the driving force for clonal evolution is also supported by the most recent whole-blood gene expression studies, showing a marked deregulation of oxidative stress genes in MPNs [104]. This issue is extensively described by Bjørn and Hasselbalch in the chapter on “The Role of Reactive Oxygen Species in Myelofibrosis and Related Neoplasms.”

Figure 3: Chronic inflammation as the driving force for clonal evolution in MPNs. Tumor burden and comorbidity burden are illustrated for patients with JAK2V617F positive MPNs. The comorbidity burden increases from the early disease stage (ET/PV) to the accelerated phase with myelofibrotic and leukemic transformation. With permission: H. C. Hasselbalch [12]. AML: acute myeloid leukemia; ET: essential thrombocythemia; JAK: Janus kinase; PPV-MF: post-polycythemia vera myelofibrosis; PV: polycythemia vera.

Figure 4: Patients with MPNs have an increased risk of second cancer not only after the MPN diagnosis but also in the pre-MPN diagnosis phase, which may last several years, during which the patients are at an increased risk of severe cardiovascular and thromboembolic events. According to this model, the initial stem cell insult has occurred 5–10–15 years before the MPN diagnosis.

### 3.6. What Are the Consequences of Chronic Inflammation in MPNs?

#### 3.6.1. The Bone Marrow Is Burning

In MPNs chronic inflammation may elicit a “cytokine storm,” “a wound that does not heal,” due to the continuous release of proinflammatory cytokines that in a self-perpetuating vicious circle drive the malignant clone.
Importantly, in this inflammatory micromilieu, reactive oxygen species (ROS) steadily accumulate, giving rise to increasing genomic instability, subclone formation with additional mutations, and ultimately bone marrow failure as a consequence of inflammation-mediated ineffective myelopoiesis (anemia, granulocytopenia, and thrombocytopenia), accumulation of connective tissue in the bone marrow, and finally leukemic transformation [8, 9, 11–13]. The impact and consequences of ROS for disease progression have been thoroughly described elsewhere by Bjørn and Hasselbalch, and the impact of chronic inflammation on bone marrow stroma has been reviewed by Marie Caroline Le Bousse Kerdiles and coworkers.

Chronic inflammation in the bone marrow microenvironment may enhance in vivo granulocyte activation with ensuing release of a vast amount of proteolytic enzymes from neutrophil granules, thereby facilitating egress of CD34+ cells and progenitors from bone marrow niches into the circulation (“metastasis”).

#### 3.6.2. The Spleen Is Burning

A common complaint in MPN patients with enlarged spleens is a “burning” spleen, which on clinical examination may also be extremely painful. Although spleen infarction may occasionally explain the spleen pain, in the large majority of patients it is attributed to inflammation, as evidenced by remarkable relief during treatment with high-dose glucocorticoids and, in particular, with JAK2 inhibitors, which within a few days produce a reduction in spleen size and a concomitant improvement in spleen pain. Accordingly, the rapid reduction in spleen size during, for example, treatment with ruxolitinib is primarily consequent to its very potent anti-inflammatory effects, as also evidenced by the rapid decrease in circulating inflammatory cytokines [11, 12].

#### 3.6.3. The Circulation Is Burning

As outlined above, circulating levels of a large number of inflammatory cytokines are elevated in patients with MPNs [11, 105, 106]. These cytokines activate circulating leukocytes and platelets as well as endothelial cells, giving rise to aggregation of leukocytes and platelets with the formation of microaggregates that compromise the microcirculation in several organs [48, 51] (Figure 5). Taking into account that a large proportion of the circulating leukocytes and platelets are activated per se due to their clonal origin, the additional impact of chronic inflammation upon in vivo activation of these cells may profoundly worsen the microcirculation in several organs, with ensuing tissue ischemia and associated symptoms, including, for example, CNS-related symptoms (headaches, visual disturbances, dizziness, infarction, and dementia), pulmonary symptoms (dyspnoea due to pulmonary embolism and inflammation due to sequestration of leukocytes, platelets, and megakaryocytes in the microcirculation with release of a large number of inflammatory products), symptoms of ischemic heart disease (angina, infarction, and congestive heart failure), or symptoms of peripheral vascular insufficiency [4, 5, 12, 34, 107–112] (Figure 5).

Figure 5: Inflammation in the circulation elicits in vivo leukocyte and platelet aggregation, giving rise to circulating microaggregates with ensuing impairment of the microcirculation, tissue ischemia, and ultimately development of ulcers on toes and fingers, which may terminate in gangrene. Treatment with aspirin rapidly resolves the microaggregation with improvement of the microcirculation.
What Is the Epidemiological Evidence? An increased risk of autoimmune and/or inflammatory conditions has been documented several years ago in patients with myeloid malignancies and recently a large Swedish epidemiologic study concluded that chronic immune stimulation might act as a trigger for the development of the myelodysplastic syndrome (MDS) and acute myelogenous leukemia (AML) [26, 27]. In regard to MPNs, another Swedish study has shown that inflammatory diseases may precede or develop during the course of ET, PV, and MF. In this Swedish study, a prior history of any autoimmune disease was associated with a significantly increased risk of a myeloproliferative neoplasm. The “inflammatory” diseases included, among others, Crohn’s disease, polymyalgia rheumatica, and giant cell arteritis, and the “autoimmune” diseases included immune thrombocytopenic purpura and aplastic anemia [2]. The 46/1 haplotype is present in 45% of the general population and is associated with a predisposition to acquire theJAK2V617F mutation and accordingly MPNs but also predisposes to MPNs with no mutation ofJAK2 and to MPNs with mutation inMPL [28–31]. Importantly, epidemiological studies have shown that the frequency of theJAK2 46/1 haplotype is increased in inflammatory diseases, including Crohn’s disease [32, 33].Risk factors for developing atherosclerosis, a chronic inflammatory disease, have been investigated in a large Danish epidemiological study of 49 488 individuals from the Copenhagen General Population Study. It was discovered that those harboring theJAK2V617F mutation had a 2.2-/1.2-fold risk (prevalent/incident) of ischemic heart disease [34]. ## 3.2. What Is the Histomorphological Evidence? Already about 40 years ago it was speculated if autoimmune bone marrow damage might be incriminated in the pathogenesis of “idiopathic myelofibrosis” (IMF). Several observations seem to support the participation of immune mechanisms in the development of bone marrow fibrosis. Thus, histopathological findings of “Fibrin-Faser-Stern” figures, increased numbers of plasma cells and lymphocytes with plasmacytoid appearance, the demonstration of a parallel increase in interstitial deposits of immunoglobulins and the extent of bone marrow fibrosis, and the development of bone marrow fibrosis after repeated antigen injections in animal models all render immune-mediated bone marrow fibrosis possible [35–41]. Importantly, the findings of “Fibrin-Faser-Stern” figures and lymphoid aggregates in bone marrows from MPNs patients have been variably interpreted as evidence of immune activity in the marrow with deposition of immune complexes [35–38]. Immune activity in the bone marrow with an increase of lymphoid nodules has been found to be most prominent in the early stage of IMF [37, 38]. A most recent study investigated the mechanism of bone marrow fibrosis in patients with MF by comparing TGF-β1 signaling of marrow and spleen cells from patients with MF and of nondiseased individuals. The expression of several TGF-β1 signaling genes was altered in the marrow and spleen of MF patients, respectively. Abnormalities included genes of TGF-β1 signaling, cell cycling, and Hedgehog and p53 signaling. Pathway analysis of these alterations predicted an increased osteoblast differentiation, ineffective hematopoiesis, and fibrosis driven by noncanonical TGF-β1 signaling in the marrow and increased proliferation and defective DNA repair in the spleen. 
The hypothesis that fibrosis in MF might result from an autoimmune process, triggered by dead megakaryocytes, was supported by the findings of increased plasma levels of mitochondrial DNA and anti-mitochondrial antibodies in MF patients. It was concluded that autoimmunity might be a plausible cause of marrow fibrosis in MF [42]. Finally, the clinical observations of a favorable outcome of immunosuppressive therapy in some MF patients with evidence of autoimmune activity support the concept that autoimmunity, immune dysfunction, and chronic inflammation may be important factors in pathogenesis [43–48]. ## 3.3. What Is the Clinical Evidence? ### 3.3.1. The Inflammation-Mediated Cardiovascular and Thromboembolic Disease Burden Patients with MPNs have a massive cardiovascular disease burden with a high risk of thrombosis (Figure2), which is partly explained by excessive aggregation of circulating leukocytes and platelets due to in vivo leukocyte-platelet and endothelial activation in combination with a thrombogenic endothelium [1, 4]. In addition MPNs are associated with a procoagulant state, which has recently been elegantly reviewed by Barbui et al. [49]. The hyperactivation of circulating cells in MPNs has been thought to be attributed to the clonal myeloproliferation. Thus, theJAK2V671F mutation per se has been shown to induce leukocyte and platelet activation and several clinical studies have demonstrated thatJAK2V617F positivity is a thrombogenic factor in MPNs [49–52]. Of note, Barbui et al. have recently shown that the level of C-reactive protein (CRP) is elevated in patients with ET and PV and correlates significantly with theJAK2V617F allele burden [53]. Furthermore, elevated CRP levels have also been associated with shortened leukemia-free survival in myelofibrosis [54]. It was speculated if sustained inflammation might elicit the stem cell insult by inducing a state of chronic oxidative stress with elevated levels of reactive oxygen species (ROS) in the bone marrow, thereby creating a high-risk microenvironment for induction of mutations in hematopoietic cells [9]. Being a sensitive marker of inflammation and influencing, for example, endothelial function, coagulation, fibrinolysis, and plaque stability, CRP is considered to be a mediator of vascular disease and accordingly a major vascular risk factor as well [55–57]. This association has recently been demonstrated in a meta-analysis, showing continuous associations between the CRP concentration and the risk of coronary heart disease, ischemic stroke, and vascular mortality [58]. For decades it has been known that atherosclerosis and atherothrombosis are chronic inflammatory diseases [59, 60]. Several studies have reported that chronic inflammatory diseases (e.g., rheumatoid arthritis, psoriasis, systemic lupus erythematosus, and diabetes mellitus) are associated with accelerated atherosclerosis and accordingly development of premature atherosclerosis (early ageing?) [61–65]. In addition, considering the association between atherosclerosis and venous thrombosis, chronic inflammation indirectly predisposes to venous thrombosis and pulmonary thromboembolism as well [66]. In the context of the associations between inflammation and CRP in ET and PV, inflammation might be considered to be a secondary event elicited by clonal cells [53]. However, elevated leukocyte and platelet counts in MPNs may not only reflect clonal myeloproliferation but also reflect the impact of chronic inflammation per se on the clonal cells. 
In particular, this interpretation is intriguing when considering that one of the hallmarks of MPNs is inherent hypersensitivity to growth factor and cytokine stimulation [8]. In this perspective, chronic inflammation in MPNs may also have a key role in promoting premature atherosclerosis and all its debilitating cardiovascular and thromboembolic complications, the common denominators for their development being elevated leukocyte and platelet counts, elevated CRP levels, and in vivo leukocyte-platelet and endothelial cell activation, taking into account that platelet-leukocyte interactions link inflammatory and thromboembolic events in several other inflammation-mediated diseases [67].Figure 2 Patients with MPNs have a massive cardiovascular and thromboembolic disease burden. ### 3.3.2. Inflammation-Mediated Chronic Kidney Disease Uncontrolled chronic inflammation is associated with organ dysfunction, organ fibrosis, and ultimately organ failure [68]. This development is classically depicted in patients with the metabolic syndrome progressing to type II diabetes mellitus (DM) which, without adequate treatment to normalize elevated blood glucose levels, may rapidly develop organ failure due to accelerated atherosclerosis (e.g., hypertension, ischemic heart disease, stroke, dementia, peripheral arterial insufficiency, venous thromboembolism, and chronic kidney disease). The progressive deterioration of multiple organs in uncontrolled DM consequent to elevated blood glucose levels with in vivo leukocyte-platelet and endothelial activation and development of premature atherosclerosis is in several aspects comparable to the multitude of systemic manifestations in patients with uncontrolled MPNs, the common denominators being a huge cardiovascular disease burden and thromboembolic complications [10]. Importantly, similar to patients with type II DM, it has been demonstrated that patients with MPNs have an increased risk of developing chronic kidney disease [69]. It was concluded that progressive renal impairment may be an important factor in MPNs contributing to the comorbidity burden and likely to the overall survival. In addition it was speculated whether chronic inflammation with accumulation of ROS might be a driving force for impairment of renal function and accordingly supportive of early intervention in order to normalize elevated cell counts and reduce the chronic inflammatory drive elicited by the malignant clone itself [69]. ### 3.3.3. Inflammation-Mediated “Autoinflammatory” Diseases As outlined above, patients with MPNs may have an increased risk of various autoimmune, “autoinflammatory,” or inflammatory diseases. Thus, associations have been reported with systemic lupus erythematosus, progressive systemic sclerosis, primary biliary cirrhosis, ulcerative colitis, Crohn’s disease, nephrotic syndrome, polyarteritis nodosa, Sjögren syndrome, juvenile rheumatoid arthritis, polymyalgia rheumatica/arteritis temporalis, immune thrombocytopenic purpura (ITP), and aplastic anemia. In large epidemiological studies these associations have only been significant for Crohn’s disease, polymyalgia rheumatica/arteritis temporalis, and ITP [2]. Interestingly, a particular subtype of myelofibrosis, “primary autoimmune myelofibrosis,” has been described. This subtype has been considered to be a nonclonal and nonneoplastic disease, featured by anemia/cytopenias and autoantibodies suggesting systemic autoimmunity. 
Most patients have no or only mild splenomegaly and the bone marrow biopsy exhibits MPN-like histology with fibrosis, hypercellularity, and megakaryocyte clusters. In addition, bone marrow lymphoid aggregates are prominent [46]. It remains to be established if this subset of MF actually exists or if these patients indeed should be categorized within the MPNs disease entity, taking into account that autoimmunity and chronic inflammation today are considered to have a major role in MPNs pathogenesis. ### 3.3.4. Inflammation-Mediated Osteopenia A recent Danish registry study has shown that patients with ET and PV have an increased incidence of fractures compared with the general population [70]. Taking into account that chronic inflammation has been suggested to explain the initiation of clonal development and progression in chronic myeloproliferative neoplasms and other chronic inflammatory diseases that are associated with an increased risk of osteopenia it has been speculated if chronic inflammation might induce osteopenia in MPNs and by this mechanism also predispose to the increased risk of fractures [12, 70–72]. ### 3.3.5. Inflammation-Mediated Second Cancers As noted previously patients with MPNs have been shown to have an increased risk of second cancers [3, 5]. In the perspective that chronic inflammation may be a driving force for clonal evolution in MPNs it is intriguing to consider if chronic inflammation may contribute to the development of second cancers in MPNs as well, taking into account the close association between inflammation and cancer [8, 9, 11–13, 17–22]. In this regard a defective “tumor immune surveillance” consequent to immune deregulation, which has been demonstrated in MPNs in several recent studies and most recently comprehensively reviewed, might be of importance [42, 73–75]. Of note, the increased risk of second cancers has also been recorded prior to the MPNs diagnosis emphasizing that the MPNs may have a long prediagnosis phase (5–10–15 years) with a chronic inflammatory state promoting mutagenesis, defective tumor immune surveillance, and immune deregulation [9, 76–78]. This concept is compatible with the most recent observations of additional mutations that are already present at the time of diagnosis likely induced by a sustained inflammatory drive on the malignant clone several years before diagnosis [7, 9, 78, 79] (Figure 4). ## 3.3.1. The Inflammation-Mediated Cardiovascular and Thromboembolic Disease Burden Patients with MPNs have a massive cardiovascular disease burden with a high risk of thrombosis (Figure2), which is partly explained by excessive aggregation of circulating leukocytes and platelets due to in vivo leukocyte-platelet and endothelial activation in combination with a thrombogenic endothelium [1, 4]. In addition MPNs are associated with a procoagulant state, which has recently been elegantly reviewed by Barbui et al. [49]. The hyperactivation of circulating cells in MPNs has been thought to be attributed to the clonal myeloproliferation. Thus, theJAK2V671F mutation per se has been shown to induce leukocyte and platelet activation and several clinical studies have demonstrated thatJAK2V617F positivity is a thrombogenic factor in MPNs [49–52]. Of note, Barbui et al. have recently shown that the level of C-reactive protein (CRP) is elevated in patients with ET and PV and correlates significantly with theJAK2V617F allele burden [53]. 
## 3.4. What Is the Biochemical Evidence?

As outlined above, MPNs are associated with a low-grade inflammatory state, as assessed by slightly elevated CRP in a large proportion of patients with ET and PV [53]. The CRP levels increase steadily when patients enter the accelerated phase towards leukemic transformation [54]. Considering the close association between CRP and other inflammatory markers, including the leukocyte and platelet counts, it is highly relevant to speculate whether leukocytosis and thrombocytosis in MPNs are also attributable to the chronic inflammatory drive per se, with sustained generation of inflammatory products that fuel the malignant clone in a self-perpetuating vicious circle [8, 11]. Similar to CRP, plasma fibrinogen and D-dimer levels are slightly elevated in a number of patients and may indeed be more sensitive inflammatory markers than CRP (unpublished observations). Proinflammatory cytokines are elevated in a substantial proportion of patients with MPNs, a topic which has recently been reviewed and thoroughly described by Fleischman and others in this Theme Issue [11]. The hypothesis that MPNs and the advanced MF stage are elicited and perpetuated by autoimmune/inflammatory mechanisms was intensely investigated and discussed as early as 30 years ago. Some of the clinical and histomorphological associations between MPNs and autoimmune/inflammatory states have already been addressed above. In addition, several studies from that period reported biochemical evidence of autoimmunity/inflammation in MPNs, such as elevated levels of antibodies to RBCs, antibodies to platelets, anti-nuclear and anti-mitochondrial antibodies (ANA and AMA), rheumatoid factor, lupus-like anticoagulant, low levels of complement, complement activation, increased levels of immune complexes (ICs), and increased levels of soluble interleukin-2 receptors (sIL-2R) [38, 43, 80–85]. It was debated whether deposition of immune complexes in the bone marrow, either formed in situ or trapped from the circulation, might be followed by complement activation with a subsequent local inflammatory reaction, an interpretation fitting very well with the findings of complement activation in MF patients [80, 81]. Of note, circulating immune complexes were predominantly found in the early disease stage.
Since circulating ICs were in some studies mainly found in MF patients with a short duration of disease from diagnosis, it was hypothesized that immune-mediated bone marrow damage might indeed occur in the early phase of the disease, with the late fibrotic stage, in which ICs are undetectable, representing the “burnt out” phase of the disease [81, 84]. Today, 30 years after the detection of ICs in MPNs, their significance in MF and related neoplasms remains unsettled. With the renaissance of the concept of autoimmune bone marrow damage and chronic inflammation as driving forces for disease evolution and progression, further studies on the pathogenetic and clinical significance of circulating ICs are highly relevant and timely. Indeed, their detection may reflect ongoing inflammatory immune reactions in the circulation and in the bone marrow, likely most pronounced in the initial disease phase and possibly related to a more acute course of the disease [81]. Most recently, a comprehensive study of autoimmune phenomena and cytokines in 100 patients with MF, including early stage MF, has added further proof of the concept that autoimmune and inflammatory mechanisms may be highly important in the pathogenesis of MPNs [86]. Importantly, organ- and non-organ-specific autoantibodies were found in 57% of cases, without clinically overt autoimmune disease, and mostly in low-risk/intermediate-risk-1 patients and in fibrosis grades MF-0/MF-1. Furthermore, TGF-β and IL-8 were increased in MS-DAT positive cases, and TGF-β and IL-17 were elevated in early clinical and morphological stages, while IL-8 increased in advanced stages. It was concluded that autoimmune phenomena and cytokine dysregulation may be particularly relevant in early MF [86]. Several studies have shown that circulating YKL-40 levels are elevated in a number of different diseases, including cancer, diabetes mellitus, and cardiovascular diseases, in which YKL-40 serves as an excellent marker of the disease burden. Importantly, a state of chronic inflammation is shared by all of these conditions, and YKL-40 also has a major impact upon the severity of chronic endothelial inflammation, which today is considered of crucial importance for the development of atherosclerosis. Considering the MPNs as chronic inflammatory diseases, accordingly carrying an increased risk of premature atherosclerosis, we hypothesized that circulating YKL-40 might be an ideal marker of the integrated impact of chronic inflammation in MPNs and accordingly might display correlations with conventional markers of inflammation and disease burden in MPNs. Indeed, we have recently shown that circulating YKL-40 is a potential novel biomarker of disease activity and the inflammatory state in myelofibrosis and related neoplasms [87, 88]. These studies have demonstrated a steady increase in YKL-40 from the early cancer stage (ET) over PV to the advanced cancer stage with myelofibrosis, which exhibited the highest YKL-40 levels of all. Of particular interest, we also found a significant correlation between YKL-40 and several markers of inflammation and disease burden, including neutrophils, platelets, CRP, LDH, and the JAK2V617F allele burden. Accordingly, circulating YKL-40 may be a novel marker of inflammation, disease burden, and progression in MPNs [87, 88].

## 3.5. What Is the Molecular Evidence?
The concept of chronic inflammation leading to clonal evolution in MPNs is also supported by gene expression profiling studies (Figure 3), which have unraveled deregulation of several genes that might be implicated in the development and phenotype of the MPNs [89–92]. Using whole-blood transcriptional profiling, and accordingly obtaining an integrated signature of genes expressed in several immune cells (granulocytes, monocytes, B cells, T cells, and platelets), we have shown that the MPNs exhibit a massive upregulation of IFN-related genes, particularly the interferon-inducible (IFI) gene IFI27, and severe deregulation of other inflammation and immune genes as well. Indeed, several genes (e.g., IFI27) displayed a stepwise upregulation in patients with ET, PV, and PMF, with fold changes of 8, 16, and 30, respectively. The striking deregulation of IFI genes likely reflects a hyperstimulated but incompetent immune system, being most enhanced in patients with advanced MF. In this context, the massive upregulation of the IFI27 gene may also reflect an exaggerated antitumor response as part of a highly activated IFN system, including enhanced IFN gamma expression, which might also imply activation of dendritic cells. IFI27 is also upregulated during wound repair processes, which may be of particular relevance when considering the Dvorak thesis on “Tumors: wounds that do not heal” [14, 15]. Thus, it is tempting to argue that MPNs are “wounds in the bone marrow that will not heal,” owing to the continuous release from clonal cells of growth factors and matrix proteases with ensuing extracellular remodeling of the bone marrow stroma. In this scenario, one might speculate whether the high expression of IFI27 reflects these processes as well, IFI27 cooperating with distinct genes of potential importance for the egress of CD34+ cells from the bone marrow niches into the circulation [93]. In the context of matrix remodeling during cancer metastasis (which in MPNs consists of egress of CD34+ cells from the bone marrow niches into the circulation), it is of particular interest to note that IFN-inducible genes, including IFI27, have been shown to be associated with the so-called metagenes in patients with breast cancer, accurately identifying patients with lymph node metastasis and accordingly serving as predictors of outcome in individual patients [94]. Thus, the highly upregulated IFI27 gene in MPNs may reflect progressive clonal evolution with “metastasis” (extramedullary hematopoiesis) despite an exaggerated yet incompetent IFN-mediated antitumor response by activated dendritic cells and T cells. In this regard a hyperstimulated immune system might also contribute to the increased risk of autoimmune diseases in MPNs. Accordingly, the interferon signature may reflect MF as the terminal stage of chronic inflammation, with a huge burden of oxidative stress, genomic instability, and accumulation of additional inflammation-induced mutations, the ultimate outcome being leukemic transformation [8–12]. During this evolution from the early cancer stage to the metastatic stage with MF, the interferons are important cytokines for immunity and cancer immunoediting [95]. For this and several other reasons IFN is, today and in the future, considered the cornerstone of MPNs treatment which, when instituted in the very early disease stage, may be able to quell the fire and accordingly induce “minimal residual disease” and, in some patients, likely cure, as will be discussed below [96–103].
Chronic inflammation as the driving force for clonal evolution is also supported by the most recent whole-blood gene expression studies, which show a marked deregulation of oxidative stress genes in MPNs [104]. This issue is extensively described by Bjørn and Hasselbalch in the chapter on “The Role of Reactive Oxygen Species in Myelofibrosis and Related Neoplasms.”

Figure 3 Chronic inflammation as the driving force for clonal evolution in MPNs. Tumor burden and comorbidity burden are illustrated for patients with JAK2V617F-positive MPNs. Comorbidity burden increases from the early disease stage (ET/PV) to the accelerated phase with myelofibrotic and leukemic transformation. With permission: H. C. Hasselbalch [12]. AML: acute myeloid leukemia; ET: essential thrombocythemia; JAK: Janus kinase; PPV-MF: post-polycythemia vera myelofibrosis; and PV: polycythemia vera.

Figure 4 Patients with MPNs have an increased risk of second cancer not only after the MPNs diagnosis but also in the pre-MPNs diagnosis phase, which may last several years and during which the patients are at an increased risk of severe cardiovascular and thromboembolic events. According to this model, the initial stem cell insult has occurred 5–10–15 years before the MPNs diagnosis.

## 3.6. What Are the Consequences of Chronic Inflammation in MPNs?

### 3.6.1. The Bone Marrow Is Burning

In MPNs chronic inflammation may elicit a “cytokine storm,” “a wound that does not heal,” owing to the continuous release of proinflammatory cytokines that in a self-perpetuating vicious circle drive the malignant clone. Importantly, in this inflammatory micromilieu, reactive oxygen species (ROS) steadily accumulate, giving rise to increasing genomic instability, subclone formation with additional mutations, and ultimately bone marrow failure, as a consequence of inflammation-mediated ineffective myelopoiesis (anemia, granulocytopenia, and thrombocytopenia) and accumulation of connective tissue in the bone marrow, with leukemic transformation as the final outcome [8, 9, 11–13]. The impact and consequences of ROS for disease progression have been thoroughly described elsewhere by Bjørn and Hasselbalch, and the impact of chronic inflammation on bone marrow stroma has been reviewed by Marie-Caroline Le Bousse-Kerdilès and coworkers. Chronic inflammation in the bone marrow microenvironment may enhance in vivo granulocyte activation, with ensuing release of a vast amount of proteolytic enzymes from neutrophil granules, thereby facilitating egress of CD34+ cells and progenitors from bone marrow niches into the circulation (“metastasis”).

### 3.6.2. The Spleen Is Burning

A common complaint in MPNs patients with enlarged spleens is a “burning” spleen, which on clinical examination may also be extremely painful. Although spleen infarction may occasionally explain the spleen pain, in the large majority of patients it is attributed to inflammation, as evidenced by the remarkable relief obtained with high-dose glucocorticoids and, in particular, with JAK2 inhibitors, which within a few days are associated with a reduction in spleen size and a concomitant improvement in spleen pain. Accordingly, the rapid reduction in spleen size during, for example, treatment with ruxolitinib is primarily a consequence of its very potent anti-inflammatory effects, as also evidenced by the rapid decrease in circulating inflammatory cytokines [11, 12].

### 3.6.3. The Circulation Is Burning
As outlined above, circulating levels of a large number of inflammatory cytokines are elevated in patients with MPNs [11, 105, 106]. These cytokines activate circulating leukocytes and platelets as well as endothelial cells, giving rise to aggregation of leukocytes and platelets with the formation of microaggregates that compromise the microcirculation in several organs [48, 51] (Figure 5). Taking into account that a large proportion of the circulating leukocytes and platelets are activated per se owing to their clonal origin, the additional impact of chronic inflammation upon in vivo activation of these cells may profoundly worsen the microcirculation in several organs, with ensuing tissue ischemia and associated symptoms, including, for example, CNS-related symptoms (headaches, visual disturbances, dizziness, infarction, and dementia), pulmonary symptoms (dyspnoea due to pulmonary embolism or to inflammation from sequestration of leukocytes, platelets, and megakaryocytes in the microcirculation with release of a large number of inflammatory products), symptoms of ischemic heart disease (angina, infarction, and congestive heart failure), or symptoms of peripheral vascular insufficiency [4, 5, 12, 34, 107–112] (Figure 5).

Figure 5 Inflammation in the circulation elicits in vivo leukocyte and platelet aggregation, giving rise to circulating microaggregates with ensuing impairment of the microcirculation, tissue ischemia, and ultimately development of ulcers on toes and fingers, which may terminate in gangrene. Treatment with aspirin promptly resolves the microaggregation with improvement in the microcirculation.
## 4. Discussion and Perspectives

There are several perspectives on the MPNs as “A Human Inflammation Model for Cancer Development,” driven by chronic inflammation in a self-perpetuating vicious circle from the early cancer stage (ET/PV) to the advanced “metastatic” stage with severe MF and egress of CD34+ cells from bone marrow niches to the circulation (metastasis to the spleen, the liver, and elsewhere) [8–13, 96–103]. Firstly, this novel concept calls urgently for a fundamental change in our therapeutic attitude, from the conventional “watch-and-wait” strategy to “the early intervention concept,” using interferon-alpha2 (IFN) as the cornerstone of early treatment from the time of diagnosis [96–103] (Figures 1 and 6). However, since access to IFN for routine use in patients with MPNs is highly variable, a prerequisite for such a change is that opinion leaders within the international MPNs scientific community realize that the time has come to rethink when, how, and whom we should treat with IFN. Today the world is divided into two: in one world, without access to IFN, and accordingly with MPNs experts having no or only modest experience with its use, most ET and PV patients are followed according to the “watch-and-wait” strategy, receiving cytoreductive treatment with hydroxyurea (HU) for elevated cell counts only if they have suffered a prior thrombosis, if the platelet count exceeds 1500 × 10⁹/L, or if they are elderly (>60 years) [113–120].
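For readers who prefer an explicit formulation, the conventional high-risk criteria just listed can be expressed as a simple decision rule. The following is a minimal illustrative sketch only, not a clinical tool: the `Patient` record and function name are hypothetical, and the thresholds (prior thrombosis, platelet count >1500 × 10⁹/L, age >60 years) are taken directly from the criteria stated above.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    """Hypothetical minimal record for the conventional ET/PV risk rule."""
    age_years: int
    platelet_count: float  # platelet count in units of 10^9/L
    prior_thrombosis: bool

def conventional_high_risk(p: Patient) -> bool:
    """Return True if any conventional high-risk criterion from the text is
    met (prior thrombosis, platelets > 1500 x 10^9/L, or age > 60 years),
    i.e., when cytoreduction with hydroxyurea would typically be considered
    under the "watch-and-wait" strategy described above."""
    return p.prior_thrombosis or p.platelet_count > 1500 or p.age_years > 60

# Example: a 45-year-old ET patient with platelets of 900 x 10^9/L and no
# prior thrombosis meets none of the conventional high-risk criteria.
print(conventional_high_risk(
    Patient(age_years=45, platelet_count=900, prior_thrombosis=False)
))  # False
```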
This risk stratification therapy is partly based upon the concept of “do no harm to the patient,” since HU treatment implies an increased risk of skin cancer and raises increasing concern regarding an increased risk of other cancers as well, including myelodysplasia and acute myelogenous leukemia [98, 100, 102, 121–125]. Accordingly, in this part of the world, HU is avoided in younger patients with ET and PV, who then may not receive cytoreductive treatment for elevated leukocyte counts or elevated platelet counts (>1500 × 10⁹/L) unless they experience the catastrophe of thrombosis or major hemorrhage and its consequent sequelae. In the other world, having access to IFN, most newly diagnosed patients with ET, PV, and hyperproliferative myelofibrosis are treated routinely with low-dose IFN, as described in several studies and reviews during recent years [96–103].

Figure 6 The MPNs care pathway and the effect of early intervention. It is suggested that ET, PV, and MF form a biological continuum and, thus, early intervention with combination therapies including JAK1/2 inhibitors, IFN, and/or statins is likely to result in the inhibition of disease evolution. ASCT: allogeneic stem cell transplantation; ET: essential thrombocythemia; HU: hydroxyurea; IFN: interferon; JAK: Janus kinase; MF: myelofibrosis; and PV: polycythemia vera (with permission: H. C. Hasselbalch [12]).

Secondly, we, the MPNs scientific community, and the health authorities (the Food and Drug Administration (FDA) and the European Medicines Agency (EMA)) also need to rethink whether optimal treatment of MPNs can only be established by randomized trials or whether it might also be established by several single-arm studies proving the safety and efficacy of oncology drugs in orphan diseases [102, 123]. In this regard, IFN in MPNs is a classic example: its safety and efficacy have been shown convincingly in a large number of clinical studies during the last 25 years, yet it is still considered experimental or non-evidence-based therapy in the world without access to IFN. Accordingly, translating the rapidly accumulating evidence for the concept of MPNs as “A Human Inflammation Model for Cancer Development” into clinical practice, with upfront IFN treatment to inhibit clonal expansion (“stopping the fuel that feeds the fire”), requires a global commitment from the MPNs scientific community, a fusion of the two worlds, and urgent action from health authorities to accept that approval of a drug for orphan diseases (here, IFN in MPNs) is applicable when safety and efficacy have been demonstrated in a large number of single-arm studies during the last two decades [102, 126]. Thirdly, the proof of concept that chronic inflammation may elicit MPNs needs to be further investigated in mouse models beyond those already published, including the MPN mouse model from Heike Pahl’s group and the model in which inhaled formaldehyde (FA) has been shown to induce inflammation and ROS accumulation in the bone marrow, with ensuing MPN-like blood and bone marrow features such as anemia, leukopenia, thrombocythemia, and megakaryocyte hyperplasia with myelofibrosis [13, 127, 128]. Fourthly, considering chronic inflammation as a potential trigger of MPNs evolution and the experimental proof that FA induces inflammation in the bone marrow with myelofibrosis, it is indeed intriguing to speculate whether cigarette smoke, which contains thousands of toxic inflammatory agents, including FA, may actually be a risk factor for the development of MPNs [129].
Thus, smoking is associated with elevated hematocrit, leukocytosis, monocytosis, and occasionally thrombocytosis, all of which are hallmarks of MPNs. Moreover, the JAK-STAT and NF-κB signalling pathways are activated both in smokers and in patients with MPNs. Additionally, both share elevated levels of several proinflammatory cytokines, in vivo activation of leukocytes and platelets, endothelial dysfunction, and increased systemic oxidative stress. Indeed, smoking as a chronic inflammatory stimulus, giving rise to a chronic myelomonocytic response and ultimately MPNs, fits very well with the inflammation model for MPNs development recently described by Hermouet and coworkers [31]. Accordingly, there is reason to believe that smoking may be both a trigger for and a driver of clonal evolution in MPNs, taking into account that both smoking and MPNs are associated with chronic inflammation and systemic oxidative stress. In this context smoking may augment chronic inflammation in MPNs, thereby magnifying the risk of thrombosis, clonal expansion, and second cancers. The role of smoking in MPNs pathogenesis is further supported by a most recent study showing that a high proportion of MPNs patients actually have a smoking history [130]. An association between smoking and MPNs evolution is also supported by the fact that the most frequent second cancers in patients with MPNs are lung and urinary tract cancers, which are most prevalent in smokers [3]. Fifthly, chronic systemic inflammation in patients with MPNs may predispose to or aggravate existing inflammation-mediated diseases in MPNs patients. Thus, it might be anticipated that chronic inflammation associated with other chronic inflammatory diseases, for instance, inflammatory rheumatological or dermatological diseases (e.g., polymyalgia rheumatica, rheumatoid arthritis, psoriasis, hidradenitis, and systemic lupus erythematosus), chronic inflammatory bowel diseases (Crohn’s disease and ulcerative colitis), chronic obstructive pulmonary disease, and cancers (e.g., lung cancer), might ultimately elicit MPNs in a subset of the patients consequent to the chronic inflammation-mediated myelomonocytic drive [31]. Importantly, in these patients, anemia, leukocytosis, and thrombocythemia are ascribed to their chronic inflammatory disease or cancer and, accordingly, they are not normally screened for JAK2V617F, CALR, or MPL mutations. In the context of MPNs as inflammatory diseases, potentially triggered and driven by chronic inflammation, the time is ripe to consider whether the above disease categories should be investigated more rigorously for MPNs than is clinical practice today. Indeed, such studies are urgently needed to elucidate and expand the role of chronic inflammation as a true trigger for and driver of clonal evolution in MPNs. Sixthly, chronic inflammation and oxidative stress may have therapeutic implications. Thus, it might be anticipated that patients with systemic chronic inflammation due to concurrent inflammation-mediated comorbidities may exhibit an inferior response to cytoreductive therapy, necessitating higher dosages of, for example, hydroxyurea to obtain normal leukocyte and platelet counts.
Furthermore, the response to IFN might be blunted, considering that IFN signalling is impaired by inflammation and oxidative stress [131]. Seventhly, in the context that “triple-negative” (negative for JAK2V617F, CALR, and MPL mutations) ET patients have a much more favourable prognosis than mutation-positive ET patients, some triple-negative “ET” patients may actually not have an MPN but instead a polyclonal, inflammation-driven thrombocythemia. If so, this subset of “triple-negative” “ET” patients may be associated with a heavy comorbidity burden of chronic inflammatory diseases, an issue which deserves to be investigated systematically. Eighthly, by dampening chronic inflammation with potent anti-inflammatory agents such as JAK2 inhibitors and statins, the rate of thromboembolic events is anticipated to decline, since chronic inflammation per se carries an increased risk of thrombosis due to several factors as outlined above (leukocytosis, thrombocytosis, and in vivo leukocyte-platelet and endothelial activation). This issue of inflammation-mediated thrombogenesis has been dealt with most recently [132]. Ninthly, chronic inflammation in MPNs, if left untreated in the presence of elevated platelet counts, may worsen the prognosis of second cancers, which MPNs patients are prone to develop not only after the MPNs diagnosis but also prior to it [3, 76]. This particular issue, the “Platelet-Cancer-Loop” in MPNs, and the consequences for the prognosis of second cancers when elevated platelet counts in MPNs are left untreated have most recently been reviewed and debated [78, 133]. Indeed, elevated platelet counts in MPNs may contribute to the inferior prognosis of second cancers in these patients, as most recently reported in a large Danish epidemiological study [134]. Tenthly, the notion of treating these diseases only when far advanced is antithetical to the treatment of other forms of cancer. The model of clonal evolution, the occurrence of additional molecular abnormalities, and the development of metastatic sites of disease following extramedullary hematopoiesis of CD34+ cells in the spleen and liver are just some of the compelling reasons to consider treating sooner rather than later, when the tumor burden is less rather than more and before disease progression occurs. The fact that both rIFN and JAK1/2 inhibition can induce molecular responses, with reduction of the JAK2V617F allele burden, and revert cytogenetic and other clonal abnormalities adds impetus to this argument. From the perspective that chronic inflammation may drive clonal expansion in these neoplasms, early treatment may induce a state of minimal disease in a substantial number of patients. This may alter the natural history of the MPNs and the otherwise inevitable path towards thrombosis, irreversible MF, and leukemic transformation [97–103]. Eleventhly, statins have, in addition to their cholesterol-lowering effect, many so-called pleiotropic effects, including antiproliferative, proapoptotic, antiangiogenic, antithrombotic, and especially potent anti-inflammatory effects [135]. Most recently, it has been shown that statins also significantly inhibit malignant MPNs cell growth, with a potent synergistic effect in combination with JAK inhibition [136, 137].
Thus, statins may achieve an important role in future MPNs treatment in combination with JAK1/2 inhibitors and IFN-alpha2, a combination therapy which, if instituted from the time of diagnosis, may, through potent inhibition of clonal proliferation and hence blockage of the chronic inflammation generated by the malignant clone itself, offer the hope of reverting MPNs disease progression by inhibiting inflammation-driven genomic instability, subclone formation, and mutagenesis, and thereby the ultimate transformation to myelofibrosis and acute myeloid leukemia. Given the anti-inflammatory, antithrombotic, and cytoreductive potential of statins and, most recently, the epidemiological evidence that statins reduce cancer-related mortality, the rationale for the use of statins in patients with MPNs, who per se carry an increased risk of second cancers with an inferior prognosis, is only further supported [134–138]. Taking into account that MPNs patients may be prone to develop inflammation-mediated osteopenia with an increased risk of fractures, early diagnosis and treatment of osteopenia with bisphosphonates may be an option in the future. Indeed, bisphosphonates also possess potent anti-inflammatory, immunomodulatory, and anticancer properties and may have a synergistic effect with statins in targeting the bone marrow stromal niche, thereby inhibiting the egress of CD34+ cells from stem cell niches [139]. To this end, several reports have documented beneficial effects of treatment with bisphosphonates in MPNs [140–145]. The rationales for the mevalonate pathway as a therapeutic target in the treatment of MPNs have been thoroughly described in recent reviews [135, 146].

## 5. Conclusion

The concept of chronic inflammation as a major driver of disease progression in MPNs opens the avenue for clinical trials in which the two most promising agents in MPNs, IFN and ruxolitinib, are combined and instituted in the early disease stage according to the early intervention concept. The proof of concept and the rationales for this combination therapy have most recently been published [147], and a Danish study on combination therapy with low-dose pegylated IFN and ruxolitinib is ongoing, with very promising preliminary results. The ability of IFN to induce deep molecular responses, with normalisation of the bone marrow even years after cessation of IFN, and the role of inflammation in the initiation and progression of MPNs make the combination of IFN and ruxolitinib one of the most promising new treatment strategies for patients with MPNs [8, 9, 11–13].
Importantly, a state of chronic inflammation is shared by them all, and YKL-40 also has a major impact upon the severity of chronic endothelial inflammation, which today is considered of crucial importance for the development of atherosclerosis. Considering the MPNs as chronic inflammatory diseases and accordingly with an increased risk of development of premature atherosclerosis we hypothesized that circulating YKL-40 might be an ideal marker of the integrated impact of chronic inflammation in MPNs and accordingly might display correlations with conventional markers of inflammation and disease burden in MPNs. Indeed, we have recently shown that circulating YKL-40 is a potential novel biomarker of disease activity and the inflammatory state in myelofibrosis and related neoplasms [87, 88]. These studies have demonstrated a steady increase in YKL-40 from early cancer stage (ET) over PV to the advanced cancer stage with myelofibrosis, which exhibited the highest YKL-40 levels of them all. Highly interesting, we also found a significant correlation between YKL-40 and several markers of inflammation and disease burden, including neutrophils, platelets, CRP, LDH, and theJAK2V617F allele burden. Accordingly, circulating YKL-40 may be a novel marker of inflammation, disease burden, and progression in MPNs [87, 88]. ### 3.5. What Is the Molecular Evidence? The concept of chronic inflammation leading to clonal evolution in MPNs is also supported by gene expression profiling studies (Figure3), which have unraveled deregulation of several genes that might be implicated in the development and phenotype of the MPNs [89–92]. Using whole-blood transcriptional profiling and accordingly obtaining an integrated signature of genes expressed in several immune cells (granulocytes, monocytes, B cells, T cells, and platelets), we have shown that the MPNs exhibit a massive upregulation of IFN-related genes, particularly interferon-inducible (IFI) gene IFI27 and severe deregulation of other inflammation and immune genes as well. Indeed, several genes (e.g., IFI27) displayed a stepwise upregulation in patients with ET, PV, and PMF with fold changes from 8 to 16 to 30, respectively. The striking deregulation of IFI genes may likely reflect a hyperstimulated but incompetent immune system being most enhanced in patients with advanced MF. In this context, the massive upregulation of the IFI27 gene may also reflect an exaggerated antitumor response as part of a highly activated IFN system, including enhanced IFN gamma expression, which might also imply activation of dendritic cells. IFI27 is also upregulated during wound repair processes, which may be of particular relevance when considering the Dvorak thesis on “Tumors: wounds that do not heal” [14, 15]. Thus, it is tempting to argue that MPNs are “wounds in the bone marrow that will not heal,” owing to the continuous release from clonal cells of growth factors and matrix proteases with ensuing extracellular remodeling of the bone marrow stroma. In this scenario, one might speculate whether the high expression of IFI27 may reflect these processes as well, IFI27 cooperating with distinct genes of potential importance for egress of CD34+ cells from the bone marrow niches into the circulation [93]. 
In the context of matrix remodeling during cancer metastasis (which in MPNs consists of egress of CD34+ cells from the bone marrow niches into the circulation) it is of particular interest to note that IFN-inducible genes, including IFI27, have been shown to be associated with the so-called metagenes in patients with breast cancer, accurately identifying those patients with lymph node metastasis and accordingly predictors of outcomes in individual patients [94]. Thus, the highly upregulated IFI27 gene in MPNs may reflect progressive clonal evolution with “metastasis” (extramedullary hematopoiesis) despite an exaggerated yet incompetent IFN-mediated antitumor response by activated dendritic cells and T cells. In this regard a hyperstimulated immune system might also contribute to the increased risk of autoimmune diseases in MPNs. Accordingly the interferon signature may reflect MF as the terminal stage of chronic inflammation with a huge burden of oxidative stress, genomic instability, and accumulation of additional inflammation-induced mutations, the ultimate outcome being leukemic transformation [8–12]. During this evolution from early cancer stage to the metastatic stage with MF, the interferons are important cytokines for immunity and cancer immunoediting [95]. For this and several other reasons IFN is, today and in the future, considered the cornerstone in the treatment of MPNs which, when instituted in the very early disease stage, may be able to quell the fire and accordingly induce “minimal residual disease” and in some patients likely cure as will be discussed below [96–103]. Supporting chronic inflammation as the driving force for clonal evolution is also the most recent whole-blood gene expression studies, showing a marked deregulation of oxidative stress genes in MPNs [104]. This issue is extensively described by Bjørn and Hasselbalch in the chapter on “The Role of Reactive Oxygen Species in Myelofibrosis and Related Neoplasms.”Figure 3 Chronic inflammation as the driving force for clonal evolution in MPNs. Tumor burden and comorbidity burden are illustrated for patients withJAK2V617F positive MPNs. Comorbidity burden increases from early disease stage (ET/PV) to the accelerated phase with myelofibrotic and leukemic transformation. With permission: H. C. Hasselbalch [12]. AML: acute myeloid leukemia; ET: essential thrombocythemia; JAK: Janus kinase; PPV-MF: post-polycythemia vera; and PV: polycythemia vera.Figure 4 Patients with MPNs have an increased risk of second cancer not only after the MPNs diagnosis but also in the pre-MPNs diagnosis phase, which may last several years in which the patients are at an increased risk of severe cardiovascular and thromboembolic events. According to this model, the initial stem cell insult has occurred 5–10–15 years before the MPNs diagnosis. ### 3.6. What Are the Consequences of Chronic Inflammation in MPNs? #### 3.6.1. The Bone Marrow Is Burning In MPNs chronic inflammation may elicit a “cytokine storm,” “a wound that does not heal,” due to the continuous release of proinflammatory cytokines that in a self-perpetuating vicious circle drives the malignant clone. 
Importantly, in this inflammatory micromilieu, reactive oxygen species (ROS) are steadily accumulating, giving rise to increasing genomic instability, subclone formation with additional mutations, and ultimately bone marrow failure as a consequence of inflammation-mediated ineffective myelopoiesis (anemia, granulocytopenia, and thrombocytopenia), accumulation of connective tissue in the bone marrow, and ultimately leukemic transformation [8, 9, 11–13]. The impact and consequences of ROS for disease progression have been thoroughly described elsewhere by Bjørn and Hasselbalch and the impact of chronic inflammation on bone marrow stroma has been reviewed by Marie Caroline Le Bousse Kerdiles and coworkers.Chronic inflammation in the bone marrow microenvironment may enhance in vivo granulocyte activation with ensuing release of a vast amount of proteolytic enzymes from neutrophil granules, thereby facilitating egress of CD34+ cells and progenitors from bone marrow niches into the circulation (“metastasis”). #### 3.6.2. The Spleen Is Burning A common complaint in MPNs patients with enlarged spleens is a “burning” spleen, which on clinical examination may also be extremely painful. Although spleen infarction may occasionally explain the spleen pain, it is in the large majority of patients attributed to inflammation as evidenced by a remarkable relief when being treated with high-dose glucocorticoids and, in particular, during treatment with JAK2 inhibitors which within a few days is associated with a reduction in spleen size and a concomitant improvement in spleen pain as well. Accordingly, the rapid reduction in spleen size during, for example, treatment with ruxolitinib, is primarily consequent to its very potent anti-inflammatory effects as also evidenced by the rapid decrease in circulating inflammatory cytokines [11, 12]. #### 3.6.3. The Circulation Is Burning As outlined above circulating levels of a large number of inflammatory cytokines are elevated in patients with MPNs [11, 105, 106]. These cytokines activate circulating leukocytes and platelets and also activate endothelial cells as well, giving rise to aggregation of leukocytes and platelets with the formation of microaggregates that compromise the microcirculation in several organs [48, 51] (Figure 5). Taking into account that a large proportion of the circulating leukocytes and platelets are activated per se due to their clonal origin the additional impact of chronic inflammation upon in vivo activation of these cells may profoundly worsen the microcirculation in several organs with ensuing tissue ischemia and associated symptoms, including, for example, CNS-related symptoms (headaches, visual disturbances, dizziness, infarction, and dementia), pulmonary symptoms (dyspnoea due to pulmonary embolism, inflammation due to sequestration of leukocytes and platelets and megakaryocytes in the microcirculation with release of a large number of inflammatory products), symptoms of ischemic heart disease (angina, infarction, and congestive heart failure), or symptoms of peripheral vascular insufficiency [4, 5, 12, 34, 107–112] (Figure 5).Figure 5 Inflammation in the circulation elicits in vivo leukocyte and platelet aggregation giving rise to circulating microaggregates with ensuing impairment of microcirculation, tissue ischemia, and ultimately development of ulcers on toes and fingers which may terminate with gangrene. Treatment with aspirin momentarily resolves microaggregation with improvement in microcirculation. ## 3.1. 
### 3.1. What Is the Epidemiological Evidence?

An increased risk of autoimmune and/or inflammatory conditions was documented several years ago in patients with myeloid malignancies, and a large Swedish epidemiologic study recently concluded that chronic immune stimulation might act as a trigger for the development of the myelodysplastic syndrome (MDS) and acute myelogenous leukemia (AML) [26, 27]. In regard to MPNs, another Swedish study has shown that inflammatory diseases may precede or develop during the course of ET, PV, and MF. In this study, a prior history of any autoimmune disease was associated with a significantly increased risk of a myeloproliferative neoplasm. The “inflammatory” diseases included, among others, Crohn’s disease, polymyalgia rheumatica, and giant cell arteritis, and the “autoimmune” diseases included immune thrombocytopenic purpura and aplastic anemia [2]. The 46/1 haplotype is present in 45% of the general population and is associated with a predisposition to acquire the JAK2V617F mutation and accordingly MPNs, but it also predisposes to MPNs without JAK2 mutation and to MPNs with MPL mutation [28–31]. Importantly, epidemiological studies have shown that the frequency of the JAK2 46/1 haplotype is increased in inflammatory diseases, including Crohn’s disease [32, 33]. Risk factors for developing atherosclerosis, a chronic inflammatory disease, have been investigated in a large Danish epidemiological study of 49,488 individuals from the Copenhagen General Population Study. It was discovered that those harboring the JAK2V617F mutation had a 2.2-fold (prevalent) and 1.2-fold (incident) risk of ischemic heart disease [34].
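The arithmetic behind such estimates is simple enough to sketch. The following minimal Python example computes a risk ratio with a Wald-type 95% confidence interval on the log scale; the `risk_ratio` helper and all counts are hypothetical placeholders, chosen only so the point estimate lands near the reported 2.2-fold prevalent risk, and are not the actual Copenhagen General Population Study data.

```python
from math import exp, log, sqrt

def risk_ratio(a, n1, b, n2, z=1.96):
    """Risk ratio (a/n1)/(b/n2) with a Wald 95% CI on the log scale."""
    rr = (a / n1) / (b / n2)
    se = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(rr)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# Hypothetical counts, for illustration only (NOT the published data):
# ischemic heart disease among JAK2V617F carriers versus non-carriers.
rr, lo, hi = risk_ratio(a=11, n1=63, b=3900, n2=49425)
print(f"risk ratio = {rr:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # ~2.2 (1.3-3.8)
```

The same two-by-two layout would apply to the incident-risk comparison; only the counts change.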
### 3.2. What Is the Histomorphological Evidence?

Already about 40 years ago it was speculated whether autoimmune bone marrow damage might be incriminated in the pathogenesis of “idiopathic myelofibrosis” (IMF). Several observations seem to support the participation of immune mechanisms in the development of bone marrow fibrosis. Thus, histopathological findings of “Fibrin-Faser-Stern” figures, increased numbers of plasma cells and lymphocytes with plasmacytoid appearance, the demonstration of a parallel increase in interstitial deposits of immunoglobulins and the extent of bone marrow fibrosis, and the development of bone marrow fibrosis after repeated antigen injections in animal models all render immune-mediated bone marrow fibrosis possible [35–41]. Importantly, the findings of “Fibrin-Faser-Stern” figures and lymphoid aggregates in bone marrows from MPNs patients have been variably interpreted as evidence of immune activity in the marrow with deposition of immune complexes [35–38]. Immune activity in the bone marrow with an increase of lymphoid nodules has been found to be most prominent in the early stage of IMF [37, 38]. A most recent study investigated the mechanism of bone marrow fibrosis in patients with MF by comparing TGF-β1 signaling of marrow and spleen cells from MF patients and from nondiseased individuals. The expression of several TGF-β1 signaling genes was altered in the marrow and spleen of MF patients. Abnormalities included genes of TGF-β1 signaling, cell cycling, and Hedgehog and p53 signaling. Pathway analysis of these alterations predicted increased osteoblast differentiation, ineffective hematopoiesis, and fibrosis driven by noncanonical TGF-β1 signaling in the marrow, and increased proliferation and defective DNA repair in the spleen. The hypothesis that fibrosis in MF might result from an autoimmune process, triggered by dead megakaryocytes, was supported by the findings of increased plasma levels of mitochondrial DNA and anti-mitochondrial antibodies in MF patients. It was concluded that autoimmunity might be a plausible cause of marrow fibrosis in MF [42]. Finally, the clinical observations of a favorable outcome of immunosuppressive therapy in some MF patients with evidence of autoimmune activity support the concept that autoimmunity, immune dysfunction, and chronic inflammation may be important factors in pathogenesis [43–48].

### 3.3. What Is the Clinical Evidence?

#### 3.3.1. The Inflammation-Mediated Cardiovascular and Thromboembolic Disease Burden

Patients with MPNs have a massive cardiovascular disease burden with a high risk of thrombosis (Figure 2), which is partly explained by excessive aggregation of circulating leukocytes and platelets due to in vivo leukocyte-platelet and endothelial activation in combination with a thrombogenic endothelium [1, 4]. In addition, MPNs are associated with a procoagulant state, which has recently been elegantly reviewed by Barbui et al. [49]. The hyperactivation of circulating cells in MPNs has been attributed to the clonal myeloproliferation. Thus, the JAK2V617F mutation per se has been shown to induce leukocyte and platelet activation, and several clinical studies have demonstrated that JAK2V617F positivity is a thrombogenic factor in MPNs [49–52]. Of note, Barbui et al. have recently shown that the level of C-reactive protein (CRP) is elevated in patients with ET and PV and correlates significantly with the JAK2V617F allele burden [53]. Furthermore, elevated CRP levels have also been associated with shortened leukemia-free survival in myelofibrosis [54]. It has been speculated whether sustained inflammation might elicit the stem cell insult by inducing a state of chronic oxidative stress with elevated levels of reactive oxygen species (ROS) in the bone marrow, thereby creating a high-risk microenvironment for the induction of mutations in hematopoietic cells [9]. Being a sensitive marker of inflammation and influencing, for example, endothelial function, coagulation, fibrinolysis, and plaque stability, CRP is considered a mediator of vascular disease and accordingly a major vascular risk factor as well [55–57]. This association has recently been demonstrated in a meta-analysis showing continuous associations between the CRP concentration and the risk of coronary heart disease, ischemic stroke, and vascular mortality [58]. For decades it has been known that atherosclerosis and atherothrombosis are chronic inflammatory diseases [59, 60]. Several studies have reported that chronic inflammatory diseases (e.g., rheumatoid arthritis, psoriasis, systemic lupus erythematosus, and diabetes mellitus) are associated with accelerated atherosclerosis and accordingly with the development of premature atherosclerosis (early ageing?) [61–65]. In addition, considering the association between atherosclerosis and venous thrombosis, chronic inflammation indirectly predisposes to venous thrombosis and pulmonary thromboembolism as well [66]. In the context of the associations between inflammation and CRP in ET and PV, inflammation might be considered a secondary event elicited by clonal cells [53]. However, elevated leukocyte and platelet counts in MPNs may not only reflect clonal myeloproliferation but also reflect the impact of chronic inflammation per se on the clonal cells.
In particular, this interpretation is intriguing when considering that one of the hallmarks of MPNs is an inherent hypersensitivity to growth factor and cytokine stimulation [8]. In this perspective, chronic inflammation in MPNs may also have a key role in promoting premature atherosclerosis and all its debilitating cardiovascular and thromboembolic complications, the common denominators for their development being elevated leukocyte and platelet counts, elevated CRP levels, and in vivo leukocyte-platelet and endothelial cell activation, taking into account that platelet-leukocyte interactions link inflammatory and thromboembolic events in several other inflammation-mediated diseases [67].

Figure 2: Patients with MPNs have a massive cardiovascular and thromboembolic disease burden.

#### 3.3.2. Inflammation-Mediated Chronic Kidney Disease

Uncontrolled chronic inflammation is associated with organ dysfunction, organ fibrosis, and ultimately organ failure [68]. This development is classically depicted in patients with the metabolic syndrome progressing to type II diabetes mellitus (DM), who, without adequate treatment to normalize elevated blood glucose levels, may rapidly develop organ failure due to accelerated atherosclerosis (e.g., hypertension, ischemic heart disease, stroke, dementia, peripheral arterial insufficiency, venous thromboembolism, and chronic kidney disease). The progressive deterioration of multiple organs in uncontrolled DM, consequent to elevated blood glucose levels with in vivo leukocyte-platelet and endothelial activation and the development of premature atherosclerosis, is in several aspects comparable to the multitude of systemic manifestations in patients with uncontrolled MPNs, the common denominators being a huge cardiovascular disease burden and thromboembolic complications [10]. Importantly, similar to patients with type II DM, it has been demonstrated that patients with MPNs have an increased risk of developing chronic kidney disease [69]. It was concluded that progressive renal impairment may be an important factor in MPNs, contributing to the comorbidity burden and likely to the overall survival. In addition, it was speculated whether chronic inflammation with accumulation of ROS might be a driving force for the impairment of renal function and accordingly supportive of early intervention in order to normalize elevated cell counts and reduce the chronic inflammatory drive elicited by the malignant clone itself [69].

#### 3.3.3. Inflammation-Mediated “Autoinflammatory” Diseases

As outlined above, patients with MPNs may have an increased risk of various autoimmune, “autoinflammatory,” or inflammatory diseases. Thus, associations have been reported with systemic lupus erythematosus, progressive systemic sclerosis, primary biliary cirrhosis, ulcerative colitis, Crohn’s disease, nephrotic syndrome, polyarteritis nodosa, Sjögren syndrome, juvenile rheumatoid arthritis, polymyalgia rheumatica/arteritis temporalis, immune thrombocytopenic purpura (ITP), and aplastic anemia. In large epidemiological studies these associations have only been significant for Crohn’s disease, polymyalgia rheumatica/arteritis temporalis, and ITP [2]. Interestingly, a particular subtype of myelofibrosis, “primary autoimmune myelofibrosis,” has been described. This subtype has been considered a nonclonal and nonneoplastic disease, featured by anemia/cytopenias and autoantibodies suggesting systemic autoimmunity.
Most patients have no or only mild splenomegaly, and the bone marrow biopsy exhibits MPN-like histology with fibrosis, hypercellularity, and megakaryocyte clusters. In addition, bone marrow lymphoid aggregates are prominent [46]. It remains to be established whether this subset of MF actually exists or whether these patients should indeed be categorized within the MPNs disease entity, taking into account that autoimmunity and chronic inflammation are today considered to have a major role in MPNs pathogenesis.

#### 3.3.4. Inflammation-Mediated Osteopenia

A recent Danish registry study has shown that patients with ET and PV have an increased incidence of fractures compared with the general population [70]. Taking into account that chronic inflammation has been suggested to explain the initiation of clonal development and progression in chronic myeloproliferative neoplasms, and that other chronic inflammatory diseases are associated with an increased risk of osteopenia, it has been speculated whether chronic inflammation might induce osteopenia in MPNs and by this mechanism also predispose to the increased risk of fractures [12, 70–72].

#### 3.3.5. Inflammation-Mediated Second Cancers

As noted previously, patients with MPNs have been shown to have an increased risk of second cancers [3, 5]. In the perspective that chronic inflammation may be a driving force for clonal evolution in MPNs, it is intriguing to consider whether chronic inflammation may contribute to the development of second cancers in MPNs as well, taking into account the close association between inflammation and cancer [8, 9, 11–13, 17–22]. In this regard, a defective “tumor immune surveillance” consequent to immune deregulation, which has been demonstrated in MPNs in several recent studies and most recently comprehensively reviewed, might be of importance [42, 73–75]. Of note, the increased risk of second cancers has also been recorded prior to the MPNs diagnosis, emphasizing that the MPNs may have a long prediagnosis phase (5–10–15 years) with a chronic inflammatory state promoting mutagenesis, defective tumor immune surveillance, and immune deregulation [9, 76–78]. This concept is compatible with the most recent observations of additional mutations that are already present at the time of diagnosis, likely induced by a sustained inflammatory drive on the malignant clone several years before diagnosis [7, 9, 78, 79] (Figure 4).
### 3.4. What Is the Biochemical Evidence?

As outlined above, MPNs are associated with a low-grade inflammatory state, as assessed by slightly elevated CRP in a large proportion of patients with ET and PV [53]. The CRP levels increase steadily when patients enter the accelerated phase towards leukemic transformation [54]. Considering the close association between CRP and other inflammatory markers, the leukocyte and platelet counts, it is most relevant to speculate whether leukocytosis and thrombocytosis in MPNs are also attributable to the chronic inflammatory drive per se, with sustained generation of inflammatory products that fuel the malignant clone in a vicious self-perpetuating circle [8, 11]. Similar to CRP, plasma fibrinogen and plasma D-dimer levels are slightly elevated in several patients and may indeed be more sensitive inflammatory markers than CRP (unpublished observations). Proinflammatory cytokines are elevated in a substantial proportion of patients with MPNs, a topic which has recently been reviewed and thoroughly described by Fleischman and others in this Theme Issue [11].

The hypothesis that MPNs and the advanced MF stage are elicited and perpetuated by autoimmune/inflammatory mechanisms was intensely investigated and discussed already 30 years ago. Some of the clinical and histomorphological associations between MPNs and autoimmune/inflammatory states have already been addressed above. In addition, several studies from that period reported biochemical evidence of autoimmunity/inflammation in MPNs, such as elevated levels of antibodies to RBCs, antibodies to platelets, anti-nuclear and anti-mitochondrial antibodies (ANA and AMA), rheumatoid factor, lupus-like anticoagulant, low levels of complement, complement activation, increased levels of immune complexes (ICs), and increased levels of soluble interleukin-2 receptors (sIL-2R) [38, 43, 80–85]. It was debated whether deposition of immune complexes in the bone marrow, either formed in situ or trapped from the circulation, might be followed by complement activation with a subsequent local inflammatory reaction, an interpretation fitting very well with the findings of complement activation in MF patients [80, 81]. Of note, circulating immune complexes were predominantly found in the early disease stage.
Since circulating ICs were in some studies mainly found in MF patients with a short duration of disease from diagnosis, it was hypothesized that immune-mediated bone marrow damage might indeed occur in the early phase of the disease, with the late, fibrotic stage with undetectable ICs representing the “burnt out” phase of the disease [81, 84]. Today, 30 years after the detection of ICs in MPNs, their significance in MF and related neoplasms remains unsettled. With the renaissance of the concept of autoimmune bone marrow damage and chronic inflammation as driving forces for disease evolution and progression, further studies on circulating ICs and their pathogenetic and clinical relevance are highly relevant and timely. Indeed, their detection may reflect ongoing inflammatory immune reactions in the circulation and in the bone marrow, likely being most pronounced in the initial disease phase and possibly related to a more acute course of the disease [81]. Most recently, a comprehensive study of autoimmune phenomena and cytokines in 100 patients with MF, including early stage MF, has added further proof of the concept that autoimmune and inflammatory mechanisms may be highly important in the pathogenesis of MPNs [86]. Importantly, organ/non-organ-specific autoantibodies were found in 57% of cases, without clinically overt disease, and mostly in low-risk/intermediate-risk-1 and MF-0/MF-1 patients. Furthermore, TGF-β and IL-8 were increased in MS-DAT positive cases, and TGF-β and IL-17 were elevated in early clinical and morphological stages, while IL-8 increased in advanced stages. It was concluded that autoimmune phenomena and cytokine dysregulation may be particularly relevant in early MF [86].

Several studies have shown that circulating YKL-40 levels are elevated in a number of different diseases, including cancer, diabetes mellitus, and cardiovascular diseases, in which YKL-40 serves as an excellent marker of the disease burden. Importantly, a state of chronic inflammation is shared by them all, and YKL-40 also has a major impact upon the severity of chronic endothelial inflammation, which today is considered of crucial importance for the development of atherosclerosis. Considering the MPNs as chronic inflammatory diseases, accordingly carrying an increased risk of premature atherosclerosis, we hypothesized that circulating YKL-40 might be an ideal marker of the integrated impact of chronic inflammation in MPNs and accordingly might display correlations with conventional markers of inflammation and disease burden in MPNs. Indeed, we have recently shown that circulating YKL-40 is a potential novel biomarker of disease activity and the inflammatory state in myelofibrosis and related neoplasms [87, 88]. These studies demonstrated a steady increase in YKL-40 from the early cancer stage (ET) over PV to the advanced cancer stage with myelofibrosis, which exhibited the highest YKL-40 levels of all. Highly interestingly, we also found a significant correlation between YKL-40 and several markers of inflammation and disease burden, including neutrophils, platelets, CRP, LDH, and the JAK2V617F allele burden. Accordingly, circulating YKL-40 may be a novel marker of inflammation, disease burden, and progression in MPNs [87, 88].
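How such marker correlations are typically quantified can be illustrated with a short, self-contained sketch. The Spearman rank correlation is a natural choice for skewed biomarkers such as YKL-40; the simulated cohort below (sample size, effect sizes, and noise levels all invented) merely demonstrates the computation and does not reproduce the cited studies [87, 88].

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(seed=1)
n = 80  # hypothetical cohort size

# Simulated patient-level values (arbitrary units), invented for illustration.
allele_burden = rng.uniform(0, 100, n)                   # JAK2V617F burden (%)
ykl40 = 60 + 3.0 * allele_burden + rng.normal(0, 60, n)  # serum YKL-40 (ng/mL)
crp = 1 + 0.05 * allele_burden + rng.normal(0, 2.0, n)   # CRP (mg/L)

for name, marker in [("JAK2V617F allele burden", allele_burden), ("CRP", crp)]:
    rho, p = spearmanr(ykl40, marker)
    print(f"YKL-40 vs {name}: Spearman rho = {rho:.2f}, p = {p:.2g}")
```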
### 3.5. What Is the Molecular Evidence?

The concept of chronic inflammation leading to clonal evolution in MPNs is also supported by gene expression profiling studies (Figure 3), which have unraveled deregulation of several genes that might be implicated in the development and phenotype of the MPNs [89–92]. Using whole-blood transcriptional profiling, and accordingly obtaining an integrated signature of genes expressed in several immune cells (granulocytes, monocytes, B cells, T cells, and platelets), we have shown that the MPNs exhibit a massive upregulation of IFN-related genes, particularly the interferon-inducible (IFI) gene IFI27, and severe deregulation of other inflammation and immune genes as well. Indeed, several genes (e.g., IFI27) displayed a stepwise upregulation in patients with ET, PV, and PMF, with fold changes of 8, 16, and 30, respectively. The striking deregulation of IFI genes likely reflects a hyperstimulated but incompetent immune system, being most enhanced in patients with advanced MF. In this context, the massive upregulation of the IFI27 gene may also reflect an exaggerated antitumor response as part of a highly activated IFN system, including enhanced IFN-γ expression, which might also imply activation of dendritic cells. IFI27 is also upregulated during wound repair processes, which may be of particular relevance when considering the Dvorak thesis on “Tumors: wounds that do not heal” [14, 15]. Thus, it is tempting to argue that MPNs are “wounds in the bone marrow that will not heal,” owing to the continuous release from clonal cells of growth factors and matrix proteases with ensuing extracellular remodeling of the bone marrow stroma. In this scenario, one might speculate whether the high expression of IFI27 may reflect these processes as well, IFI27 cooperating with distinct genes of potential importance for the egress of CD34+ cells from the bone marrow niches into the circulation [93]. In the context of matrix remodeling during cancer metastasis (which in MPNs consists of the egress of CD34+ cells from the bone marrow niches into the circulation), it is of particular interest to note that IFN-inducible genes, including IFI27, have been shown to be associated with the so-called metagenes in patients with breast cancer, accurately identifying those patients with lymph node metastasis and accordingly serving as predictors of outcomes in individual patients [94]. Thus, the highly upregulated IFI27 gene in MPNs may reflect progressive clonal evolution with “metastasis” (extramedullary hematopoiesis) despite an exaggerated yet incompetent IFN-mediated antitumor response by activated dendritic cells and T cells. In this regard, a hyperstimulated immune system might also contribute to the increased risk of autoimmune diseases in MPNs. Accordingly, the interferon signature may reflect MF as the terminal stage of chronic inflammation, with a huge burden of oxidative stress, genomic instability, and accumulation of additional inflammation-induced mutations, the ultimate outcome being leukemic transformation [8–12]. During this evolution from the early cancer stage to the metastatic stage with MF, the interferons are important cytokines for immunity and cancer immunoediting [95]. For this and several other reasons, IFN is, today and in the future, considered the cornerstone in the treatment of MPNs, which, when instituted in the very early disease stage, may be able to quell the fire and accordingly induce “minimal residual disease” and, in some patients, likely cure, as will be discussed below [96–103].
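The stepwise ET, PV, and PMF upregulation of IFI27 described above is, at bottom, a fold-change calculation against a control group mean. The sketch below shows that arithmetic on invented per-sample expression values, chosen only to mirror the approximately 8-, 16-, and 30-fold pattern; it is not the published dataset or analysis pipeline.

```python
import numpy as np

# Invented per-sample normalized IFI27 expression (arbitrary units), chosen
# only to mirror the reported ~8-, ~16-, and ~30-fold stepwise pattern.
expr = {
    "controls": np.array([0.9, 1.1, 1.0, 1.0]),
    "ET": np.array([7.5, 8.6, 8.1]),
    "PV": np.array([15.2, 16.9, 15.8]),
    "PMF": np.array([28.4, 31.7, 30.2]),
}

ref = expr["controls"].mean()
for dx in ("ET", "PV", "PMF"):
    fc = expr[dx].mean() / ref  # fold change versus healthy controls
    print(f"IFI27 in {dx}: {fc:.0f}-fold (log2FC = {np.log2(fc):.1f})")
```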
Chronic inflammation as the driving force for clonal evolution is also supported by the most recent whole-blood gene expression studies, showing a marked deregulation of oxidative stress genes in MPNs [104]. This issue is extensively described by Bjørn and Hasselbalch in the chapter on “The Role of Reactive Oxygen Species in Myelofibrosis and Related Neoplasms.”

Figure 3: Chronic inflammation as the driving force for clonal evolution in MPNs. Tumor burden and comorbidity burden are illustrated for patients with JAK2V617F positive MPNs. Comorbidity burden increases from the early disease stage (ET/PV) to the accelerated phase with myelofibrotic and leukemic transformation. With permission: H. C. Hasselbalch [12]. AML: acute myeloid leukemia; ET: essential thrombocythemia; JAK: Janus kinase; PPV-MF: post-polycythemia vera myelofibrosis; and PV: polycythemia vera.

Figure 4: Patients with MPNs have an increased risk of second cancer not only after the MPNs diagnosis but also in the pre-MPNs diagnosis phase, which may last several years, during which the patients are at an increased risk of severe cardiovascular and thromboembolic events. According to this model, the initial stem cell insult has occurred 5–10–15 years before the MPNs diagnosis.

### 3.6. What Are the Consequences of Chronic Inflammation in MPNs?

#### 3.6.1. The Bone Marrow Is Burning

In MPNs, chronic inflammation may elicit a “cytokine storm,” “a wound that does not heal,” due to the continuous release of proinflammatory cytokines that, in a self-perpetuating vicious circle, drive the malignant clone. Importantly, in this inflammatory micromilieu, reactive oxygen species (ROS) are steadily accumulating, giving rise to increasing genomic instability, subclone formation with additional mutations, and ultimately bone marrow failure as a consequence of inflammation-mediated ineffective myelopoiesis (anemia, granulocytopenia, and thrombocytopenia), accumulation of connective tissue in the bone marrow, and ultimately leukemic transformation [8, 9, 11–13]. The impact and consequences of ROS for disease progression have been thoroughly described elsewhere by Bjørn and Hasselbalch, and the impact of chronic inflammation on bone marrow stroma has been reviewed by Marie Caroline Le Bousse Kerdiles and coworkers. Chronic inflammation in the bone marrow microenvironment may enhance in vivo granulocyte activation with the ensuing release of a vast amount of proteolytic enzymes from neutrophil granules, thereby facilitating the egress of CD34+ cells and progenitors from bone marrow niches into the circulation (“metastasis”).

#### 3.6.2. The Spleen Is Burning

A common complaint in MPNs patients with enlarged spleens is a “burning” spleen, which on clinical examination may also be extremely painful. Although spleen infarction may occasionally explain the spleen pain, in the large majority of patients it is attributed to inflammation, as evidenced by a remarkable relief during treatment with high-dose glucocorticoids and, in particular, during treatment with JAK2 inhibitors, which within a few days is associated with a reduction in spleen size and a concomitant improvement in spleen pain as well. Accordingly, the rapid reduction in spleen size during, for example, treatment with ruxolitinib is primarily consequent to its very potent anti-inflammatory effects, as also evidenced by the rapid decrease in circulating inflammatory cytokines [11, 12].
#### 3.6.3. The Circulation Is Burning

As outlined above, circulating levels of a large number of inflammatory cytokines are elevated in patients with MPNs [11, 105, 106]. These cytokines activate circulating leukocytes and platelets, and endothelial cells as well, giving rise to aggregation of leukocytes and platelets with the formation of microaggregates that compromise the microcirculation in several organs [48, 51] (Figure 5). Taking into account that a large proportion of the circulating leukocytes and platelets are activated per se due to their clonal origin, the additional impact of chronic inflammation upon the in vivo activation of these cells may profoundly worsen the microcirculation in several organs, with ensuing tissue ischemia and associated symptoms, including, for example, CNS-related symptoms (headaches, visual disturbances, dizziness, infarction, and dementia), pulmonary symptoms (dyspnoea due to pulmonary embolism, or inflammation due to sequestration of leukocytes, platelets, and megakaryocytes in the microcirculation with release of a large number of inflammatory products), symptoms of ischemic heart disease (angina, infarction, and congestive heart failure), or symptoms of peripheral vascular insufficiency [4, 5, 12, 34, 107–112] (Figure 5).

Figure 5: Inflammation in the circulation elicits in vivo leukocyte and platelet aggregation, giving rise to circulating microaggregates with ensuing impairment of the microcirculation, tissue ischemia, and ultimately the development of ulcers on toes and fingers, which may terminate with gangrene. Treatment with aspirin rapidly resolves the microaggregation with improvement in the microcirculation.
## 4. Discussion and Perspectives

The perspectives of the MPNs as “A Human Inflammation Model for Cancer Development,” driven by chronic inflammation in a self-perpetuating vicious circle from the early cancer stage (ET/PV) to the advanced “metastatic” stage with severe MF and egress of CD34+ cells from bone marrow niches into the circulation (metastasis to the spleen, liver, and elsewhere), are several [8–13, 96–103].

Firstly, this novel concept calls for an urgent and fundamental change in our therapeutic attitude, from the conventional “watch-and-wait” strategy to the “early intervention” concept using interferon-alpha2 (IFN) as the cornerstone of early treatment from the time of diagnosis [96–103] (Figures 1 and 6). However, since access to IFN for routine use in patients with MPNs is highly variable, a prerequisite for such a change is that opinion leaders within the international MPNs scientific community realize that the time has come to rethink when, how, and whom we should treat with IFN. Today the world is divided in two. In the one world, which does not have access to IFN and whose MPNs experts accordingly have no or only modest experience with its use, most ET and PV patients are followed according to the “watch-and-wait” strategy, receiving cytoreductive treatment with hydroxyurea (HU) for elevated cell counts only if they have suffered a prior thrombosis, if the platelet count is above 1500 × 10⁹/L, or if they are elderly (>60 years) [113–120].
This risk stratification therapy is partly based upon the concept of “do no harm to the patient,” since HU treatment implies an increased risk of skin cancer and an increasing concern regarding an increased risk of other cancers as well, including myelodysplasia and acute myelogenous leukemia [98, 100, 102, 121–125]. Accordingly, in this part of the world, HU is avoided in younger patients with ET and PV, who then may not receive cytoreductive treatment for elevated leukocyte counts or elevated platelet counts (>1500 × 10⁹/L) unless they experience the catastrophe of thrombosis or major hemorrhage and its consequent sequelae. In the other world, with access to IFN, most newly diagnosed patients with ET, PV, and hyperproliferative myelofibrosis are treated routinely with low-dose IFN, as described in several studies and reviews during recent years [96–103].

Figure 6 The MPNs care pathway and the effect of early intervention. It is suggested that ET, PV, and MF form a biological continuum and, thus, early intervention with combination therapies including JAK1/2 inhibitors, IFN, and/or statins is likely to result in the inhibition of disease evolution. ASCT: allogeneic stem cell transplantation; ET: essential thrombocythemia; HU: hydroxyurea; IFN: interferon; JAK: Janus kinase; MF: myelofibrosis; and PV: polycythemia vera (with permission: H. C. Hasselbalch [12]).

Secondly, we, the MPNs scientific community, and the health authorities (the Food and Drug Administration (FDA) and the European Medicines Agency (EMA)) also need to rethink whether optimal treatment of MPNs is determined only by the randomized trial or whether it might also be determined by several single-arm studies proving the safety and efficacy of oncology drugs in orphan diseases [102, 123]. In this regard IFN in MPNs is a classic example: it has shown safety and efficacy in a large number of clinical studies during the last 25 years but, regardless, is still considered experimental or not evidence-based therapy in the world without access to IFN. Accordingly, translating the rapidly accumulating evidence for the concept of MPNs as “A Human Inflammation Model for Cancer Development” into clinical practice, with upfront treatment with IFN to inhibit clonal expansion (“stopping the fuel that feeds the fire”), requires a global commitment from the MPN scientific community, a fusion of the two worlds, and urgent action from health authorities to accept that approval of a drug for orphan diseases (IFN in MPNs) is applicable when safety and efficacy have been demonstrated in a large number of single-arm studies during the last two decades [102, 126].

Thirdly, the proof of concept that chronic inflammation may elicit MPNs needs to be further investigated in mouse models other than the ones already published, including the MPN mouse model from Heike Pahl’s group and the mouse model in which inhaled formaldehyde (FA) was shown to induce inflammation and ROS accumulation in the bone marrow, with ensuing MPN-like blood and bone marrow features such as anemia, leukopenia, and thrombocythemia, and megakaryocyte hyperplasia with myelofibrosis, respectively [13, 127, 128].

Fourthly, considering chronic inflammation as a potential trigger of MPN evolution and the experimental proof that FA induces inflammation in the bone marrow with myelofibrosis, it is indeed intriguing to speculate whether cigarette smoke, which contains thousands of toxic inflammatory agents, including FA, may actually be a risk factor for the development of MPNs [129].
Thus, smoking is associated with elevated hematocrit, leukocytosis, monocytosis, and occasionally thrombocytosis, all of which are hallmarks of MPNs. To this end, the JAK-STAT and NF-kB signalling pathways are activated both in smokers and in patients with MPNs. Additionally, both share elevated levels of several proinflammatory cytokines, in vivo activation of leukocytes and platelets, endothelial dysfunction, and increased systemic oxidative stress. Indeed, smoking as a chronic inflammatory stimulus giving rise to a chronic myelomonocytic response and ultimately MPNs fits very well with the excellent inflammation model for MPN development recently described by Hermouet and coworkers [31]. Accordingly, there is reason to believe that smoking may be both a trigger for and a driver of clonal evolution in MPNs, taking into account that both smoking and MPNs are associated with chronic inflammation and systemic oxidative stress. In this context smoking may augment chronic inflammation in MPNs, thereby magnifying the risk of thrombosis, clonal expansion, and second cancers. The role of smoking in MPN pathogenesis is further supported by a most recent study showing that a high proportion of MPN patients actually have a smoking history [130]. An association between smoking and MPN evolution is also supported by the fact that the most frequent second cancers in patients with MPNs are lung and urinary tract cancers, which are most prevalent in smokers [3].

Fifthly, chronic systemic inflammation in patients with MPNs may predispose to or aggravate existing inflammation-mediated diseases. Thus, it might be anticipated that chronic inflammation associated with other chronic inflammatory diseases, for instance, inflammatory rheumatological or dermatological diseases (e.g., polymyalgia rheumatica, rheumatoid arthritis, psoriasis, hidradenitis, and systemic lupus erythematosus), chronic inflammatory bowel diseases (Crohn’s disease, ulcerative colitis), chronic obstructive pulmonary disease, and cancers (e.g., lung cancer), might ultimately elicit MPNs in a subset of patients consequent to the chronic inflammation-mediated myelomonocytic drive [31]. Importantly, in these patients, anemia, leukocytosis, and thrombocythemia are ascribed to their chronic inflammatory disease or cancer and, accordingly, they are not normally screened for JAK2V617F, CALR, or MPL mutations. In the context of MPNs as inflammatory diseases, potentially triggered and driven by chronic inflammation, the time is ripe to consider whether the above disease categories should be investigated more rigorously for MPNs than is clinical practice today. Indeed, such studies are urgently needed to elucidate and expand the role of chronic inflammation as a true trigger for and driver of clonal evolution in MPNs.

Sixthly, chronic inflammation and oxidative stress may have therapeutic implications. Thus, it might be anticipated that patients with systemic chronic inflammation due to concurrent inflammation-mediated comorbidities may exhibit an inferior response to cytoreductive therapy, necessitating higher dosages of, for example, hydroxyurea to obtain normal leukocyte and platelet counts.
Furthermore, the response to IFN might be blunted, considering that IFN signalling is impaired by inflammation and oxidative stress [131].

Seventhly, in the context that “triple-negative” (negative for JAK2V617F, CALR, and MPL mutations) ET patients have a much more favourable prognosis than mutation-positive ET patients, some triple-negative “ET” patients may actually not have an MPN but instead a polyclonal, inflammation-driven thrombocythemia. If so, this subset of “triple-negative” “ET” patients may carry a heavy comorbidity burden of chronic inflammatory diseases, an issue which deserves to be investigated systematically.

Eighthly, by dampening chronic inflammation with potent anti-inflammatory agents such as JAK2 inhibitors and statins, the rate of thromboembolic events is anticipated to decline, since chronic inflammation per se carries an increased risk of thrombosis due to several factors as outlined above (leukocytosis, thrombocytosis, and in vivo leukocyte-platelet and endothelial activation). This issue of inflammation-mediated thrombogenesis has been dealt with most recently [132].

Ninthly, chronic inflammation in MPNs, if left untreated in patients with elevated platelet counts, may worsen the prognosis of second cancers, which MPN patients are prone to develop, not only after the MPN diagnosis but also prior to it [3, 76]. This particular issue, the “Platelet-Cancer-Loop” in MPNs, and the consequences for the prognosis of second cancers when elevated platelet counts in MPNs are not treated have most recently been reviewed and debated [78, 133]. Indeed, elevated platelet counts in MPNs may contribute to the inferior prognosis of second cancers in these patients, as most recently reported in a large Danish epidemiological study [134].

Tenthly, the notion of treating these diseases only when far advanced is antithetical to the treatment of other forms of cancer. The model of clonal evolution, the occurrence of additional molecular abnormalities, and the development of metastatic sites of disease following extramedullary hematopoiesis of CD34+ cells in the spleen and liver are just some of the compelling reasons to consider treating sooner rather than later, when the tumor burden is less rather than more and before disease progression occurs. The fact that both rIFN and JAK1/2 inhibition can induce molecular changes in the JAK2V617F allele burden and revert cytogenetic and other clonal abnormalities adds impetus to this argument. From the perspective that chronic inflammation may drive clonal expansion in these neoplasms, early treatment may induce a state of minimal disease in a substantial number of patients. This may alter the natural history of the MPNs and the otherwise inevitable path towards thrombosis, irreversible MF, and leukemic transformation [97–103].

Eleventhly, statins have, in addition to their cholesterol-lowering effect, many so-called pleiotropic effects, including antiproliferative, proapoptotic, antiangiogenic, antithrombotic, and especially potent anti-inflammatory effects [135]. Most recently, it has been shown that statins also significantly inhibit malignant MPN cell growth, including a potent synergistic effect with JAK inhibition [136, 137].
Thus, statins may come to play an important role in future MPN treatment in combination with JAK1/2 inhibitors and IFN-alpha2. Such a combination therapy, if instituted already from the time of diagnosis, could potently inhibit clonal proliferation and hence block the chronic inflammation generated by the malignant clone itself, raising the hope of reverting MPN disease progression by inhibiting inflammation-driven genomic instability, subclone formation, and mutagenesis, and thereby the ultimate transformation to myelofibrosis and acute myeloid leukemia. In regard to the anti-inflammatory, antithrombotic, and cytoreductive potential of statins, and most lately the epidemiological evidence that statins reduce cancer-related mortality, the rationale for the use of statins in patients with MPNs, who per se carry an increased risk of second cancers with an inferior prognosis, is only further supported [134–138]. Taking into account that MPN patients may be prone to develop inflammation-mediated osteopenia with an increased risk of fractures, early diagnosis and treatment of osteopenia with bisphosphonates may be an option in the future. Indeed, bisphosphonates also possess potent anti-inflammatory, immunomodulatory, and anticancer properties and may have a synergistic effect with statins in targeting the bone marrow stroma niche, thereby inhibiting the egress of CD34+ cells from stem cell niches [139]. To this end, several reports have documented beneficial effects of treatment with bisphosphonates in MPNs [140–145]. The rationales for the mevalonate pathway as a therapeutic target in the treatment of MPNs have been thoroughly described in recent reviews [135, 146].

## 5. Conclusion

The concept of chronic inflammation as a major driver of disease progression in MPNs opens the avenue for clinical trials in which the two most promising agents within MPNs, IFN and ruxolitinib, are combined and instituted in the early disease stage according to the early intervention concept. The proof of concept and the rationales for this combination therapy have most recently been published [147], and a Danish study on combination therapy with low-dose pegylated IFN and ruxolitinib is ongoing with very promising preliminary results. The ability of IFN to induce deep molecular responses with normalisation of the bone marrow, even years after cessation of IFN, and the role of inflammation in the initiation and progression of MPNs make the combination of IFN and ruxolitinib one of the most promising new treatment strategies for patients with MPNs [8, 9, 11–13].

---

*Source: 102476-2015-10-28.xml*
# Effect of Zinc Supplementation on Maintenance Hemodialysis Patients: A Systematic Review and Meta-Analysis of 15 Randomized Controlled Trials

**Authors:** Ling-Jun Wang; Ming-Qing Wang; Rong Hu; Yi Yang; Yu-Sheng Huang; Shao-Xiang Xian; Lu Lu
**Journal:** BioMed Research International (2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1024769

---

## Abstract

We aimed to examine the effects of zinc supplementation on nutritional status, lipid profile, and antioxidant and anti-inflammatory status in maintenance hemodialysis (MHD) patients. We performed a systematic review and meta-analysis of randomized, controlled clinical trials of zinc supplementation. Metaregression analyses were utilized to determine the causes of discrepancy. Begg and Egger tests were performed to assess publication bias. Subgroup analysis was utilized to investigate the effects of zinc supplementation under certain conditions. In the crude pooled results, we found that zinc supplementation resulted in higher serum zinc levels (weighted mean difference [WMD] = 28.489; P<0.001), higher dietary protein intake (WMD = 8.012; P<0.001), higher superoxide dismutase levels (WMD = 357.568; P=0.001), and lower levels of C-reactive protein (WMD = −8.618; P=0.015) and malondialdehyde (WMD = −1.275; P<0.001). The results showed no differences in lipid profile. In the metaregression analysis, we found that serum zinc levels correlated positively with intervention time (β=0.272; P=0.042) and varied greatly by ethnicity (P=0.023). Results from Begg and Egger tests showed that there was no significant bias in our meta-analysis (P>0.1). The results of the subgroup analyses supported the above findings. Our analysis shows that zinc supplementation may benefit the nutritional status of MHD patients and shows a time-effect relationship.

---

## Body

## 1. Introduction

Zinc is an essential trace element for humans which is found in nearly 100 specific enzymes. Zinc plays “ubiquitous biological roles” in physiological function, including gene expression, protein synthesis, immune function, and behavioral responses [1–3]. Although the prevalence of zinc deficiency is still unclear in patients on maintenance hemodialysis (MHD), some data show adverse outcomes that may be attributable to zinc deficiency [4, 5]. Malnutrition, an independent risk factor for cardiovascular events and death, is among the most common complications observed in MHD patients [6, 7]. Some studies have reported potential relationships between zinc deficiency and other imbalances, such as oxidative stress, inflammation, or immunosuppression [8, 9], and these disorders may contribute to a poor prognosis.

Many previous studies have investigated the effects of zinc supplementation in MHD patients. The relevant results show that zinc supplementation can improve a number of disorders, including a low-grade inflammatory process, protein-energy wasting, and an impaired immune response [10, 11]. However, to our knowledge, individual clinical trials have some inadequacies. For example, within a single randomized controlled trial (RCT), it is hard to identify the effects of zinc supplementation in MHD patients of different races and with varying intervention dosages. The existing data show different and even contradictory results across studies [12, 13]. Yet, no studies have focused on the causes of this heterogeneity.
The incomparable evidence makes it hard to assess the actual effects of zinc supplementation in MHD patients. These controversies may be ascribed to inadequate data and differences in experimental design between the published investigations. A meta-analysis may help to find the sources of heterogeneity and clarify the effects of zinc supplementation on nutritional status, oxidative stress, and inflammation.

In this study, we collected relevant RCTs for systematic review and performed meta-analyses to comprehensively investigate the relationships between nutritional status, oxidative stress, and inflammation and zinc supplementation in MHD patients. Subsequently, we aimed to identify conflicting results and analyze their causes. Our study may add to the existing literature.

## 2. Methods

### 2.1. Literature Search Strategy

Systematic literature searches were conducted in the following electronic databases: PubMed, Embase, Web of Science, Chinese Biomedical Literature, and the Cochrane Library. The relevant articles were published before January 15, 2016. Only publications with sufficient data were included for assessment. The following search terms were used: [Hemodialysis OR Dialysis OR Renal Replacement Therapy OR End Stage Renal Disease] AND [Zinc OR Zn] AND [Random∗ OR Randomized Controlled Trial OR Randomized Controlled Trial as Topic] AND [Nutrition OR Nutritional Status OR Reactive Oxygen OR Oxidative Stress OR Inflammation].

To identify additional potentially relevant publications, the references of all retrieved articles and reviews were searched manually. Only published studies with full-text articles were included in the meta-analysis. All data were entered into the Review Manager 5.0 software (Biostat, NJ, USA) by one author and checked by another author. Any disagreements were resolved by discussion between the two authors and by seeking the opinion of a third party when necessary.

### 2.2. Inclusion and Exclusion Criteria

Types of studies: published reports of RCTs comparing zinc supplementation with controls (placebo or blank control), with available data for outcomes. Types of participants: studies restricted to patients on stable MHD therapy; records of the basic characteristics of participants (age, sex ratio, and dialysis duration) were required. Types of intervention: studies comparing zinc supplementation with a control in MHD patients. The dose of zinc compounds (zinc sulfate, zinc gluconate, or zinc aspartate) was converted to the elemental zinc dose. The control intervention included placebo and blank control.

All outcomes were extracted for the types of outcome measures. Only outcomes measured in at least 2 studies were included in the meta-analysis, as follows: nutritional status: serum zinc levels, body mass index (BMI, kg/m²), normalized protein equivalent of nitrogen appearance rate (nPNA, g/kg/d), dietary protein intake (g/kg/d), albumin (g/dL), hemoglobin, triglycerides (mg/dL), total cholesterol (mg/dL), low density lipoprotein (mg/dL), and high density lipoprotein (mg/dL) levels; inflammation: C-reactive protein (CRP, ng/mL); and oxidative stress: malondialdehyde (MDA, nmol/mL) and superoxide dismutase (SOD, U/g Hb).

Studies were excluded if they lacked data on the oral dosage of zinc or the intervention time. Trials on sexual dysfunction were also excluded owing to a prior analysis.
If any studies included multiple publications of the same RCT, we chose the one with the highest quality according to the study quality assessment.

### 2.3. Study Quality Assessment

The methodological quality of each study was evaluated by 2 authors independently using the Cochrane Risk of Bias Tool, which includes 6 evaluation criteria [14]: random sequence generation (selection bias), allocation sequence concealment (selection bias), blinding (performance and detection bias), selective outcome reporting (reporting bias), incomplete outcome data (attrition bias), and other potential sources of bias. The judgment for each criterion was indicated as “low risk of bias,” “high risk of bias,” or “unclear risk of bias.”

### 2.4. Data Extraction and Synthesis

Two independent investigators performed data extraction using the inclusion criteria described above. All discrepancies were resolved by discussion and, if required, participation of a third author. The following information was extracted from each study: first author’s surname, year of publication, race and geographical location of the study, sample size, age, base disease, dialysis duration, oral zinc dose, the control intervention (placebo or blank control), and outcomes. The 2 investigators’ results were compared, and disagreements were resolved by discussion.

### 2.5. Statistical Analysis

Statistical analyses were performed using Stata version 10.0 software (StataCorp LP, College Station, TX, USA). The effect of each outcome was determined by calculating the respective weighted mean difference (WMD) with a 95% confidence interval (CI). Heterogeneity of the effect size was evaluated using the Q and I-squared statistics. A fixed effects model was used when the P value was >0.05 and I-squared was <50%; otherwise, a random effects model was used. The significance of the pooled WMD was determined using a Z test. We used Begg and Egger tests to investigate the publication bias of our meta-analysis. To explore the sources of heterogeneity, we performed a metaregression analysis. Subgroup analyses were also used to evaluate the effect under various conditions. P values < 0.05 were considered significant. The above work was completed by 2 authors and checked by a third author.
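To make the pooling procedure concrete, here is a minimal Python sketch of the inverse-variance approach described above, where I-squared = max(0, (Q − df)/Q) and the between-study variance is estimated by the DerSimonian-Laird method; the helper name `pool_wmd` and the example numbers are hypothetical, not the authors' Stata code or data.

```python
import numpy as np
from scipy import stats

def pool_wmd(effects, ses):
    """Inverse-variance pooling of mean differences.

    Applies the rule stated in Section 2.5: fixed effects when Cochran's Q
    has P > 0.05 and I-squared < 50%, otherwise DerSimonian-Laird random
    effects. Returns the pooled WMD, its 95% CI, Q, I-squared (%), and the
    two-sided P value of the Z test on the pooled WMD.
    """
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2                             # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q
    df = len(effects) - 1
    p_het = stats.chi2.sf(q, df)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    if p_het > 0.05 and i2 < 50:                 # fixed-effects model
        wmd, var = fixed, 1.0 / np.sum(w)
    else:                                        # DL random-effects model
        tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_re = 1.0 / (ses**2 + tau2)
        wmd, var = np.sum(w_re * effects) / np.sum(w_re), 1.0 / np.sum(w_re)

    se = np.sqrt(var)
    p = 2 * stats.norm.sf(abs(wmd / se))         # Z test on the pooled WMD
    return wmd, (wmd - 1.96 * se, wmd + 1.96 * se), q, i2, p

# Hypothetical serum-zinc mean differences (ug/dL) and standard errors:
print(pool_wmd([30.8, 23.8, 32.7], [5.2, 9.7, 6.4]))
```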
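The publication-bias checks named above can be sketched in the same spirit. Below is a hedged implementation of Egger's regression test, in which a non-zero intercept when standardized effects are regressed on precision suggests funnel-plot asymmetry; it assumes statsmodels rather than the Stata routines actually used, and the study values are placeholders. Begg's rank-correlation test could be built analogously on scipy.stats.kendalltau.

```python
import numpy as np
import statsmodels.api as sm

def egger_test(effects, ses):
    """Egger's test: regress effect/SE on 1/SE and test the intercept."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    y = effects / ses                  # standardized effect sizes
    x = sm.add_constant(1.0 / ses)     # precision plus an intercept column
    fit = sm.OLS(y, x).fit()
    return fit.params[0], fit.pvalues[0]   # intercept and its P value

intercept, p = egger_test([30.8, 23.8, 32.7, 28.3], [5.2, 9.7, 6.4, 2.0])
print(f"Egger intercept = {intercept:.2f}, P = {p:.3f}")
```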
## 3. Results

### 3.1. Characteristics of Studies

A total of 106 relevant published articles were identified following the aforementioned retrieval strategy. After strict review, 91 of these publications were excluded (73 on the basis of title/abstract screening and 18 after full-text screening), and 15 relevant published articles were selected for our meta-analysis (Figure 1). The included studies enrolled a total of 645 MHD patients, among whom 345 were treated with zinc supplementation and 300 received placebo or served as blank controls. Of the included studies, 8 were West Asian (Iran, Turkey, and Egypt), 5 were European or American (United Kingdom, United States, and Mexico), and 2 were East Asian (Taiwan and Japan). All studies included patients with chronic kidney disease (CKD); one was restricted to patients with a low protein catabolic rate. The mean age of participants ranged from 13 to 80 years, with dialysis for at least 3 months. The elemental zinc doses ranged from 11 to 100 mg and follow-up ranged from 40 to 360 days. The main characteristics of the included studies are summarized in Table 1.

Table 1 Characteristics of included studies.

| Study reference | Year | Region | Number of patients (M/F)ᵃ | Age (years)ᵇ | Primary disease | Dialysis duration | Elemental zinc dose | Intervening time | Comparative approach | Outcomes |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Kobayashi et al. [15] | 2015 | Japan | 70 (43/27) | 69 ± 10 | CKD | - | 34 mg/day | 90/180/270/360 days | Blank | Serum zinc, hemoglobin, RBC, ESA, ERI |
| El-Shazly et al. [16] | 2015 | Egypt | 30 (29/31) | 13.2 ± 2.1 | CKD | ≥6 months | 16.5 mg/day | 90 days | Placebo | Serum zinc, leptin, body weight, BMI |
| Argani et al. [17] | 2014 | Iran | 60 (36/24) | (50, 60) | CKD | -ᶜ | 90 mg/day | 60 days | Placebo | Serum zinc, albumin, BMI, body fat, body water, Ccr, hematocrit, hemoglobin, leptin, TC, TG |
| Pakfetrat et al. [18] | 2013 | Iran | 97 (55/42) | 51.6 ± 16.8 | CKD | >3 months | 50 mg/day | 43 days | Placebo | Serum zinc, homocysteine (HCys) |
| Mazani et al. [19] | 2013 | Iran | 65 (41/24) | 52.7 ± 12.6 | CKD | >6 months | 100 mg/day | 60 days | Placebo | Serum zinc, BMI, GSH, MDA, SOD, TAC |
| Guo and Wang [8] | 2013 | Taiwan | 65 | 59.7 ± 9.2 | CKD | >3 months | 11 mg/day | 56 days | Blank | Serum zinc, hematocrit, albumin, CD4/CD8, CRP, GFR, IL-6, MDA, nPNA, Cu, SOD, TNF-α, Vit C/E |
| Rahimi-Ardabili et al. [20] | 2012 | Iran | 60 (38/22) | 52.7 ± 12.7 | CKD | ≥6 months | 100 mg/day | 60 days | Placebo | TC, Apo-AI, Apo-B, HDL, LDL, PON, TG |
| Roozbeh et al. [21] | 2009 | Iran | 53 (28/25) | 55.7 | CKD | ≥6 months | 45 mg/day | 42 days | Placebo | Serum zinc, HDL, LDL, TC, TG |
| Rashidi et al. [22] | 2009 | Iran | 55 (32/23) | 57.6 | CKD | ≥6 months | 45 mg/day | 42 days | Placebo | Serum zinc, CRP, hemoglobin |
| Nava-Hernandez and Amato [23] | 2005 | Mexico | 25 | 16.6 | CKD | - | 100 mg/day | 90 days | Placebo | Albumin, pre-albumin, transferrin |
| Matson et al. [12] | 2003 | UK | 15 (11/4) | 63.73 | CKD | ≥3 months | 45 mg/day | 42 days | Placebo | Serum zinc, albumin, Kt/V, calcium, CRP, nPNA, phosphate |
| Chevalier et al. [24] | 2002 | USA | 27 (22/6) | 51.9 | CKD | ≥6 months | 50 mg/day | 40/90 days | Placebo | Serum zinc, dietary intake, HDL, LDL, TC |
| Candan et al. [25] | 2002 | Turkey | 34 (18/16) | 45.6 (28, 64) | CKD | - | 20 mg/day | 90 days | Placebo | Serum zinc, MDA, osmotic fragility |
| Jern et al. [26] | 2000 | USA | 14 | 56.5 (23, 80) | CKD with low PCR | ≥6 months | 45 mg/day | 40/90 days | Placebo | Serum zinc, dietary intake, nPNA |
| Brodersen et al. [27] | 1995 | Germany | 40 (22/18) | 60 | CKD | - | 60 mg/day | 112 days | Blank | Serum zinc |
*Note.* ᵃSex ratio: M = male, F = female; ᵇage appears as mean, mean ± standard deviation, or mean (lower limit, upper limit); ᶜ-: no information was recorded in the included study. *Abbreviations.* RBC, red blood cell; ESA, erythropoiesis-stimulating agent; ERI, ESA resistance index; BMI, body mass index; Ccr, creatinine clearance rate; TC, total cholesterol; TG, triglyceride; HDL, high-density lipoprotein; LDL, low-density lipoprotein; CRP, C-reactive protein; GFR, glomerular filtration rate; MDA, malondialdehyde; nPNA, normalized protein equivalent of nitrogen appearance; SOD, superoxide dismutase.

Figure 1 Flow chart of included studies.

### 3.2. Quality Assessment of Included Studies

Of the 15 included studies, all claimed to apply randomized methods; however, only 2 described the method used (drawing of random numbers). One clearly described allocation concealment (ensured by a third party). Ten studies had a double-blinded design, but the details were unclear. Only one study described a pharmacy clinical trials unit as a third party ensuring the double-blinded design. Eight studies reported withdrawals, but the results were not analyzed on an intention-to-treat basis. Of the 15 included studies, 8 reported all expected outcomes. Only 4 studies reported dietary restrictions. Insufficient information in the included trials may thus have introduced potential bias. A summary of findings is shown in Figure 2.

Figure 2 Risk of bias graph and bias summary: (a) review of authors’ judgments regarding each risk of bias item for each included study and (b) review of authors’ judgments regarding each risk of bias item presented as percentages across all included studies.

### 3.3. Crude Pooled Results of Each Outcome

In the crude analysis, we found that levels of serum zinc, dietary protein intake, and SOD in the zinc supplementation group were higher than in the control group after treatment. The pooled WMDs were statistically significant (serum zinc: WMD = 28.489, 95% CI = 26.264 to 30.713, P<0.001; dietary protein intake: WMD = 8.012, 95% CI = 1.592 to 14.408, P<0.001; SOD: WMD = 357.568, 95% CI = 152.158 to 562.978, P=0.001). CRP and MDA levels were lower after zinc supplementation (CRP: WMD = −8.618, 95% CI = −15.579 to −1.656, P=0.015; MDA: WMD = −1.275, 95% CI = −1.945 to −0.605, P<0.001). The results showed no differences in BMI, nPNA, hemoglobin, or lipid profile (P>0.05). Heterogeneity was significant in the results: of the 15 pooled outcomes, 10 showed obvious heterogeneity (I-squared > 50%, P<0.1). Results from Begg and Egger tests showed no significant bias in our meta-analysis (P>0.1). All data are presented in Table 2. The relative bioavailability of zinc sulfate, gluconate, and aspartate may differ and is worth further analysis; owing to inadequate reporting, a meta-analysis of the effects of the different zinc compounds was not performed.

Table 2 Summary of the effects of zinc supplementation in MHD patients.
| Factor | Number of studies | Q | I² | P (heterogeneity) | WMD [95% CI] | P value | Begg test | Egger test |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Serum zinc (µg/dL) | 13 | 81.98 | 90.7% | <0.001 | 28.489 [26.264, 30.713] | <0.001∗ | 0.502 | 0.355 |
| BMI (kg/m²) | 3 | 1.34 | 0% | 0.511 | 0.149 [−0.762, 1.059] | 0.794 | 1.000 | 0.263 |
| nPNA (g/kg/d) | 4 | 11.31 | 91.20% | 0.001 | 0.135 [−0.161, 0.431] | 0.371 | 1.000 | 0.657 |
| Dietary protein intake (g/kg/d) | 2 | 0.27 | 0% | 0.604 | 8.012 [1.592, 14.408] | <0.001∗ | - | - |
| Albumin (g/dL) | 4 | 9.74 | 69.20% | 0.021 | 0.358 [−0.016, 0.732] | 0.061 | 0.734 | 0.276 |
| Hemoglobin (g/dL) | 4 | 16.27 | 81.6% | 0.013 | 0.756 [−0.011, 1.522] | 0.053 | 1.000 | 0.654 |
| HDL (mg/dL) | 4 | 24.6 | 91.90% | <0.001 | 4.048 [−3.142, 11.238] | 0.27 | 1.000 | 0.847 |
| LDL (mg/dL) | 4 | 24.46 | 91.80% | <0.001 | 21.028 [−15.478, 57.534] | 0.259 | 1.000 | 0.749 |
| TC (mg/dL) | 5 | 22.97 | 86.90% | <0.001 | 16.198 [−9.975, 42.371] | 0.225 | 0.734 | 0.624 |
| TG (mg/dL) | 3 | 8.2 | 75.60% | 0.017 | 0.207 [−34.711, 35.125] | 0.991 | 1.000 | 0.327 |
| CRP (ng/mL) | 3 | 13.85 | 85.60% | 0.001 | −8.618 [−15.579, −1.656] | 0.015∗ | 1.000 | 0.783 |
| MDA (nmol/mL) | 3 | 16.32 | 87.70% | <0.001 | −1.275 [−1.945, −0.605] | <0.001∗ | 0.296 | 0.287 |
| SOD (U/g Hb) | 2 | 3.42 | 70.80% | 0.064 | 357.568 [152.158, 562.978] | 0.001∗ | - | - |

*Note.* -: values could not be calculated due to an insufficient number of studies; ∗P<0.05, and the WMD was considered statistically significant.

### 3.4. Results of Metaregression and Region-Subgroup Analysis

To explore the sources of heterogeneity, we performed a metaregression analysis on serum zinc levels. Oral zinc dose, intervention time, baseline serum zinc, and region of study were selected as covariates. As shown in Figures 3 and 4, we found that serum zinc levels correlated positively with intervention time (β=0.272, P=0.042). Subgroup data suggested a significant difference among races (P=0.023), with serum zinc levels of patients in Europe and America showing the smallest effect. These 2 factors explained 43.83% of the heterogeneity. No correlations were identified between serum zinc levels and oral zinc dose (β=−0.066, P=0.691) or baseline zinc levels (β=−0.048, P=0.885).

Figure 3 Metaregression data of serum zinc levels based on (a) serum zinc at baseline, (b) oral zinc dose, and (c) intervening time.
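For illustration, the metaregression of this section can be approximated as a weighted least-squares fit of study-level WMDs on intervention time. The numbers below are invented placeholders, and a full random-effects metaregression would also add the between-study variance tau² to each study's variance, which this sketch omits.

```python
import numpy as np
import statsmodels.api as sm

wmd = np.array([23.8, 28.3, 36.1, 30.8])   # hypothetical per-study WMDs (ug/dL)
se = np.array([9.7, 2.0, 5.3, 5.2])        # their standard errors
days = np.array([42, 56, 90, 112])         # intervention time (days)

# Inverse-variance weighted regression of effect size on intervention time.
fit = sm.WLS(wmd, sm.add_constant(days), weights=1.0 / se**2).fit()
print(fit.params)    # [intercept, slope]; a positive slope mirrors the
print(fit.pvalues)   # reported time effect on serum zinc
```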
Factor Zinc dose Number of studies Heterogeneity test Weighted mean difference Q I 2 P value WMD [95% CI] P value Serum zinc (ug/dL) <45 mg 4 2.27 27.52% 0.211 30.792 [ 23.781, 44.201] <0.001∗ 45–50 mg 6 121.15 95.90% <0.001 23.831 [ 4.824, 42.837] 0.014∗ >50 mg 3 18.01 88.90% <0.001 32.692 [ 20.111, 45.273] <0.001∗ BMI (kg/m2) 16.5 mg 1 - - - 0.530 [ - 3.769, 2.709] 0.621 ≥90 mg 2 1.16 13.70% 0.282 0.124 [ - 1.062, 1.309] 0.838 nPNA (g/kg/d) 11 mg 1 - - - 0.41 [ 0.292, 0.528] <0.001∗ 45 mg 2 1.3 23.00% 0.250 0.019 [ - 0.130, 0.167] 0.805 Dietary protein intake (g/kg/d) 45 mg 2 0.64 0.00% 0.425 5.605 [ - 0.527, 11.736] 0.073 50 mg 2 0.53 0.00% 0.467 5.373 [ - 1.351, 12.097] 0.117 Albumin (g/dL) <50 mg 2 6.96 85.60% 0.008 0.37 [ - 0.225, 0.966] 0.223 ≥50 mg 2 2.35 57.40% 0.125 0.309 [ - 0.343, 0.962] 0.353 Hemoglobin (g/dL) <45 mg 2 3.62 71.87% 0.054 1.018 [ - 0.188, 2.223] 0.098 ≥45 mg 2 0.08 0.00% 0.780 0.385 [ - 0.307, 1.078] 0.275 HDL (mg/dL) <50 mg 2 31.35 96.80% <0.001 4.083 [ - 14.455, 22.621] 0.666 ≥50 mg 2 10.09 96.80% <0.001 0.031 [ - 6.494, 6.556] 0.993 LDL (mg/dL) <50 mg 2 4.84 79.40% 0.028 6.088 [ - 25.371, 37.548] 0.704 ≥50 mg 2 0.46 0.00% 0.496 44.792 [ 34.951, 54.632] <0.001∗ TC (mg/dL) ≤50 mg 2 0.03 0.00% 0.857 37.045 [ 26.472, 47.617] <0.001∗ >50 mg 2 1.81 44.70% 0.179 - 2.381 [ - 19.731, 14.968] 0.788 TG (mg/dL) <50 mg 2 7.67 87.00% 0.006 0.337 [ - 53.525, 54.200] 0.99 ≥50 mg 1 - - - - 3.112 [ - 40.718, 34.718] 0.876 CRP (ng/mL) 11 mg 1 - - - - 5.799 [ - 8.925, -2.673] <0.001∗ 45 mg 2 2.11 52.60% 0.146 - 10.234 [ - 20.861, -0.392] 0.039∗ MDA (nmol/mL) <50 mg 2 10.34 90.30% 0.001 - 1.617 [ - 2.948, -0.286] 0.017∗ ≥50 mg 1 - - - - 0.8 [ - 0.995, -0.605] <0.001∗ Note. -: values could not be calculated due to an insufficient number of studies; ∗P<0.05, and WMD was considered statistically significant. ### 3.6. Results of Intervening Time Subgroup To investigate the effects of intervention time on a series of outcomes, we performed a time-special subgroup analysis. In the results, we found that zinc supplementation induced a time effect in serum zinc levels and dietary protein intake. Although our results showed zinc supplementation results in higher SOD levels and lower CRP and MDA levels in all subanalyses, no time effect was identified. Heterogeneity was lower in the time subgroup compared with results in the crude analysis. Of the 25 results analyzed, 4 showed obvious heterogeneity (I-squared > 50%, P<0.1). All data are presented in Table 4.Table 4 Results of intervention time subgroup. 
| Factor | Intervention time | Number of studies | Q | I² | P (heterogeneity) | WMD [95% CI] | P value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Serum zinc (µg/dL) | <50 days | 6 | 21.15 | 85.90% | <0.001 | 23.831 [4.824, 42.837] | 0.014∗ |
| | 50–60 days | 4 | 1.82 | 0.00% | 0.402 | 28.310 [24.399, 32.220] | <0.001∗ |
| | >60 days | 5 | 1.07 | 12.90% | 0.388 | 36.065 [25.694, 46.437] | <0.001∗ |
| BMI (kg/m²) | 60 days | 2 | 1.16 | 13.70% | 0.282 | 0.124 [−1.062, 1.309] | 0.838 |
| | 90 days | 1 | - | - | - | 0.530 [−3.769, 2.709] | 0.621 |
| nPNA (g/kg/d) | <50 days | 2 | 0.58 | 0.00% | 0.447 | −0.018 [−0.128, 0.092] | 0.751 |
| | ≥50 days | 2 | 1.3 | 31.10% | 0.243 | 0.235 [−0.108, 0.578] | 0.179 |
| Dietary protein intake (g/kg/d) | <50 days | 2 | 0.18 | 0.00% | 0.776 | 3.322 [−3.407, 9.407] | 0.359 |
| | ≥50 days | 2 | 0.12 | 0.00% | 0.812 | 8.109 [1.592, 14.408] | <0.001∗ |
| Albumin (g/dL) | <60 days | 2 | 1.96 | 45.60% | 0.134 | 0.301 [−0.225, 0.966] | 0.223 |
| | ≥60 days | 2 | 2.17 | 47.40% | 0.125 | 0.409 [−0.243, 1.062] | 0.353 |
| Hemoglobin (g/dL) | <60 days | 2 | 0.04 | 0.00% | 0.850 | 0.378 [−0.048, 0.804] | 0.082 |
| | ≥60 days | 2 | 3.69 | 72.90% | 0.055 | 1.171 [0.083, 2.259] | 0.035∗ |
| HDL (mg/dL) | <60 days | 2 | 31.35 | 96.80% | <0.001 | 4.083 [−14.455, 22.621] | 0.666 |
| | ≥60 days | 2 | 10.09 | 90.10% | 0.001 | 0.031 [−6.494, 6.556] | 0.993 |
| LDL (mg/dL) | <60 days | 2 | 1.96 | 49.00% | 0.161 | 34.829 [17.061, 52.597] | <0.001∗ |
| | ≥60 days | 2 | 24.46 | 95.90% | <0.001 | 19.971 [−36.680, 76.623] | 0.490 |
| TC (mg/dL) | <60 days | 2 | 1.5 | 33.20% | 0.221 | 18.673 [−2.741, 40.088] | 0.087 |
| | ≥60 days | 3 | 22.15 | 91.00% | <0.001 | 11.147 [−18.788, 41.082] | 0.465 |
| TG (mg/dL) | <60 days | 1 | - | - | - | 26.080 [6.602, 45.558] | 0.009 |
| | ≥60 days | 2 | 1.01 | 8.60% | 0.317 | −17.415 [−42.743, 7.913] | 0.178 |
| CRP (ng/mL) | <60 days | 2 | 2.11 | 14.60% | 0.146 | −10.234 [−20.861, 0.392] | 0.059 |
| | ≥60 days | 1 | - | - | - | −5.799 [−8.925, −2.673] | <0.001∗ |
| MDA (nmol/mL) | <60 days | 1 | - | - | - | −2.330 [−3.048, −1.612] | <0.001∗ |
| | ≥60 days | 2 | 0.53 | 0.00% | 0.467 | −0.831 [−1.007, −0.654] | <0.001∗ |
| SOD (U/g Hb) | <60 days | 1 | - | - | - | 446.600 [340.276, 552.924] | <0.001∗ |
| | ≥60 days | 1 | - | - | - | 234.200 [35.906, 432.494] | 0.021∗ |

*Note.* -: values could not be calculated due to an insufficient number of studies; ∗P<0.05, and the WMD was considered statistically significant.
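Conceptually, the subgroup analyses of Sections 3.5 and 3.6 stratify the studies and pool each stratum separately. A minimal sketch follows, reusing the hypothetical pool_wmd() helper from the Section 2.5 sketch with invented study entries.

```python
studies = [  # hypothetical WMD, SE, and intervention time per study
    {"wmd": 23.8, "se": 9.7, "days": 42},
    {"wmd": 28.3, "se": 2.0, "days": 56},
    {"wmd": 36.1, "se": 5.3, "days": 90},
    {"wmd": 30.8, "se": 5.2, "days": 112},
]

# Pool each intervention-time stratum with the same rule as the crude analysis.
for label, keep in [("<60 days", lambda d: d < 60), (">=60 days", lambda d: d >= 60)]:
    sub = [s for s in studies if keep(s["days"])]
    wmd, ci, q, i2, p = pool_wmd([s["wmd"] for s in sub], [s["se"] for s in sub])
    print(f"{label}: WMD = {wmd:.1f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f}), "
          f"I2 = {i2:.0f}%, P = {p:.3f}")
```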
## 4. Discussion

Some dialysis complications, such as malnutrition and inflammation, have been partly attributed to zinc deficiency. In previous studies, the conclusions regarding zinc supplementation for MHD patients did not reach consensus, and the effect on nutritional status was among the most controversial results. Contrary to the majority of opinions and to our results, Matson et al. reported that the evidence for an effect of zinc supplementation on anorexia and nutritional status in MHD patients is limited [12]. An earlier article attributed this discrepancy to the age of the patients and to zinc absorption [28]. Notably, in the study of Matson et al., differences in serum zinc levels were not statistically significant between the treatment and placebo groups. However, in our analysis, we found that zinc supplementation increased serum zinc levels in MHD patients. Our results showed that intervention time and race are two factors that correlated significantly with serum zinc levels, and that there is a time-effect but not a dose-effect relationship in zinc supplementation. In the included studies, the median intervention time was 60 days and the median zinc dose was 45 mg/day. In the study of Matson et al., patients received 220 mg zinc sulfate (45 mg elemental zinc) per day for 6 weeks; thus, the limited intervention time may be a possible explanation. It is also plausible that zinc supplementation has different effects in various racial groups.
The mean change in serum zinc levels in the Taiwanese population was several times higher than in the Western regions [8]. European studies showed the smallest effect compared with data from Asia, most likely owing to differences in epidemiology and diet. Further studies are warranted to comprehensively investigate the precise effect in different racial groups. Another notable finding was that, although no statistical significance was found for albumin and hemoglobin, the P values were close to the significance threshold (albumin: P=0.061; hemoglobin: P=0.069). Subgroup analyses also showed a time-effect relationship for these two factors. We assume that long-term zinc supplementation may improve these nutritional indices, but this interpretation also requires further testing. Nonetheless, our results suggest that the intervention time of zinc supplementation should be adequate when the aim is to improve nutritional indices and appetite.

Hypercholesterolemia and hypertriglyceridemia have been reported in previous studies of zinc-deficient diets and could induce cardiovascular events and insulin resistance in CKD patients [29, 30]. Although a series of previous studies suggests that zinc supplementation improves blood lipid metabolism, this finding was not borne out by our pooled data. The effects and trends of zinc supplementation on the lipid profile were inconsistent and even contradictory across the results [17, 20, 21]. There are several possible explanations for this discrepancy. First, it is possible that the characteristics of the included patients differed. For example, zinc supplementation could increase blood lipids by improving energy intake in patients with anorexia [31] and show the opposite effect in patients with hyperlipemia or insulin resistance [32–34]. Second, little is known about lipid intake in the included studies; diverse dietary lipid intake may have led to information bias. Therefore, more evidence is needed to determine the effects of zinc supplementation on the lipid profile in MHD patients.

Inflammation and oxidative stress are common complications in MHD patients. Several previous studies have found that zinc deficiency in MHD patients may result in increased oxidative stress and CRP concentrations [22, 35]. The antioxidative action of zinc involves 2 mechanisms: (1) directly protecting easily oxidized groups such as sulfhydryl groups and (2) inducing longer-term antioxidants such as metallothionein [36, 37]. Zinc could also decrease CRP and other inflammatory cytokines through increased antioxidant capacity; the major target is most likely the NF-kappa B pathway [38, 39]. In our analysis, all the included studies showed positive anti-inflammatory and antioxidant effects of zinc supplementation. The pooled results were statistically significant in all subgroups, and no significant time or dose effect was observed. This suggests that zinc supplementation may exert an anti-inflammatory and antioxidative effect in MHD patients.

It should be noted that there are some possible limitations to our meta-analysis. First, adverse outcomes of zinc supplementation were not analyzed. Although zinc is an essential requirement for good health, excessive zinc supplementation can be harmful: excessive absorption of zinc can suppress copper and iron absorption and may cause nerve damage [40, 41]. In the included studies, only one paper mentioned asking patients whether they had experienced any adverse effects.
Side effects therefore could not be analyzed in the pooled data; incomplete reporting precluded this evaluation. Second, owing to insufficient data, the effects of zinc supplementation on clinical endpoint events, such as cardiovascular events or death, remain unclear. Such gaps make it difficult to obtain strong evidence for MHD patients. Finally, the epidemiology of zinc deficiency varies significantly across regions. For example, zinc deficiency is widespread in East Asia, where nearly 40% to 60% of the population has mild or moderate deficiency, whereas the proportion is much lower in Europe and America [42, 43]. However, meta-analyses of the racial subgroups could not be performed because of a lack of data for most outcomes. This may have introduced a selection bias into our results.

## 5. Conclusion

Our meta-analysis suggests that zinc supplementation benefits the nutritional status of MHD patients and shows a time-effect relationship. It also exerts an anti-inflammatory and antioxidative effect in MHD patients. Still, more evidence is needed regarding the effects on the lipid profile. Given the data deficiencies in this study, further studies are warranted to comprehensively investigate the effects of zinc supplementation on clinical endpoint events and in different racial groups.

--- *Source: 1024769-2017-12-31.xml*
# Evaluation of 12-Lipoxygenase (12-LOX) and Plasminogen Activator Inhibitor 1 (PAI-1) as Prognostic Markers in Prostate Cancer

**Authors:** Tomasz Gondek; Mariusz Szajewski; Jarosław Szefel; Ewa Aleksandrowicz-Wrona; Ewa Skrzypczak-Jankun; Jerzy Jankun; Wieslawa Lysiak-Szydlowska
**Journal:** BioMed Research International (2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/102478

---

## Abstract

In carcinoma of the prostate, a causative role of platelet 12-lipoxygenase (12-LOX) and plasminogen activator inhibitor 1 (PAI-1) in tumor progression has been firmly established in tumor and/or adjacent tissue. Our goal was to investigate whether 12-LOX and/or PAI-1 in patient plasma could be used to predict the outcome of the disease. The study comprised 149 patients (age 70 ± 9) divided into two groups: a study group with carcinoma confirmed by positive biopsy of the prostate (n = 116) and a reference group (n = 33) with benign prostatic hyperplasia (BPH). The following parameters were determined by laboratory tests in plasma or platelet-rich plasma: protein levels of 12-LOX, PAI-1, thromboglobulin (TGB), prostate-specific antigen (PSA), and C-reactive protein (CRP); hemoglobin (HGB) and hematocrit (HCT); red (RBC) and white (WBC) blood cell counts; platelet count (PLT); the international normalized ratio of blood clotting (INR); and the activated partial thromboplastin time (APTT). The only significant difference was observed in the concentration of 12-LOX in platelet-rich plasma, which was lower in the cancer group than in the BPH group. Standardization to TGB and platelet count increases the sensitivity of the test, which might be used as a biomarker to assess the risk for prostate cancer in periodically monitored patients.

---

## Body

## 1. Introduction

Prostate cancer is the most common malignancy diagnosed in older men in the Western Hemisphere. According to the European Association of Urology (EAU) Guidelines of 2012, mortality from prostate cancer ranks second to that from lung cancer [1]. Diagnosis of prostate cancer at an early stage and the ability to differentiate benign from aggressive forms would improve the selection of the optimal method of treatment, resulting in a better outcome. Currently used diagnostic standards consist of determination of prostate-specific antigen (PSA), clinical stage, and total Gleason grade. Unfortunately, they do not give sufficient justification for choosing the optimal therapy for a particular patient. Hence, it is necessary to search for new biomarkers that allow prediction of disease dynamics and personalization of therapy [2, 3].

It has been shown that men who consume a high-fat diet containing an abundance of arachidonic acid (AA) have a high incidence of prostate cancer [4, 5]. Availability of AA in combination with the overexpression of lipoxygenases (12-LOX, 5-LOX), cyclooxygenase (COX-2), and cytochrome P450 (CYP) leads to excess synthesis of eicosanoids [6–8]. Eicosanoids trigger signals for the transcription of genes that modulate the immune system, hemostasis, apoptosis, cell proliferation, and many other processes [9, 10]. This avalanche of signals results in the development of inflammation favoring carcinogenesis [11, 12]. Eicosanoids also accelerate the rate of proliferation of glandular cells, inhibit their apoptosis, and further intensify angiogenesis [13–16].
Angiogenesis is a prerequisite for tumor development, and the plasminogen activation system (PAS) has a significant impact on that crucial step. PAS includes urokinase plasminogen activator (uPA), the urokinase plasminogen activator receptor (uPAR), and plasminogen activator inhibitor type-1 (PAI-1) [17]. Increases in uPA activity and in the number of uPAR correlate with the ability of cancer to form angiogenic vasculature and with increased cancer cell metastasis. Urokinase, both free and receptor-bound, converts plasminogen to proteolytically active plasmin, which is responsible for lysis of the extracellular matrix, essential for angiogenesis and metastasis [18, 19]. Inhibition of both uPA and uPAR activity reduces angiogenesis and metastasis [20, 21]. Research indicates that inhibition of uPA by PAI-1 reduces the size of the tumor [22]. The capillaries surrounding the tumor show large amounts and high activity of uPA and uPAR [23]. Taking into account the role of uPA, uPAR, and PAI-1 in angiogenesis, Pepper considers that normal vessel formation by angiogenesis depends on the balance between proteases and antiproteases [24]. However, the role of PAI-1 in carcinogenesis is more complex than simple inhibition of proteolysis. PAI-1 overexpressed at up to approximately ten times the normal level increases the motility of cancer cells by interacting with vitronectin and other proteins, whereas PAI-1 at supraphysiological levels significantly inhibits angiogenesis and metastasis by reducing the activity of uPA [18, 25, 26]. This phenomenon is called the "PAI paradox" [27]: a high level of PAI-1 appears to inhibit angiogenesis, while a slightly elevated level of PAI-1 is necessary for the growth of angiogenic vessels.

A healthy body maintains a balance between activators and inhibitors of angiogenesis. The tumor microenvironment differs from normal tissue, in which the pro- and antiangiogenic factors are well balanced. Folkman and Hanahan introduced the concept of the angiogenic switch, which states that angiogenesis starts upon a global disturbance of the expression of pro- and antiangiogenic factors [28]. The primary target of both is the endothelial cell [29]. Among others, 12-LOX and PAI-1 are proteins governing these processes and can be secreted at high levels by tumor cells. While the expression of these proteins by cancer cells has been studied and documented, blood-based tests have not been well investigated [30–32], although they might provide an easy laboratory diagnostic tool. Therefore, we studied the expression of human platelet 12-LOX and PAI-1 in the blood of patients to find whether any correlation exists between their concentration and the stage of prostate disease.

## 2. Materials and Methods

### 2.1. Patients

The study involved 149 men (age 70 ± 9 years) qualified for diagnostic biopsy of the prostate. The criteria for inclusion in the study were a positive digital rectal examination (DRE) result, a PSA level above the upper limit of the reference value of 4 ng/mL, and a positive transrectal ultrasound (TRUS) result. The study excluded patients with previously diagnosed cancer, regardless of its location and nature. Patients taking aspirin, warfarin, COX inhibitors, or heparin were also excluded, since these drugs may affect the level or activity of 12-LOX and/or PAI-1 [33–37].

In all patients, the volumes of the entire prostate, the adenoma, and the cancer foci were determined by TRUS.
Biopsies were taken from all patients, 12 samples each, in the following way: biopsies 1–4 from the suspected foci, biopsies 5–8 from the opposite lobe of the prostate, and biopsies 9–12 from the lobe containing the suspected foci. Targeted biopsy was performed on patients with suspected pathological growth. Formaldehyde-fixed, paraffin-embedded slides were examined by a pathologist, who determined the type of cancer (or lack of it), tumor grade, and Gleason sum. Based on the results of the histopathological examination of the biopsy material, the patients were divided into two groups (with prostate cancer, n = 116, and without prostate cancer, n = 33). The study was approved by the Bioethical Committee for Scientific Research at the Medical University of Gdańsk. Each patient was informed of the objectives and principles of the study beforehand and signed an informed consent to participate in it.

### 2.2. Blood

For all patients, the following parameters were determined in the hospital laboratory by routine tests: hemoglobin (HGB), hematocrit (HCT), red blood cells (RBC), and white blood cells (WBC). Citrated blood samples were divided into two parts: one was centrifuged at 100 ×g for fifteen minutes to obtain platelet-rich plasma, which was frozen and stored at −20°C for the determination of PAI-1, 12-LOX, and thromboglobulin (TGB); the other was centrifuged at 1500 ×g for ten minutes for the determination of PSA, C-reactive protein (CRP), the international normalized ratio of blood clotting (INR), and the activated partial thromboplastin time (APTT).

### 2.3. ELISA Kits

Thromboglobulin was assayed with Asserachrom-TBG, product number REF 00950, from Diagnostica Stago Inc., Mount Olive, NJ, USA. PAI-1 was analyzed with the active human PAI-1 functional assay ELISA kit HPAIKIT from Molecular Innovations, Novi, MI 48377, USA. 12-LOX was analyzed with the IMUBIND 12-Lipoxygenase ELISA, product number ADG872, from American Diagnostica GmbH, Pfungstadt, Germany.

### 2.4. Statistical Analysis

Statistical analysis was done using Statistica 10 (StatSoft Polska Sp. z o.o., Kraków, Poland) with the nonparametric Mann-Whitney U test, the chi-square test, and analysis of the Pearson correlation coefficient. The level of significance was established as P ≤ 0.05.
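To make Section 2.4 concrete, the short sketch below reproduces the kinds of comparisons described there with SciPy: a two-sided Mann-Whitney U test between hypothetical BPH and cancer groups and a Pearson correlation between two laboratory parameters. The study itself used Statistica 10, and all values here are invented placeholders, not patient data.

```python
# Illustrative only: placeholder numbers, not measurements from this study.
from scipy import stats

# Hypothetical 12-LOX concentrations (ng/mL) in platelet-rich plasma
lox_bph = [219.0, 137.0, 305.0, 98.0, 412.0, 176.0]
lox_cancer = [56.0, 144.0, 31.0, 88.0, 203.0, 47.0]

# Nonparametric two-group comparison, as used for the tables below
u_stat, p_value = stats.mannwhitneyu(lox_bph, lox_cancer, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, P = {p_value:.4f}")  # significant if P <= 0.05

# Pearson correlation between paired parameters (e.g., 12-LOX vs. PLT)
plt_counts = [207.0, 198.0, 251.0, 180.0, 240.0, 215.0]  # hypothetical platelet counts
r, p_corr = stats.pearsonr(lox_bph, plt_counts)
print(f"Pearson r = {r:.2f}, P = {p_corr:.4f}")
```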
## 3. Results and Discussion In addition to BPH and prostate cancer, the enrolled patients were diagnosed with diabetes (12%), hypertension (42%), chronic obstructive pulmonary disease (6%), coronary disease (94%), diabetes (10.3%), and hypertension (50.8%). The patients in the study were divided into two groups: with prostate cancer and with benign prostatic hyperplasia (BPH). No difference in the prevalence of any of these diseases was observed between the BPH and prostate cancer patients. Blood work also revealed no differences in the values of CRP, APTT, HGB, HCT, WBC, RBC, PLT, and TBG between these two groups (data not shown). As can be seen in Table 1, patients with BPH were younger, had larger prostate and adenoma volumes, and, as expected, had significantly lower PSA than those with prostate cancer.

Table 1. Characteristics of the study group.

| Parameter | BPH, mean ± SD (median) | Prostate cancer, mean ± SD (median) | P |
|---|---|---|---|
| Age of patients (years) | 67.3 ± 9.9 (65) | 71.2 ± 8.5 (72) | 0.02 |
| Volume of prostate (mL) | 64.4 ± 32.5 (55.7) | 50.1 ± 24.4 (47.7) | 0.01 |
| Volume of adenoma (mL) | 31.1 ± 20.8 (25.1) | 22 ± 15.8 (18.3) | 0.004 |
| Volume of cancer foci (mL) | — | 0.5 ± 0.9 (0.2) | — |
| PSA (ng/mL) | 6.6 ± 4.8 (5.0) | 58.4 ± 302.0 (8.6) | 0.0004 |
| Gleason grade ≤ 6 | — | 63% | — |
| Gleason grade > 6 | — | 37% | — |

Table 2 shows that the expression of 12-LOX in platelet-rich plasma was significantly lower in prostate cancer patients than in the BPH population, and that normalization to PLT and TBG increases the statistical significance (Figure 1). Differences between the groups in all other tested parameters were not statistically significant.

Table 2. Raw expression of 12-LOX and PAI-1, and expression normalized to TBG and PLT, in BPH and prostate cancer patients.

| Parameter | BPH, mean ± SD (median) | Prostate cancer, mean ± SD (median) | P |
|---|---|---|---|
| 12-LOX (ng/mL) | 219.6 ± 209.3 (137) | 144.6 ± 304.8 (56) | 0.0001 |
| PAI-1 (U/mL) | 447.0 ± 345.8 (367) | 610.8 ± 483.9 (441) | 0.1 |
| TBG (kU) | 5.81 ± 6.02 (2.8) | 6.21 ± 4.03 (6.7) | 0.054 |
| PLT (10³/mm³) | 207 ± 55 (208) | 219 ± 69 (211) | 0.6 |
| 12-LOX/TBG | 83.2 ± 111.2 (52.3) | 34.1 ± 77.9 (9.3) | 0.000005 |
| PAI-1/TBG | 217.5 ± 321.6 (120) | 151.7 ± 180.9 (90) | 0.2 |
| TBG/PLT | 0.02 ± 0.02 (0.01) | 0.03 ± 0.02 (0.02) | 0.13 |
| 12-LOX/PLT | 1.07 ± 0.97 (0.8) | 0.66 ± 1.40 (0.25) | 0.00003 |
| PAI-1/PLT | 2.28 ± 1.68 (1.7) | 2.98 ± 2.57 (2.2) | 0.2 |
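The box-and-whisker panels of Figure 1 (caption below) follow a standard construction. As a hedged illustration, the following matplotlib sketch draws one such panel with 5–95% whiskers and mean markers, as described in the figure legend; the per-patient arrays are hypothetical values, not the study data.

```python
# Sketch of one Figure-1-style panel (hypothetical values, not the study data).
import matplotlib.pyplot as plt

lox_bph = [310, 220, 137, 180, 95, 260]   # 12-LOX, ng/mL, BPH group
lox_pca = [56, 140, 48, 75, 61, 120]      # 12-LOX, ng/mL, cancer group

fig, ax = plt.subplots()
# whis=(5, 95) places the whiskers at the 5th/95th percentiles, as in the
# figure legend; showmeans=True adds the marker for the group average.
ax.boxplot([lox_bph, lox_pca], whis=(5, 95), showmeans=True)
ax.set_xticklabels(["BPH", "Prostate cancer"])
ax.set_ylabel("12-LOX (ng/mL)")
plt.show()
```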
Figure 1. Expression of 12-LOX in the platelet-rich plasma of BPH and prostate cancer patients; normalization to PLT and TBG greatly increases the sensitivity of the comparison. Box-and-whisker plots of the expression of 12-LOX (P = 0.0001) (a), 12-LOX normalized to PLT (P = 0.00003) (b), and 12-LOX normalized to TBG (P = 0.000005) (c) for BPH and prostate cancer. The solid horizontal line inside each box represents the median, the small square gives the average, the box encompasses results within the 25th–75th percentiles, and the whiskers mark values between the 5th and 95th percentiles.

As shown in Table 3, there were no differences in the tested parameters when the prostate cancer patients were divided into groups according to Gleason grade ≤6 and >6.

Table 3. Raw expression of 12-LOX and PAI-1, and expression normalized to TBG and PLT, in prostate cancer patients with different Gleason grades.

| Parameter | Gleason grade ≤ 6, mean ± SD (median) | Gleason grade > 6, mean ± SD (median) | P |
|---|---|---|---|
| 12-LOX (ng/mL) | 158.5 ± 354.0 (56) | 112.8 ± 133.7 (63) | 0.8 |
| PAI-1 (U/mL) | 577.3 ± 442.2 (428) | 687.3 ± 568.6 (499) | 0.3 |
| TBG (kU) | 6.5 ± 4.5 (7.75) | 5.5 ± 2.7 (5.4) | 0.4 |
| PLT (10³/mm³) | 209.9 ± 49.9 (210) | 241.3 ± 97.1 (219) | 0.2 |
| 12-LOX/TBG | 35.6 ± 80.8 (8.8) | 30.7 ± 71.7 (11.4) | 0.5 |
| PAI-1/TBG | 135.8 ± 155.3 (80) | 188.6 ± 227.9 (90) | 0.6 |
| TBG/PLT | 0.03 ± 0.02 (0.03) | 0.025 ± 0.01 (0.02) | 0.3 |
| 12-LOX/PLT | 0.75 ± 1.6 (0.25) | 0.47 ± 0.53 (0.27) | 0.9 |
| PAI-1/PLT | 2.95 ± 2.6 (2.1) | 3.0 ± 2.6 (2.4) | 0.7 |

This study included only BPH and prostate cancer patients. Healthy individuals were excluded for ethical reasons: defining a person as "healthy" would require verification by blood work, digital rectal examination, and biopsy, and biopsy in particular was considered unethical for an asymptomatic person. Cancer markers indicate a high probability of the existence of cancer in the body, and most markers are assayed by analysis of blood plasma [38]. It is expected that the concentration of tumor markers in the plasma or urine of patients with cancer should differ considerably from the values typically observed in healthy subjects [39]. This assumption results from the positive relationship between the mass of cancer cells and the amount of the substance produced by them [40]. At this moment, markers of prostate cancer cannot precisely select a group at risk of disease progression [41, 42]. 12-LOX and PAI-1, together with the products of the reactions they catalyze, seem to point to an interesting direction of research [30, 43]. In our initial studies we measured 12-LOX expression in serum and found that it was lower in prostate cancer patients than in healthy individuals and BPH patients; however, the limited number of individuals in each group did not allow us to establish statistically significant differences [44]. Other studies show promise but are difficult to compare, because some of them analyzed prostate tissue while others were done in plasma [26, 45, 46].
Also, one study analyzed gene expression, another reported protein levels, and in yet others the enzyme activity of 12-LOX and PAI-1 in plasma was determined [47, 48]. Disparities were also observed in the method of selecting patient groups: studies compared results for patients without and with cancer, where "without cancer" could mean healthy individuals but also patients with benign prostatic hyperplasia [8, 14, 49, 50]. Moreover, the laboratory tests for the determination of 12-LOX and PAI-1 used antibodies of different specificity. These diversities make comparison of the results difficult and rather questionable. The higher PSA, rising with the severity of cancer, and the lower volume of the cancerous prostate confirm not only the generally accepted standards of diagnosis and management of prostate cancer, but also the proper selection of patients in the study group. To improve the results of our study, we normalized the assayed parameters to the number of platelets and to TBG in platelet-rich plasma. Each platelet-rich plasma sample was frozen for storage (not exceeding 12–16 weeks) and thawed immediately before the appropriate tests were performed, to guarantee uniform conditions for the release of TBG and the other proteins determined in platelet-rich plasma [51]. Analyzing the results for the concentration of TBG in platelet-rich plasma, we observed that the difference in TBG levels (P = 0.054, Table 2) was close to the chosen level of significance of P ≤ 0.05, with the mean TBG value higher in the cancer group. This somewhat diminished level of statistical significance may be due to the fact that the control group consisted of patients with BPH rather than prostate-trouble-free individuals. The activity of platelet 12-LOX in prostate cancer has been investigated extensively and tied to angiogenesis. Nie et al. examined 12-LOX concentration in prostate cancer cell lines using antibodies and postulated that increased 12-LOX expression stimulates prostate tumor growth and activates angiogenesis [13]. Elevated levels or activity in cancer tissue have been reported by others as well [30, 52–55]; thus, our finding of a reduced amount of 12-LOX in the platelet-rich plasma of prostate cancer patients was somewhat surprising. One explanation is that the prostate volume in BPH patients was larger than in prostate cancer patients: if BPH and cancerous glands release a steady amount of 12-LOX into the blood per unit of gland volume, then the blood level of this protein will indeed depend on gland volume. It is worth emphasizing that the mean total prostate volume in cancer patients was 65% of that in the BPH group, and the 12-LOX expression in the platelet-rich plasma of the cancer group was likewise 65% of that in the BPH patients. Moreover, the mean volume of the cancer foci was only ~1% of the prostate volume in prostate cancer patients, so their impact on the total secretion of 12-LOX into blood could be limited. The other possibility is that lower 12-LOX expression is an intrinsic property of the cancer. Observations from cell lines are consistent with results from human tissue in that the 12-LOX level grows with the progression of cancer. However, Gohara et al. reported that the expression of 12-LOX in normal kidney tissue is higher than in low-grade, low-stage forms of this cancer, rising again in terminal malignancy to approach, but not quite reach, the level observed in normal tissue samples [56]. Thus this mechanism requires further investigation.
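In compact form, the gland-volume explanation sketched above amounts to the following proportionality (our paraphrase; it assumes a constant 12-LOX secretion rate per unit of gland volume, which is not established by the data):

$$\frac{[\text{12-LOX}]_{\text{cancer}}}{[\text{12-LOX}]_{\text{BPH}}} \approx \frac{V_{\text{cancer}}}{V_{\text{BPH}}}$$

Under this assumption, roughly matching ratios are what one would expect if the gland as a whole, rather than the small cancer foci, dominates the circulating 12-LOX pool.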
Expression of 12-LOX also depends on the type of cancer. For example, in a study of 86 biopsy-proven breast cancer patients, a significant increase in serum 12-LOX levels was reported in the patients (40 ng/mL) as compared to healthy controls (13 ng/mL) (P < 0.0001). Moreover, serum 12-LOX levels were significantly higher (P < 0.002) in patients with metastasis to the lymph nodes, and over 75% of patients showed a significant (P < 0.0001) reduction of 12-LOX levels after chemotherapy [57]. We have not seen any significant differences in the expression of PAI-1 in platelet-rich plasma between BPH and prostate cancer patients, or between groups with different Gleason grades (Table 3), despite many reports stating that PAI-1 is overexpressed in prostate cancer [43, 58, 59] and although uPA and its receptor are overexpressed on the surface of cancer cells. However, when PAI-1 binds to uPA-uPAR complexes, it interacts with LRP, leading to internalization of PAI-1/uPA-uPAR/LRP into the cancer cells; PAI-1 and uPA are degraded, while uPAR and LRP are recycled to the cell surface [25, 60–63]. Thus PAI-1 might not be secreted into the blood stream. Although the expression of 12-LOX and PAI-1, raw and normalized to TBG and PLT, clearly did not differ statistically between prostate cancer patients with different Gleason grades (Table 3), some trend was observed: concentrations of 12-LOX were lower, while those of PAI-1 were higher, in the Gleason >6 group than in the Gleason ≤6 group. Together with other parameters, the Gleason grading system helps evaluate the prognosis of men with prostate cancer. Cancers with a higher Gleason score are more aggressive and have a worse prognosis, but this score by itself cannot predict outcome precisely [64]. Thus, it is possible that the expression of 12-LOX and PAI-1 is related to other parameters, such as disease outcome or survival, which we are going to monitor in future studies. ## 4. Conclusion No significant difference in platelet-rich plasma was noted for PAI-1 levels or for the 12-LOX and PAI-1 ratios between patients with cancer and BPH. Therefore, the PAI-1 results in this study do not meet the conditions expected of a prostate cancer marker. The concentration of 12-LOX in the platelet-rich plasma of patients with prostate cancer is significantly lower than in patients with BPH; thus, a low concentration of 12-LOX might indicate an increased risk of developing prostate cancer, or the onset of the disease, in periodically monitored patients. Standardization of the expression of 12-LOX in platelet-rich plasma to the concentration of TBG and the number of PLT significantly increases the sensitivity of the test, which could be used as a biomarker for the assessment of prostate cancer risk. --- *Source: 102478-2014-03-24.xml*
102478-2014-03-24_102478-2014-03-24.md
23,920
Evaluation of 12-Lipoxygenase (12-LOX) and Plasminogen Activator Inhibitor 1 (PAI-1) as Prognostic Markers in Prostate Cancer
Tomasz Gondek; Mariusz Szajewski; Jarosław Szefel; Ewa Aleksandrowicz-Wrona; Ewa Skrzypczak-Jankun; Jerzy Jankun; Wieslawa Lysiak-Szydlowska
BioMed Research International (2014)
Medical & Health Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2014/102478
102478-2014-03-24.xml
2014
# Advanced Extrauterine Pregnancy at 33 Weeks with a Healthy Newborn **Authors:** Tajudeen Dabiri; Guillermo A. Marroquin; Boleslaw Bendek; Enyonam Agamasu; Magdy Mikhail **Journal:** BioMed Research International (2014) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2014/102479 --- ## Abstract Abdominal pregnancy is a very rare form of ectopic pregnancy, associated with high morbidity and mortality for both fetus and mother. It is most often seen in low-resource nations, where early diagnosis is a major challenge owing to poor prenatal care and a lack of medical resources. An advanced abdominal pregnancy with a good fetal and maternal outcome is therefore an extraordinary occurrence in the modern developed world. We present a case of an abdominal pregnancy at 33.4 weeks in an individual with no documented prenatal care, who presented to a hospital in the Bronx on June 25th, 2014, with generalized, severe lower abdominal pain. Because of a category III fetal heart tracing, an emergent cesarean section was performed. At the time of laparotomy the fetus was located in the pelvis covered by the uterine serosa, with distortion of the entire right adnexa and invasion of the right parametrium. The placenta invaded the pouch of Douglas and the lower part of the sigmoid colon. A massive hemorrhage ensued, necessitating a supracervical hysterectomy. A viable infant was delivered, and the mother was discharged on postoperative day 4. --- ## Body ## 1. Introduction Symptoms of an abdominal pregnancy are very nonspecific and often include abdominal pain, nausea, vomiting, palpable fetal parts, fetal malpresentation, pain on fetal movement, and displacement of the cervix. With the remarkable advances in radiographic technology, early discovery of an extrauterine pregnancy should be a practicable endeavor. This is particularly important in communities with an increased number of immigrants from low-resource nations [1]. The prevalence of ectopic pregnancy is 1-2%, with 95% occurring in the fallopian tube. The incidence of abdominal pregnancy ranges from 1 : 1000 to 1 : 30,000 depending on the community, is most commonly seen in developing nations of the world [2, 3], and represents approximately 1–1.4% of all ectopic pregnancies [4–6]. The first documented case of abdominal pregnancy was reported in the year 1708, followed by numerous case reports, particularly from middle- and low-income regions of the world [7]. Frequently, the diagnosis was made on the basis of complications such as hemorrhage and abdominal pain at the time of laparotomy. Most often the pregnancy did not survive, resulting in extraction of the dead fetus and increased maternal mortality. In the developed world, abdominal pregnancy is extremely rare, and very few such cases have been published in the last 10 years. It is unclear whether abdominal pregnancy is a result of secondary implantation from an aborted tubal pregnancy or of primary implantation by intra-abdominal fertilization.
Associated risks for developing abdominal pregnancy are endometriosis, pelvic inflammatory disease, assisted reproductive techniques, tubal occlusion, and multiparity [8–10]. In view of the rarity of advanced abdominal pregnancy and the lack of management guidelines, we present this case in order to describe the associated symptoms that could lead to early recognition, and the successful management that resulted in a good maternal and fetal outcome. ## 2. Case Report A 27-year-old G2P0010 at 33 weeks and 4 days by last menstrual period was brought to the hospital by emergency medical services on June 25th, 2014, complaining of severe abdominal pain of 1 hour's duration. The patient had no medical or surgical history apart from a previous termination of pregnancy. The abdominal pain was generalized, 10 out of 10 in severity, and associated with vomiting. She denied any diarrhea, vaginal bleeding, or leakage of amniotic fluid. She had recently emigrated from the Dominican Republic in May 2014, with no record of prenatal care. On examination, the patient was in visible pain, with elevated blood pressure, maternal tachycardia, and bilious emesis. Abdominal examination revealed generalized tenderness with guarding and rebound and a fundal height of 34 cm. The fetal heart rate was category III, with absent variability and repetitive late decelerations. Vaginal examination revealed a bulging pouch of Douglas with the presenting part deep in the pelvis: a short, firm, and closed cervix displaced anteriorly behind the pubic symphysis. On the way to the operating room, a limited bedside sonogram revealed a fetus in cephalic presentation and a questionable placental location. A tentative diagnosis of uterine rupture versus concealed placental abruption was made, and immediate abdominal delivery was undertaken. At the time of laparotomy, meconium-stained amniotic fluid was seen upon entry into the peritoneal cavity. The fetus was located outside the endometrial cavity, covered only by the uterine serosa on the right side, with the placenta attached to the serosa of the uterus. The left ovary was unremarkable in appearance, and an anatomical distortion of the right adnexa was appreciated. A large opening was noted on the posterior aspect of the serosa, where the amniotic fluid was leaking. An incision was made over the protruding serosa, and a viable female infant was delivered from a cephalic presentation, with Apgar scores of 9 and 9 at 1 and 5 minutes and a weight of 2362 g. The uterus and placenta were exteriorized after delivery because of massive bleeding and distortion of the anatomy (Figure 1). On further inspection, the placenta was noted to invade the pouch of Douglas, the lower part of the sigmoid colon, and the right uterine serosa. Figure 1 Placental location and uterus after delivery of the baby; note the size and integrity of the uterus, with a large placenta in the abdominal cavity. A massive hemorrhage protocol was initiated and an emergency back-up team was called. A general surgery consult was requested because of the bowel involvement. Owing to continuous bleeding, the decision was made to proceed with hysterectomy and removal of the placental tissue. The patient underwent supracervical hysterectomy and excision of the placental tissue occupying the right side of the pelvic floor. Adhesiolysis from the sigmoid colon was performed by the surgical team with minimal damage to the serosa. Intraoperatively, the patient received 6 units of packed red blood cells, 4 units of fresh frozen plasma, and one unit of platelets.
Estimated blood loss was 3000 mL. The patient was then transferred to the ICU for further observation and was extubated the following morning. She was discharged home with the baby on day 4 after surgery. There was no evidence of any anomaly documented in the baby. Mother and baby are doing well and are currently being followed up closely. The pathology report described a placenta with a segment of three-vessel umbilical cord and marked old infarcts at the fetal and maternal surfaces. Attached to the maternal surfaces were fibrous tissues with smooth muscle and dilated vessels, focal endovasculopathy with luminal occlusion, and focal amnion with squamous metaplasia, with an attached stretched ovary and a fragment of mostly chorionic villi. The uterus was described as intact; it weighed 300 g and measured 9.5 cm in length, 11 cm from cornua to cornua, and 6 cm in anteroposterior diameter, with a thickened endometrium showing decidual changes and focal autolysis; no chorionic villi or trophoblast were seen in the endometrium. ## 3. Discussion Primary abdominal pregnancy refers to an extrauterine pregnancy in which implantation of the fertilized ovum occurs directly in the abdominal cavity, while secondary abdominal pregnancy is a tubal pregnancy that ruptures with reimplantation within the abdominal cavity, usually resulting in tubal or ovarian damage [10]. In this report, the findings of recurrent pain throughout pregnancy, especially during fetal movement, signs of peritonitis on the day of presentation with free fluid in the abdomen, and intraoperative distortion of the right ovary and fallopian tube are more indicative of a ruptured tubal pregnancy with secondary implantation on the serosa and the right broad ligament. Nunyaluendo and Einterz [11], in a recent review of 163 cases of abdominal pregnancy, revealed that identification of this condition is often missed, with only 45% of cases diagnosed during the prenatal period. In this case, the patient did not have any prenatal care and had a history of intermittent pain throughout the pregnancy. Another factor to consider is that she had a previous first-trimester termination of pregnancy via suction curettage, in 2012, that could have caused a defect in the uterus. Interestingly, the most common symptoms in abdominal pregnancy are abdominal pain (100%), nausea and vomiting (70%), and general malaise (40%) [12]. Our patient had sudden severe abdominal pain with vomiting one hour prior to presentation to the hospital. A high index of suspicion for possible rupture of the uterus versus abdominal pregnancy should always be maintained when fetal parts are easily palpable on abdominal examination together with signs and symptoms of an acute abdomen. Moreover, a fetal head bulging through the pouch of Douglas and displacing the cervix into the retropubic space on vaginal examination, as described above, is a concerning finding. An abdominal pregnancy is often associated with fetal deformities [13], such as facial and cranial asymmetry, joint abnormalities and limb deformity, and central nervous system deformities, in about 21% of cases. In our case, there was no evidence of deformity or abnormalities per the team of pediatricians. Bleeding from the placental implantation site can be massive and life threatening and is the most common cause of maternal mortality, which can reach as high as 20–30%. The decision to remove or leave the placenta should depend on the extent of placentation, particularly bowel and omental involvement, as well as on the expertise of the surgeon.
Because of increased postoperative morbidity and mortality, it is not advisable to leave the placenta in situ [13]. In this case, because of the involvement of the broad ligament on the right side, with distortion of the ovary and tube on the same side, and the extension of part of the placenta to a small portion of the sigmoid colon posteriorly, the decision was made intraoperatively to perform a supracervical hysterectomy to obtain adequate hemostasis. In our case a massive transfusion protocol was applied as per hospital protocol [14]. ## 4. Conclusion A high index of suspicion and recognition of the signs and symptoms are therefore instrumental to the diagnosis and guide prompt surgical management. In patients with acute symptoms and a lack of prenatal care, abdominal pregnancy should always be part of the differential diagnosis. Prompt delivery of the fetus, control of hemorrhage, and the decision on placenta removal are the greatest challenges. Adequate personnel, including anesthesia, pediatrics, and general surgery, may be necessary for successful management. --- *Source: 102479-2014-12-03.xml*
102479-2014-12-03_102479-2014-12-03.md
11,356
Advanced Extrauterine Pregnancy at 33 Weeks with a Healthy Newborn
Tajudeen Dabiri; Guillermo A. Marroquin; Boleslaw Bendek; Enyonam Agamasu; Magdy Mikhail
BioMed Research International (2014)
Medical & Health Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2014/102479
102479-2014-12-03.xml
--- ## Abstract Abdominal pregnancy is a very rare form of ectopic pregnancy, associated with high morbidity and mortality for both fetus and mother. It is, and often, seen in poor resource nations, where early diagnosis is often a major challenge due to poor prenatal care and lack of medical resources. An advanced abdominal pregnancy with a good fetal and maternal outcome is therefore a more extraordinary occurrence in the modern developed world. We present a case of an abdominal pregnancy at 33.4 weeks in an individual with no documented prenatal care, who arrived in a hospital in the Bronx, in June 25th 2014, with symptoms of generalized, severe lower abdominal pain. Upon examination it was found that due to category III fetal tracing an emergent cesarean section was performed. At the time of laparotomy the fetus was located in the pelvis covered by the uterine serosa, with distortion of the entire right adnexa and invasion to the right parametrium. The placenta invaded the pouch of Douglas and the lower part of the sigmoid colon. A massive hemorrhage followed, followed by a supracervical hysterectomy. A viable infant was delivered and mother discharged on postoperative day 4. --- ## Body ## 1. Introduction Symptoms of an abdominal pregnancy are very nonspecific and often include abdominal pain, nausea, vomiting, palpable fetal parts, fetal mal presentation, pain on fetal movement, and displacement of the cervix.With remarkable advances in radiographic technology an early discovery of an extrauterine pregnancy should be a practicable endeavor. This is particularly important in a community where there are an increased number of immigrants from low resource nations [1].The prevalence of ectopic pregnancy is 1-2% with 95% occurring in the fallopian tube. The incidence of abdominal pregnancy ranges from 1 : 1000 to 1 : 30,000 depending on the community but is most commonly seen in developing nations of the world [2, 3], which represent approximately 1–1.4% of all ectopic pregnancies alone [4–6]. The first documented case of abdominal pregnancy was reported in the year 1708, followed by numerous case reports particularly from middle and low income regions of the world [7]. Frequently, the diagnosis was made based on complications such as hemorrhage and abdominal pain at the time of laparotomy. Most often, the pregnancy did not survive and often resulted in extraction of the dead fetus with increased maternal mortality.In the developed world, abdominal pregnancy is extremely rare and very few of such cases have been published in the last 10 years. It is unclear if abdominal pregnancy is a result of secondary implantation from an aborted tubal pregnancy or result of primary implantation from intra-abdominal fertilization. Associated risks for developing abdominal pregnancy are endometriosis, pelvic inflammatory disease, assisted reproductive techniques, tubal occlusion, and multiparity [8–10].In view of rarity and lack of management guidelines of advanced abdominal pregnancy, we expose this case of abdominal pregnancy in order to present the symptoms associated that could lead to an early recognition and the successful management that resulted in a good maternal and fetal outcome. ## 2. Case Report A 27-year-old G2P0010 at 33 weeks and 4 days by last menstrual period was brought in by Emergency System to the hospital on June 25th 2014, with complaints of severe abdominal pain of 1 hour duration. Patient was without medical or surgical history and had a termination of pregnancy before. 
Abdominal pain was generalized, 10 out of 10 in severity, and associated with vomiting. She denied any diarrhea, vaginal bleeding, or leakage of amniotic fluid. She had recently migrated from the Dominican Republic in May 2014 with no record of prenatal care.On examination, patient was in visible pain with elevated blood pressure, maternal tachycardia, and bilious emesis. An abdominal examination revealed generalized tenderness with guarding and rebound and a fundal height of 34 cm. The fetal heart rate was category III with absent variability and repetitive late decelerations. A vaginal examination revealed a bulging pouch of Douglas with the presenting part deep in the pelvis: a short, firm, and closed cervix displaced anteriorly behind the pubic symphysis.On the way to the operating room limited bed side sonogram revealed fetus in cephalic and a questionable placental location. A tentative diagnosis of uterine rupture versus concealed placental abruption was made proceeding with immediate abdominal delivery.At the time of laparotomy, meconium stained amniotic fluid was seen upon entry to the peritoneal cavity. A fetus was located outside of the endometrial cavity covered only by the uterine serosa on the right side with a placenta attachment to the serosa of the uterus. The left ovary was unremarkable in appearance and an anatomical distortion of the right adnexa was appreciated. A large opening was noted on the posterior aspect of the serosa where the amniotic fluid was leaking.An incision was made on the protruding serosa and a viable female infant was delivered via cephalic presentation with Apgar score of 9/9 at 1 and 5 minutes with weight of 2362 g. The uterus and placenta were exteriorized after delivery due to massive bleeding and distortion of the anatomy (Figure1). On further inspection of the placenta, it was noted to invade the pouch of Douglas and lower part of the sigmoid colon and the right uterine serosa.Figure 1 Representing placenta location and uterus after delivery of the baby, to note the size and the integrity of the uterus with a large placenta in the abdominal cavity.A massive hemorrhage protocol was initiated and an emergency back-up team was called. A general surgical consult was requested due to involvement of bowel. The decision was made to proceed on hysterectomy and removal of the placenta tissue due to continuous bleeding. The patient underwent supracervical hysterectomy and excision of the placenta tissue occupying the right side of the pelvic floor. Adhesiolysis from the sigmoid colon was performed by surgery with minimal damage to the serosa.Intraoperatively, the patient received 6 units of packed red blood cells, 4 units of fresh frozen plasma, and one unit of platelets. Estimated blood loss was 3000 mL. The patient was then transferred to the ICU for further observation and extubated the following morning.She was discharged home with the baby on day 4 after surgery. There was no evidence of anomaly documented in the baby. Mother and baby are doing well and currently being followed up closely.A pathology report revealed that placenta with a segment of trivessel umbilical cord marked old infarct at fetal and maternal surfaces. Attached to the maternal surfaces are fibrous tissues with smooth muscle and dilated vessels. 
Focal endovasculopathy with luminal occlusion, focal amnion with squamous metaplasia with an attached stretched ovary and fragment of mostly chorionic villi.The uterus was described as intact and weighed 300 g measuring 9.5 cm in length, 11 cm from cornua to cornua and 6 cm anterior posterior diameter with thick endometrial, decidual changes and focal autolysis, no chorionic villi or trophoblast are seen in the endometrium. ## 3. Discussion Primary abdominal pregnancy refers to an extrauterine pregnancy where implantation of fertilized ovum occurs directly in the abdominal cavity while the secondary abdominal pregnancy is a tubal pregnancy that ruptures with reimplantation within the abdominal cavity usually resulting in tubal or ovarian damage [10].In this report, the findings of recurrent pain throughout pregnancy especially during fetal movement, signs of peritonitis on day of presentation with free fluid in the abdomen, and findings of intraoperative distortion of the right ovary and fallopian tube are more indicative of a ruptured tubal pregnancy with a secondary implantation on the serosa and the right broad ligament. Nunyaluendo and Einterz [11], in a recent review of 163 cases of abdominal pregnancy, revealed that identification of this condition is often missed with only 45% cases diagnosed during the prenatal period. In this case, patient did not have any prenatal care and had history of intermittent pain throughout the pregnancy. Another factor to consider is the fact that she had a previous termination of pregnancy in the first trimester via suction curettage previously to this pregnancy in 2012 that could cause a defect in the uterus.Interestingly, the most common symptoms in abdominal pregnancy are abdominal pain 100%, nausea and vomiting 70%, and general malaise 40% [12]. Our patient had sudden severe abdominal pain with vomiting one hour prior to presentation to the hospital. A high index of suspicion for possible rupture of uterus versus abdominal pregnancy should be always considered when the fetal parts are easily palpated on abdominal examination and signs and symptoms of an acute abdomen. However a vaginal examination revealed fetal head bulging through the pouch of Douglas displacing the cervix into the retropubic space as described before is a concerning finding.An abdominal pregnancy is often associated with fetal deformities [13], such as facial and cranial asymmetry, joint abnormalities and limb deformity, and central nervous deformities in about 21% of cases. In our case, there was no evidence of deformity or abnormalities as per the team of pediatricians.Bleeding from placental implantation site could be massive and life threatening and is often the most common cause of maternal mortality which can reach as high as 20–30%. The decision to remove or leave the placenta should depend on extent of the placentation particularly with the bowel and omental involvement as well as on the expertise of the surgeon. Because of increased postoperative morbidity and mortality, it is not advisable to leave the placenta in situ [13]. In this case, because of the involvement of the broad ligament on the right side with distortion of the ovary and tube on the same side and extension of part of the placenta to small portion of the sigmoid colon posteriorly the decision was made intraoperatively for a supracervical hysterectomy to obtain adequate hemostasis. In our case massive transfusion protocol was applied as per hospital protocol [14]. ## 4. 
A high index of suspicion and recognition of the signs and symptoms are therefore critical to the diagnosis and guide prompt surgical intervention. In patients with acute symptoms and a lack of prenatal care, abdominal pregnancy should always be part of the differential diagnosis. Prompt delivery of the fetus, control of hemorrhage, and the decision regarding placental removal are the greatest challenges. Adequate personnel, including anesthesiologists, pediatricians, and general surgeons, may be necessary for successful management. --- *Source: 102479-2014-12-03.xml*
# Lane Departure Avoidance Control for Electric Vehicle Using Torque Allocation

**Authors:** Yiwan Wu; Zhengqiang Chen; Rong Liu; Fan Li
**Journal:** Mathematical Problems in Engineering (2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1024805

---

## Abstract

This paper focuses on the lane departure avoidance system for a four in-wheel-motor drive electric vehicle, aiming at preventing lane departure under dangerous driving conditions. The control architecture for the lane departure avoidance system is hierarchical. In the upper controller, the desired yaw rate was calculated with consideration of the vehicle-lane deviation, the vehicle dynamics, and the limitation of road adhesion. In the middle controller, a sliding mode controller (SMC) was designed to control the additional yaw moment. In the lower layer, the yaw moment was produced by the optimal distribution of driving/braking torque among the four wheels. Lane departure avoidance was carried out by tracking the desired yaw response. Simulations were performed to study the effectiveness of the control algorithm in Carsim®/Simulink® cosimulation. Simulation results show that the proposed method can effectively confine the vehicle in its lane and prevent lane departure accidents.

---

## Body

## 1. Introduction

In the last decade, a large portion of highway traffic accidents leading to heavy casualties [1] have been caused by drivers' inattention, drowsiness, or fatigue. In particular, unintended lane departure accidents account for more than 15% of the traffic accidents that occurred over the last 10 years in Germany [2]. Moreover, road departure accounted for 28% of fatal traffic accidents in the USA in 2005 [3]. To prevent unintended lane departure accidents, various lane keeping assistance systems (LKAS) and lane departure avoidance systems (LDAS) [4] have been developed to automatically adjust the vehicle's dynamics or trajectory so as to confine the vehicle in its driving lane.

To realize LKAS or LDAS, several types of control inputs have been pursued in the literature. According to the means of active control, LDAS can be classified into three types: systems using steering control, systems using differential braking control, and systems using differential driving/braking control.

Steering control, which overlays a steering torque or a steering angle via a DC motor mounted on the steering column, has been deeply and widely investigated [5–9]. As the steering wheel is controlled by the LDAS and the driver simultaneously, interference and conflicts between the driver and the control system are the major challenges of lane departure avoidance control with only a front steering wheel angle/torque input [10]. Lane departure avoidance control with four-wheel steering has two independent inputs, namely, the front- and rear-steering angles. Four-wheel steering provides lane departure avoidance performance in lateral and yaw motions superior to that of a front steering angle/torque input alone [11]. However, four-wheel steering has not been widely used in passenger cars.

Control of the vehicle's lateral dynamics using the differential braking technique was proposed by Pilutti et al. in 1995 [12]. Nissan was the first to offer an LDAS using differential braking control [13]. LDAS of this kind have been theoretically investigated in the authors' previous works [14–16]. An LDAS with differential braking input can provide satisfactory lane departure avoidance performance and resolve the conflicts between the driver and the control system.
To keep the vehicle in the lane and avoid driver discomfort, lane departure assistance systems based on the coordinated control of steering and differential braking have been developed in previous research [9, 17–19].

The third technique is differential driving/braking control, in which the output torque of each wheel motor is individually controlled. A wheel motor can provide a more accurate and faster torque response than a hydraulic system. Distributing driving torque between the rear wheels has been used to fulfill lane keeping functions [20]. A four in-wheel-motor drive electric vehicle is able to adopt either driving or braking torque generated by the in-wheel motors, and the torque of each in-wheel motor can be independently and precisely controlled [21]. This means that improved lane departure avoidance control with high performance can be achieved.

In this paper, a new lane departure avoidance control method using differential driving/braking control is proposed for enhancing the active safety of electric vehicles. In Section 2, the architecture of the lane departure avoidance system is introduced. In Section 3, decision-making and vehicle dynamics planning are studied. In Section 4, a sliding mode controller (SMC) is designed to follow the desired dynamics. In Section 5, an optimal allocation algorithm is developed for torque allocation between the wheels. In Section 6, the simulation of the proposed method is performed. The contribution of this research and an introduction of future work are summarized in Section 7.

## 2. Structure of Lane Departure Avoidance System

The control architecture for the lane departure avoidance system is hierarchical, as shown in Figure 1. The upper controller is composed of decision-making and vehicle dynamics planning. Decision-making is conducted using the time to line cross (tTLC), the distance to lane center (dDLC), and the driver's intention. The desired vehicle dynamics planning is carried out by a preview driving model and a 2-DOF linear vehicle model. The error (Δγ) between the actual dynamics (γreal) and the desired dynamics (γd) is input into the middle controller, which tracks the desired dynamics through differential driving/braking. In the lower controller, an optimal torque allocation method is used to coordinate the output torque of all wheels.

Figure 1: Architecture of the proposed lane departure avoidance control method.

## 3. Upper Controller

### 3.1. Decision-Making

The rotational torque applied by the driver to the steering wheel and the state of the vehicle's steering switch can be combined to identify the driver's intention. If the steering torque is greater than 2 Nm or the steering switch is turned on, the control system assumes that the driver has a clear intention to manipulate the vehicle. Time to line cross (TLC) and distance to lane center (DLC) are used jointly to decide whether to turn lane departure avoidance control on or off.

Lane departure avoidance control will be turned on when either of the following conditions is satisfied:

① dDLC ≥ 0.75 m.
② tTLC ≤ 0.75 s.

Lane departure avoidance control will be turned off when any of the following conditions is satisfied:

① dDLC < 0.3 m and tTLC > 2 s.
② The driver's intention is obvious.
③ u ≤ 65 km/h.
④ Lane identification fails.

A minimal sketch of this switching logic is given below.
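The following Python sketch illustrates the on/off decision logic described above as a simple two-state machine. The threshold values are taken from the paper; the class and variable names (`LdasDecision`, `update`, and so on) are illustrative assumptions, not the authors' implementation.

```python
class LdasDecision:
    """Hysteresis-style on/off logic for lane departure avoidance control,
    following the TLC/DLC thresholds of Section 3.1 (a sketch, not the
    authors' implementation)."""

    def __init__(self):
        self.active = False

    def update(self, d_dlc, t_tlc, driver_intent, speed_kmh, lane_detected):
        """d_dlc: distance to lane center [m]; t_tlc: time to line cross [s]."""
        if not self.active:
            # Turn-on conditions (either one suffices).
            if d_dlc >= 0.75 or t_tlc <= 0.75:
                self.active = True
        else:
            # Turn-off conditions (any one suffices).
            if ((d_dlc < 0.3 and t_tlc > 2.0) or driver_intent
                    or speed_kmh <= 65.0 or not lane_detected):
                self.active = False
        return self.active


# Example: a vehicle drifting toward the lane boundary activates the system,
# and returning to the lane center deactivates it.
ldas = LdasDecision()
print(ldas.update(d_dlc=0.8, t_tlc=0.6, driver_intent=False,
                  speed_kmh=80.0, lane_detected=True))  # True
print(ldas.update(d_dlc=0.2, t_tlc=3.0, driver_intent=False,
                  speed_kmh=80.0, lane_detected=True))  # False
```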
### 3.2. Desired Vehicle Dynamic Planning

#### 3.2.1. Preview Driving Model and Vehicle Dynamics Model

The desired vehicle dynamics to prevent lane departure are planned by a preview driving model and a 2-DOF linear vehicle model. Assuming that the motion of the vehicle is constrained by Ackermann geometry, an optimal preview model is used to calculate the desired steering angle ($\delta_d$) required to eliminate the vehicle-lane deviation [14]:

$$\delta_d=\arctan\!\left[\frac{2L}{\left(uT_p\right)^{2}}\left(\Delta y_p-uT_p\beta\right)\right],\tag{1}$$

where $L$ is the wheel base, $\beta$ is the side slip angle, $u$ is the longitudinal velocity, $T_p$ is the preview time, and $\Delta y_p$ is the lateral deviation at the preview point.

A linear 2-DOF vehicle model is considered as a reference vehicle model for lane departure avoidance control:

$$\dot{\beta}=\frac{C_f+C_r}{mu}\beta+\left(\frac{l_fC_f-l_rC_r}{mu^{2}}-1\right)\gamma-\frac{C_f}{mu}\delta,\tag{2}$$

$$\dot{\gamma}=\frac{l_fC_f-l_rC_r}{I_z}\beta+\frac{l_f^{2}C_f+l_r^{2}C_r}{I_zu}\gamma-\frac{l_fC_f}{I_z}\delta+\frac{1}{I_z}M_z,\tag{3}$$

where $m$ is the vehicle mass, $C_f$ ($C_r$) is the front (rear) axle equivalent cornering stiffness, $l_f$ ($l_r$) is the longitudinal distance between the front (rear) axle and the center of gravity, $M_z$ is the additional yaw moment, $I_z$ is the yaw moment of inertia, and $\delta$ is the steering angle of the front wheel.

According to the kinematic relations of the vehicle, the slip angles of the front wheel ($\alpha_f$) and the rear wheel ($\alpha_r$) are as follows:

$$\alpha_f=\beta+\frac{l_f\gamma}{u}-\delta,\qquad \alpha_r=\beta-\frac{l_r\gamma}{u}.\tag{4}$$

#### 3.2.2. Tire Model

At large slip angles, the tire can no longer be treated as linear. The lateral tire force depends on the slip angle ($\alpha_f$, $\alpha_r$), the normal tire load ($F_z$), the tire-road friction coefficient ($\mu$), and the longitudinal tire force. To calculate the lateral tire force over a wide range of operating conditions, including large slip angles and slip ratios, a simplified magic formula is adopted, which can be expressed as

$$F_{yij}=\kappa F_{zij}\mu\sin\!\left(B\arctan\left(D\alpha_{ij}\right)\right),\tag{5}$$

where the subscript $ij$ indexes the front left (fl), front right (fr), rear left (rl), and rear right (rr) tires; $F_{yij}$ is the lateral tire force of each wheel; $F_{zij}$ is the vertical tire force of each wheel; and $\kappa$, $B$, and $D$ are fitting coefficients obtained by curve fitting.

Under different vertical loads and tire-road friction coefficients, the calculated lateral forces are compared with test data, as shown in Figure 2. The results show that the simplified magic formula tire model can describe the nonlinear characteristics of the tires with acceptable accuracy.

Figure 2: Lateral tire force comparison: simplified magic formula and test data.

Then, the equivalent cornering stiffness of each tire can be given by

$$C_{ij}=\frac{F_{yij}}{\alpha_{ij}},\tag{6}$$

where $\alpha_{ij}$ is the slip angle of each wheel.

#### 3.2.3. Desired Yaw Rate Response

The peak lateral acceleration must be bounded by the tire-road friction coefficient $\mu$ as follows:

$$A_{y\max}=\varepsilon\mu g,\tag{7}$$

where $\mu$ is the friction coefficient, $g$ is the gravitational acceleration, and $\varepsilon$ is a safety factor (set to 0.85 in this paper).

The maximum yaw rate must likewise be limited by the friction coefficient of the road. Thus, the desired yaw rate response ($\gamma_d$) follows from the desired steering angle ($\delta_d$) as

$$\gamma_d=\min\!\left\{\left|\frac{u/L}{1+mu^{2}\left(l_fC_f-l_rC_r\right)/\left(L^{2}C_fC_r\right)}\,\delta_d\right|,\ \left|\frac{A_{y\max}}{u}\right|\right\}\operatorname{sign}\!\left(\frac{u/L}{1+mu^{2}\left(l_fC_f-l_rC_r\right)/\left(L^{2}C_fC_r\right)}\,\delta_d\right).\tag{8}$$

A short numerical sketch of this yaw-rate planning step follows.
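The sketch below evaluates the simplified magic-formula tire force of (5) and the friction-limited desired yaw rate of (7) and (8) in Python. The mass, wheelbase, and axle distances are taken from Table 1, while the cornering stiffnesses and the fitting coefficients κ, B, D are assumed placeholder values, so the printed numbers are illustrative only.

```python
import math

# Vehicle parameters from Table 1 (mass, wheelbase, CG distances);
# cornering stiffnesses and magic-formula coefficients are assumed values.
m, L, lf, lr = 1231.0, 2.6, 1.56, 1.04
Cf, Cr = 60000.0, 60000.0     # assumed equivalent cornering stiffness [N/rad]
kappa, B, D = 1.0, 1.5, 8.0   # assumed magic-formula fitting coefficients
g, eps = 9.81, 0.85

def lateral_force(Fz, mu, alpha):
    """Simplified magic formula, Eq. (5)."""
    return kappa * Fz * mu * math.sin(B * math.atan(D * alpha))

def desired_yaw_rate(u, mu, delta_d):
    """Friction-limited desired yaw rate, Eqs. (7)-(8)."""
    ay_max = eps * mu * g                                   # Eq. (7)
    understeer = 1.0 + m * u**2 * (lf * Cf - lr * Cr) / (L**2 * Cf * Cr)
    gamma_ss = (u / L) / understeer * delta_d               # steady-state yaw rate
    return min(abs(gamma_ss), ay_max / u) * math.copysign(1.0, gamma_ss)

# Example: 80 km/h on a dry road with a small desired steering angle.
u = 80.0 / 3.6
print(desired_yaw_rate(u, mu=0.8, delta_d=0.02))
print(lateral_force(Fz=3000.0, mu=0.8, alpha=0.05))
```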
## 4. Middle Controller

### 4.1. Yaw Rate Tracking

To make the vehicle follow the desired yaw rate, a sliding mode controller is adopted to calculate the additional yaw moment ($M_z$) required for dynamics tracking. The sliding surface is defined by

$$S=\gamma-\gamma_d.\tag{9}$$

Differentiating the above equation gives

$$\dot{S}=\dot{\gamma}-\dot{\gamma}_d.\tag{10}$$

Combining (3) and (10):

$$\dot{S}=\frac{1}{I_z}\left[\left(l_fC_f-l_rC_r\right)\beta+\frac{l_f^{2}C_f+l_r^{2}C_r}{u}\gamma-l_fC_f\delta+M_z\right]-\dot{\gamma}_d.\tag{11}$$

Setting $\dot{S}=-\xi S$ yields the control law

$$M_z=I_z\left[\dot{\gamma}_d-\xi\left(\gamma-\gamma_d\right)\right]-\left(l_fC_f-l_rC_r\right)\beta-\frac{l_f^{2}C_f+l_r^{2}C_r}{u}\gamma+l_fC_f\delta,\tag{12}$$

where $\xi$ is the control parameter of the SMC and is greater than zero. A minimal implementation sketch of this control law is given below.
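Assuming the state quantities (β, γ, δ) and the planned γ_d are available at each control step, a direct transcription of control law (12) might look as follows. The finite-difference estimate of the derivative of γ_d, the cornering stiffness values, and the gain ξ are assumptions, not values from the paper.

```python
# Sliding mode yaw-moment controller, Eq. (12): a sketch under assumed
# parameter values, not the authors' implementation.
Iz, lf, lr = 2031.4, 1.56, 1.04          # Table 1
Cf, Cr = 60000.0, 60000.0                # assumed cornering stiffness [N/rad]
xi = 5.0                                  # assumed SMC gain (> 0)

def yaw_moment(gamma, gamma_d, gamma_d_prev, beta, delta, u, dt):
    """Additional yaw moment Mz from Eq. (12)."""
    gamma_d_dot = (gamma_d - gamma_d_prev) / dt   # finite-difference estimate
    return (Iz * (gamma_d_dot - xi * (gamma - gamma_d))
            - (lf * Cf - lr * Cr) * beta
            - (lf**2 * Cf + lr**2 * Cr) / u * gamma
            + lf * Cf * delta)

# One control step at 100 Hz, 80 km/h.
print(yaw_moment(gamma=0.05, gamma_d=0.08, gamma_d_prev=0.078,
                 beta=0.01, delta=0.02, u=80 / 3.6, dt=0.01))
```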
### 4.2. Speed Tracking

The deviation between the actual speed $u$ and the desired speed $u_d$, which is determined by the current throttle opening, can be expressed as

$$\Delta u=u-u_d.\tag{13}$$

A PI controller is designed to determine the sum of the longitudinal forces ($F_x$) required to follow the current throttle opening:

$$F_x=k_p\Delta u+k_i\int_0^t\Delta u\,dt,\tag{14}$$

where $k_p$ is the proportional gain and $k_i$ is the integral gain.

## 5. Lower Controller

The lower controller determines the output torque of the in-wheel motor at each wheel, so as to meet the driver's desired speed and to generate a net yaw moment that tracks the additional yaw moment demanded by the middle controller. Control allocation technology is utilized in this study for torque allocation. When the front wheel steering angle is small, the total longitudinal force and the additional yaw moment can be expressed as

$$V=BU,\tag{15}$$

where

$$V=\begin{bmatrix}F_x\\ M_z\end{bmatrix},\quad U=\begin{bmatrix}F_{xfl}\\ F_{xrl}\\ F_{xfr}\\ F_{xrr}\end{bmatrix},\quad B=\begin{bmatrix}1 & 1 & 1 & 1\\[4pt] -\dfrac{l_w}{2} & -\dfrac{l_w}{2} & \dfrac{l_w}{2} & \dfrac{l_w}{2}\end{bmatrix},\tag{16}$$

and $l_w$ is the track width.

The primary objective of the lower controller is to track the desired yaw response with minimum allocation error. Another objective is to minimize the energy consumption of the in-wheel motors. According to these objectives, the torque allocation can be described as a sequential least squares (SLS) problem:

$$U=\arg\min_{U\in\Omega}\left\|W_u\left(U-U_d\right)\right\|_2,\tag{17a}$$

$$\Omega=\arg\min_{U_{\min}\le U\le U_{\max}}\left\|W_v\left(BU-V\right)\right\|_2,\tag{17b}$$

where $W_v$ is a weighting matrix expressing the importance of each generalized force and $U_d$ is the desired longitudinal tire force; to minimize the energy consumption, $U_d$ is set according to $F_x$. $U_{\min}$ and $U_{\max}$ are decided, respectively, by the tire-road friction constraint and the actuator constraint, that is, the maximum torque range of the in-wheel motors. $W_u$ is a diagonal weighting matrix reflecting the different vertical loads on the wheels, with elements

$$W_u=\operatorname{diag}\!\left(\frac{F_{zrl}}{\sum F_{zij}},\ \frac{F_{zrr}}{\sum F_{zij}},\ \frac{F_{zfl}}{\sum F_{zij}},\ \frac{F_{zfr}}{\sum F_{zij}}\right),\tag{18}$$

where $F_{zij}$ is the vertical load of each wheel, consisting of the static load and the dynamic load caused by load transfer:

$$\begin{aligned}
F_{zfl}&=\frac{m}{L}\left(gl_r-\frac{A_xh}{2}-\frac{A_yhl_r}{l_w}\right), & F_{zrl}&=\frac{m}{L}\left(gl_r+\frac{A_xh}{2}-\frac{A_yhl_r}{l_w}\right),\\
F_{zfr}&=\frac{m}{L}\left(gl_r-\frac{A_xh}{2}+\frac{A_yhl_r}{l_w}\right), & F_{zrr}&=\frac{m}{L}\left(gl_r+\frac{A_xh}{2}+\frac{A_yhl_r}{l_w}\right).
\end{aligned}\tag{19}$$

With a weighting factor $\eta$, the SLS problem can be transformed into a weighted least squares (WLS) problem:

$$U=\arg\min_{U_{\min}\le U\le U_{\max}}\left(\left\|W_u\left(U-U_d\right)\right\|_2^{2}+\eta\left\|BU-V\right\|_2^{2}\right).\tag{20}$$

To minimize the allocation error, $\eta$ is usually set to be very large. This optimization problem is solved by an active set method [22]; a small sketch of the equivalent stacked bounded least-squares formulation follows.
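The WLS problem (20) can be written as a single bounded least-squares problem by stacking $W_u$ and $\sqrt{\eta}B$. The sketch below uses `scipy.optimize.lsq_linear` as one off-the-shelf bounded solver; the paper itself uses an active set method [22], so this is a substitute for illustration, with assumed bounds, loads, and a simple even split of the demanded force for $U_d$.

```python
import numpy as np
from scipy.optimize import lsq_linear

lw, eta = 1.481, 1e4                      # track width (Table 1); assumed eta

def allocate(Fx, Mz, Fz, u_min, u_max):
    """Solve the WLS allocation of Eq. (20) as stacked bounded least squares.

    Fz: vertical loads ordered (fl, rl, fr, rr) to match U; u_min/u_max are
    per-wheel longitudinal force bounds from friction and actuator limits."""
    B = np.array([[1.0, 1.0, 1.0, 1.0],
                  [-lw / 2, -lw / 2, lw / 2, lw / 2]])
    V = np.array([Fx, Mz])
    Ud = np.full(4, Fx / 4.0)                   # assumed even split of Fx
    Wu = np.diag(Fz[[1, 3, 0, 2]] / Fz.sum())   # Eq. (18): rl, rr, fl, fr weights
    # Stack ||Wu (U - Ud)||^2 + eta ||B U - V||^2 into one ||A U - b||^2.
    A = np.vstack([Wu, np.sqrt(eta) * B])
    b = np.concatenate([Wu @ Ud, np.sqrt(eta) * V])
    return lsq_linear(A, b, bounds=(u_min, u_max)).x

Fz = np.array([3900.0, 2140.0, 3900.0, 2140.0])  # assumed loads (fl, rl, fr, rr)
U = allocate(Fx=800.0, Mz=1500.0, Fz=Fz,
             u_min=-np.full(4, 3000.0), u_max=np.full(4, 3000.0))
# Check total force and net yaw moment reproduced by the allocation.
print(U, U.sum(), (-U[0] - U[1] + U[2] + U[3]) * lw / 2)
```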
Considering the dynamics of each wheel, the relationship between the tire longitudinal force ($F_{xij}$) and the driving/braking torque ($T_{ij}$) can be expressed as

$$T_{ij}=rF_{xij}+J_w\dot{\omega}_{ij},\tag{21}$$

where $r$ is the wheel radius, $J_w$ is the wheel inertia, and $\omega_{ij}$ is the angular speed of each wheel.

The output torque of the in-wheel motor is limited by the motor speed. The relationship between the output torque and the motor speed is shown in Figure 3.

Figure 3: Specification of the in-wheel motor.

The dynamic response of the in-wheel motor can be described as a first-order lag:

$$G(s)=\frac{1}{\tau s+1},\tag{22}$$

where $\tau$ is a time constant obtained through experiments. A discrete-time sketch of this actuator model is given below.
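For simulation purposes, the first-order lag (22) can be discretized with a forward-Euler step. The time constant below is an assumed value, since the paper states only that τ is identified experimentally.

```python
class FirstOrderLag:
    """Discrete first-order lag G(s) = 1/(tau*s + 1), Eq. (22),
    integrated with a forward-Euler step (a sketch; tau is assumed)."""

    def __init__(self, tau=0.02, y0=0.0):
        self.tau, self.y = tau, y0

    def step(self, u_cmd, dt):
        # dy/dt = (u_cmd - y) / tau
        self.y += dt * (u_cmd - self.y) / self.tau
        return self.y


# Motor torque response to a 200 Nm step command, sampled at 1 kHz.
motor = FirstOrderLag(tau=0.02)
response = [motor.step(200.0, dt=0.001) for _ in range(100)]
print(round(response[-1], 1))  # approaches 200 Nm after about 5 tau
```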
## 6. Simulation Results

To verify the effectiveness of the proposed method, a high-fidelity four-wheel-independent-drive electric vehicle (4WID-EV) model developed in Carsim and Matlab/Simulink is applied in this study. The single lane change test is chosen as the simulation maneuver. The target road is designed according to ISO 3888-2:2002 and is shown in Figure 4. Table 1 shows the main vehicle parameters and their values.

Table 1: Vehicle parameters.

| Definition | Symbol | Unit | Value |
| --- | --- | --- | --- |
| Vehicle mass | $m$ | kg | 1231 |
| Wheel base | $L$ | m | 2.6 |
| Track width | $l_w$ | m | 1.481 |
| Distance from C.G. to front axle | $l_f$ | m | 1.56 |
| Distance from C.G. to rear axle | $l_r$ | m | 1.04 |
| Yaw moment of inertia | $I_z$ | kg·m² | 2031.4 |
| Height of C.G. | $h$ | m | 0.34 |
| Wheel radius | $r$ | m | 0.304 |

Figure 4: Target road for simulation. (a) Coordinate. (b) Curvature.

To evaluate the work load of each tire, the tire usage ($P_{ij}$) is defined as

$$P_{ij}=\frac{\sqrt{F_{xij}^{2}+F_{yij}^{2}}}{\mu F_{zij}}.\tag{23}$$

Maneuver 1. The vehicle is driven on the target road at a speed of 80 km/h. The lane width is 3.5 m and the tire-road friction coefficient $\mu$ is 0.8. To simulate an unintended lane departure, the steering wheel is not operated during 2–5 s, so the vehicle gradually deviates from the lane center.

The decision-making is shown in Figure 5. At 2.21 s, $t_{TLC}\le 0.75$ s and the LDAS is activated. At 3.08 s, $t_{TLC}>2$ s and $d_{DLC}\ge 0.75$ m, meaning that the vehicle has not yet returned to the center zone of the lane; thus the LDAS maintains its output. At 4.87 s, $d_{DLC}<0.3$ m and $t_{TLC}>2$ s, so the LDAS is deactivated. After 5 s, the driver recovers from inattention, drowsiness, or fatigue and controls the vehicle. The decision logic of the LDAS is shown in Figure 5(c): "1" indicates that the LDAS is activated, and "0" indicates that it is deactivated.

Figure 5: Decision-making. (a) TLC. (b) DLC. (c) Decision logic.

The control results for Maneuver 1 are shown in Figure 6. The output torque of the in-wheel motor at each wheel is illustrated in Figure 6(a). During LDAS activation, the proposed method allocates optimal torque to each wheel in time, taking account of the tire loads, the tire-road friction constraint, and the actuator constraint. As seen in Figure 6(b), the vehicle tracks the desired yaw response accurately and quickly when the LDAS is activated. The peak lateral acceleration is 0.33 g, far less than the upper bound defined by (7), indicating that the vehicle has an ample lateral stability margin. The tire usage of each wheel is illustrated in Figure 6(d). During LDAS activation, the tire usage of each wheel is within a reasonable range. Although the output torque of the front in-wheel motors is larger than that of the rear in-wheel motors, the tire usage shows the opposite trend; the difference is caused by the distribution of the center of gravity and by load transfer. Figure 6(e) shows the corresponding side slip angle of the vehicle. The side slip angle remains small (the minimum value is −0.024 rad), meaning that the vehicle is able to track the desired trajectory. As shown in Figure 5(b), the maximum $d_{DLC}$ is 0.818 m, meaning that the LDAS prevents lane departure effectively.

Figure 6: Control results. (a) Driving/braking torque. (b) Yaw rate tracking. (c) Lateral acceleration response. (d) Tire usage. (e) Side slip angle.

Maneuver 2. The simulation conditions are the same as in Maneuver 1 except that the tire-road friction coefficient $\mu$ is only 0.4.

The decision-making is shown in Figure 7. During 2–5 s, the LDAS is activated twice. The maximum $d_{DLC}$ is 1.4 m, as shown in Figure 7(b). This means that the LDAS can still prevent lane departure effectively even when the tire-road friction coefficient $\mu$ is low (e.g., on a wet road).

Figure 7: Decision-making. (a) TLC. (b) DLC. (c) Decision logic.

The control results for Maneuver 2 are shown in Figure 8. The maximum output torque of each wheel is smaller than that shown in Figure 6(a); Figure 8(a) illustrates that the optimal output torque is constrained by the tire-road friction coefficient. As shown in Figure 8(b), the desired yaw rate reaches its upper bound, which is set by the tire-road friction coefficient $\mu$. During LDAS activation, the vehicle tracks the desired yaw response accurately and quickly through differential driving/braking control. The peak lateral acceleration is 0.2 g, far less than the upper bound defined by (7), indicating that the vehicle retains an ample lateral stability margin in spite of the wet road. The tire usage of each wheel is illustrated in Figure 8(d). According to (23), the tire usage of each wheel becomes larger for a lower tire-road friction coefficient. On the wet road, the tire usage is less than 1, indicating that no wheel is at risk of saturation. As shown in Figure 8(e), the maximum side slip angle is only 0.015 rad, meaning that the vehicle tracks the desired trajectory effectively.

Figure 8: Control results. (a) Driving/braking torque. (b) Yaw rate tracking. (c) Lateral acceleration response. (d) Tire usage. (e) Side slip angle.

Maneuver 3. The simulation conditions are the same as in Maneuver 1 except that the speed is 120 km/h.

The decision-making is shown in Figure 9. During 2–5 s, the LDAS is activated three times. The maximum $d_{DLC}$ is 1.62 m, as shown in Figure 9(b). This means that the LDAS can still prevent lane departure on a curved roadway at excessive speed.

Figure 9: Decision-making. (a) TLC. (b) DLC. (c) Decision logic.

The control results for Maneuver 3 are shown in Figure 10. Because the output torque of the in-wheel motor is constrained by the motor specification (see Figure 3), the maximum output torque of each wheel is smaller than that shown in Figure 6(a). Figure 10(b) indicates that the vehicle tracks the desired yaw response quickly when the LDAS is activated, although the maximum overshoot of the yaw rate is 11.9%. The desired yaw rate reaches its upper bound, which is set by the tire-road friction coefficient $\mu$ and the speed $u$. The peak lateral acceleration is 0.35 g, less than the upper bound calculated by (7), indicating that the vehicle retains an ample lateral stability margin in spite of the high speed. The tire usage of each wheel is illustrated in Figure 10(d). According to (23), the tire usage of each wheel becomes smaller because the output torque of the in-wheel motor is limited by the motor speed.
The tire usage is less than 1, indicating that no wheel is at risk of saturation. As shown in Figure 10(e), the maximum side slip angle is only 0.015 rad, meaning that the vehicle can track the desired trajectory, although the lateral displacement of the vehicle is larger than in the low-speed maneuver.

Figure 10: Control results. (a) Driving/braking torque. (b) Yaw rate tracking. (c) Lateral acceleration response. (d) Tire usage. (e) Side slip angle.

The maximum side slip angle on the wet road is smaller than that on the dry road. The objective of the stability system is to track a desired yaw rate, which is limited by the friction coefficient of the road and by the vehicle speed. As shown in Figures 6 and 8, the proposed method tracks the desired yaw rate by driving/braking torque allocation. On the wet road, the desired yaw rate is limited to ensure vehicle stability; thus, the amplitude of the output torque of the in-wheel motors is smaller than on the dry road. This avoids large side slip angles and prevents vehicle skidding and loss of control. However, the drawback of this control method is that the lateral displacement of the vehicle increases, as shown in Figures 5(b) and 7(b).

## 7. Conclusion

In this paper, a hierarchical method for LDAS has been presented on the basis of differential driving/braking control. The proposed method consists of three parts: an upper-level controller, a middle-level controller, and a lower-level controller. The upper-level controller was designed to monitor driver intention and vehicle-lane deviation and to determine whether an intervention is required; its other task is to determine the desired dynamics (the desired yaw rate). To track the desired dynamics, a sliding mode control method and a PI control method were used in the middle-level controller. The lower-level controller was designed to distribute driving/braking torque among the wheels; to achieve optimal allocation of the output torque of the wheel motors, a control allocation method was adopted. The performance of the proposed method was evaluated via three software-in-the-loop tests. Simulation results show that the presented method can effectively prevent lane departure by differential driving/braking control.

The proposed method has been verified only in SIL tests, which differ from hardware-in-the-loop (HIL) tests and real driving experiments in, for example, signal disturbance and delay. To this end, the proposed LDAS will be implemented on an HIL platform and in a prototype vehicle in the future. --- *Source: 1024805-2018-02-19.xml*
1024805-2018-02-19_1024805-2018-02-19.md
30,521
Lane Departure Avoidance Control for Electric Vehicle Using Torque Allocation
Yiwan Wu; Zhengqiang Chen; Rong Liu; Fan Li
Mathematical Problems in Engineering (2018)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2018/1024805
1024805-2018-02-19.xml
--- ## Abstract This paper focuses on the lane departure avoidance system for a four in-wheel motors’ drive electric vehicle, aiming at preventing lane departure under dangerous driving conditions. The control architecture for the lane departure avoidance system is hierarchical. In the upper controller, the desired yaw rate was calculated with the consideration of vehicle-lane deviation, vehicle dynamic, and the limitation of road adhesion. In the middle controller, a sliding mode controller (SMC) was designed to control the additional yaw moment. In the lower layer, the yaw moment was produced by the optimal distribution of driving/braking torque between four wheels. Lane departure avoidance was carried out by tracking desired yaw response. Simulations were performed to study the effectiveness of the control algorithm in Carsim®/Simulink® cosimulation. Simulation results show that the proposed methods can effectively confine the vehicle in lane and prevent lane departure accidents. --- ## Body ## 1. Introduction In the last decade, a large portion of highways traffic accidents lead to heavy casualties [1] due to drivers’ inattention, drowsiness, or fatigue. In particular, Unintended Lane departure accidents exceed 15% of traffic accidents that occurred over the last 10 years in Germany [2]. Moreover, road departure accounted for 28% of fatal traffic accidents that occurred in 2005 in USA [3]. To prevent unintended lane departure accidents, various lane keeping assistance systems (LKAS) or lane departure avoidance systems (LDAS) [4] have been developed to automatically adjust vehicle’s dynamics or trajectory to confine the vehicle in its driving lane.To realize LKAS or LDAS, several types of control inputs have been pursued in literature. According to active control means, the LDAS can be classified into three types, that is, systems using steering control, systems using differential braking control, and systems using differential driving/braking control.The steering control, which overlaid a steering torque or a steering angle by a DC motor mounted on steering column, has been deeply and widely investigated [5–9]. As the steering wheel is controlled by LDAS and drivers simultaneously, the interferences and conflicts between the driver and control system are the major challenges of the lane departure avoidance control with only front steering wheel angle/torque input [10]. Lane departure avoidance control with four-wheel steering has two independent inputs, namely, front- and rear-steering angles. Four-wheel steering provides superior lane departure avoidance performance in lateral and yaw motions to the only front steering angle/torque input [11]. However, four-wheel steering has not been widely used in passenger cars.The control of vehicle’s lateral dynamics using differential braking technique was proposed by Pilutti et al. in 1995 [12]. Nissan is the first to offer a LDAS using differential braking control [13]. The LDAS using this way has been theoretically investigated in authors’ previous works [14–16]. The LDAS with differential braking input can provide satisfactory lane departure avoidance performance and resolve the conflicts between the driver and control system. 
To keep the vehicle in the lane and avoid drivers’ discomfort a lane departure assistance system by the coordinated control of steering and differential braking control was developed in previous researches [9, 17–19].The third technique is that differential driving/braking control by the output torque of each wheel motor is individually controlled. A wheel motor can provide more accurate and faster torque response compared with hydraulic system. The distributing driving torque between rear wheels has been used to fulfill the lane keeping functions [20]. For the four in-wheel motors drive electric vehicle is able to adopt either driving or braking torque generated by the in-wheel motors, and the torque of each in-wheel motor can be independently and precisely controlled [21]. It means an improved lane departure avoidance control with high performance can be achieved.In this paper, a new lane departure avoidance control method using differential driving/braking control for enhancing the active safety of electric vehicles is proposed. In Section2, the architecture of lane departure avoidance system is introduced. In Section 3, decision-making and vehicle dynamics planning are studied. In Section 4, a sliding mode controller (SMC) is designed to follow the desired dynamics. In Section 5, an optimal allocation algorithm is developed for torque allocation between the wheels. In Section 6, the simulation of the proposed methods is performed. The contribution of this research and introduction of future works are summarized in Section 7. ## 2. Structure of Lane Departure Avoidance System The control architecture for lane departure avoidance system is hierarchical, as shown in Figure1. The upper controller is composed of decision-making and vehicle dynamic planning. Decision-making is conducted by the time to line cross (tTLC), the distance to lane center (dDLC), and driver’s intention. The desired vehicle dynamic planning is carried out by a preview driving model and a 2-DOF linear vehicle model. The error (Δγ) between actual dynamic (γreal) and desired dynamic (γd) is input into middle controller, which tracks the desired dynamic through differential driving/breaking. In the lower controller, an optimal torque allocation method is used to coordinate output torque of all wheels.Figure 1 Architecture of the proposed lane departure avoidance control method. ## 3. Upper Controller ### 3.1. Decision-Making The rotational torque applied by the driver to the steering wheel and the state of vehicle’s steering switch can be combined to identify driver’s intention. If steering torque was greater than 2 Nm or steering switch was turned on, control system will assume that the driver has a clear intention to manipulate vehicle.Time to line cross (TLC) and distance to lane center (DLC) are used jointly to decide whether to turn on lane departure avoidance control or not.Lane departure avoidance control will be turned on when either of the following conditions is satisfied:①dDLC ≥ 0.75 m.②tTLC ≤ 0.75 s.Lane departure avoidance control will be turned off when either of the following conditions is satisfied:①dDLC < 0.3 m, and tTLC > 2 s.②  Driver’s intention is obvious.③u ≤ 65 km/h.④  The lane identification failure. ### 3.2. Desired Vehicle Dynamic Planning #### 3.2.1. 
Preview Driving Model and Vehicle Dynamics Model The desired vehicle dynamics to prevent lane departure is planned by a preview driving model and a 2-DOF linear vehicle model.Assuming that the motion of the vehicle is constrained by Ackerman geometry, an optimal preview model is used to calculate the desired steering angle (δd) required for eliminating vehicle-lane deviation [14]. (1)δd=arctan⁡2LuTp2Δyp-uTpβ,where L is the wheel base, β is the side slip angle, u is the longitudinal velocity, Tp is the preview time, and Δyp is the lateral deviation at the preview point.A linear 2-DOF vehicle model is considered as a reference vehicle model for lane departure avoidance control.(2)β˙=Cf+Crmuβ+lfCf-lrCrmu2-1γ-Cfmuδ(3)γ˙=lfCf-lrCrIzβ+lf2Cf+lr2CrIzuγ-lfCfIzδ+1IzMz,where m is the vehicle mass, Cf(Cr) is the front (rear) axle equivalent cornering stiffness, lf(lr) is the longitudinal distance between front (rear) axle and center of gravity, Mz is the additional yaw moment, Iz is the yaw moment of inertia, and δ is the steering angle of the front wheel.According to the kinemics relations of the vehicle, the slip angles of front wheel (αf) and rear wheel (αr) are as follows:(4)αf=β+aγu-δαr=β-bγu. #### 3.2.2. Tire Model At large slip angles, the tire model can be no longer linear. The lateral tire force depended on slip angle(αf,αr), the normal tire load (Fz), the tire-road friction coefficient (μ), and the longitudinal tire force. To calculate lateral tire force for a wide range of operating conditions considering large slip angle and slip ratios, a simplified magic formula is adopted, which can be expressed as(5)Fyij=κFzijμsin⁡Barctan⁡Dαij,where subscript ij denotes the index indicating front left tire (fl)/front right tire (fr)/rear left tire (rl) and rear right tire (rr); Fyij is the lateral tire force of each wheel; Fzij is the vertical tire force of each wheel; κ, B, and D are fitting coefficients and can be obtained by curve fitting.Under different vertical loads and tire-road friction coefficients, the calculated lateral forces are compared with test data, as shown in Figure2. The results show that the simplified magic formula tire model can describe the nonlinear characteristics of tires with acceptable accuracy.Figure 2 Lateral tire force comparison: simplified magic formula and test data.Then, the equivalent cornering stiffness of each tire can be given by(6)Cij=Fyijαij,where αij is the slip angle of each wheel. #### 3.2.3. Desired Yaw Rate Response The peak lateral acceleration must be bounded by the tyre-road friction coefficientμ as follows:(7)Aymax=εμg,where μ is the friction coefficient, g is the gravity unit, and ɛ is the safety factor (in this paper, the value of ɛ is set as 0.85).The maximum yaw rate must be limited by the friction coefficient of the road. Thus, the desired yaw rate response (γd) and the desired steering angle (δd) can be described by(8)γd=min⁡u/L1+mu2lfCf-lrCr/L2CfCrδd,Aymaxusign⁡u/L1+mu2lfCf-lrCr/L2CfCrδd. ## 3.1. Decision-Making The rotational torque applied by the driver to the steering wheel and the state of vehicle’s steering switch can be combined to identify driver’s intention. 
If steering torque was greater than 2 Nm or steering switch was turned on, control system will assume that the driver has a clear intention to manipulate vehicle.Time to line cross (TLC) and distance to lane center (DLC) are used jointly to decide whether to turn on lane departure avoidance control or not.Lane departure avoidance control will be turned on when either of the following conditions is satisfied:①dDLC ≥ 0.75 m.②tTLC ≤ 0.75 s.Lane departure avoidance control will be turned off when either of the following conditions is satisfied:①dDLC < 0.3 m, and tTLC > 2 s.②  Driver’s intention is obvious.③u ≤ 65 km/h.④  The lane identification failure. ## 3.2. Desired Vehicle Dynamic Planning ### 3.2.1. Preview Driving Model and Vehicle Dynamics Model The desired vehicle dynamics to prevent lane departure is planned by a preview driving model and a 2-DOF linear vehicle model.Assuming that the motion of the vehicle is constrained by Ackerman geometry, an optimal preview model is used to calculate the desired steering angle (δd) required for eliminating vehicle-lane deviation [14]. (1)δd=arctan⁡2LuTp2Δyp-uTpβ,where L is the wheel base, β is the side slip angle, u is the longitudinal velocity, Tp is the preview time, and Δyp is the lateral deviation at the preview point.A linear 2-DOF vehicle model is considered as a reference vehicle model for lane departure avoidance control.(2)β˙=Cf+Crmuβ+lfCf-lrCrmu2-1γ-Cfmuδ(3)γ˙=lfCf-lrCrIzβ+lf2Cf+lr2CrIzuγ-lfCfIzδ+1IzMz,where m is the vehicle mass, Cf(Cr) is the front (rear) axle equivalent cornering stiffness, lf(lr) is the longitudinal distance between front (rear) axle and center of gravity, Mz is the additional yaw moment, Iz is the yaw moment of inertia, and δ is the steering angle of the front wheel.According to the kinemics relations of the vehicle, the slip angles of front wheel (αf) and rear wheel (αr) are as follows:(4)αf=β+aγu-δαr=β-bγu. ### 3.2.2. Tire Model At large slip angles, the tire model can be no longer linear. The lateral tire force depended on slip angle(αf,αr), the normal tire load (Fz), the tire-road friction coefficient (μ), and the longitudinal tire force. To calculate lateral tire force for a wide range of operating conditions considering large slip angle and slip ratios, a simplified magic formula is adopted, which can be expressed as(5)Fyij=κFzijμsin⁡Barctan⁡Dαij,where subscript ij denotes the index indicating front left tire (fl)/front right tire (fr)/rear left tire (rl) and rear right tire (rr); Fyij is the lateral tire force of each wheel; Fzij is the vertical tire force of each wheel; κ, B, and D are fitting coefficients and can be obtained by curve fitting.Under different vertical loads and tire-road friction coefficients, the calculated lateral forces are compared with test data, as shown in Figure2. The results show that the simplified magic formula tire model can describe the nonlinear characteristics of tires with acceptable accuracy.Figure 2 Lateral tire force comparison: simplified magic formula and test data.Then, the equivalent cornering stiffness of each tire can be given by(6)Cij=Fyijαij,where αij is the slip angle of each wheel. ### 3.2.3. Desired Yaw Rate Response The peak lateral acceleration must be bounded by the tyre-road friction coefficientμ as follows:(7)Aymax=εμg,where μ is the friction coefficient, g is the gravity unit, and ɛ is the safety factor (in this paper, the value of ɛ is set as 0.85).The maximum yaw rate must be limited by the friction coefficient of the road. 
Thus, the desired yaw rate response (γd) and the desired steering angle (δd) can be described by(8)γd=min⁡u/L1+mu2lfCf-lrCr/L2CfCrδd,Aymaxusign⁡u/L1+mu2lfCf-lrCr/L2CfCrδd. ## 3.2.1. Preview Driving Model and Vehicle Dynamics Model The desired vehicle dynamics to prevent lane departure is planned by a preview driving model and a 2-DOF linear vehicle model.Assuming that the motion of the vehicle is constrained by Ackerman geometry, an optimal preview model is used to calculate the desired steering angle (δd) required for eliminating vehicle-lane deviation [14]. (1)δd=arctan⁡2LuTp2Δyp-uTpβ,where L is the wheel base, β is the side slip angle, u is the longitudinal velocity, Tp is the preview time, and Δyp is the lateral deviation at the preview point.A linear 2-DOF vehicle model is considered as a reference vehicle model for lane departure avoidance control.(2)β˙=Cf+Crmuβ+lfCf-lrCrmu2-1γ-Cfmuδ(3)γ˙=lfCf-lrCrIzβ+lf2Cf+lr2CrIzuγ-lfCfIzδ+1IzMz,where m is the vehicle mass, Cf(Cr) is the front (rear) axle equivalent cornering stiffness, lf(lr) is the longitudinal distance between front (rear) axle and center of gravity, Mz is the additional yaw moment, Iz is the yaw moment of inertia, and δ is the steering angle of the front wheel.According to the kinemics relations of the vehicle, the slip angles of front wheel (αf) and rear wheel (αr) are as follows:(4)αf=β+aγu-δαr=β-bγu. ## 3.2.2. Tire Model At large slip angles, the tire model can be no longer linear. The lateral tire force depended on slip angle(αf,αr), the normal tire load (Fz), the tire-road friction coefficient (μ), and the longitudinal tire force. To calculate lateral tire force for a wide range of operating conditions considering large slip angle and slip ratios, a simplified magic formula is adopted, which can be expressed as(5)Fyij=κFzijμsin⁡Barctan⁡Dαij,where subscript ij denotes the index indicating front left tire (fl)/front right tire (fr)/rear left tire (rl) and rear right tire (rr); Fyij is the lateral tire force of each wheel; Fzij is the vertical tire force of each wheel; κ, B, and D are fitting coefficients and can be obtained by curve fitting.Under different vertical loads and tire-road friction coefficients, the calculated lateral forces are compared with test data, as shown in Figure2. The results show that the simplified magic formula tire model can describe the nonlinear characteristics of tires with acceptable accuracy.Figure 2 Lateral tire force comparison: simplified magic formula and test data.Then, the equivalent cornering stiffness of each tire can be given by(6)Cij=Fyijαij,where αij is the slip angle of each wheel. ## 3.2.3. Desired Yaw Rate Response The peak lateral acceleration must be bounded by the tyre-road friction coefficientμ as follows:(7)Aymax=εμg,where μ is the friction coefficient, g is the gravity unit, and ɛ is the safety factor (in this paper, the value of ɛ is set as 0.85).The maximum yaw rate must be limited by the friction coefficient of the road. Thus, the desired yaw rate response (γd) and the desired steering angle (δd) can be described by(8)γd=min⁡u/L1+mu2lfCf-lrCr/L2CfCrδd,Aymaxusign⁡u/L1+mu2lfCf-lrCr/L2CfCrδd. ## 4. Middle Controller ### 4.1. Yaw Rate Tracking To make a vehicle follow the desired yaw rate, a sliding mode controller is adopted to calculate the additional yaw moment (Mz), which is required by dynamics tracking. 
The sliding surface is defined by (9)S=γ-γd.Differentiating the above equation:(10)S˙=γ˙-γ˙d.Combining (3) and (10): (11)S˙=1IZlfCf-lrCrβ+lf2Cf+lr2Cruγ-lfCfδ+Mz-γ˙d.SettingS˙=-ξS yields the control law(12)Mz=IZγ˙d-ξγ-γd-lfCf-lrCrβ-lf2Cf+lr2Cruγ+lfCfδ,where ξ is the control parameter of SMC and is greater than zero. ### 4.2. Speed Tracking The deviation between actual speedu and desired speed ud is determined by current throttle opening. It can be expressed as(13)Δu=u-ud.A PI controller is designed to determine the sum of longitudinal forces (Fx) required to follow current throttle opening, as follows:(14)Fx=kpΔu+ki∫0tΔudt,where kp is the proportional gain and ki is the integral gain. ## 4.1. Yaw Rate Tracking To make a vehicle follow the desired yaw rate, a sliding mode controller is adopted to calculate the additional yaw moment (Mz), which is required by dynamics tracking. The sliding surface is defined by (9)S=γ-γd.Differentiating the above equation:(10)S˙=γ˙-γ˙d.Combining (3) and (10): (11)S˙=1IZlfCf-lrCrβ+lf2Cf+lr2Cruγ-lfCfδ+Mz-γ˙d.SettingS˙=-ξS yields the control law(12)Mz=IZγ˙d-ξγ-γd-lfCf-lrCrβ-lf2Cf+lr2Cruγ+lfCfδ,where ξ is the control parameter of SMC and is greater than zero. ## 4.2. Speed Tracking The deviation between actual speedu and desired speed ud is determined by current throttle opening. It can be expressed as(13)Δu=u-ud.A PI controller is designed to determine the sum of longitudinal forces (Fx) required to follow current throttle opening, as follows:(14)Fx=kpΔu+ki∫0tΔudt,where kp is the proportional gain and ki is the integral gain. ## 5. Lower Controller The lower controller determines the output torque of the in-wheel-motor at each wheel, so as to meet driver’s desired speed and generate a net yaw moment that tracks the desired value for additional yaw moment determined by the middle controller. Control allocation technology is utilized in this study for torque allocation.When the condition of front wheel is small, the total longitudinal force and additional yaw moment can be expressed as(15)V=BU,where(16)V=FxMzT,U=FxflFxrlFxfrFxrrT,B=1111-lw2-lw2lw2lw2,lwis  the  track  width.The primary objective of the lower controller is to track the desired yaw response and make minimum allocation error. Another objective of the lower controller is to minimize the energy consumption of the in-wheel-motors.According to the objectives of torque allocation, the optimization problem can be described by Sequence Least Squares (SLS). It is expressed as follows:(17a)U=argminU∈Ω⁡WuU-Ud2(17b)Ω=argminUmin≤U≤Umax⁡WvBU-V2,where Wv is the weighting matrix, which shows the importance of each generalized force. Ud is the desired longitudinal tire force. To minimize the energy consumption, Ud is set as Fx. Umin and Umax are decided, respectively, by the tyre-road friction coefficient constraint and actuator constraint, which is the maximum torque range of in-wheel-motors. Wu is the diagonal weighting matrix, which shows the different of vertical load between each wheel, and the elements are(18)Wu=Fzrl∑FzijFzrr∑FzijFzfl∑FzijFzfr∑Fzij,where Fzij is the vertical load of each wheel. Fzij consists of static loads and dynamic loads caused by load transfer:(19)Fzfl=mLglr-Axh2-AyhlrlwFzrl=mLglr+Axh2-AyhlrlwFzfr=mLglr-Axh2+AyhlrlwFzrr=mLglr+Axh2+Ayhlrlw.A weighting factorη is used in this study. Then the SLS problem can be transformed into a Weighted Least Squares problem (WLS):(20)U=argminUmin≤U≤Umax⁡WuU-Ud22+ηBU-V22.To minimize the allocation error,η  is usually set to be very large. 
This optimization problem is solved by an active set method [22].Considering the dynamics of each wheel, the relationship between tyre longitudinal force (Fxij) and driving/braking torque (Tij) can be expressed as follows:(21)Tij=rFxij+Jwω˙ij,where r is the wheel radius, Jw is the wheel inertia, and ωij is the angular speed of each wheel.The output torque of the in-wheel-motor is limited by motor speed. The relationship of the output torque and motor speed is shown in Figure3.Figure 3 Specification of in-wheel motor.The dynamic response of the in-wheel-motor can be described as a first-order lag:(22)Gs=1τs+1,where τ is a time constant obtained through experiments. ## 6. Simulation Results In order to verify the effectiveness of the proposed method, a high-fidelity four-wheel-independent-drive electric vehicle (4WID-EV) model developed in Carsim and Matlab/Simulink is applied in this study. The single lane change test is chosen as the simulation test maneuver. The target road is designed according to ISO-3888-2-2002 and is shown in Figure4. Table 1 shows the main vehicle parameters and their values.Table 1 Vehicle parameters. Definition Symbol Unit Value Vehicle mass m kg 1231 Wheel base L m 2.6 Track width l w m 1.481 Distance from C.G. to front axle l f m 1.56 Distance from C.G. to rear axle l r m 1.04 Yaw moment of inertia I z kg⋅m2 2031.4 Height of C.G. h m 0.34 Wheel radius r m 0.304Figure 4 Target road for simulation. (a) Coordinate (b) CurvatureTo evaluate the work load of each tyre, tyre usage (Pij) is defined as(23)Pij=Fxij2+Fyij2μFzij.Maneuver 1. The vehicle is driven on the target road at the speed of 80 Km/h. The width of lane is 3.5 m and tyre-road friction coefficientμ is 0.8. To simulate an unintended lane departure, the steering wheel is not operated during 2~5 s. The vehicle will deviate from lane center gradually.The decision-making is shown in Figure5. At 2.21 s, tTLC ≤ 0.75 s, LDAS is activated. At 3.08 s, tTLC > 2 s and dDLC ≥ 0.75 m; it means that the vehicle has not returned to the center zone of the lane; thus LDAS maintain output. At 4.87 s, dDLC < 0.3 m, and tTLC > 2 s, LDAS is deactivated. After 5 s, the driver restored from inattention, drowsiness, or fatigue to control the vehicle. The decision logic of LDAS is shown in Figure 5(c). “1” indicates LDAS is activated. “0” indicates LDAS is deactivated.Figure 5 Decision-making. (a) TLC (b) DLC (c) Decision logicThe control results of Maneuver 1 are shown in Figure6. The output torque of the in-wheel-motor at each wheel is illustrated in Figure 6(a). During LDAS activation, the proposed method can allocate optimal torque to each wheel in time with the consideration of tyre loads, the tyre-road friction coefficient constraint, and actuator constraint. As seen in Figure 6(b), the vehicle is able to track the desired yaw response accurately and quickly when LDAS is activated. The peak lateral acceleration is 0.33 g, which is far less than the upper bound defined by (7). It indicates that the vehicle has enough lateral stability margin. The tyre usage of each wheel is illustrated in Figure 6(d). During LDAS activation, the tyre usage of each wheel is within a reasonable range. Although the output torque of front in-wheel-motors is larger than rear in-wheel-motors, the tyre usage is on the contrary. The difference is caused by the distribution of the center of gravity and load transfer. Figure 6(e) shows the corresponding side slip angle of the vehicle. 
The side slip angle is at a small value (the minimum value is −0.024 rad). It means that the vehicle is able to track the desired trajectory. As shown in Figure 5(b), the maximum dDLC is 0.818 m. It means that the LDAS can prevent lane departure effectively.Figure 6 Control results. (a) Driving/braking torque (b) Yaw rate tracking (c) Lateral acceleration response (d) Tyre usage (e) Side slip angleManeuver 2. The simulation conditions are the same as Maneuver 1 except the tyre-road friction coefficientμ which is only 0.4.The decision-making is shown in Figure7. During 2~5 s, LDAS is activated twice. The maximum dDLC is 1.4 m, as shown in Figure 5(b). It means that the LDAS can still prevent lane departure effectively even in the condition that the tyre-road friction coefficient μ is lower (e.g., wet road).Figure 7 Decision-making. (a) TLC (b) DLC (c) Decision logicThe control results of Maneuver 2 are shown in Figure8. The maximum output torque of each wheel is smaller than that shown in Figure 6(a). Figure 8(a) illustrates that the optimal output torque is constrained by the tyre-road friction coefficient. As shown in Figure 8(b), the desired yaw rate reaches the upper bound value, which is bounded by the tyre-road friction coefficient μ. During LDAS activation, the vehicle can track the desired yaw response accurately and quickly by differential driving/braking control. The peak lateral acceleration is 0.2 g, which is far less than the upper bound defined by (7). It indicates that the vehicle has enough lateral stability margin in spite of the wet road. The tyre usage of each wheel is illustrated in Figure 8(d). According to equation (23), the tyre usage of each wheel will become larger for the lower tyre-road friction coefficient. On the wet road, the tyre usage is less than 1; it indicates that all wheels are not at the risk of saturation. As shown in Figure 8(e), the maximum side slip angle is only 0.015 rad. It means that the vehicle can track the desired trajectory effectively.Figure 8 Control results. (a) Driving/braking torque (b) Yaw rate tracking (c) Lateral acceleration response (d) Tyre usage (e) Side slip angleManeuver 3. The simulation conditions are the same as Maneuver 1 except the speed of 120 km/h.The decision-making is shown in Figure9. During 2~5 s, LDAS is activated thrice. The maximum dDLC is 1.62 m, as shown in Figure 9(b). It means that the LDAS can still prevent lane departure in curved roadway with overspeed.Figure 9 Decision-making. (a) TLC (b) DLC (c) Decision logicThe control results of Maneuver 3 are shown in Figure10. Because the output torque of the in-wheel-motor is constrained by the specification of in-wheel motor (see Figure 3), the maximum output torque of each wheel is smaller than that shown in Figure 6(a). Figure 10(b) indicates that the vehicle can track the desired yaw response quickly when LDAS is activated, although the maximum overshoot of yaw rate is 11.9%. The desired yaw rate is able to reach the upper bound value, which is bounded by the tyre-road friction coefficient μ and speed u. The peak lateral acceleration is 0.35 g less than the upper bound calculated by (7). It indicates that the vehicle has enough lateral stability margin in spite of the high speed. The tyre usage of each wheel is illustrated in Figure 10(d). According to (23), the tyre usage of each wheel will become smaller for output torque of the in-wheel-motor limited by the motor speed. 
The tyre usage is less than 1, indicating that no wheel is at risk of saturation. As shown in Figure 10(e), the maximum side slip angle is only 0.015 rad, which means that the vehicle can track the desired trajectory, although the lateral displacement of the vehicle is larger than in the low-speed maneuver.

Figure 10 Control results. (a) Driving/braking torque (b) Yaw rate tracking (c) Lateral acceleration response (d) Tyre usage (e) Side slip angle

The maximum side slip angle on the wet road is smaller than that on the dry road. The objective of the stability system is to track a desired yaw rate, which is limited by the road friction coefficient and the vehicle speed. As shown in Figures 6 and 8, the proposed method tracks the desired yaw rate by driving/braking torque allocation. On the wet road, the desired yaw rate is limited to ensure vehicle stability, so the amplitude of the output torque of the in-wheel motors is smaller than on the dry road. This avoids large side slip angles and prevents the vehicle from skidding and losing control. The drawback of this control strategy is that the lateral displacement of the vehicle increases, as shown in Figures 5(b) and 7(b).

## 7. Conclusion

In this paper, a hierarchical method for LDAS based on differential driving/braking control has been presented. The proposed method consists of three parts: an upper-level controller, a middle-level controller, and a lower-level controller. The upper-level controller monitors driver intention and vehicle-lane deviation and determines whether an intervention is required; its other task is to determine the desired dynamics (the desired yaw rate). To track the desired dynamics, a sliding mode control method and a PI control method are used in the middle-level controller. The lower-level controller distributes driving/braking torque among the wheels; to achieve optimal allocation of the output torque of the wheel motors, a control allocation method is adopted. The performance of the proposed method was evaluated via three software-in-the-loop (SIL) tests. Simulation results show that the presented method can effectively prevent lane departure by differential driving/braking control.

The proposed method has been verified only in SIL tests, which differ from hardware-in-the-loop (HIL) tests and real driving experiments in aspects such as signal disturbance and delay. To this end, the proposed LDAS will be implemented on a HIL platform and on a prototype vehicle in the future.

---

*Source: 1024805-2018-02-19.xml*
# Solving Bilevel Multiobjective Programming Problem by Elite Quantum Behaved Particle Swarm Optimization

**Authors:** Tao Zhang; Tiesong Hu; Jia-wei Chen; Zhongping Wan; Xuning Guo
**Journal:** Abstract and Applied Analysis (2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/102482

---

## Abstract

An elite quantum behaved particle swarm optimization (EQPSO) algorithm is proposed, in which an elite strategy is applied to the global best particle to prevent premature convergence of the swarm. The EQPSO algorithm is employed for solving the bilevel multiobjective programming problem (BLMPP) in this study, which has not been reported elsewhere in the literature. Finally, we use eight different test problems to evaluate the proposed algorithm, including low-dimension and high-dimension BLMPPs, and also attempt to solve BLMPPs whose theoretical Pareto optimal front is not known. The experimental results show that the proposed algorithm is a feasible and efficient method for solving BLMPPs.

---

## Body

## 1. Introduction

The bilevel programming problem (BLPP) arises in a wide variety of scientific and engineering applications, including optimal control, process optimization, game-playing strategy development, transportation problems, and so on. The BLPP has therefore been studied by many scholars; reviews, monographs, and surveys on the BLPP can be found in [1–11]. Moreover, evolutionary algorithms (EAs) have been employed to address the BLPP in [12–16].

Because multiobjective characteristics are widespread in BLPPs, the bilevel multiobjective programming problem (BLMPP) has attracted many researchers. For example, Shi and Xia [17, 18], Abo-Sinna and Baky [19], Nishizaki and Sakawa [20], and Zheng et al. [21] presented interactive algorithms for the BLMPP. Eichfelder [22] presented a method for solving nonlinear bilevel multiobjective optimization problems with coupled upper level constraints. Thereafter, Eichfelder [23] developed a numerical method for solving nonlinear nonconvex bilevel multiobjective optimization problems. In recent years, metaheuristics have attracted considerable attention as alternative methods for the BLMPP. For example, Deb and Sinha [24–26], as well as Sinha and Deb [27], discussed the BLMPP based on evolutionary multiobjective optimization principles. Building on those studies, Deb and Sinha [28] proposed a viable hybrid evolutionary-local-search based algorithm and presented challenging test problems. Sinha [29] presented a progressively interactive evolutionary multiobjective optimization method for the BLMPP.

Particle swarm optimization (PSO) is a relatively novel heuristic algorithm inspired by the choreography of a bird flock, which has been found to be quite successful in a wide variety of optimization tasks [30]. Due to its high speed of convergence and relative simplicity, the PSO algorithm has been employed by many researchers for solving BLPPs. For example, Li et al. [31] proposed a hierarchical PSO for solving the BLPP. Kuo and Huang [32] applied the PSO algorithm to the bilevel linear programming problem. Gao et al. [33] presented a method to solve bilevel pricing problems in supply chains using PSO. However, it is worth noting that the papers mentioned above address only bilevel single-objective problems; the BLMPP has seldom been studied using PSO so far. There are probably two reasons for this situation.
One reason is the added complexity associated with solving each level; the other is that the global convergence of PSO cannot be guaranteed [34].

In this paper, a method with guaranteed global convergence, called EQPSO, is proposed, in which an elite strategy is applied to the global best particle to prevent premature convergence of the swarm. The EQPSO is employed for solving the BLMPP in this study, which has not been reported elsewhere in the literature. For such problems, the proposed algorithm directly simulates the decision process of bilevel programming, which differs from most traditional algorithms designed for specific versions of the problem or based on specific assumptions. The BLMPP is solved by treating the upper level and the lower level multiobjective optimization problems interactively with the EQPSO, and a set of approximate Pareto optimal solutions for the BLMPP is obtained using the elite strategy. This interactive procedure is repeated until the accurate Pareto optimal solutions of the original problem are found. The rest of the paper is organized as follows. In Section 2, the problem formulation is provided. The proposed algorithm for solving the bilevel multiobjective problem is presented in Section 3. In Section 4, numerical examples are given to demonstrate the feasibility and efficiency of the proposed algorithm.

## 2. Problem Formulation

Let $x \in \mathbb{R}^{n_1}$, $y \in \mathbb{R}^{n_2}$, $F: \mathbb{R}^{n_1} \times \mathbb{R}^{n_2} \to \mathbb{R}^{m_1}$, $f: \mathbb{R}^{n_1} \times \mathbb{R}^{n_2} \to \mathbb{R}^{m_2}$, $G: \mathbb{R}^{n_1} \times \mathbb{R}^{n_2} \to \mathbb{R}^{p}$, and $g: \mathbb{R}^{n_1} \times \mathbb{R}^{n_2} \to \mathbb{R}^{q}$. The general model of the BLMPP can be written as follows:

(2.1) $\min_{x} F(x,y)$ s.t. $G(x,y) \ge 0$; $\min_{y} f(x,y)$ s.t. $g(x,y) \ge 0$,

where $F(x,y)$ and $f(x,y)$ are the upper level and the lower level objective functions, respectively, and $G(x,y)$ and $g(x,y)$ denote the upper level and the lower level constraints, respectively.

Let $S = \{(x,y) \mid G(x,y) \ge 0,\ g(x,y) \ge 0\}$, $X = \{x \mid \exists y:\ G(x,y) \ge 0,\ g(x,y) \ge 0\}$, and $S(x) = \{y \mid g(x,y) \ge 0\}$. For fixed $x \in X$, let $\bar{S}(x)$ denote the set of weakly efficient solutions to the lower level problem. The feasible solution set of problem (2.1) is then denoted by $IR = \{(x,y) \mid (x,y) \in S,\ y \in \bar{S}(x)\}$.

Definition 2.1. For a fixed $x \in X$, if $y$ is a Pareto optimal solution to the lower level problem, then $(x,y)$ is a feasible solution to problem (2.1).

Definition 2.2. If $(x^{*}, y^{*})$ is a feasible solution to problem (2.1) and there is no $(x,y) \in IR$ such that $F(x,y) \prec F(x^{*}, y^{*})$, then $(x^{*}, y^{*})$ is a Pareto optimal solution to problem (2.1), where "$\prec$" denotes Pareto preference.
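To make the Pareto preference "$\prec$" in Definition 2.2 concrete, the following is a minimal sketch (in Python; the paper's own implementation is in C#) of the standard dominance test for minimization. The function name `dominates` is ours.

```python
def dominates(fa, fb):
    """Return True if objective vector fa Pareto-dominates fb (minimization):
    fa is no worse than fb in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(fa, fb)) and \
           any(a < b for a, b in zip(fa, fb))

# For example, (1.0, 2.0) dominates (1.0, 3.0), while (1.0, 2.0) and
# (0.5, 3.0) are mutually nondominated.
assert dominates((1.0, 2.0), (1.0, 3.0))
assert not dominates((1.0, 2.0), (0.5, 3.0))
assert not dominates((0.5, 3.0), (1.0, 2.0))
```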
For problem (2.1), note that a solution $(x^{*}, y^{*})$ is feasible for the upper level problem if and only if $y^{*}$ is a Pareto optimal solution to the lower level problem with $x = x^{*}$. In practice, the approximate Pareto optimal solutions of the lower level problem are often fed back to the upper level problem as the optimal response, and this point of view is usually accepted. Based on this fact, the EQPSO algorithm may have great potential for solving BLMPPs. On the other hand, unlike the traditional point-by-point approaches mentioned in Section 1, the EQPSO algorithm uses a group of points in its operation; thus the EQPSO can be developed as a new way of solving BLMPPs. The algorithm based on the EQPSO for solving (2.1) is presented next.

## 3. The Algorithm

### 3.1. The EQPSO

The quantum behaved particle swarm optimization (QPSO) is an integration of PSO and quantum computing theory developed in [35–38]. Compared with PSO, it needs no velocity vectors for the particles and has fewer parameters to adjust. Moreover, its global convergence can be guaranteed [39]. Owing to its global convergence and relative simplicity, it has been found quite successful in a wide variety of optimization tasks. For example, a wide range of continuous optimization problems [40–45] have been solved by QPSO, and the experimental results show that QPSO works better than standard PSO. Some improved QPSO algorithms can be found in [46–48]. In this paper, the EQPSO algorithm is proposed, in which an elite strategy is applied to the global best particle to prevent premature convergence of the swarm; this gives the proposed algorithm good performance on high-dimension BLMPPs. The EQPSO follows the same design principle as the QPSO except for the selection criterion of the global best particle, so the global convergence proof of the EQPSO can be found in [39]. In the EQPSO, the particles move according to the following iterative equations:

(3.1) $z_{t+1} = p_t - \alpha_t \left| mBest_t - z_t \right| \ln(1/u)$ if $k \ge 0.5$, and $z_{t+1} = p_t + \alpha_t \left| mBest_t - z_t \right| \ln(1/u)$ if $k < 0.5$,

where

(3.2) $p_t = \varphi \cdot ppBest_t + (1-\varphi) \cdot pgBest_t$, $\quad mBest_t = \frac{1}{N} \sum_{i=1}^{N} ppBest_i^{t}$, $\quad \alpha_t = m - (m-n)\,t/T$, $\quad pgBest \in \operatorname{rand}(A_t)$,

where $z$ denotes the particle's position and $mBest$ denotes the mean of all the particles' personal best positions. The quantities $k$, $u$, and $\varphi$ are random numbers distributed uniformly on (0,1). $\alpha_t$ is the expansion-contraction coefficient; in general $m=1$ and $n=0.5$, $t$ is the current iteration number, and $T$ is the maximum number of iterations. $ppBest$ and $pgBest$ are the particle's personal best position and the global best position, respectively. $A_t$ is the elite set, which is introduced below (see Algorithm, Step 3).
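As an illustration, here is a minimal Python sketch (ours, not the authors' implementation) of one position update for a single particle according to (3.1) and (3.2). Following the usual QPSO convention, the magnitude $|mBest_t - z_t|$ is used, and the global best is assumed to have been drawn at random from the elite set $A_t$.

```python
import math
import random

def qpso_update(z, p_best, g_best, m_best, t, T, m=1.0, n=0.5):
    """One QPSO/EQPSO position update per (3.1)-(3.2).

    z, p_best, m_best: position vectors (lists of floats) for one particle;
    g_best: a global best position drawn at random from the elite set A_t.
    """
    alpha = m - (m - n) * t / T               # expansion-contraction coefficient
    z_new = []
    for zi, pb, gb, mb in zip(z, p_best, g_best, m_best):
        phi, k = random.random(), random.random()
        u = 1.0 - random.random()             # u in (0, 1], keeps ln(1/u) finite
        p = phi * pb + (1.0 - phi) * gb       # local attractor p_t, eq. (3.2)
        step = alpha * abs(mb - zi) * math.log(1.0 / u)
        z_new.append(p - step if k >= 0.5 else p + step)
    return z_new
```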
### 3.2. The Algorithm for Solving BLMPP

The process of the proposed algorithm for solving the BLMPP is an interactive coevolutionary process. We first initialize the population and then solve the upper level and lower level multiobjective optimization problems interactively using the EQPSO. In each iteration, a set of approximate Pareto optimal solutions for problem (2.1) is obtained by the elite strategy adopted from Deb et al. [49]. This interactive procedure is repeated until the accurate Pareto optimal solutions of problem (2.1) are found. The details of the proposed algorithm are given as follows.

Algorithm

Step 1. Initializing.
Step 1.1. Initialize the population P0 with Nu particles, composed of ns = Nu/Nl subswarms of size Nl each. The particle positions of the kth (k=1,2,…,ns) subswarm are denoted zj = (xj, yj) (j=1,2,…,Nl), and each zj is sampled randomly in the feasible space.
Step 1.2. Initialize the external loop counter t := 0.
Step 2. For the kth subswarm (k=1,2,…,ns), each particle is assigned a nondomination rank NDl and a crowding value CDl in f space. Then all resulting subswarms are combined into one population, named Pt. Afterwards, each particle is assigned a nondomination rank NDu and a crowding value CDu in F space.
Step 3. The nondominated particles assigned both NDu=1 and NDl=1 in Pt are saved in the elite set At.
Step 4. For the kth subswarm (k=1,2,…,ns), update the lower level decision variables.
Step 4.1. Initialize the lower level loop counter tl := 0.
Step 4.2. Update the jth (j=1,2,…,Nl) particle's position with xj fixed, according to (3.1) and (3.2).
Step 4.3. tl := tl + 1.
Step 4.4. If tl ≥ Tl, go to Step 4.5; otherwise, go to Step 4.2.
Step 4.5. Each particle of the kth subswarm is reassigned a nondomination rank NDl and a crowding value CDl in f space. Then all resulting subswarms are combined into one population, renamed Qt. Afterwards, each particle is reassigned a nondomination rank NDu and a crowding value CDu in F space.
Step 5. Combine populations Pt and Qt to form Rt. The combined population Rt is reassigned nondomination ranks NDu, and the particles within an identical nondomination rank are assigned a crowding distance value CDu in F space.
Step 6. Choose half of the particles from Rt. The particles of rank NDu=1 are considered first. Among them, the particles with NDl=1 are noted one by one in order of decreasing crowding distance CDu, and for each such particle the corresponding subswarm from its source population (either Pt or Qt) is copied into an intermediate population St. If a subswarm has already been copied into St and a later particle from the same subswarm is found to have NDu=NDl=1, the subswarm is not copied again. When all particles with NDu=1 have been considered, the same procedure is continued with NDu=2 and so on, until exactly ns subswarms have been copied into St.
Step 7. Update the elite set At: the nondominated particles assigned both NDu=1 and NDl=1 in St are saved in At.
Step 8. Update the upper level decision variables in St.
Step 8.1. Initialize the upper level loop counter tu := 0.
Step 8.2. Update the ith (i=1,2,…,Nu) particle's position with yi fixed, according to (3.1) and (3.2).
Step 8.3. tu := tu + 1.
Step 8.4. If tu ≥ Tu, go to Step 8.5; otherwise, go to Step 8.2.
Step 8.5. Every member is then assigned a nondomination rank NDu and a crowding distance value CDu in F space.
Step 9. t := t + 1.
Step 10. If t ≥ T, output the elite set At; otherwise, go to Step 2.

In Steps 4 and 8, the global best position is chosen at random from the elite set At. The criterion for choosing the personal best position is as follows: if the current position is dominated by the previous position, the previous position is kept; otherwise, the current position replaces the previous one; if neither dominates the other, one of them is selected at random. A relatively simple scheme is used to handle constraints. Whenever two individuals are compared, their constraints are checked. If both are feasible, nondomination sorting is applied directly to decide which one is selected. If one is feasible and the other is infeasible, the feasible one dominates. If both are infeasible, the one with the lowest amount of constraint violation dominates the other. The notations used in the proposed algorithm are detailed in Table 1.

Table 1 The notations of the algorithm.

| Notation | Meaning |
| --- | --- |
| xi | The ith particle's position of the upper level problem |
| yj | The jth particle's position of the lower level problem |
| zj | The jth particle's position of the BLMPP |
| Nu | The population size of the upper level problem |
| Nl | The subswarm size of the lower level problem |
| t | Current iteration number for the overall problem |
| T | The predefined maximum iteration number for t |
| tu | Current iteration number for the upper level problem |
| tl | Current iteration number for the lower level problem |
| Tu | The predefined maximum iteration number for tu |
| Tl | The predefined maximum iteration number for tl |
| NDu | Nondomination sorting rank of the upper level problem |
| CDu | Crowding distance value of the upper level problem |
| NDl | Nondomination sorting rank of the lower level problem |
| CDl | Crowding distance value of the lower level problem |
| Pt | The tth iteration population |
| Qt | The offspring of Pt |
| St | Intermediate population |
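The pairwise comparison rule just described can be summarized in a short sketch (ours, not the authors' code). It reuses the `dominates` helper from Section 2 and assumes each individual carries a scalar total constraint violation `cv`, with 0.0 meaning feasible; the random tie-break mirrors the personal-best rule above.

```python
import random

def select(a, b):
    """Pick the winner of a pairwise comparison under the constraint-handling
    rule above. a and b are dicts with keys 'F' (objective tuple, minimized)
    and 'cv' (total constraint violation; 0.0 means feasible)."""
    if a["cv"] == 0.0 and b["cv"] == 0.0:   # both feasible: nondomination decides
        if dominates(a["F"], b["F"]):
            return a
        if dominates(b["F"], a["F"]):
            return b
        return random.choice([a, b])        # mutually nondominated: pick one
    if a["cv"] == 0.0:                      # feasible dominates infeasible
        return a
    if b["cv"] == 0.0:
        return b
    return a if a["cv"] < b["cv"] else b    # smaller violation dominates
```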
## 4. Numerical Experiment

In this section, eight examples, organized into three groups, are considered to illustrate the feasibility of the proposed algorithm for problem (2.1). In order to evaluate the closeness between the obtained Pareto optimal front and the theoretical Pareto optimal front, as well as the diversity of the obtained Pareto optimal solutions along the theoretical Pareto optimal front, the following evaluation metrics are adopted.

### 4.1. Performance Evaluation Metrics

(a) Generational Distance (GD): this metric, used by Deb [50], is employed in this paper to evaluate the closeness between the obtained Pareto optimal front and the theoretical Pareto optimal front. The GD metric denotes the average distance between the obtained and the theoretical Pareto optimal fronts:

(4.1) $GD = \dfrac{\sqrt{\sum_{i=1}^{n} d_i^2}}{n}$,

where $n$ is the number of Pareto optimal solutions obtained by the proposed algorithm and $d_i$ is the Euclidean distance between the $i$th obtained Pareto optimal solution and the nearest member of the theoretical Pareto optimal set.

(b) Spacing (SP): this metric evaluates the diversity of the obtained Pareto optimal solutions by comparing the uniformity of their distribution and the deviation of the solutions, as described by Deb [50]:

(4.2) $SP = \dfrac{\sum_{m=1}^{M} d_m^{e} + \sum_{i=1}^{n} (\bar{d} - d_i)^2}{\sum_{m=1}^{M} d_m^{e} + n\bar{d}}$,

where $d_i = \min_j \bigl( |F_1^{i}(x,y) - F_1^{j}(x,y)| + |F_2^{i}(x,y) - F_2^{j}(x,y)| \bigr)$ for $i,j = 1,2,\ldots,n$, $\bar{d}$ is the mean of all $d_i$, $d_m^{e}$ is the Euclidean distance between the extreme solutions of the obtained and the theoretical Pareto optimal solution sets on the $m$th objective, $M$ is the number of upper level objective functions, and $n$ is the number of solutions obtained by the proposed algorithm.

All results presented in this paper have been obtained on a personal computer (CPU: AMD Phenom II X6 1055T, 2.80 GHz; RAM: 3.25 GB) using a C# implementation of the proposed algorithm.
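As a concrete illustration of (4.1), the following minimal Python sketch (ours; the authors' implementation is in C#) computes GD for an obtained front against a sampled theoretical front. It assumes, as is usual for this metric, that the square root is taken over the summed squared nearest-neighbor distances before dividing by $n$.

```python
import math

def generational_distance(obtained, reference):
    """GD per (4.1): each obtained point contributes its Euclidean distance
    to the nearest reference (theoretical) point; the root of the summed
    squares is divided by the number of obtained points."""
    sq_sum = sum(min(math.dist(p, q) for q in reference) ** 2 for p in obtained)
    return math.sqrt(sq_sum) / len(obtained)

# Example: a two-point front measured against a densely sampled reference.
front = [(0.0, 1.0), (1.0, 0.0)]
ref = [(t / 100.0, 1.0 - t / 100.0) for t in range(101)]
print(generational_distance(front, ref))  # 0.0: both points lie on the reference
```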
### 4.2. Numerical Examples

#### 4.2.1. Low Dimension BLMPPs

Example 4.1. Example 4.1 is taken from [22]. Here $x \in \mathbb{R}^1$, $y \in \mathbb{R}^2$. In this example, the population sizes and iteration numbers are set as follows: Nu=200, Tu=200, Nl=40, Tl=40, and T=40:

(4.3) $\min_x F(x,y) = (y_1 - x,\ y_2)$ s.t. $G_1(y) = 1 + y_1 + y_2 \ge 0$; $\min_y f(x,y) = (y_1,\ y_2)$ s.t. $g_1(x,y) = x^2 - y_1^2 - y_2^2 \ge 0$, $-1 \le y_1, y_2 \le 1$, $0 \le x \le 1$.

Figure 1 shows the Pareto front obtained for this example by the proposed algorithm. From Figure 1, it can be seen that the obtained Pareto front is very close to the theoretical Pareto optimal front; the average distance between them is 0.00024, that is, GD=0.00024 (see Table 2). Moreover, the low SP value (SP=0.00442, see Table 2) shows that the proposed algorithm is able to obtain a good distribution of solutions over the entire range of the theoretical Pareto optimal front. Figure 2 shows the obtained solutions of this example, which follow the relationship $y_1 = -1 - y_2$, $y_2 = -\frac{1}{2} \pm \frac{1}{4}\sqrt{8x^2 - 4}$, with $x \in (\sqrt{2}/2,\ 1)$. It is also obvious that all obtained solutions lie close to the upper level constraint boundary ($1 + y_1 + y_2 = 0$).

Table 2 Results of the Generational Distance (GD) and Spacing (SP) metrics for Examples 4.1–4.6.

| Example | GD | SP |
| --- | --- | --- |
| 4.1 | 0.00024 | 0.00442 |
| 4.2 | 0.00003 | 0.00169 |
| 4.3 | 0.00027 | 0.00127 |
| 4.4 | 0.00036 | 0.00235 |
| 4.5 | 0.00058 | 0.00364 |
| 4.6 | 0.00039 | 0.00168 |

Figure 1 The obtained Pareto optimal front of Example 4.1.
Figure 2 The obtained solutions of Example 4.1.

Example 4.2. Example 4.2 is taken from [51]. Here $x \in \mathbb{R}^1$, $y \in \mathbb{R}^2$. In this example, the population sizes and iteration numbers are set as follows: Nu=200, Tu=50, Nl=40, Tl=20, and T=40:

(4.4) $\min_x F(x,y) = \bigl(x^2 + (y_1-1)^2 + y_2^2,\ (x-1)^2 + (y_1-1)^2 + y_2^2\bigr)$; $\min_y f(x,y) = \bigl(y_1^2 + y_2^2,\ (y_1-x)^2 + y_2^2\bigr)$; $-1 \le x, y_1, y_2 \le 2$.

Figure 3 shows the Pareto optimal front obtained for this example by the proposed algorithm. From Figure 3, it is obvious that the obtained Pareto optimal front is very close to the theoretical Pareto optimal front; the average distance between them is 0.00003 (see Table 2). Moreover, the obtained Pareto optimal solutions are distributed uniformly over the entire range of the theoretical Pareto optimal front, as reflected by the low SP value (SP=0.00169, see Table 2). Figure 4 shows the obtained Pareto optimal solutions; they follow the relationship $x = y_1$, $y_1 \in [0.5, 1]$, $y_2 = 0$.

Figure 3 The obtained Pareto optimal front of Example 4.2.
Figure 4 The obtained solutions of Example 4.2.
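To show how such a BLMPP instance can be handed to the solver, here is a sketch of Example 4.2's objectives as plain Python callables (our encoding, not the paper's):

```python
# Objective functions of Example 4.2, eq. (4.4); x is a scalar, y = (y1, y2).

def upper_F(x, y):
    """Upper level objectives F(x, y): both components are minimized."""
    y1, y2 = y
    return (x**2 + (y1 - 1.0)**2 + y2**2,
            (x - 1.0)**2 + (y1 - 1.0)**2 + y2**2)

def lower_f(x, y):
    """Lower level objectives f(x, y): both components are minimized."""
    y1, y2 = y
    return (y1**2 + y2**2, (y1 - x)**2 + y2**2)

# A known Pareto optimal point follows x = y1, y1 in [0.5, 1], y2 = 0:
print(upper_F(0.75, (0.75, 0.0)), lower_f(0.75, (0.75, 0.0)))
```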
#### 4.2.2. High Dimension BLMPPs

Example 4.3. Example 4.3 is taken from [28]. Here $x \in \mathbb{R}^{10}$, $y \in \mathbb{R}^{10}$. In this example, the population sizes and iteration numbers are set as follows: Nu=400, Tu=50, Nl=40, Tl=20, and T=60:

(4.5) minxF(x,y)=((1+r-cos(απx1))+∑j=2K(xj-j-12)2+τ∑i=2K(yi-xi)2-γcos(γπx12y1), (1+r-sin(απx1))+∑j=2K(xj-j-12)2+τ∑i=2K(yi-xi)2-γsin(γπx12y1)); minyf(x,y)=(y12+∑i=2K(yi-xi)2+∑i=2K10(1-cos(πk(yi-xi))), ∑i=1K(yi-xi)2+∑i=2K10|1-sin(πk(yi-xi))|) s.t. -K≤yi≤K (i=1,2,…,K); 1≤x1≤4, -K≤xj≤K (j=2,3,…,K); α=1, r=0.1, τ=1, γ=1, K=10.

Example 4.4. Example 4.4 is taken from [28]. Here $x \in \mathbb{R}^{10}$, $y \in \mathbb{R}^{10}$. In this example, the population sizes and iteration numbers are set as follows: Nu=400, Tu=50, Nl=40, Tl=20, and T=80:

(4.6) minxF(x,y)=(v1(x1)+∑j=2Kyj2+10(1-cos(πk)yi)+τ∑i=2K(yi-xi)2-rcos(γπx12y1), v2(x1)+∑j=2Kyj2+10(1-cos(πk)yi)+τ∑i=2K(yi-xi)2-rsin(γπx12y1)); minyf(x,y)=(y12+∑i=2K(yi-xi)2, ∑i=1Ki(yi-xi)2) s.t. -K≤yi≤K (i=1,2,…,K); 0.001≤x1≤4, -K≤xj≤K (j=2,3,…,K); α=1, r=0.25, τ=1, γ=4, K=10,

where

(4.7) v1(x1)={cos(0.2π)x1+sin(0.2π)|0.02sin(5πx1)| for 0≤x1≤1; x1-(1-sin(0.2π)) for x1>1}, v2(x1)={-sin(0.2π)x1+cos(0.2π)|0.02sin(5πx1)| for 0≤x1≤1; 0.1(x1-1)-sin(0.2π) for x1>1}.

Example 4.3 is more difficult than the previous problems (Examples 4.1 and 4.2) because its lower level problem has multimodalities, which makes it difficult for the lower level search to support finding the upper level Pareto optimal front. From Figure 5, it can be seen that the obtained Pareto front is very close to the theoretical Pareto optimal front; the average distance between them is 0.00027, that is, GD=0.00027 (see Table 2). Moreover, the low SP value (SP=0.00127, see Table 2) shows that the proposed algorithm is able to obtain a good distribution of solutions over the entire range of the theoretical Pareto optimal front. Furthermore, two obtained lower level Pareto optimal fronts are shown, for x1=2 and x1=2.5.

Figure 5 The obtained Pareto front of Example 4.3.

Figure 6 shows the Pareto front obtained for Example 4.4 by the proposed algorithm. Here the upper level problem has multimodalities, which makes it difficult for an algorithm to find the upper level Pareto optimal front. From Figure 6, it can be seen that the obtained Pareto front is very close to the theoretical Pareto optimal front; the average distance between them is 0.00036, that is, GD=0.00036 (see Table 2). Moreover, the low SP value (SP=0.00235, see Table 2) shows that the proposed algorithm is able to obtain a good distribution of solutions over the entire range of the theoretical Pareto optimal front. Furthermore, all corresponding lower level Pareto optimal fronts are given.

Figure 6 The obtained Pareto front of Example 4.4.

Example 4.5. Example 4.5 is taken from [28]. Here $x \in \mathbb{R}^{10}$, $y \in \mathbb{R}^{10}$. In this example, the population sizes and iteration numbers are set as follows: Nu=400, Tu=50, Nl=40, Tl=20, and T=60:

(4.8) minxF(x,y)=(x1+∑j=3K(xj-j2)2+τ∑i=3K(yi-xi)2-cos(4tan-1((x2-y2)/(x1-y1))), x2+∑j=3K(xj-j2)2+τ∑i=3K(yi-xi)2-cos(4tan-1((x2-y2)/(x1-y1)))) s.t. G(x)=x2-(1-x12)2≥0; minyf(x,y)=(y1+∑i=3K(yi-xi)2, y2+∑i=3K(yi-xi)2) s.t. g1(x,y)=(y1-x1)2+(y2-x2)2≤r2, R(x1)=(0.1+0.15|sin(2π(x1-0.1))|); -K≤yi≤K (i=1,2,…,K); 0≤x1≤K, -K≤xj≤K (j=2,3,…,K); τ=1, r=0.2, K=10.

Example 4.6. Example 4.6 is taken from [28]. Here $x \in \mathbb{R}^1$, $y \in \mathbb{R}^9$. In this example, the population sizes and iteration numbers are set as follows: Nu=400, Tu=50, Nl=40, Tl=20, and T=40:

(4.9) minxF(x,y)=((1-y1)(1+∑j=2Kyj2)x1, y1(1+∑j=2Kyj2)x1) s.t. G1(x,y)=-(1-y1)x1-12x1y1≤1; minyf(x,y)=((1-y1)(1+∑j=K+1K+Lyj2)x1, y1(1+∑j=K+1K+Lyj2)x1) s.t. 1≤x1≤2, -1≤y1≤1, -(K+L)≤yj≤(K+L) (j=2,3,…,K+L); K=5, L=4.

Figure 7 shows the Pareto front obtained for Example 4.5 by the proposed algorithm. From Figure 7, it can be seen that the obtained Pareto front is very close to the theoretical Pareto optimal front; the average distance between them is 0.00058, that is, GD=0.00058 (see Table 2). Moreover, the low SP value (SP=0.00364, see Table 2) shows that the proposed algorithm is able to obtain a good distribution of solutions over the entire range of the theoretical Pareto optimal front. Furthermore, all obtained lower level Pareto optimal fronts are given. It is also obvious that the Pareto optimal fronts of both the lower and the upper level lie on constraint boundaries, and every lower level Pareto optimal front makes an unequal contribution to the upper level Pareto optimal front. Figure 8 shows the Pareto front obtained for Example 4.6 by the proposed algorithm. From Figure 8, it can be seen that the obtained Pareto front is very close to the theoretical Pareto optimal front; the average distance between them is 0.00039, that is, GD=0.00039 (see Table 2). Moreover, the low SP value (SP=0.00168, see Table 2) shows that the proposed algorithm is able to obtain a good distribution of solutions over the entire range of the theoretical Pareto optimal front. Furthermore, three obtained lower level Pareto optimal fronts are shown, for x1=1, x1=1.5, and x1=2. It can be seen that only one Pareto optimal point from each participating lower level problem qualifies to be on the upper level Pareto optimal front.

Figure 7 The obtained Pareto front of Example 4.5.
Figure 8 The obtained Pareto front of Example 4.6.

#### 4.2.3. The BLMPPs with Unknown Theoretical Pareto Optimal Fronts

Example 4.7. Example 4.7 is taken from [52], in which the theoretical Pareto optimal front is not given. Here $x \in \mathbb{R}^2$, $y \in \mathbb{R}^3$.
In this example, the population sizes and iteration numbers are set as follows: Nu=100, Tu=50, Nl=20, Tl=10, and T=40:

(4.10) maxxF(x,y)=(x1+9x2+10y1+y2+3x3, 9x1+2x2+2y1+7y2+4x3) s.t. G1(x,y)=3x1+9x2+9y1+5y2+3y3≤1039, G2(x,y)=-4x1-x2+3y1-3y2+2y3≤94; minyf(x,y)=(4x1+6x2+7y1+4y2+8y3, 6x1+4x2+8y1+7y2+4y3) s.t. g1(x,y)=3x1-9x2-9y1-4y2≤61, g2(x,y)=5x1+9x2+10y1-y2-2y3≤924, g3(x,y)=3x1-3x2+y2+5y3≤420, x1,x2,y1,y2,y3≥0.

Example 4.8. Example 4.8 is taken from [23]. Here $x \in \mathbb{R}^1$, $y \in \mathbb{R}^2$. In this example, the population sizes and iteration numbers are set as follows: Nu=800, Tu=50, Nl=40, Tl=20, and T=40:

(4.11) minxF(x,y)=(x+y12+y22+sin2(x1+y), cos(y2)(0.1+y)exp(-y1/(0.1+y2))) s.t. G1(x,y)=(x-5)2-(y1-0.5)2-(y2-5)2≤16; minyf(x,y)=(((y1-2)2+(y2-2)2)/4+xy2+(5-x)2/16+sin(y2/10), y12+(y2-6)2-2xy1-(5-x)2/80) s.t. g1(x,y)=y12-y2≤0, g2(x,y)=5y12+y2-10≤0, g3(x,y)=y2-(5-x)/6≤0, 0≤x≤10, 0≤y1,y2≤10.

Figure 9 shows the Pareto optimal front obtained for Example 4.7 by the proposed algorithm. Note that Zhang et al. [52] obtained only a single optimal solution, x=(146.2955, 28.9394) and y=(0, 67.9318, 0), lying at the maximum of F2, using the weighted sum method. In contrast, a set of Pareto optimal solutions is obtained by the proposed algorithm, and the fact that the single optimal solution of [52] is included in the obtained Pareto optimal solutions illustrates the feasibility of the proposed algorithm. Figure 10 shows the final archive solutions of Example 4.8 obtained by the proposed algorithm. For this problem the exact Pareto optimal front is not known, but the Pareto optimal front obtained by the proposed algorithm is similar to that reported in the previous study [23].

Figure 9 The obtained front of Example 4.7.
Figure 10 The obtained front of Example 4.8.
## 5. Conclusion

In this paper, an EQPSO has been presented, in which an elite strategy is applied to the global best particle to prevent the swarm from clustering, enabling particles to escape local optima. The EQPSO algorithm is employed for solving the bilevel multiobjective programming problem (BLMPP) for the first time. In this study, numerical examples are used to explore the feasibility and efficiency of the proposed algorithm. The experimental results indicate that the Pareto front obtained by the proposed algorithm is very close to the theoretical Pareto optimal front, and that the solutions are distributed uniformly over the entire range of the theoretical Pareto optimal front. The proposed algorithm is simple and easy to implement, which makes it an appealing method for further study of the BLMPP.

---

*Source: 102482-2012-11-29.xml*
# Comparison of Different Methods for the Calculation of the Microvascular Flow Index

**Authors:** Mario O. Pozo; Vanina S. Kanoore Edul; Can Ince; Arnaldo Dubin
**Journal:** Critical Care Research and Practice (2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/102483

---

## Abstract

The microvascular flow index (MFI) is commonly used to semiquantitatively characterize the velocity of microcirculatory perfusion as absent (0), intermittent (1), sluggish (2), or normal (3). There are three approaches to compute the MFI: (1) the average of the predominant flow in each of the four quadrants (MFI by quadrants), (2) direct assessment during bedside video acquisition (MFI point of care), and (3) the mean value of the MFIs determined in each individual vessel (MFI vessel by vessel). We hypothesized that the agreement between the MFIs is poor and that the MFI vessel by vessel better reflects microvascular perfusion. For this purpose, we analyzed 100 videos from septic patients. In 25 of them, red blood cell (RBC) velocity was also measured. There were wide 95% limits of agreement between the MFI by quadrants and the MFI point of care (1.46), between the MFI by quadrants and the MFI vessel by vessel (2.85), and between the MFI point of care and the MFI vessel by vessel (2.56). The MFIs significantly correlated with the RBC velocity and with the fraction of perfused small vessels, but the MFI vessel by vessel showed the best R². Although the different methods for the calculation of the MFI reflect microvascular perfusion, they are not interchangeable, and the MFI vessel by vessel might be better.

---

## Body

## 1. Introduction

The patency of microvascular perfusion is essential for the preservation of aerobic metabolism and organ functions. Although the microcirculation is a key component of the cardiovascular system, its behavior may differ from that of the systemic circulation [1]. Despite the continuous developments in the monitoring of critically ill patients, the evaluation of the microcirculation remained an elusive issue for many years. The introduction of the orthogonal polarization spectral (OPS) [2] and the sidestream dark field (SDF) [3] imaging devices has recently allowed direct visualization of the microcirculation at the bedside. Thereafter, different researchers described that septic patients show sublingual microvascular alterations such as decreased perfusion and increased heterogeneity [3–5]. These disorders were later found to be associated with the development of multiple organ failure and death [6]. Eventually, the microcirculation came to be used as a therapeutic target [7–9].

Some controversies, however, still remain about the proper evaluation of the microcirculation [10]. The magnitude of microvascular perfusion is commonly evaluated by means of the microvascular flow index (MFI) [11]. The MFI is based on determination of the predominant type of flow, characterized as absent (0), intermittent (1), sluggish (2), or normal (3). The MFI has subsequently been computed in three different ways. Originally, Boerma et al. calculated the MFI as the average of the predominant flow in each of the four quadrants (MFI by quadrants) [11]. Then Arnold et al. reported that determination of the MFI during bedside video acquisition (MFI point of care) gave good agreement with the MFI by quadrants [12]. Finally, Dubin et al.
used the mean value of the MFI determined in each individual vessel (MFIvessel by vessel) [1, 8, 9]. This analysis is time consuming but tightly correlated with the actual red blood cell (RBC) velocity measured with software in both experimental and clinical conditions [1, 13, 14]. Our hypothesis was that the agreement between the different methods to determine the MFI is poor and that the MFIvessel by vessel better reflects the microvascular perfusion than the other approaches.

## 2. Materials and Methods

This was a prospective observational study performed in a teaching intensive care unit. It was approved by the Institutional Review Board. Informed consent was obtained from the next of kin for all patients admitted to the study. One hundred videos were obtained by a single operator (AD) from 25 patients with septic shock in different clinical and hemodynamic conditions. Their clinical and epidemiologic characteristics are shown in Table 1. All the patients were mechanically ventilated and received infusions of midazolam and fentanyl. Corticosteroids, propofol, and activated protein C were never used.

Table 1 Clinical and epidemiologic characteristics of the patients.

| Characteristic | Value |
| --- | --- |
| Age, years | 73 ± 10 |
| Gender male, n (%) | 14 (56) |
| SOFA score | 10 ± 3 |
| APACHE II score | 25 ± 6 |
| Actual ICU mortality, % | 48 |
| Actual 30-day mortality, % | 48 |
| Actual hospital mortality, % | 48 |
| APACHE II predicted mortality, % | 49 ± 20 |
| Norepinephrine (μg/kg/min) | 0.51 ± 0.41 |
| Intra-abdominal source, n (%) | 8 (32) |
| Respiratory source, n (%) | 8 (32) |
| Urinary source, n (%) | 6 (24) |
| Intravascular source, n (%) | 3 (12) |

Definition of abbreviations: SOFA, sepsis-related organ failure assessment; APACHE, acute physiology and chronic health evaluation. Data are expressed as mean ± standard deviation or number (percentage).

The microcirculatory network was evaluated in the sublingual mucosa by means of an SDF imaging device (Microscan, MicroVision Medical, Amsterdam, Netherlands) [3]. Different precautions were taken and steps followed to obtain images of adequate quality and to ensure good reproducibility. Video acquisition and image analyses were performed by well-trained researchers. After gentle removal of saliva by isotonic-saline-drenched gauze, steady images of at least 20 seconds were obtained while avoiding pressure artifacts, through the use of a portable computer and an analog/digital video converter (ADVC110, Canopus Co, San Jose, CA, USA). Video clips were stored as AVI files to allow computerized frame-by-frame image analysis. Adequate focus and contrast adjustment were verified, and images of poor quality were discarded. The entire sequence was used to characterize the semiquantitative characteristics of microvascular flow and particularly the presence of stopped or intermittent flow. MFI was randomly and blindly determined in three different ways by a single researcher (MOP). First, a semiquantitative analysis by eye was performed in individual vessels. It distinguishes between no flow (0), intermittent flow (1), sluggish flow (2), and continuous flow (3) [11]. A value was assigned to each individual vessel. The overall score of each video is the average of the individual values (MFIvessel by vessel). In addition, MFIby quadrants was calculated as the mean value of the predominant type of flow in each of the four quadrants.
Finally, as an approximation to the real-time assessment at the bedside [12], MFIpoint of care was determined during a 20-second observation of a video sequence. We also calculated the proportion of perfused small vessels as the number of vessels with flow values of 2 and 3 divided by the total number of vessels. Quantitative RBC velocity of single vessels was measured through the use of space-time diagrams, which were generated by means of analysis software developed for the SDF video images [15]. This method of velocity determination consists of making diagrams of changes in grey-level values (e.g., flowing red blood cells) along the center line of a vessel segment being analyzed, as a function of time. In such a diagram of sequential images, the y-axis represents the distance traveled along the vessel segment and the x-axis represents time. This portrayal of the kinetics of sequential images generates slanted dark lines representing the movement of the red blood cells, the slopes of which give the red blood cell velocity. This value is calculated as v = Δs/Δt, where Δs is the longitudinal displacement along the vessel centerline in the time interval Δt. We traced three center lines manually in the space-time diagram, and the average orientation was used to calculate the RBC velocity. The RBC velocity of each video was the average of all RBC velocities measured in single vessels in that video. The analysis was restricted to small vessels (i.e., vessels with a diameter <20 μm).

### 2.1. Statistical Analysis

The agreement between the three methods for the determination of MFI was tested using the Bland-Altman method [16]. In addition, linear regression analysis was performed between MFIs and the fraction of perfused small vessels and between MFIs and RBC velocity.

## 3. Results

For the determination of MFIvessel by vessel, 37 ± 9 small vessels per video were assessed. For the calculation of MFIby quadrants, the four quadrants were analyzed in all videos. The red blood cell velocity was measured in 20 ± 8 small vessels per video. Figure 1 shows the wide 95% limits of agreement among the different methods for determining MFI. The bias ± precision for MFIpoint of care and MFIby quadrants (0.03 ± 0.37) was lower than for the MFIpoint of care and MFIvessel by vessel (0.24 ± 0.65, P=0.005) or MFIby quadrants and MFIvessel by vessel (0.21 ± 0.73, P=0.05) comparisons.

Figure 1 Bland and Altman analysis for the different methods used for the calculation of the microvascular flow index (MFI). Panel (a): bedside point-of-care MFI (MFIpoint of care) and MFI determined by quadrants (MFIby quadrants). Panel (b): MFIpoint of care and MFI determined by vessel-by-vessel analysis (MFIvessel by vessel). Panel (c): MFIby quadrants and MFIvessel by vessel. Lines are bias and 95% limits of agreement.

RBC velocity significantly correlated with the three MFIs (Figure 2).
Although the MFIvessel by vessel method showed the highest R², the difference did not reach statistical significance.

Figure 2 Correlations of the red blood cell velocity with the microvascular flow index determined by vessel-by-vessel analysis (MFIvessel by vessel) (panel (a)), the microvascular flow index determined by quadrants (MFIby quadrants) (panel (b)), and the bedside point-of-care microvascular flow index (MFIpoint of care) (panel (c)).

The proportion of perfused small vessels exhibited significant correlations with the three methods used in the calculation of MFI (Figure 3). The MFIvessel by vessel showed the highest coefficient of determination, whose value was statistically higher than the other two (P<0.0001 for both).

Figure 3 Correlations of the proportion of perfused small vessels with the microvascular flow index determined by vessel-by-vessel analysis (MFIvessel by vessel) (panel (a)), the microvascular flow index determined by quadrants (MFIby quadrants) (panel (b)), and the bedside point-of-care microvascular flow index (MFIpoint of care) (panel (c)).

## 4. Discussion

Our results showed that each method used for the calculation of MFI was significantly correlated with the actual RBC velocity. Nevertheless, the agreement among the different MFIs was poor. The MFIvessel by vessel was the approach that had the best correlations with the RBC velocity and the proportion of perfused small vessels. According to a recent consensus conference, the evaluation of the microcirculation should take into account the three different characteristics of density, perfusion, and flow heterogeneity. The question of which parameters are more appropriate to evaluate microcirculatory perfusion and density is still controversial. In particular, the discussion has mainly been focused on the advantages versus the limitations of either the proportion of perfused vessels or the MFI [10]. Since the proportion of perfused vessels only distinguishes continuous from intermittent/stopped flow, the presence of a continuous but slow flow could be missed. The MFI, in turn, does not provide information about functional density. Theoretically, this index could be misleading if flow improves in perfused vessels while the total number of perfused vessels decreases. Moreover, the MFI is a categorical variable, so a change from 0 to 1 may have a different meaning in terms of tissue perfusion than a change from 2 to 3. Beyond these considerations, we found strong correlations between the proportion of perfused small vessels and the different approaches to MFI. The correlation with MFIvessel by vessel, however, was the strongest and also exhibited very narrow 95% confidence intervals. These findings suggest a similar performance of both the proportion of perfused small vessels and the MFI in the characterization of microcirculatory perfusion, especially when the MFIvessel by vessel is used. We found statistically significant correlations between RBC velocity and the three measurements of MFI. Although the correlation with MFIvessel by vessel showed the best coefficient of determination, the difference between that R² value and the other two did not reach statistical significance. Our study was probably underpowered to show this difference. The agreement between the different approaches to the MFI was poor. We found large 95% limits of agreement between them, whose range precludes any interchangeability.
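The Bland-Altman quantities discussed here (bias and 95% limits of agreement, computed as bias ± 1.96 standard deviations of the paired differences [16]) are straightforward to reproduce from paired scores. The sketch below is only an illustration with made-up numbers and our own variable names, not the study's code or data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurements.

    Following Bland and Altman [16]: the bias is the mean difference and the
    95% limits are bias +/- 1.96 standard deviations of the differences.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired MFI scores for two methods (for illustration only).
mfi_point_of_care = [2.5, 3.0, 2.0, 1.5, 2.5, 3.0]
mfi_by_quadrants  = [2.25, 3.0, 2.25, 1.75, 2.5, 2.75]
bias, (lo, hi) = bland_altman(mfi_point_of_care, mfi_by_quadrants)
print(f"bias = {bias:.2f}, 95% limits of agreement = ({lo:.2f}, {hi:.2f})")
```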
The 95% limits of agreement between MFIpoint of care and MFIby quadrants were lower than those found between the other MFIs, although still wide. Arnold et al. reported a similar bias ± precision for this Bland and Altman analysis (−0.031 ± 0.198) [12]. Nevertheless, they concluded that the agreement was good. We found positive biases with MFIpoint of care versus MFIvessel by vessel and with MFIby quadrants versus MFIvessel by vessel, meaning that MFIpoint of care and MFIby quadrants overestimate MFIvessel by vessel. These biases could be anticipated since the first two methods use the predominant type of flow, either in the whole videomicroscopic area or in the quadrants. Accordingly, a high but not predominant proportion of small vessels with stopped or intermittent flow could be left unconsidered in the MFIpoint of care and MFIby quadrants. In contrast, in the MFIvessel by vessel, every vessel score is used in the final computation. For example, if 30% of the small vessels have stopped flow and 70% normal blood flow, the MFIvessel by vessel will be 2.1, while with the other two methods the predominant flow, and hence the score, will be 3. Although the methods are not interchangeable and MFIvessel by vessel probably better reflects the velocity of the perfusion, MFIby quadrants and MFIpoint of care were also significantly correlated with the proportion of perfused vessels and the RBC velocity. This study has certain limitations. First, the MFIpoint of care used in this study was only a simulation of that used in the study of Arnold et al. [12]. We performed the MFIpoint of care during a 20-second view of the video sequence, not during a real video acquisition. In addition, the strong correlation between MFIvessel by vessel and the proportion of perfused vessels could be partially explained by mathematical coupling. This problem can develop when two parameters calculated from a shared variable are subsequently correlated. If there is an error in the determination of the shared variable, it can propagate into the calculation of those parameters. The resulting correlation might then not reflect a real phenomenon but rather this methodological error. Mathematical coupling, however, produces artifactual relationships only when there is a significant error in the measurement of the common variable. Another limitation is that the number of analyzed videos, especially those in which the RBC velocity was measured, was limited. Finally, we correlated the MFIs with other parameters of perfusion, such as the proportion of perfused vessels and the RBC velocity, but not with an actual measurement of microvascular flow. In conclusion, although the different methods for the calculation of MFI reflect the magnitude of microvascular perfusion, they are not interchangeable. Even though the MFIvessel by vessel is time consuming, this method could arguably track microcirculatory perfusion more precisely, as suggested by its stronger correlations with other parameters of microvascular perfusion. Larger studies are needed to determine whether these findings also imply advantages as an outcome predictor.
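The scoring arithmetic behind the worked example above is easy to verify. The following sketch (our own illustrative helper functions and input values, not the authors' analysis software) computes the MFIvessel by vessel, the proportion of perfused small vessels, and an RBC velocity from a space-time displacement:

```python
import numpy as np

def mfi_vessel_by_vessel(scores):
    """Mean of the per-vessel flow scores: 0 = absent, 1 = intermittent,
    2 = sluggish, 3 = normal/continuous."""
    return float(np.mean(scores))

def proportion_perfused(scores):
    """Fraction of vessels with flow values of 2 or 3, as defined in the Methods."""
    return float(np.mean(np.asarray(scores) >= 2))

def rbc_velocity(delta_s_um, delta_t_s):
    """RBC velocity v = delta_s / delta_t from the slope of a space-time diagram."""
    return delta_s_um / delta_t_s

# The worked example from the Discussion: 30% of vessels with stopped flow
# and 70% with normal flow.
scores = [0] * 30 + [3] * 70
print(mfi_vessel_by_vessel(scores))  # 2.1, whereas the predominant-flow methods score 3
print(proportion_perfused(scores))   # 0.7
print(rbc_velocity(50.0, 0.04))      # 1250 micrometers per second (made-up numbers)
```

---

*Source: 102483-2012-04-23.xml*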
# Rapidly Developing Yeast Microcolonies Differentiate in a Similar Way to Aging Giant Colonies

**Authors:** Libuše Váchová; Ladislava Hatáková; Michal Čáp; Michaela Pokorná; Zdena Palková
**Journal:** Oxidative Medicine and Cellular Longevity (2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/102485

---

## Abstract

During their development and aging on solid substrates, yeast giant colonies produce ammonia, which acts as a quorum sensing molecule. Ammonia production is connected with alkalization of the surrounding medium and with extensive reprogramming of cell metabolism. In addition, ammonia signaling is important for both horizontal (colony centre versus colony margin) and vertical (upper versus lower cell layers) colony differentiation. The centre of an aging differentiated giant colony is thus composed of two major cell subpopulations: the subpopulation of long-living, metabolically active, and stress-resistant cells that form the upper layers of the colony and the subpopulation of stress-sensitive starving cells in the colony interior. Here, we show that microcolonies originating from one cell pass through developmental phases similar to those of giant colonies. Microcolony differentiation is linked to ammonia signaling, and cells similar to the upper and lower cells of aged giant colonies are formed even in relatively young microcolonies. A comparison of the properties of these cells revealed a number of features that are similar in microcolonies and giant colonies as well as a few that are only typical of chronologically aged giant colonies. These findings show that colony age per se is not crucial for colony differentiation.

---

## Body

## 1. Introduction

When developing on solid media or in nonshaken liquid environments, yeast cells can organize into structured and differentiated multicellular communities in which individual cells gain specific properties and can fulfill specific roles. Colonies, stalks, biofilms, and flors on liquid surfaces are examples of such organized communities [1–11]. Colonies growing on solid agar medium usually originate either from individual cells (microcolonies) or from a cell suspension spotted onto the agar (giant colonies) [12–14]. The morphology and internal architecture of both microcolonies and giant colonies depend on the yeast species or even the strain that forms the colony, the cultivation conditions (e.g., nutrient sources), and the developmental phase (i.e., the age of the colony). Thus, for example, natural strains of Saccharomyces cerevisiae form structured biofilm colonies [15, 16] that to some extent resemble the colonies formed by pathogenic yeasts of the Candida or Cryptococcus species [7]. These structured colonies exhibit features (such as the presence of multidrug resistance transporters and an extracellular matrix) that are important for the formation, development, and survival of natural yeast biofilms [17]. The internal architecture of these structured colonies differs strikingly from the architecture of the smooth colonies that are formed by laboratory strains of S. cerevisiae. As we have shown previously, giant colonies of S. cerevisiae laboratory strains grown on solid complex respiratory medium pass through distinct developmental phases that can be detected by monitoring the pH changes of the medium, which shifts from acidic to nearly alkaline and vice versa [13].
The alkali phase of colony development is accompanied by the production of volatile ammonia that functions as a signal important for colony metabolic reprogramming and long-term survival [13, 18–20]. Such metabolic reprogramming appears to be more important for colony survival than some mechanisms eliminating stress factors, such as stress defense enzymes [21]. We have demonstrated that ammonia-related changes are important for diversification between the cells in the center and margin of a colony [20–22]. We have also recently shown that ammonia signaling and related metabolic reprogramming are involved in the diversification of the cells of the colony and the formation of cells with specialized functions precisely localized within the colony [23, 24]. Thus, during the switch of giant colonies to the alkali phase, both horizontal and vertical differentiation occur, where central and margin cells behave differently, as do cells located in the upper and lower regions of the colony center. Detailed analysis of the central colony region revealed two major cell subpopulations located in the upper (U cells) and lower (L cells) colony areas that differ in their morphology, physiology, and gene expression. U cells are large stress-resistant cells with a longevity phenotype, while L cells are smaller, more sensitive to various stresses (such as heat shock and ethanol treatment), and lose viability over time. Both cell types differ significantly in their gene expression, as shown by a transcriptomic comparison of U and L cells isolated from 15- and 20-day-old colonies [23]. According to these transcriptomic data, U cells seem to be metabolically active cells with induced amino acid metabolism, glycolysis, and some other pathways such as the pentose-phosphate shunt. U cells also express a large group of genes coding for ribosomal and some other proteins of the translational machinery. These genes are usually controlled by the TOR pathway under nutrient-rich conditions. Some other expression characteristics of U cells, however, indicate that some pathways usually active under conditions of nutrient limitation are also induced in U cells and affect their physiology [23]. For example, a large group of amino acid biosynthetic genes is controlled by the transcription factor Gcn4p [25]. In contrast to U cells, L cells behave like stressed cells: they have low metabolic activity and seem to activate some degradative mechanisms that can contribute to the release of compounds that can be exploited by U cells. An important question is to what extent chronological aging of the whole colony population on the one hand and active signaling (which includes the action of ammonia and the related metabolic reprogramming as well as other, not yet identified, signaling and regulatory processes) on the other hand contribute to S. cerevisiae colony development, differentiation, and long-term survival. As was mentioned above, giant colonies activate ammonia signaling and form U and L cells between days 7 and 10 of colony development, when most of the colonial cells are in the stationary (or slow growth) phase. That is, cells differentiating into U and L cells are relatively old, and most of them have already persisted in nondividing form for several days. Both of the above-mentioned processes (chronological aging and signal-related metabolic reprogramming) therefore run in parallel in giant colonies, and thus both could contribute to colony differentiation and to U and L cell properties.
The switch to the alkali phase and ammonia signaling usually start much earlier in microcolonies than in giant colonies, and the central differentiated cells are therefore much younger (i.e., less chronologically aged) in microcolonies than in giant colonies. However, the major expression changes that accompany medium alkalization and ammonia production in microcolonies resemble the changes identified in giant colonies [18]. Similarly, Ato1p, a putative ammonia exporter, is produced in the margin and the upper central cell layer in both giant colonies and microcolonies when they begin to alkalize the medium [22, 24]. Here, we examined the main features of the central parts of differentiated microcolonies and compared these features to those described in giant colonies. Through this analysis, we showed that prominent characteristics of the central upper cells of yeast colonies are not related to colony aging but depend on active colony reprogramming. On the other hand, some other features, in particular the stress resistance of cells in the colony interior (L cells), differ significantly in younger microcolonies compared with giant colonies, being related to colony aging.

## 2. Results and Discussion

### 2.1. Microcolonies Pass through the Same Developmental Phases as Giant Colonies

Similarly to giant colonies, microcolonies of BY4742 growing on GMA solid plates pass through the developmental phases characterized by changes in external pH and ammonia production [24, 26]. Microcolonies thus pass through the first acidic, alkali, and second acidic developmental phases (Figure 1(a)), where the alkali phase is accompanied by ammonia production. In contrast to giant colonies, in which the timing of the transition to the ammonia-producing period is typically standardized by the inoculation of six giant colonies on the plate [18], the timing of the acid-to-alkali microcolony transition is dependent on the density of the plated microcolonies. More densely plated microcolonies switch to the ammonia-producing period earlier than microcolonies growing at a lower density on the plate. Like giant colonies, which synchronize ammonia production and developmental phases [27], microcolonies on the same plate also synchronize themselves via the ammonia that starts to be produced by the most densely plated microcolonies. For the experiments described in the following sections, we used a standard plating of approximately 5000 microcolonies per plate, that is, the density that results in the microcolony transition to the alkali phase between days 3 and 4 of colony growth.

Figure 1 Developmental phases and vertical differentiation of yeast microcolonies. (a) Microcolonies developing on GMA-BKP. BKP functions as a pH indicator dye with a pKa of 6.3, whose color changes from yellow at acidic pH to purple at more alkaline pH. Microcolonies were in the 1st acidic (2 d), alkali (4 d), and beginning of the 2nd acidic (6 d) phases. Bird's-eye views of microcolonies are shown. (b) Vertical transversal cross-section, viewed by 2PE-CM, of an alkali-phase microcolony formed by the strain producing Ato1p-GFP (left) and a scheme of the localization of three cell subpopulations within the microcolony (right). (c) Boundary between Um and Lm cells (left) and morphology of Um and Lm cells (center) of the BY-PTEF1-GFP strain at vertical cross-sections of 4-day-old microcolonies analyzed by 2PE-CM.
Cytosolic expression of GFP is used for in situ visualization of Um and Lm cells by 2PE-CM, since it enables the visualization of the large vacuoles in Lm cells (from which the fluorescence is excluded) and of the size of Um and Lm cells. Morphology of Um and Lm cells from 4-day-old BY4742 microcolonies separated by gradient centrifugation and visualized by Nomarski contrast (right). White arrows show large vacuoles in Lm cells; red arrows show lipid droplets in Um cells.

### 2.2. Switch to Ammonia Production Is Accompanied by Differentiation of Microcolonies and Formation of Um and Lm Cells

As with giant colonies [23], the transition of microcolonies to the alkali phase is accompanied by a diversification of the relatively homogeneous cell population of the 1st acidic phase microcolonies into two major cell types that are localized in the upper and lower layers of alkali-phase microcolonies. Figure 1 shows that these upper and lower cells morphologically resemble the U and L cells of giant colonies, respectively. Cells in the lower parts of a microcolony (Lm cells) are smaller and usually contain one large vacuole, while cells in the upper parts (Um cells) are larger with no visible vacuoles. The staining of microcolony sections by Nile red (Figure 2) as well as Nomarski contrast visualization (Figure 1(c)) confirmed that, similarly to giant colonies, Um cells contain several large lipid droplets, while Lm cells usually contain one small lipid droplet.

Figure 2 Localization of cells containing storage compounds (lipid droplets) and cells with active autophagy and the TORC1 signaling pathway. (a) Vertical transversal cross-sections of 4-day-old BY4742 microcolonies. Left, boundary between Um and Lm cells; lipid droplets are stained with Nile red. Right, lipid droplets of Um and Lm cells stained with Nile red (red) and cell walls with concanavalin A conjugated with Alexa Fluor 488 (green). (b) Vertical cross-sections of 4- and 7-day-old microcolonies of strains producing the cytosolic proteins Ino1p-GFP or Met17p-GFP. Um cells are shown; arrows indicate GFP in vacuoles of 7-day-old Um cells, where cytosolic proteins were delivered to vacuoles via autophagy. (c) Vertical cross-sections of 4-day-old microcolonies formed by the Gat1p-GFP strain showing localization of the Gat1p-GFP protein in Um and Lm cells. Arrows indicate relocalization of Gat1p-GFP from the cytosol to the nuclei of Um cells after treating the cut edge of the colony section with 250 ng/mL rapamycin, an inhibitor of TORC1.

In addition to their morphological similarities, we found similar profiles of proteins produced by the Um cells from 4-day-old microcolonies and by the U cells of 15-day-old giant colonies [23]. Hence, all three Ato proteins (Ato1p, Ato2p, and Ato3p) started to be produced exclusively in Um cells and not in Lm cells (as shown for Ato3p-GFP in Figure 3) after the microcolonies had entered the alkali phase. Similarly, Pox1p-GFP and Icl2p-GFP are preferentially produced in Um cells (Figure 3), as in the U cells of giant colonies [23]. In addition, the production profile of Ole1p-GFP is also similar in microcolonies and giant colonies. Ole1p-GFP is produced and properly localized to the endoplasmic reticulum (ER) in L and Lm cells, respectively, while it is degraded within vacuoles in U and Um cells, respectively (Figure 3).

Figure 3 Profile of selected GFP-labeled proteins in alkali-phase microcolonies.
2PE-CM of vertical transversal cross-sections of alkali-phase (4-day-old) microcolonies formed by strains producing the particular labeled proteins.

Another typical feature of differentiated giant colonies is an increased activity of TORC1 in U cells and its inactivation in L cells, as shown by the different localization of Gat1p-GFP in the two cell types [23]. The GATA transcription factor Gat1p was shown to be phosphorylated by TORC1, which results in Gat1p cytosolic localization and thus functional inhibition [28]. Confocal microscopy of microcolony cross-sections clearly showed that Gat1p-GFP is localized to the nuclei of Lm cells, which indicates that TORC1 is inactive in Lm cells (Figure 2(c)). In Um cells, TORC1 is apparently active, as Gat1p-GFP is predominantly in the cytosol (i.e., phosphorylated) and only moves to the nucleus when the TORC1 inhibitor rapamycin is added to the colony sections. In summary, these data show that several typical features of the U cells of giant colonies are found in the Um cells of microcolonies that switch to the alkali phase of ammonia production, even though Um cells are far younger chronologically than the U cells of giant colonies. The typical features of U cells, such as the accumulation of lipid droplets, the production of typical marker proteins, and active TORC1, are found in Um cells soon after the upper and lower layers have formed in microcolonies entering the alkali phase. These data therefore indicate that ammonia-related signaling events are more significant than chronological age in the formation of these typical features of upper cells. This is in agreement with the previous finding that, in giant colonies, the formation of cells morphologically resembling U cells can also be prematurely induced by ammonia from an artificial source [23].

### 2.3. Autophagy Appears Later in Um Cells

Another typical feature of the U cells of giant colonies is active autophagy [23]. Monitoring the cellular localization of GFP in the microcolonies of strains producing cytosolic Ino1p and Met17p labeled with GFP showed a significant vacuolar GFP signal in the Um cells of 7-day-old microcolonies (Figure 2). However, no vacuolar localization of GFP was visible in 4-day-old colonies. As in the L cells of giant colonies, no vacuolar GFP was detected in Lm cells of any age. These data showed that Um cells activate autophagy like the U cells of giant colonies. However, autophagy is initiated later than the other typical processes of Um cells and therefore seems to be more dependent on the chronological aging of Um cells.

### 2.4. Um and Lm Cells Differ in Their Respiratory Capacity

An important and unexpected difference between the U and L cells of giant colonies is in the capacity of these cells to consume oxygen [23]. Although localized close to the air, U cells exhibit a significantly decreased ability to consume oxygen compared with L cells and, accordingly, contain large swollen mitochondria with few cristae. On the other hand, L cells maintain their capacity to consume oxygen quite effectively and contain normal-looking cristated mitochondria. To compare the respiration of Um and Lm cells, we separated these cells from 4- to 6-day-old microcolonies by gradient centrifugation and measured their respiratory capacity. As shown in Figure 4(a), the Um cells of 4-day-old microcolonies already consume less oxygen than Lm cells of the same age. This difference persisted in older colonies.
These data show that, as with the other features described above, the decreased respiratory capacity identified in the U cells of 15-day-old giant colonies and in the Um cells of 4-day-old alkali-phase microcolonies is most likely induced predominantly by a signaling event and not by the aging of the colony.

Figure 4 Physiological differences between Um and Lm cells from 4- and 6-day-old colonies. (a) Oxygen consumption as a measure of the respiratory capacity of Um and Lm cells isolated from 4- and 6-day-old microcolonies. Respiration of glucose- and glycerol-grown cells from exponential liquid shaken cultures is shown for comparison. (b) Stress-related features of Um and Lm cells. Sensitivity to zymolyase is shown as a decrease in the optical density of cell suspensions (left). Production of ROS was measured as fluorescence of DHE (right). All data represent averages of at least three experiments ±SD. **, t-test P value < 0.01; ***, t-test P value < 0.001.

### 2.5. Lm Cells Differ from L Cells of Giant Colonies in Some of Their Features

Other physiological differences between the U and L cells of giant colonies concern reactive oxygen species (ROS) production, resistance to the cell wall-degrading enzyme zymolyase, and sensitivity to various stresses, such as heat shock and ethanol treatment [23]. Measurement of the ROS level in Um and Lm cells separated from microcolonies and stained with dihydroethidium (DHE) showed that Lm cells produce a significantly higher amount of ROS than Um cells. This difference was significant in 4-day-old microcolonies and persisted in older microcolonies (Figure 4(b)). Lm cells are also more sensitive to zymolyase treatment than Um cells, indicating a weaker cell wall in Lm cells. Thus, the differences in both ROS production and zymolyase resistance between Um and Lm cells were similar to those observed between the U and L cells of giant colonies. On the other hand, an analysis of Um and Lm cells from 4- to 6-day-old microcolonies did not reveal significant differences in the sensitivity of the two cell types to heat shock and ethanol treatment (not shown). In general, both Um and Lm cells were slightly more resistant to heat shock than U cells from 15-day-old giant colonies and significantly more resistant than L cells from such colonies (i.e., than cells that exhibit a strong decrease in viability after heat shock and ethanol treatment) [23]. In other words, Lm cells from 6-day-old colonies have not yet decreased their resistance to these stresses. Another difference between microcolonies and giant colonies was in the levels of certain amino acids in the upper and lower cells. While the level of intracellular glutamine was significantly higher in U cells than in the L cells of giant colonies, only a negligible difference was observed between the Um and Lm cells of 6-day-old microcolonies (and no difference in 4-day-old microcolonies). On the other hand, differences in amino acids such as lysine, alanine, and GABA, which are present in higher concentrations in L cells than in the U cells of giant colonies, are already detectable in 6-day-old microcolonies. Lysine, alanine, and GABA are present in 2.3, 1.6, and 4.9 times higher concentrations in Lm cells than in Um cells, respectively. These values are comparable with the L/U ratios of 1.6, 2.2, and 3.6, respectively, for 15-day-old giant colonies [23].
These data indicate that the drop in glutamine in L cells, in particular, is also connected with the chronological aging or prolonged starvation of colonies. In summary, the data indicate that Lm cells from 4- to 6-day-old microcolonies are in a better physiological condition than the L cells of 15- to 20-day-old giant colonies. The observed decrease in the resistance and viability of the L cells of giant colonies, as well as the drop in their glutamine content, thus seems to appear later during colony chronological aging and is probably not directly related to the changes induced by ammonia signaling and/or the related metabolic reprogramming. This conclusion is also supported by the observation that some of the proteins that start to be produced in the L cells of 15-day-old giant colonies, and whose production increases later in 20-day-old giant colonies (such as Ino1p and Met17p), are not yet produced in the Lm cells of 4- to 6-day-old microcolonies (not shown).
## 3. Conclusions

The presented data show that various features typical of the U cells of giant colonies growing on complex respiratory medium and undergoing differentiation during their transition to the ammonia-producing alkali phase (10- to 15-day-old colonies) [23] are also found in Um cells located in the upper layers of alkali-phase microcolonies that are only 3 to 4 days old. These features include the production of specific proteins, the accumulation of storage material such as lipid droplets, the activity of specific regulators such as TORC1, decreased mitochondrial function, a low level of ROS, and high resistance to zymolyase, indicating a strengthening of the cell wall. Thus, all of these features of U cells seem to be related predominantly to the signaling events and metabolic reprogramming that accompany the colony transition from the acidic to the alkali developmental phase [18, 22], rather than to the chronological aging of the cells. Such signaling processes leading to colony reprogramming could be initiated (by as yet unidentified mechanism(s)), for example, in the first microcolony that senses a nutrient shortage, that is, in a microcolony located in the densest area of plated microcolonies. Ammonia is then the signal that spreads the information about “the need for reprogramming” to the other microcolonies over the whole plate. The ability of ammonia to prematurely induce ammonia production in colonies independently of their current developmental phase [27] guarantees that even sparsely plated microcolonies become induced and initiate the reprogramming and differentiation while still experiencing nutrient abundance. Um cells thus gain the major properties of the U cells of giant colonies, although they have spent a much shorter time in the stationary or slow-growing phase than U cells.

A comparison of the transcriptomes of “outside” and “inside” cells separated by FACS from 4-day-old microcolonies growing on complex glucose medium [29] showed that some expression characteristics of U cells from giant colonies grown on complex respiratory medium [23] are also found in “outside” cells. These characteristics include the expression of genes coding for ribosomal proteins and other proteins of the translational machinery, genes for glycolytic enzymes, genes involved in amino acid metabolism, and some others [29]. Similarly, we observed the production of the carbonic anhydrase Nce103p in the upper cell layers of microcolonies growing both on complex glucose [30] and complex glycerol (unpublished data) agar media. These data indicate that the expression of particular genes and the activation of specific metabolic pathways could be beneficial for cells in the upper layers of yeast colonies.

In contrast to Um cells, only some features of L cells are preserved in the Lm cells of 4- to 6-day-old microcolonies compared with 15- to 20-day-old giant colonies. These include a higher respiratory capacity, higher production of ROS, higher sensitivity to zymolyase, and the production of some proteins (such as Ole1p). Traven et al. [29] also demonstrated an increased expression of genes required for the activity of the mitochondrial respiratory chain in the “inside” cells of 4-day-old microcolonies (cells at a position within the colony similar to that of L cells) grown on complex glucose medium. All of these features are also typical of the L cells of 15- to 20-day-old giant colonies grown on complex respiratory medium [23]. On the other hand, other features of the L cells of giant colonies are not yet present in the Lm cells of 4- to 6-day-old microcolonies. In particular, Lm cells do not exhibit enhanced sensitivity to stresses such as heat shock and ethanol treatment, which indicates that these cells are in a better physiological condition than the much older L cells from giant colonies. These stress-related features therefore seem to be more dependent on the chronological age of L cells and could also be related to the duration of the coexistence of U and L cells. Previous findings suggested that U cells are fed at the expense of L cells [23], which could lead to a deepening starvation of L cells over time and a consequent decrease in their overall viability in older giant colonies. Similarly, autophagy, which seems to be important for the longevity of U cells [23], is only activated later, in the Um cells of 6- to 7-day-old microcolonies. This finding suggests that the regulation of autophagy is partially dependent on the signaling events guiding the development of U cells (autophagy is only activated in U cells) but is also dependent on the aging and nutritional status of U cells.
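The contrast drawn above between signaling-driven and age-driven features can be condensed into a compact lookup. The Python snippet below is only a hedged summary of the observations reported in this paper, grouped for readability; it is not data or analysis code from the study.

```python
# A reading aid: which features appear already in young, alkali-phase
# microcolonies (attributed above to ammonia signaling) versus which appear
# only in chronologically aged giant colonies (attributed to aging).
signaling_dependent = {
    "Um (shared with U cells)": [
        "Ato1p/Ato2p/Ato3p, Pox1p, Icl2p production",
        "lipid droplet accumulation",
        "active TORC1 (cytosolic Gat1p-GFP)",
        "decreased respiration",
        "low ROS",
        "zymolyase resistance",
    ],
    "Lm (shared with L cells)": [
        "higher respiratory capacity",
        "higher ROS production",
        "zymolyase sensitivity",
        "Ole1p production",
    ],
}
aging_dependent = {
    "Um/U cells (later)": ["autophagy (vacuolar GFP only from day 7)"],
    "L cells of aged giant colonies only": [
        "heat-shock and ethanol sensitivity",
        "glutamine drop",
        "Ino1p and Met17p production",
    ],
}

for origin, features in {**signaling_dependent, **aging_dependent}.items():
    print(origin, "->", "; ".join(features))
```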
## 4. Material and Methods

### 4.1. Strains and Media

The S. cerevisiae strain BY4742 (MATα, his3Δ, leu2Δ, lys2Δ, ura3Δ) was from the EUROSCARF collection. BY4742-derived strains producing proteins (Ato1p, Ato3p, Pox1p, Icl2p, Ole1p, Ino1p, Met17p, and Gat1p) fused with GFP at their C-termini were constructed as described previously [23, 24]. The BY-PTEF1-GFP strain, expressing GFP under the control of the constitutive promoter of the TEF1 gene (PTEF1), was constructed by integrating a PTEF1-GFP-natNT2 cassette amplified from the pYM-N21 plasmid [31] into the HIS3 locus of the BY4742 strain. Yeast microcolonies were grown at 28°C either on GMA (1% yeast extract, 3% glycerol, 1% ethanol, 2% agar, 10 mM CaCl2) or on GMA-BKP (GMA with 0.01% bromocresol purple). For standard experiments, cells were plated at an approximate density of 5×10³ per plate.
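To put the standard plating density into perspective, it can be converted into an approximate colony-to-colony spacing. The sketch below assumes a standard 90 mm Petri dish; the plate diameter is our assumption, as it is not stated in the text.

```python
import math

# Back-of-the-envelope spacing for the standard plating density used here.
colonies = 5e3                # ~5 x 10^3 microcolonies per plate (from the text)
plate_diameter_cm = 9.0       # ASSUMPTION: standard 90 mm Petri dish

area_cm2 = math.pi * (plate_diameter_cm / 2) ** 2      # ~63.6 cm^2
density = colonies / area_cm2                          # ~79 colonies per cm^2
mean_spacing_mm = 10 / math.sqrt(density)              # ~1.1 mm between neighbours

print(f"density ≈ {density:.0f} colonies/cm², mean spacing ≈ {mean_spacing_mm:.1f} mm")
```

At roughly millimeter spacing, locally denser patches plausibly accumulate ammonia sooner, consistent with the density-dependent timing of the acid-to-alkali transition described in Section 2.1.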
### 4.2. Two-Photon Excitation Confocal Microscopy (2PE-CM)

Microcolony sample preparation and 2PE-CM of transversal vertical cross-sections of microcolonies were performed according to [24]. When required, the microcolony cross-sections were stained with Nile red (2.5 μg/mL) and with concanavalin A labeled with Alexa Fluor 488 (ConA-AF, 30 μg/mL) as described in [17]. Alternatively, GFP fluorescence was monitored. An SP2 AOBS MP confocal scanning microscope (Leica) fitted with a Ti:Sapphire Chameleon Ultra laser (Coherent Inc.) and a 63×/1.20 water immersion plan apochromat objective was used. The excitation wavelength was 920 nm, and the emission bandwidths were 470–540 nm for ConA-AF, 580–750 nm for Nile red, and 480–595 nm for GFP.

### 4.3. Colony Images

Colony images were captured in transmitted light with a Navitar objective and a complementary metal-oxide semiconductor camera (ProgRes CT3; Jenoptik).

### 4.4. Sorbitol Gradient Cell Fractionation

Cells from microcolonies were fractionated into subpopulations by centrifugation as described in [23] with the following modification: instead of sucrose, a 10–35% sorbitol gradient was used to avoid changes that could be induced by sucrose in the relatively young cells of microcolonies.

### 4.5. U and L Cell Resistance to Stresses

Cell resistance was assayed using 10-fold serial dilutions of cell suspensions (OD600 = 10) that were incubated at 52°C for 45 or 90 min or in 20% ethanol for 60 min and compared with untreated controls. Zymolyase resistance was determined as the decrease in the OD600 of a cell suspension (starting OD600 = 0.5) in 50 mM potassium phosphate buffer, pH 7.5, with 2 mM dithiothreitol and 5 U/mL zymolyase (MP Biomedicals).

### 4.6. Respiration Rate and ROS Quantification

The oxygen consumption of 5 mg of freshly isolated Um or Lm wet cell biomass was determined at 30°C in 1 mL of water using a 782 oxygen meter with a 1-mL MT-200A cell (Strathkelvin Instruments). ROS production was quantified using DHE staining according to Čáp et al. [21] with minor modifications. Briefly, isolated Um and Lm cells were resuspended in water to a final concentration of 100 mg/mL. A 7.5 μL aliquot of this suspension was incubated with 42.5 μL of water and 5 μL of a 25 μg/mL DHE solution (freshly prepared from a 1 mg/mL stock solution in DMSO). Cells were stained for 25 min in the dark and diluted with 1.95 mL of water, and the DHE fluorescence was measured using a FluoroMax 3 spectrofluorometer (Jobin Yvon) with excitation/emission wavelengths of 480/604 nm.
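The DHE protocol above involves two dilution steps, and the working concentrations follow directly from the stated volumes. The minimal Python sketch below recomputes them as a sanity check, using only the numbers given in the text.

```python
# Recomputing the working concentrations in the DHE staining protocol
# from the volumes stated above.
cells_stock_mg_ml = 100.0                 # Um/Lm cells resuspended in water
v_cells_ul, v_water_ul, v_dhe_ul = 7.5, 42.5, 5.0
dhe_stock_ug_ml = 25.0                    # DHE working solution added

v_stain_ul = v_cells_ul + v_water_ul + v_dhe_ul            # 55 uL staining volume
cells_stain = cells_stock_mg_ml * v_cells_ul / v_stain_ul  # ~13.6 mg/mL cells
dhe_stain = dhe_stock_ug_ml * v_dhe_ul / v_stain_ul        # ~2.3 ug/mL DHE

v_final_ul = v_stain_ul + 1950.0                           # + 1.95 mL water
cells_final = cells_stock_mg_ml * v_cells_ul / v_final_ul  # ~0.37 mg/mL at measurement

print(f"staining: {cells_stain:.1f} mg/mL cells, {dhe_stain:.2f} ug/mL DHE")
print(f"measurement: {cells_final:.2f} mg/mL cells in {v_final_ul / 1000:.3f} mL")
```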
### 4.7. Amino Acid Concentration

Total intracellular amino acids were extracted from cell suspensions in water by boiling for 5 min, and their concentrations were determined by HPLC with precolumn derivatization with OPA [23, 32] on a ZORBAX Eclipse AAA reverse-phase column (3.5 μm, 4.6 × 75 mm; Agilent) with fluorescence detection.

---

*Source: 102485-2013-07-21.xml*
102485-2013-07-21_102485-2013-07-21.md
47,412
Rapidly Developing Yeast Microcolonies Differentiate in a Similar Way to Aging Giant Colonies
Libuše Váchová; Ladislava Hatáková; Michal Čáp; Michaela Pokorná; Zdena Palková
Oxidative Medicine and Cellular Longevity (2013)
Medical & Health Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2013/102485
102485-2013-07-21.xml
--- ## Abstract During their development and aging on solid substrates, yeast giant colonies produce ammonia, which acts as a quorum sensing molecule. Ammonia production is connected with alkalization of the surrounding medium and with extensive reprogramming of cell metabolism. In addition, ammonia signaling is important for both horizontal (colony centre versus colony margin) and vertical (upper versus lower cell layers) colony differentiations. The centre of an aging differentiated giant colony is thus composed of two major cell subpopulations, the subpopulation of long-living, metabolically active and stress-resistant cells that form the upper layers of the colony and the subpopulation of stress-sensitive starving cells in the colony interior. Here, we show that microcolonies originating from one cell pass through similar developmental phases as giant colonies. Microcolony differentiation is linked to ammonia signaling, and cells similar to the upper and lower cells of aged giant colonies are formed even in relatively young microcolonies. A comparison of the properties of these cells revealed a number of features that are similar in microcolonies and giant colonies as well as a few that are only typical of chronologically aged giant colonies. These findings show that colony ageper se is not crucial for colony differentiation. --- ## Body ## 1. Introduction When developing on solid media or in nonshaken liquid environments, yeast cells can organize into structured and differentiated multicellular communities where individual cells gain specific properties and can fulfill specific roles. Colonies, stalks, biofilms, and flors on liquid surfaces are examples of such organized communities [1–11]. Colonies growing on solid agar medium usually originate either from individual cells (microcolonies) or from a cell suspension spotted onto the agar (giant colonies) [12–14]. The morphology and internal architecture of both microcolonies and giant colonies are dependent on the yeast species or even the strain that forms the colony, the cultivation conditions (e.g., nutrient sources), and developmental phase (i.e., the age of the colony). Thus, for example, natural strains of Saccharomyces cerevisiae form structured biofilm colonies [15, 16] that to some extent resemble the colonies formed by pathogenic yeasts of the Candida or Cryptococcus species [7]. These structured colonies exhibit features (such as the presence of multidrug resistance transporters and an extracellular matrix) that are important for the formation, development, and survival of natural yeast biofilms [17]. The internal architecture of these structured colonies differs strikingly from the architecture of smooth colonies that are formed by laboratory strains of S. cerevisiae.As we have shown previously, giant colonies ofS. cerevisiae laboratory strains grown on solid complex respiratory medium pass through distinct developmental phases that can be detected by monitoring the pH changes of the medium, changing from the acidic to near alkali and vice versa [13]. The alkali phase of colony development is accompanied by the production of volatile ammonia that functions as a signal important for colony metabolic reprogramming and long-term survival [13, 18–20]. Such metabolic reprogramming appears to be more important for colony survival than some mechanisms eliminating stress factors, such as stress defense enzymes [21]. 
We have demonstrated that ammonia-related changes are important for diversification between the cells in the center and margin of a colony [20–22]. We have also recently shown that ammonia signaling and related metabolic reprogramming are involved in the diversification of the cells of the colony and the formation of cells with specialized functions precisely localized within the colony [23, 24]. Thus, during the switch of giant colonies to the alkali phase, both horizontal and vertical differentiations occur, where central and margin cells behave differently, as do cells located in the upper and lower regions of the colony center. Detailed analysis of the central colony region revealed two major cell subpopulations located in the upper (U cells) and lower (L cells) colony areas that differ in their morphology, physiology, and gene expression. U cells are large stress-resistant cells with a longevity phenotype, while L cells are smaller, more sensitive to various stresses (such as heat shock and ethanol treatment), and lose viability over the time. Both cell types significantly differ in their gene expression, as shown by a transcriptomic comparison of U and L cells isolated from 15- and 20-day-old colonies [23]. According to these transcriptomic data, U cells seem to be metabolically active cells with induced amino acid metabolism, glycolysis, and some other pathways such as the pentose-phosphate shunt. U cells also express a large group of genes coding for ribosomal and some other proteins of the translational machinery. These genes are usually controlled by the TOR pathway under nutrient-rich conditions. Some other expression characteristics of U cells, however, indicate that some pathways usually active under conditions of nutrient limitation are also induced in U cells and affect their physiology [23]. For example, a large group of amino acid biosynthetic genes is controlled by the transcription factor Gcn4p [25]. In contrast to U cells, L cells behave like stressed cells—they have low metabolic activity and seem to activate some degradative mechanisms that can contribute to the release of compounds that can be exploited by U cells.An important question is to what extent chronological aging of the whole colony population on one side and active signaling (which includes the action of ammonia and related metabolic reprogramming as well as other not yet identified signaling and regulatory processes) on the other side contribute toS. cerevisiae colony development, differentiation, and long-term survival. As was mentioned above, giant colonies activate ammonia signaling and form U and L cells between days 7 and 10 of colony development when most of the colonial cells are in the stationary (or slow growth) phase. That is, cells differentiating into U and L cells are relatively old and most of them have already persisted in nondividing form for several days. Both of the above-mentioned processes (chronological aging and signal-related metabolic reprogramming) are therefore running in parallel in giant colonies and thus both could contribute to colony differentiation and U and L cell properties. In contrast to giant colonies, switch to the alkali phase and ammonia signaling among microcolonies usually starts much earlier than in giant colonies and central differentiated cells are therefore much younger (i.e., less chronologically aged) in microcolonies than in giant colonies. 
However, the major expression changes that accompany medium alkalization and ammonia production in microcolonies resemble those changes identified in giant colonies [18]. Similarly, Ato1p, a putative ammonia exporter, is produced in the margin and upper central cell layer in both giant colonies and microcolonies when they begin to alkalize the medium [22, 24]. Here, we examined the main features of the central parts of differentiated microcolonies and compared these features to those described in giant colonies. Through this analysis, we showed that prominent characteristics of central upper cells of yeast colonies are not related to colony aging but dependent on active colony reprogramming. On the other hand, some other features such as in particular the stress resistance of cells in the colony interior (L cells) significantly differ in younger microcolonies compared to giant colonies, being related to colony aging. ## 2. Results and Discussion ### 2.1. Microcolonies Pass through the Same Developmental Phases as Giant Colonies Similarly to giant colonies, microcolonies of BY4742 growing on GMA solid plates pass through the developmental phases characterized by changes in external pH and ammonia production [24, 26]. Microcolonies, thus, pass through the first acidic, alkali and second acidic developmental phases (Figure 1(a)), where the alkali phase is accompanied by ammonia production. In contrast to giant colonies, in which the timing of the transition to the ammonia-producing period is typically standardized by the inoculation of six giant colonies on the plate [18], the timing of the acid-to-alkali microcolony transition is dependent on the density of the plated microcolonies. More densely plated microcolonies switch to the ammonia-producing period earlier than microcolonies growing at a lower density on the plate. Like giant colonies that synchronize ammonia production and developmental phases [27], microcolonies on the same plate also synchronize themselves via the ammonia that starts to be produced by the most densely plated microcolonies. For the experiments described in the following sections, we used a standard plating of approximately 5000 microcolonies per plate, that is the density of microcolonies that results in microcolony transition to the alkali phase between days 3 and 4 of colony growth.Developmental phases and vertical differentiation of yeast microcolonies. (a) Microcolonies develop on GMA-BKP. BKP functions as pH dye indicator with pKa of 6.3, the color of which changes from yellow at acidic pH to purple in more alkali pH. Microcolonies were in the 1st acidic (2 d), alkali (4 d), and beginning of the 2nd acidic (6 d) phases. Bird views of microcolonies are shown. (b) Vertical transversal cross-section viewed by 2PE-CM of akali-phase microcolony formed by the strain producing Ato1p-GFP (left) and scheme of the localization of three cell subpopulations within the microcolony (right). (c) Boundary between Um and Lm cells (left) and morphology of Um and Lm cells (center) of BY-PTEF1-GFP strain at vertical cross-sections of 4-day-old microcolonies analyzed by 2PE-CM. Cytosolic expression of GFP is used for in situ visualization of Um and Lm cells by 2PE-CM since it enables the visualization of large vacuoles in Lm cells (from which the fluorescence is excluded) and the size of Um and Lm cells. Morphology of Um and Lm cells from 4-day-old BY4742 microcolonies separated by gradient centrifugation and visualized by Nomarski contrast (right). 
White arrows show large vacuoles in Lm cells; red arrows show lipid droplets in Um cells. (a) (b) (c) ### 2.2. Switch to Ammonia Production Is Accompanied by Differentiation of Microcolonies and Formation of Um and Lm Cells As with giant colonies [23], the transition of microcolonies to the alkali phase is accompanied by a diversification of the relatively homogeneous cell population of the 1st acidic phase microcolonies to two major cell types that are localized in the upper and lower layers of alkali phase microcolonies. Figure 1 shows that these upper and lower cells morphologically resemble the U and L cells of giant colonies, respectively. Cells in the lower parts of a microcolony (Lm cells) are smaller and usually contain one large vacuole, while cells in the upper parts (Um cells) are larger with no visible vacuoles. The staining of microcolony sections by Nile red (Figure 2) as well as Nomarski contrast visualization (Figure 1(c)) confirmed that similarly to giant colonies, Um cells contain several large lipid droplets, while Lm cells usually contain one small lipid droplet.Localization of cells containing storage compounds (lipid droplets) and cells with active autophagy and TORC1 signaling pathway. (a) Vertical transversal cross-sections of 4-day-old BY4742 microcolonies. Left, boundary between Um and Lm cells, lipid droplets are stained with Nile red. Right, lipid droplets of Um and Lm cells stained with Nile red (red) and cell walls with concanavalin A conjugated with Alexa Fluor 488 (green). (b) Vertical cross-sections of 4- and 7-day-old microcolonies of strains producing cytosolic proteins Ino1p-GFP or Met17p-GFP. Um cells are shown; arrows indicate GFP in vacuoles of 7-day-old Um cells where cytosolic proteins were delivered to vacuoles via autophagy. (c) Vertical cross-sections of 4-day-old microcolonies formed by Gat1p-GFP strain showing localization of Gat1p-GFP protein in Um and Lm cells. Arrows indicate relocalization of Gat1p-GFP from the cytosol to the nuclei of Um cells after treating the cut edge of the colony section with 250 ng/mL rapamycin, an inhibitor of TORC1. (a) (b) (c)In addition to their morphology similarities, we found similar profiles of proteins produced by the Um cells from 4-day-old microcolonies and by the U cells of 15-day-old giant colonies [23]. Hence, all three Ato proteins (Ato1p, Ato2p, and Ato3p) started to be produced exclusively in Um cells but not in Lm cells (as shown for Ato3p-GFP in Figure 3) after the microcolonies had entered the alkali phase. Similarly, Pox1p-GFP and Icl2p-GFP are preferentially produced in Um cells (Figure 3) as well as in the U cells of giant colonies [23]. In addition, the production profile of Ole1p-GFP is also similar in microcolonies and giant colonies. Ole1p-GFP is produced and properly localized to the endoplasmatic reticulum (ER) in L and Lm cells, respectively, while it is degraded within vacuoles in U and Um cells, respectively, (Figure 3).Figure 3 Profile of selected, GFP-labeled proteins in alkali phase microcolonies. 2PE-CM of vertical transversal cross-sections of alkali phase (4-day-old) microcolonies formed by strains producing particular labeled proteins.Another typical feature of differentiated giant colonies is an increased activity of TORC1 in U cells and its inactivation in L cells, as shown by the different localization of Gat1p-GFP in the two cell types [23]. 
The GATA transcription factor Gat1p was shown to be phosphorylated by TORC1, which results in Gat1p cytosolic localization and thus functional inhibition [28]. Confocal microscopy of microcolony cross-sections clearly showed that Gat1p-GFP is localized to the nuclei of Lm cells, which indicates that TORC1 is inactive in Lm cells (Figure 2(c)). In Um cells, TORC1 is apparently active, as Gat1p-GFP is predominantly in the cytosol (i.e., phosphorylated) and it only moves to the nucleus when a TORC1 inhibitor rapamycin is added to the colony sections.In summary, these data show that several typical features of the U cells of giant colonies are found in the Um cells of microcolonies that switch to the alkali phase of ammonia production, even though Um cells are far younger chronologically than the U cells of giant colonies. The typical features of U cells, such as the accumulation of lipid droplets, production of typical marker proteins, and active TORC1, are found in Um cells soon after the upper and lower layers have formed in microcolonies entering the alkali phase. These data therefore indicate that ammonia-related signaling events are more significant than chronological age in the formation of these typical features of upper cells. This is in agreement with the previous finding that in giant colonies, the formation of cells morphologically resembling U cells can also be prematurely induced by ammonia from an artificial source [23]. ### 2.3. Autophagy Appears Later in Um Cells Another typical feature of the U cells of giant colonies is active autophagy [23]. Monitoring the cellular localization of GFP in the microcolonies of strains producing cytosolic Ino1p and Met17p labeled with GFP showed a significant vacuolar GFP signal in the Um cells of 7-day-old microcolonies (Figure 2). However, no vacuolar localization of GFP was visible in 4-day-old colonies. As in the L cells of giant colonies, no vacuolar GFP was detected in Lm cells of any age. These data showed that Um cells activate autophagy like the U cells of giant colonies. However, the autophagy is initiated later than other typical processes of Um cells and seems, therefore, to be more dependent on the chronological aging of Um cells. ### 2.4. Um and Lm Cells Differ in Their Respiratory Capacity An important and unexpected difference between the U and L cells of giant colonies is in the capacity of these cells to consume oxygen [23]. Although localized close to the air, U cells exhibit significantly decreased ability to consume oxygen as compared with L cells, and, accordingly, U cells contain large swollen mitochondria with few cristae. On the other hand, L cells maintain their capacity to consume oxygen quite effectively and contain normal-looking cristated mitochondria. To compare the respiration of Um and Lm cells, we separated these cells from 4- to 6-day-old microcolonies by gradient centrifugation and measured their respiratory capacity. As shown in Figure 4(a), the Um cells of 4-day-old microcolonies already consume less oxygen than Lm cells of the same age. This difference persisted in older colonies. These data show that as with the other features described above, the decreased respiratory capacity of U cells identified in 15-day-old giant colonies and in 4-day-old alkali phase microcolonies is a characteristic that is also most likely predominantly induced by a signaling event and not by the aging of colony.Physiological differences between Um and Lm cells from 4- and 6-day-old colonies. 
(a) Oxygen consumption as a measure of respiratory capacity of Um and Lm cells isolated from 4- and 6-day-old microcolonies. Respiration of glucose- and glycerol-grown cells from exponential liquid shaken cultures is shown for comparison. (b) Stress-related features of Um and Lm. Sensitivity to zymolyase is shown as a decrease in optical density of cell suspensions (left). Production of ROS measured as fluorescence of DHE (right). All data represent averages of at least three experiments ±SD. **—t-test P value < 0.01; ***—t-test P value < 0.001. (a) (b) ### 2.5. Lm Cells Differ from L Cells of Giant Colonies in Some of Their Features Other physiological differences between the U and L cells of giant colonies are in terms of reactive oxygen species (ROS) production, resistance to the cell wall degrading enzyme zymolyase, and sensitivity to various stresses, such as heat shock and ethanol treatment [23]. Measurement of the ROS level in Um and Lm cells separated from microcolonies and stained with dihydroethidium (DHE) showed that Lm cells produce significantly higher amount of ROS than Um cells. This difference was significant in 4-day-old microcolonies and persisted in older microcolonies (Figure 4(b)). Lm cells are also more sensitive to zymolyase treatment than Um cells, thus, indicating a weaker cell wall of Lm cells. Thus, the differences in both ROS production and zymolyase resistance between Um and Lm cells were similar to those observed between the U and L cells of giant colonies.On the other hand, an analysis of Um and Lm cells from 4- to 6-day-old microcolonies did not reveal significant differences in the sensitivity of the two cell types to heat shock and ethanol treatment (not shown). In general, both Um and Lm cells were slightly more resistant to heat shock than U cells from 15-day-old giant colonies and significantly more resistant than L cells from such colonies (i.e., than cells that exhibit a strong decrease in viability after heat shock and ethanol treatment) [23]. In other words, Lm cells from 6-day-old colonies have not yet decreased their resistance to these stresses.Another difference between microcolonies and giant colonies was in their levels of certain amino acids in the upper and lower cells. While the level of intracellular glutamine was significantly higher in U than in the L cells of giant colonies, only a negligible difference was observed between the Um and Lm cells of 6-day-old microcolonies (and no difference in 4-day-old microcolonies). On the other hand, differences in amino acids such as lysine, alanine, and GABA that are present in higher concentrations in L cells than in the U cells of giant colonies are already detectable in 6-day-old microcolonies. Lysine, alanine, and GABA are present in 2.3, 1.6, and 4.9 times higher concentrations in Lm cells than in Um cells, respectively. These values are comparable with L/U ratio of 1.6, 2.2, and 3.6, respectively, for 15-day-old giant colonies [23]. These data indicate that particularly a drop in glutamine in L cells is also connected with the chronological aging or prolonged starvation of colonies.In summary, the data indicate that Lm cells from 4- to 6-day-old microcolonies are in a better physiological condition than the L cells of 15- to 20-day-old giant colonies. 
The observed decrease in the resistance and viability of the L cells of giant colonies as well as the drop in their glutamine content thus seems to appear later during colony chronological aging and is probably not directly related to the changes induced by ammonia signaling and/or related metabolic reprogramming. This conclusion is also supported by the observation that some of the proteins that started to be produced in the L cells of 15-day-old giant colonies and the production which increases later in 20-day-old giant colonies (such as Ino1p and Met17p) are not yet produced in the Lm cells of 4- to 6-day-old microcolonies (not shown). ## 2.1. Microcolonies Pass through the Same Developmental Phases as Giant Colonies Similarly to giant colonies, microcolonies of BY4742 growing on GMA solid plates pass through the developmental phases characterized by changes in external pH and ammonia production [24, 26]. Microcolonies, thus, pass through the first acidic, alkali and second acidic developmental phases (Figure 1(a)), where the alkali phase is accompanied by ammonia production. In contrast to giant colonies, in which the timing of the transition to the ammonia-producing period is typically standardized by the inoculation of six giant colonies on the plate [18], the timing of the acid-to-alkali microcolony transition is dependent on the density of the plated microcolonies. More densely plated microcolonies switch to the ammonia-producing period earlier than microcolonies growing at a lower density on the plate. Like giant colonies that synchronize ammonia production and developmental phases [27], microcolonies on the same plate also synchronize themselves via the ammonia that starts to be produced by the most densely plated microcolonies. For the experiments described in the following sections, we used a standard plating of approximately 5000 microcolonies per plate, that is the density of microcolonies that results in microcolony transition to the alkali phase between days 3 and 4 of colony growth.Developmental phases and vertical differentiation of yeast microcolonies. (a) Microcolonies develop on GMA-BKP. BKP functions as pH dye indicator with pKa of 6.3, the color of which changes from yellow at acidic pH to purple in more alkali pH. Microcolonies were in the 1st acidic (2 d), alkali (4 d), and beginning of the 2nd acidic (6 d) phases. Bird views of microcolonies are shown. (b) Vertical transversal cross-section viewed by 2PE-CM of akali-phase microcolony formed by the strain producing Ato1p-GFP (left) and scheme of the localization of three cell subpopulations within the microcolony (right). (c) Boundary between Um and Lm cells (left) and morphology of Um and Lm cells (center) of BY-PTEF1-GFP strain at vertical cross-sections of 4-day-old microcolonies analyzed by 2PE-CM. Cytosolic expression of GFP is used for in situ visualization of Um and Lm cells by 2PE-CM since it enables the visualization of large vacuoles in Lm cells (from which the fluorescence is excluded) and the size of Um and Lm cells. Morphology of Um and Lm cells from 4-day-old BY4742 microcolonies separated by gradient centrifugation and visualized by Nomarski contrast (right). White arrows show large vacuoles in Lm cells; red arrows show lipid droplets in Um cells. (a) (b) (c) ## 2.2. 
Switch to Ammonia Production Is Accompanied by Differentiation of Microcolonies and Formation of Um and Lm Cells As with giant colonies [23], the transition of microcolonies to the alkali phase is accompanied by a diversification of the relatively homogeneous cell population of the 1st acidic phase microcolonies to two major cell types that are localized in the upper and lower layers of alkali phase microcolonies. Figure 1 shows that these upper and lower cells morphologically resemble the U and L cells of giant colonies, respectively. Cells in the lower parts of a microcolony (Lm cells) are smaller and usually contain one large vacuole, while cells in the upper parts (Um cells) are larger with no visible vacuoles. The staining of microcolony sections by Nile red (Figure 2) as well as Nomarski contrast visualization (Figure 1(c)) confirmed that similarly to giant colonies, Um cells contain several large lipid droplets, while Lm cells usually contain one small lipid droplet.Localization of cells containing storage compounds (lipid droplets) and cells with active autophagy and TORC1 signaling pathway. (a) Vertical transversal cross-sections of 4-day-old BY4742 microcolonies. Left, boundary between Um and Lm cells, lipid droplets are stained with Nile red. Right, lipid droplets of Um and Lm cells stained with Nile red (red) and cell walls with concanavalin A conjugated with Alexa Fluor 488 (green). (b) Vertical cross-sections of 4- and 7-day-old microcolonies of strains producing cytosolic proteins Ino1p-GFP or Met17p-GFP. Um cells are shown; arrows indicate GFP in vacuoles of 7-day-old Um cells where cytosolic proteins were delivered to vacuoles via autophagy. (c) Vertical cross-sections of 4-day-old microcolonies formed by Gat1p-GFP strain showing localization of Gat1p-GFP protein in Um and Lm cells. Arrows indicate relocalization of Gat1p-GFP from the cytosol to the nuclei of Um cells after treating the cut edge of the colony section with 250 ng/mL rapamycin, an inhibitor of TORC1. (a) (b) (c)In addition to their morphology similarities, we found similar profiles of proteins produced by the Um cells from 4-day-old microcolonies and by the U cells of 15-day-old giant colonies [23]. Hence, all three Ato proteins (Ato1p, Ato2p, and Ato3p) started to be produced exclusively in Um cells but not in Lm cells (as shown for Ato3p-GFP in Figure 3) after the microcolonies had entered the alkali phase. Similarly, Pox1p-GFP and Icl2p-GFP are preferentially produced in Um cells (Figure 3) as well as in the U cells of giant colonies [23]. In addition, the production profile of Ole1p-GFP is also similar in microcolonies and giant colonies. Ole1p-GFP is produced and properly localized to the endoplasmatic reticulum (ER) in L and Lm cells, respectively, while it is degraded within vacuoles in U and Um cells, respectively, (Figure 3).Figure 3 Profile of selected, GFP-labeled proteins in alkali phase microcolonies. 2PE-CM of vertical transversal cross-sections of alkali phase (4-day-old) microcolonies formed by strains producing particular labeled proteins.Another typical feature of differentiated giant colonies is an increased activity of TORC1 in U cells and its inactivation in L cells, as shown by the different localization of Gat1p-GFP in the two cell types [23]. The GATA transcription factor Gat1p was shown to be phosphorylated by TORC1, which results in Gat1p cytosolic localization and thus functional inhibition [28]. 
Confocal microscopy of microcolony cross-sections clearly showed that Gat1p-GFP is localized to the nuclei of Lm cells, which indicates that TORC1 is inactive in Lm cells (Figure 2(c)). In Um cells, TORC1 is apparently active, as Gat1p-GFP is predominantly in the cytosol (i.e., phosphorylated) and it only moves to the nucleus when a TORC1 inhibitor rapamycin is added to the colony sections.In summary, these data show that several typical features of the U cells of giant colonies are found in the Um cells of microcolonies that switch to the alkali phase of ammonia production, even though Um cells are far younger chronologically than the U cells of giant colonies. The typical features of U cells, such as the accumulation of lipid droplets, production of typical marker proteins, and active TORC1, are found in Um cells soon after the upper and lower layers have formed in microcolonies entering the alkali phase. These data therefore indicate that ammonia-related signaling events are more significant than chronological age in the formation of these typical features of upper cells. This is in agreement with the previous finding that in giant colonies, the formation of cells morphologically resembling U cells can also be prematurely induced by ammonia from an artificial source [23]. ## 2.3. Autophagy Appears Later in Um Cells Another typical feature of the U cells of giant colonies is active autophagy [23]. Monitoring the cellular localization of GFP in the microcolonies of strains producing cytosolic Ino1p and Met17p labeled with GFP showed a significant vacuolar GFP signal in the Um cells of 7-day-old microcolonies (Figure 2). However, no vacuolar localization of GFP was visible in 4-day-old colonies. As in the L cells of giant colonies, no vacuolar GFP was detected in Lm cells of any age. These data showed that Um cells activate autophagy like the U cells of giant colonies. However, the autophagy is initiated later than other typical processes of Um cells and seems, therefore, to be more dependent on the chronological aging of Um cells. ## 2.4. Um and Lm Cells Differ in Their Respiratory Capacity An important and unexpected difference between the U and L cells of giant colonies is in the capacity of these cells to consume oxygen [23]. Although localized close to the air, U cells exhibit significantly decreased ability to consume oxygen as compared with L cells, and, accordingly, U cells contain large swollen mitochondria with few cristae. On the other hand, L cells maintain their capacity to consume oxygen quite effectively and contain normal-looking cristated mitochondria. To compare the respiration of Um and Lm cells, we separated these cells from 4- to 6-day-old microcolonies by gradient centrifugation and measured their respiratory capacity. As shown in Figure 4(a), the Um cells of 4-day-old microcolonies already consume less oxygen than Lm cells of the same age. This difference persisted in older colonies. These data show that as with the other features described above, the decreased respiratory capacity of U cells identified in 15-day-old giant colonies and in 4-day-old alkali phase microcolonies is a characteristic that is also most likely predominantly induced by a signaling event and not by the aging of colony.Physiological differences between Um and Lm cells from 4- and 6-day-old colonies. (a) Oxygen consumption as a measure of respiratory capacity of Um and Lm cells isolated from 4- and 6-day-old microcolonies. 
Respiration of glucose- and glycerol-grown cells from exponential liquid shaken cultures is shown for comparison. (b) Stress-related features of Um and Lm. Sensitivity to zymolyase is shown as a decrease in optical density of cell suspensions (left). Production of ROS measured as fluorescence of DHE (right). All data represent averages of at least three experiments ±SD. **—t-test P value < 0.01; ***—t-test P value < 0.001. (a) (b) ## 2.5. Lm Cells Differ from L Cells of Giant Colonies in Some of Their Features Other physiological differences between the U and L cells of giant colonies are in terms of reactive oxygen species (ROS) production, resistance to the cell wall degrading enzyme zymolyase, and sensitivity to various stresses, such as heat shock and ethanol treatment [23]. Measurement of the ROS level in Um and Lm cells separated from microcolonies and stained with dihydroethidium (DHE) showed that Lm cells produce significantly higher amount of ROS than Um cells. This difference was significant in 4-day-old microcolonies and persisted in older microcolonies (Figure 4(b)). Lm cells are also more sensitive to zymolyase treatment than Um cells, thus, indicating a weaker cell wall of Lm cells. Thus, the differences in both ROS production and zymolyase resistance between Um and Lm cells were similar to those observed between the U and L cells of giant colonies.On the other hand, an analysis of Um and Lm cells from 4- to 6-day-old microcolonies did not reveal significant differences in the sensitivity of the two cell types to heat shock and ethanol treatment (not shown). In general, both Um and Lm cells were slightly more resistant to heat shock than U cells from 15-day-old giant colonies and significantly more resistant than L cells from such colonies (i.e., than cells that exhibit a strong decrease in viability after heat shock and ethanol treatment) [23]. In other words, Lm cells from 6-day-old colonies have not yet decreased their resistance to these stresses.Another difference between microcolonies and giant colonies was in their levels of certain amino acids in the upper and lower cells. While the level of intracellular glutamine was significantly higher in U than in the L cells of giant colonies, only a negligible difference was observed between the Um and Lm cells of 6-day-old microcolonies (and no difference in 4-day-old microcolonies). On the other hand, differences in amino acids such as lysine, alanine, and GABA that are present in higher concentrations in L cells than in the U cells of giant colonies are already detectable in 6-day-old microcolonies. Lysine, alanine, and GABA are present in 2.3, 1.6, and 4.9 times higher concentrations in Lm cells than in Um cells, respectively. These values are comparable with L/U ratio of 1.6, 2.2, and 3.6, respectively, for 15-day-old giant colonies [23]. These data indicate that particularly a drop in glutamine in L cells is also connected with the chronological aging or prolonged starvation of colonies.In summary, the data indicate that Lm cells from 4- to 6-day-old microcolonies are in a better physiological condition than the L cells of 15- to 20-day-old giant colonies. The observed decrease in the resistance and viability of the L cells of giant colonies as well as the drop in their glutamine content thus seems to appear later during colony chronological aging and is probably not directly related to the changes induced by ammonia signaling and/or related metabolic reprogramming. 
This conclusion is also supported by the observation that some of the proteins that started to be produced in the L cells of 15-day-old giant colonies and the production which increases later in 20-day-old giant colonies (such as Ino1p and Met17p) are not yet produced in the Lm cells of 4- to 6-day-old microcolonies (not shown). ## 3. Conclusions The presented data show that various features typical of the U cells of giant colonies growing on complex respiratory medium and undergoing differentiation during their transition to the ammonia-producing alkali phase (10- to 15-day-old colonies) [23] are also found in Um cells located in the upper layers of alkali phase microcolonies that are only 3 to 4 days old. These features include the production of specific proteins, accumulation of storage material such as lipid droplets, activity of specific regulators such as TORC1, decreased function of mitochondria, low level of ROS, and high resistance to zymolyase, indicating a strengthening of the cell wall. Thus, all of these features of U cells seem to be predominantly related to signaling events and the metabolic reprogramming that accompanies the colony transition from the acidic to the alkali developmental phase [18, 22], rather than to the cell chronological aging. Such signaling processes leading to colony reprogramming could be initiated (by not yet identified mechanism(s)), for example, in the first microcolony that senses a nutrient shortage, that is, in a microcolony that is located in the densest area of plated microcolonies. Ammonia is then the signal that spreads the information about “the need for reprogramming” to the other microcolonies over the whole plate. The ability of ammonia to prematurely induce colonies to ammonia production independently of their current developmental phase [27] guarantees that even the sparsely plated microcolonies become induced and initiate the reprogramming and differentiation while still experiencing nutrient abundance. Um cells gain the major properties of the U cells of giant colonies, although they have spent a much shorter time in the stationary or slow growing phase than U cells.A comparison of the transcriptomes of “outside” and “inside” cells separated by FACS from 4-day-old microcolonies growing on complex glucose medium [29] showed that some expression characteristics of U cells from giant colonies grown on complex respiratory medium [23] are also found in “outside” cells. These characteristics include the expression of genes coding for ribosomal proteins and proteins of the translational machinery, genes for glycolytic enzymes, genes involved in amino acid metabolism, and some others [29]. Similarly, we observed the production of carbonic anhydrase Nce103p in the upper cell layers of microcolonies growing both on complex glucose [30] and complex glycerol (unpublished data) agar media. These data indicate that the expression of particular genes and activation of specific metabolic pathways could be profitable for cells in the upper layers of yeast colonies.In contrast to Um cells, only some features of L cells are preserved in Lm cells of 4- to 6-day-old microcolonies compared to 15- to 20-day-old giant colonies. These include a higher respiratory capacity, higher production of ROS, higher sensitivity to zymolyase, and the production of some proteins (such as Ole1p). Traven et al. 
[29] also demonstrated an increased expression of genes required for the activity of the mitochondrial respiratory chain in the “inside” cells of 4-day-old microcolonies (cells at a position within the colony similar to that of L cells) grown on glucose complex medium. All of these features are also typical of the L cells of 15- to 20-day-old giant colonies grown on complex respiratory medium [23]. On the other hand, other features of the L cells of giant colonies are not yet present in the Lm cells of 4- to 6-day-old microcolonies. In particular, Lm cells do not exhibit an enhanced sensitivity to some stresses such as heat shock and ethanol treatment, which indicates that these cells are in a better physiological condition than much older L cells from giant colonies. These stress-related features therefore seem to be more dependent on the chronological age of L cells and could also be related to the duration of the coexistence of U and L cells. Previous findings suggested that U cells are fed at the expense of L cells [23], which could then lead to a deepening starvation of L cells over time and to a consequent decrease in their overall viability in older giant colonies. Similarly, autophagy, which seems to be important for the longevity of U cells [23], is only activated later, in Um cells of 6- to 7-day-old microcolonies. This finding suggests that the regulation of autophagy is partially dependent on signaling events guiding the development of U cells (autophagy is only activated in U cells) but that it is also dependent on the aging and nutrition status of U cells.

## 4. Material and Methods

### 4.1. Strains and Media

S. cerevisiae strain BY4742 (MATα, his3Δ, leu2Δ, lys2Δ, ura3Δ) was from the EUROSCARF collection. BY4742-derived strains containing proteins (Ato1p, Ato3p, Pox1p, Icl2p, Ole1p, Ino1p, Met17p, and Gat1p) fused with GFP at their C-termini were constructed as described previously [23, 24]. The BY-PTEF1-GFP strain, expressing GFP under the control of the constitutive promoter of the TEF1 gene (PTEF1), was constructed by integration of a PTEF1-GFP-natNT2 cassette amplified from the pYM-N21 plasmid [31] into the HIS3 locus of strain BY4742. Yeast microcolonies were grown at 28°C either on GMA (1% yeast extract, 3% glycerol, 1% ethanol, 2% agar, 10 mM CaCl2) or on GMA-BKP (GMA, 0.01% bromocresol purple). For standard experiments, cells were plated at an approximate density of 5×10³ per plate.

### 4.2. Two-Photon Excitation Confocal Microscopy (2PE-CM)

The microcolony sample preparation and 2PE-CM of transversal vertical cross-sections of microcolonies were performed according to [24]. When required, the microcolony cross-sections were stained with Nile red (2.5 μg/mL) and concanavalin A labeled with Alexa Fluor 488 (ConA-AF, 30 μg/mL) as described in [17]. Alternatively, GFP fluorescence was monitored. An SP2 AOBS MP confocal scanner microscope (Leica) fitted with a Ti:Sapphire Chameleon Ultra laser (Coherent Inc.) and a 63×/1.20 water immersion plan apochromat objective was used. The excitation wavelength was 920 nm, and the emission bandwidths were 470–540 nm for ConA, 580–750 nm for NR, and 480–595 nm for GFP.

### 4.3. Colony Images

Colony images were captured in transmitted light with a Navitar objective and a complementary metal-oxide semiconductor camera (ProgRes CT3; Jenoptik).

### 4.4. Sorbitol Gradient Cell Fractionation
Cells from microcolonies were fractionated into subpopulations by centrifugation as described in [23] with the following modification: instead of sucrose, a 10–35% sorbitol gradient was used to avoid changes that could be induced by sucrose in the relatively young cells of microcolonies.

### 4.5. U and L Cell Resistance to Stresses

Cell resistance was assayed using 10-fold serial dilutions of cell suspensions (OD600 = 10) that were incubated at 52°C for 45 or 90 min or in 20% ethanol for 60 min and compared to untreated controls. Zymolyase resistance was determined as the decrease in the OD600 of a cell suspension (starting OD600 = 0.5) in 50 mM potassium phosphate buffer, pH 7.5, with 2 mM dithiothreitol and 5 U/mL zymolyase (MP Biomedicals).

### 4.6. Respiration Rate and ROS Quantification

The oxygen consumption of 5 mg of freshly isolated Um or Lm wet cell biomass was determined at 30°C in 1 mL of water using a 782 oxygen meter with a 1-mL MT-200A cell (Strathkelvin Instruments). ROS were quantified using DHE staining according to Čáp et al. [21] with minor modifications. Briefly, isolated Um and Lm cells were resuspended in water to a final concentration of 100 mg/mL. 7.5 μL of this suspension was incubated with 42.5 μL of water and 5 μL of a 25 μg/mL DHE solution (freshly prepared from a 1 mg/mL stock solution in DMSO). Cells were stained for 25 min in the dark and diluted with 1.95 mL of water, and the DHE fluorescence was measured using a FluoroMax 3 spectrofluorometer (Jobin Yvon) with excitation/emission wavelengths of 480/604 nm.

### 4.7. Amino Acid Concentration

Total intracellular amino acids were extracted from cell suspensions in water by boiling for 5 min, and their concentrations were determined by HPLC with precolumn derivatization by OPA [23, 32], using a ZORBAX Eclipse AAA, 3.5 μm, 4.6 × 75 mm reverse-phase column (Agilent) and fluorescence detection.
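As a sanity check on the staining arithmetic in Section 4.6, the short Python sketch below recomputes the working concentrations implied by the stated volumes and stock concentrations. The variable names are illustrative; only the numbers come from the protocol.

```python
# Recompute the DHE staining concentrations implied by Section 4.6.
# Volumes in microliters; stock concentrations as stated in the protocol.
cell_stock_mg_per_ml = 100.0               # isolated Um/Lm cells resuspended in water
v_cells, v_water, v_dhe = 7.5, 42.5, 5.0   # staining mix components (uL)
dhe_solution_ug_per_ml = 25.0              # DHE solution added to the mix
v_dilution = 1950.0                        # water added before the fluorescence reading (uL)

v_mix = v_cells + v_water + v_dhe          # 55 uL staining volume
cells_mix = cell_stock_mg_per_ml * v_cells / v_mix
dhe_mix = dhe_solution_ug_per_ml * v_dhe / v_mix

v_total = v_mix + v_dilution               # about 2 mL in the cuvette
cells_final = cells_mix * v_mix / v_total
dhe_final = dhe_mix * v_mix / v_total

print(f"staining mix: {cells_mix:.1f} mg/mL cells, {dhe_mix:.2f} ug/mL DHE")
print(f"in cuvette:   {cells_final:.2f} mg/mL cells, {dhe_final:.3f} ug/mL DHE")
```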
--- *Source: 102485-2013-07-21.xml*
2013
# Variational Methods for NLEV Approximation Near a Bifurcation Point

**Authors:** Raffaele Chiappinelli
**Journal:** International Journal of Mathematics and Mathematical Sciences (2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/102489

---

## Abstract

We review some more and less recent results concerning bounds on nonlinear eigenvalues (NLEV) for gradient operators. In particular, we discuss the asymptotic behaviour of NLEV (as the norm of the eigenvector tends to zero) in bifurcation problems from the line of trivial solutions, considering perturbations of linear self-adjoint operators in a Hilbert space. The proofs are based on the Lusternik-Schnirelmann theory of critical points on one side and on the Lyapounov-Schmidt reduction to the relevant finite-dimensional kernel on the other side. The results are applied to some semilinear elliptic operators in bounded domains of ℝN. A section reviewing some general facts about eigenvalues of linear and nonlinear operators is included.

---

## Body

## 1. Introduction and Examples

The term “nonlinear eigenvalue” (NLEV) is a frequent shorthand for “eigenvalue of a nonlinear problem”; see, for instance, [1, 3]. While for the estimation of eigenvalues of linear operators there is a wealth of abstract and computational methods (see, e.g., Kato's [4] and Weinberger's [5] monographs), for NLEV the question is relatively new, and there is not much literature available. In this paper, we review some abstract methods which allow for the computation of upper and lower bounds of NLEV near a bifurcation point of the linearized problem. Moreover, as one of our aims is to stimulate further research on the subject, we spend some effort in presenting it in a sufficiently general context, and we emphasize the question of the existence of eigenvalues for a nonlinear operator. In fact, Section 2 is entirely devoted to this, and to a parallel consideration of similar facts for linear operators.

Thus, generally speaking, consider two nonlinear (= not necessarily linear) operators A,B:E→F (E, F real Banach spaces) such that A(0)=B(0)=0. If for some λ∈ℝ the equation (1.1) A(u)=λB(u) has a solution u≠0, then we say that λ is an eigenvalue of the pair (A,B) and u is an eigenvector corresponding to λ. This definition is a word-by-word copy of the standard one for pairs of linear operators, where most frequently one takes E=F and B(u)=u, and of course it may be of very little significance in general. However, the demonstration of the importance of this concept for operator equations such as (1.1) goes back at least to Krasnosel'skii [6], with a view in particular to nonlinear integral equations of Hammerstein or Urysohn type.

In this paper, we consider (1.1) under the following qualitative assumptions: (A) (1.1) possesses infinitely many eigenvalues λn; (B) (1.1) has a linear reference problem A0(u)=λB0(u) which also possesses infinitely many eigenvalues λn0. It is then natural to try to approximate or estimate λn in terms of λn0. In the sequel, we will take F=E', the dual space of E, and assume that all operators involved are continuous gradient operators from E to E'; of course, this is done in order to exploit the full strength of variational methods. We emphasize in particular the case in which E is a Hilbert space, identified with its dual.

Next, we note that two main routes are available to guarantee (A) and (B).
The first involves the Lusternik-Schnirelmann (LS) theory of critical points for even functionals on symmetric manifolds (when A and B are odd mappings). The model example is the p-Laplace equation, briefly recalled in Example 1.1, exhibiting infinitely many eigenvalues and having the ordinary Laplace equation (p=2) as linear reference problem. From our point of view, a main advantage of LS theory is precisely that it grants—provided that the constraint manifold contains subsets of arbitrary genus and that the Palais-Smale condition is satisfied at all candidate critical levels—the existence of infinitely many distinct eigenvalue/eigenvector pairs of (1.1); see, for instance, Amann [7], Berger [8], Browder [9], Palais [10], and Rabinowitz [11].

The domain of applicability of LS theory embraces as a particular case of (1.1) NLEV problems of the form (1.2) (A(u)≡) A0(u)+P(u)=λu (≡λB(u)), where the operators act in a real Hilbert space H, A0 is linear and self-adjoint, and P is odd and viewed as a perturbation of A0. Under appropriate compactness and positivity assumptions on A0 and P, (A) and (B) will be satisfied. More general forms of (1.2)—such as A0(u)+P(u)=λB(u), where A0, P, and B are operators of E into its dual E' and A0 behaves as the p-Laplacian—have been considered by Chabrowski [12]; see Example 1.4 in this section.

However, problems of the form (1.2) can be studied in our framework also when P is not necessarily an odd mapping, but rather satisfies the local condition (1.3) P(u)=o(∥u∥) as u→0. Indeed in this case, Bifurcation theory ensures (see, e.g., [11]) that each isolated eigenvalue λ0 of finite multiplicity of A0 is a bifurcation point for (1.2), which roughly speaking means that solutions u≠0 of the unperturbed problem A0u=λ0u (i.e., eigenfunctions associated with λ0) do survive for the perturbed problem (1.2) in a neighborhood of u=0 and for λ near λ0. Therefore, the framework described at points (A) and (B) above is grosso modo respected also in this case, provided that A0 has a countable discrete spectrum.

When applicable, LS theory yields existence of eigenfunctions of any norm (provided of course that the relevant operators are defined in the whole space), in contrast with Bifurcation theory, which only yields (in this context) information near u=0.

In the main part of this paper (Section 3), we focus our attention upon equations of the form (1.2), having in mind—with a view to the applications—a P that is odd and satisfies (1.3). For such a P, both methods are applicable and can be tested to see which of them yields better quantitative information on the eigenvalues associated with small eigenvectors. More precisely, given an isolated eigenvalue λ0 of finite multiplicity of A0, the assumptions on P guarantee bifurcation at (λ0,0) from the line {(λ,0):λ∈ℝ} of trivial solutions, and in particular ensure the existence, for R>0 sufficiently small, of solutions (λR,uR) of (1.2) such that (1.4) ∥uR∥=R for each R, λR→λ0 as R→0, that is, parameterized by the norm R of the eigenvector uR and bifurcating from (λ0,0). If we qualify the condition P(u)=o(∥u∥) with the more specific requirement that, for some q>2, (1.5) P(u)=O(∥u∥^{q-1}) as u→0, then the information in (1.4) can be made more precise to yield estimates of the form (as R→0) (1.6) C1R^{q-2}+o(R^{q-2}) ≤ λR-λ0 ≤ C2R^{q-2}+o(R^{q-2}). We are interested in the evaluation of the constants C1 and C2. It turns out that these can be estimated in terms of λ0 itself and other known constants related to P.
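Before describing these estimates in the abstract setting, it may help to see the scaling in (1.6) at work on a concrete model problem. The following numerical sketch (added here for illustration; it is not part of the original survey) treats the one-dimensional case -u″ = μ(u + u³), u(0) = u(π) = 0, an instance of Example 1.3 below with f(x,s) = s³, hence q = 4 and q - 2 = 2. For each small initial slope, a shooting method finds the value of μ for which u(π) = 0; a formal Lyapounov-Schmidt computation predicts μ0 - μR ≈ (3/(2π))R² near μ0 = 1, where R is the H01 norm of the eigenfunction. Function names and tolerances are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid
from scipy.optimize import brentq

# Model problem: -u'' = mu*(u + u^3), u(0) = u(pi) = 0, i.e. (1.15) with
# f(x,s) = s^3, so q = 4 and (1.6) predicts |mu_R - mu_0| = O(R^{q-2}) = O(R^2).

def shoot(mu, slope):
    """Integrate u(0)=0, u'(0)=slope over (0, pi); return u(pi) and the solution."""
    rhs = lambda x, y: [y[1], -mu * (y[0] + y[0] ** 3)]
    sol = solve_ivp(rhs, (0.0, np.pi), [0.0, slope],
                    rtol=1e-10, atol=1e-12, dense_output=True)
    return sol.y[0, -1], sol

for slope in (0.05, 0.1, 0.2, 0.4):
    # Find mu < mu0 = 1 with u(pi) = 0: the cubic term speeds up the
    # oscillation, so u(pi) changes sign between mu = 0.5 and mu = 1.
    mu = brentq(lambda m: shoot(m, slope)[0], 0.5, 1.0, xtol=1e-12)
    _, sol = shoot(mu, slope)
    xs = np.linspace(0.0, np.pi, 2001)
    du = sol.sol(xs)[1]                      # u'(x) on a fine grid
    R = np.sqrt(trapezoid(du ** 2, xs))      # H^1_0 norm of the eigenfunction
    print(f"R = {R:.4f}   mu0 - mu_R = {1 - mu:.6f}   ratio/R^2 = {(1 - mu) / R**2:.4f}")
```

The printed ratios approach 3/(2π) ≈ 0.477 as R → 0, so for this particular nonlinearity both bounds of the form (1.6) hold with the same constant.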
We estimate these constants in two distinct ways, as indicated before. (i) Using Lusternik-Schnirelmann theory in order to estimate the difference λR-λ0 through the LS “minimax” critical levels: this approach was first used by Berger [8, Chapter 6, Section 6.7A] and then pursued by the author (see [13], e.g.) and subsequently by Chabrowski [12]. (ii) Using the Lyapounov-Schmidt method to reduce (1.2) to an equation in the finite-dimensional space N≡N(A0-λ0I), and then working carefully on the reduced equation in order to exploit the stronger condition (1.5): we have recently followed this approach in [14].

Our computations in Section 3 show that the second method is both technically and conceptually simpler, requires less of P (P need not be odd), and yields sharper results. We conclude Section 3, and the present work, by applying these abstract results to a simple semilinear elliptic equation; see Example 1.3. Let us remark in passing that in the case of ordinary differential equations, detailed estimates for NLEV near a bifurcation point have recently been proved by Shibata [15]. The techniques employed by him are elementary and straightforward—direct integration and manipulation of the differential equation, series expansion, and so on—but very efficiently used. Some earlier results in this style can be found, for instance, in [16].

The remaining parts of this paper are organised as follows. We complete this introductory section presenting (as a matter of example) some boundary-value problems for nonlinear differential equations, depending on a real parameter λ and admitting the zero solution for all values of λ, that can be cast in the form (1.1) with an appropriate choice of the function space E and of the operators A, B.

Section 2 is intended to recall for the reader's convenience some basic facts from the calculus of variations and critical point theory. We first indicate the reduction of (1.1) to the search for critical points of the potential a of A on the manifold V≡{u∈E:b(u)=const}, b being the potential of B. Some space is devoted to showing that absolute minima or maxima correspond to the first eigenvalue—we do this for the elementary case of homogeneous operators such as the p-Laplacian—while minimax critical levels correspond to higher order eigenvalues, both for linear and nonlinear operators. In this circle of ideas, we recall a few elements of LS theory that are helpful to state and prove our subsequent results.

Let us finally mention that foundations and inspiration for the study of NLEV problems are to be found in (among many others) Krasnosel'skii [6], Vainberg [17], Fučík et al. [18], Ambrosetti and Prodi [19], Nirenberg [20], Rabinowitz [11, 21, 22], Berger [8], Stackgold [23], and Mawhin [3].

Example 1.1 (the p-Laplace equation). The most famous (and probably most important) example of a nonlinear problem exhibiting the features described in points (A) and (B) above is provided by the p-Laplace equation (p>1): (1.7) -div(|∇u|^{p-2}∇u)=μ|u|^{p-2}u, in a bounded domain Ω⊂ℝN (N≥1), subject to the Dirichlet boundary condition u=0 on the boundary ∂Ω of Ω. Fix p>1, let E be the Sobolev space W01,p(Ω), equipped with the norm (1.8) ∥v∥^p_{W01,p}=∫Ω|∇v|^p, and let E'=W-1,p'(Ω) be the dual space of E.
A (weak) solution of (1.7) is a function u∈E such that (1.9) Ap(u)=λBp(u), where λ=μ^{-1} (μ≠0) and Ap,Bp:E→E' are defined by duality via the equations (1.10) 〈Bp(u),v〉=∫Ω|∇u|^{p-2}∇u∇v dx, 〈Ap(u),v〉=∫Ω|u|^{p-2}uv dx, where u,v∈E and 〈·,·〉 denotes the duality pairing between E and E'.

Equation (1.7) possesses countably many eigenvalues μn(p) (n∈ℕ), which are values of the real function ϕp defined via (1.11) ϕp(u)=∫Ω|∇u|^p / ∫Ω|u|^p (u∈W01,p(Ω), u≠0), and can be naturally arranged in an increasing sequence (1.12) μ1(p)≤μ2(p)≤⋯≤μn(p)≤⋯, lim_{n→∞}μn(p)=+∞. This relies on the very special nature of (1.7), because Ap and Bp are (i) odd (F:E→E' is said to be odd if F(-u)=-F(u) for u∈E); (ii) positively homogeneous of the same degree p-1>0 (F positively homogeneous of degree α means that F(tu)=t^αF(u) for t>0 and u∈E); (iii) gradient (F gradient means that 〈F(u),v〉=f'(u)v for some functional f on E).

The existence of the sequence (μn(p)) then follows (using the compactness of the embedding of W01,p(Ω) in Lp(Ω)) by the Lusternik-Schnirelmann theory of critical points for even functionals on sphere-like manifolds (see the references cited in Section 1). The eigenvalues μn(p) have been studied in detail, and in particular, as to their asymptotic behaviour, García Azorero and Peral Alonso [24] and Friedlander [25] have proved the two-sided inequality (1.13) A(n/|Ω|)^{p/N} ≤ μn(p) ≤ B(n/|Ω|)^{p/N} to hold for all sufficiently large n and for suitable positive constants A and B depending only on N and p; |Ω| stands for the (Lebesgue) N-dimensional volume of Ω. This generalizes in part the classical result of Weyl [26] for the linear case (corresponding to p=2 in (1.7)), that is, for the eigenvalues μn0 of the Dirichlet Laplacian -Δu=μu, u∈W01,2(Ω): (1.14) μn0≡μn(2)=K(n/|Ω|)^{2/N}+o(n^{2/N}) (n→∞).

Evidently, this and similar questions would be of greater interest should one be able to prove that the μn(p) are the only eigenvalues of (1.7); however, this is demonstrated only for N=1, in which case they can be computed by explicit solution of (1.7). For this, as well as for a general discussion of the features of (1.7), its eigenvalues, and in particular the very special properties enjoyed by the first eigenvalue μ1(p) (corresponding to the minimum of the functional ϕp defined in (1.11)) and the associated eigenfunctions, we refer the reader to the beautiful Lecture Notes by Lindqvist [27]. For an interesting discussion on the existence of eigenvalues outside the LS sequence in problems related to (1.7), we recommend the very recent papers [28, 29].

Remark 1.2. The existence of countably many eigenvalues for (1.7) has recently been proved by means of a completely different method from LS theory, namely, the representation theory of compact linear operators in Banach spaces developed in [30]. Actually, the eigenfunctions associated with these eigenvalues are defined in a weaker sense, and only upper bounds of the type in (1.13) are proved. As emphasized in [30], it is not clear what connection (if any) there is between the higher eigenvalues found by the two procedures, nor whether there are eigenvalues not found by either method.

Example 1.3 (semilinear equations). As a second example, consider the semilinear elliptic eigenvalue problem (1.15) -Δu=μ(u+f(x,u)), u∈H01(Ω)≡W01,2(Ω), again in a bounded domain Ω⊂ℝN, where the nonlinearity is given by a real-valued function f=f(x,s) defined on Ω×ℝ and satisfying the following hypotheses: (HF0) f satisfies Carathéodory conditions (that is, f is continuous in s for a.e.
x∈Ω and measurable in x for all s∈ℝ); (HF1) there exist a constant a≥0 and an exponent q with 2<q<2N/(N-2)≡2* if N>2, 2<q<∞ if N≤2, such that (1.16) |f(x,s)|≤a|s|^{q-1} for x∈Ω (a.e.), s∈ℝ.

Here we take the Hilbert space H=H01(Ω) equipped with the scalar product (1.17) (u,v)=∫Ω∇u∇v dx, and consider again weak solutions of (1.15), defined now as solutions of the equation in H (1.18) A0u+P(u)=λu, where, as before, λ=1/μ (μ≠0), while the operators A0,P:H→H are defined (using the self-duality of H based on (1.17)) by the equations (1.19) (A0u,v)=∫Ω uv dx, (P(u),v)=∫Ω f(x,u)v dx, for u,v∈H (note: we write here and henceforth A0 for A2). Then we see that also (1.15) can be cast in the form (1.1), with B(u)=u and (1.20) A=A0+P.

Despite this formal similarity, the present example is essentially different from Example 1.1. To see this, first note that the basic eigenvalue problem for the Dirichlet Laplacian, -Δu=μu, u∈H01(Ω), takes (in our notation) the form (1.21) A0(u)=λu and involves—of course—only linear operators, A0 and the identity map. These can be seen as a special type of 1-homogeneous operators; now while in the former example they are replaced with the (p-1)-homogeneous operators Ap and Bp defined in (1.10), here we deal with an additive perturbation, A0+P, of A0. This new operator A=A0+P is still a gradient, and will be odd if f is taken odd in its dependence upon the second variable, but plainly it is no longer 1-homogeneous (except when f(x,s)=a(x)s, a∈L∞(Ω), in which case of course we would be dealing with a linear perturbation of a linear problem: see for this [31] or [5]).

Nevertheless, the assumptions (HF0)-(HF1) and results from Bifurcation theory ensure all the same (as indicated before in Section 1; see also Section 3 for more details) that eigenvalues μR for (1.15) do exist, associated with eigenfunctions uR of small H01 norm R, near each fixed eigenvalue μ0=μk0 of the Dirichlet Laplacian; we put here and henceforth μk0=(λk0)^{-1}, with λk0 the kth eigenvalue of (1.21).

Two main differences with the former situation must be noted at once. (i) First, the loss of homogeneity causes the eigenvalues to “depend on the norm of the eigenfunction,” unlike in Example 1.1, where it suffices to consider normalized eigenvectors. Indeed in general, if the operators A and B appearing in (1.1) are both homogeneous of the same degree, it is clear that if u0 is an eigenvector corresponding, say, to the eigenvalue μ^, then so is tu0 for any t>0. (ii) Second, Bifurcation theory provides in the present “generic” situation only local results, that is, results holding in a neighborhood of (μ0,0)∈ℝ×H01(Ω), and thus concerning eigenfunctions of small norm.
“Generic” means that we here ignore the multiplicity of μ0: for if we knew this to be an odd number, then global results would be available from the theory [32], granting the existence of an unbounded “branch” (in ℝ×H01(Ω)) of solution pairs (μ,u) bifurcating from (μ0,0).

Under the assumptions (HF0)-(HF1), we have shown in particular (see [14, 33, 34]) that (1.22) μR=μ0+O(R^{q-2}) as R→0 (i.e., |μR-μ0|≤KR^{q-2} for some K≥0 and all sufficiently small R>0), and more precisely that (1.23) CR^{q-2}+o(R^{q-2}) ≤ μ0-μR ≤ DR^{q-2}+o(R^{q-2}) as R→0, for suitable constants C, D related to f; here o(R^{q-2}) denotes as usual an unspecified function h=h(R) such that h(R)/R^{q-2}→0 as R→0.

In Section 3, we explain how estimates like (1.23) follow independently both from LS theory (when f is odd in s) and from Bifurcation theory, and we refine our previous results in the estimate of the constants C and D.

Example 1.4 (quasilinear equations). The results indicated in Examples 1.1 and 1.3 can be partly extended to the problem (1.24) -div(|∇u|^{p-2}∇u)=μ(|u|^{p-2}u+f(x,u)), u∈W01,p(Ω), where p>1 and f=f(x,s) is dominated by |s|^{q-1} for some p<q<p*, with (1.25) p*≡Np/(N-p) if N>p, p*≡∞ if N≤p. Equation (1.24) reduces to (1.7) if f≡0 and to (1.15) if p=2, and therefore formally provides a common framework for both equations. However, it must be noted—looking at the bifurcation approach indicated in the previous example—that the desired extension can only be partial, because for p≠2, (1.24) is no longer a perturbation of a linear problem, but of the homogeneous problem (1.7). Bifurcation should thus be considered from the eigenvalues of the p-Laplace operator, but to my knowledge there is (in the general case) no abstract result about bifurcation from the eigenvalues of a homogeneous operator (let alone from those of a general nonlinear operator). A fundamental exception is that of the first eigenvalue of a homogeneous operator (see Theorem 2.4 and Remark 2.6 in Section 2), which possesses—under additional assumptions on the operator itself—remarkable properties such as the positivity of the associated eigenfunctions; see [35]. These properties have been extensively used (in [36, 37], e.g.) in order to prove global bifurcation results for (1.24) from the first eigenvalue of the p-Laplacian. Related results can be found in [38, 39].

Clearly, in case the f appearing in (1.24) is odd in its second variable, typically when f is of the form (1.26) f(x,s)=a(x)|s|^{q-2}s, a∈L∞(Ω), then one can resort again to LS theory, because the resulting abstract equation (1.27) Ap(u)+P(u)=λBp(u) (with Ap and Bp as in (1.10) and P defined via (1.19) and (1.26)) involves operators which are all odd, and one can prove in this way bifurcation from each eigenvalue μn(p) of (1.7). For the corresponding results, see Chabrowski [12]. To be precise, the problem dealt with by Chabrowski is slightly different, as he considers the modified form of (1.24) in which f sits on the left-hand side (i.e., it is added to the p-Laplacian) rather than on the right-hand side of the equation. Needless to say, this does not change the essence of our remark, nor would the results for (1.24) be much different from those in [12].

## 2. Existence of Eigenvalues for Gradient Operators

Consider (1.1) where A,B:E→E' (E a real, infinite-dimensional, reflexive Banach space) and suppose that 〈B(u),u〉≠0 for u≠0. If λ is an eigenvalue of (A,B) and u is a corresponding eigenvector, then (2.1) λ=〈A(u),u〉/〈B(u),u〉≡R(u).
Thus, the eigenvalues of (A,B)—if any—must be searched for among the values of the function R defined on E∖{0} by means of (2.1). R is called the Rayleigh quotient relative to (A,B), and its importance for pairs of linear operators is well established [5].

Well-known simple examples (just think of linear operators) show that without further assumptions, there may be no eigenvalues at all for (A,B). On the other hand, we know that a real symmetric n×n matrix has at least one eigenvalue, and so does any self-adjoint linear operator in an infinite-dimensional real Hilbert space, provided it is compact. The nonlinear analogue of the class of self-adjoint operators is that of gradient operators, which are the natural candidates for the use of variational methods.

In their simplest and oldest form, going back to the Calculus of Variations, variational methods consist in finding the minimum or the maximum value of a functional on a given set in order to find a solution of a problem in the set itself. Basically, if we wish to solve the equation (2.2) A(u)=0, u∈E, and A:E→E' is a gradient operator, which means that (2.3) 〈A(u),v〉=a′(u)v for all u,v∈E, for some differentiable functional a:E→ℝ (the potential of A), then we just need to find the critical points of a, that is, the points u∈E where the derivative a'(u) of a vanishes. The images a(u) of these points are by definition the critical values of a, and the simplest such are evidently the minimum and the maximum values of a (provided of course that they are attained). However, from the standpoint of eigenvalue theory, the relevant equation is (1.1), whose solutions u are—when also B is a gradient—the critical points of a constrained to b(u)=const, where b is the potential of B. To be precise, normalize the potentials assuming that a(0)=b(0)=0 and consider for c≠0 the “surface” (2.4) Vc≡{u∈E:b(u)=c}. Then at a critical point u∈Vc of the restriction of a to Vc, we have (2.5) a′(u)v=λb′(u)v for all v∈E, for some Lagrange multiplier λ. This is the same as to write A(u)=λB(u), and thus yields an eigenvalue-eigenvector pair (λ,u)∈ℝ×Vc for (1.1); note that 0∉Vc if c≠0. Of course, to derive (2.5) we need some regularity of Vc, and this is ensured (if B is continuous) by the assumptions made upon B, which guarantee—since b'(u)u=〈B(u),u〉≠0 for u≠0 and 0∉Vc—that Vc is indeed a C1 submanifold of E of codimension one [40].

Let us collect the above remarks, stating formally the basic assumptions on A, B and the basic fact on the existence of at least one eigenvalue for A, B.

(AB0) A,B:E→E' are continuous gradient operators with 〈B(u),u〉≠0 for u≠0.

Theorem 2.1. Suppose that A, B satisfy (AB0) and let Vc be as in (2.4). Suppose, moreover, that the potential a of A is bounded above on Vc and let M≡sup_{u∈Vc}a(u). If M is attained at u0∈Vc, then there exists λ0∈ℝ such that (2.6) A(u0)=λ0B(u0). That is, u0 is an eigenvector of the pair (A,B) corresponding to the eigenvalue λ0. A similar statement holds if a is bounded below, provided that m≡inf_{u∈Vc}a(u) is attained.

### 2.1. The First Eigenvalue (for Linear and Nonlinear Operators)

Looking at the statement of Theorem 2.1, we remark that in general there may be several points/eigenvectors u∈Vc (if any at all) where M is attained, and consequently different corresponding eigenvalues (the values taken by the Rayleigh quotient (2.1) at such points). However, in a special case, λ0 is uniquely determined by M and plays the role of “first eigenvalue” of (A,B): this is when A and B are positively homogeneous of the same degree.
Recall that A is said to be positively homogeneous of degree α>0 if A(tu)=t^αA(u) for u∈E and t>0. For such operator pairs, it is sufficient to consider a fixed level set (that is, to consider normalized eigenvectors), for instance, (2.7) V≡{u∈E:b(u)=1}.

Theorem 2.2. Let A,B:E→E' satisfy (AB0) and let a, b be their respective potentials. Suppose in addition that A, B are positively homogeneous of the same degree. If a is bounded above on V and M=sup_{u∈V}a(u) is attained at u0∈V, then (2.8) A(u0)=MB(u0). Moreover, M is the largest eigenvalue of the pair (A,B). Likewise, if a is bounded below and m=inf_{u∈V}a(u) is attained, then m is the smallest eigenvalue of the pair (A,B).

Let us give the direct, easy proof of Theorem 2.2, which does not even need Lagrange multipliers. The homogeneity of A and B implies that (2.9) sup_{u∈V}a(u) = sup_{u≠0}a(u)/b(u) = sup_{u≠0}〈A(u),u〉/〈B(u),u〉. Indeed, recall (see [7] or [8]) that a continuous gradient operator A is related to its potential a (normalized so that a(0)=0) by the formula (2.10) a(u)=∫0^1〈A(tu),u〉dt. Thus, if A is homogeneous of degree α, we have (2.11) a(u)=〈A(u),u〉/(α+1), and similarly for b; in particular, a and b are (α+1)-homogeneous. Therefore, if for u≠0 we put t(u)=(b(u))^{-1/(α+1)}, we have (2.12) a(u)/b(u)=a(t(u)u), and as b(t(u)u)=1 (i.e., t(u)u∈V), the first equality in (2.9) follows immediately, and so does the second by virtue of (2.11). By (2.9) and the definition of M, we have (2.13) a(u)-Mb(u)≤0 for any u∈E. Suppose now that M is attained at u0∈V. Then a(u0)-Mb(u0)=0. Thus, u0 is a point of absolute maximum of the map K≡a-Mb:E→ℝ, and therefore its derivative K'(u0)=a'(u0)-Mb'(u0) at u0 vanishes, that is, (2.14) A(u0)=MB(u0). This proves (2.8). To prove the final assertion, observe that by (2.9), M is also the maximum value of the Rayleigh quotient, and therefore the largest eigenvalue of (A,B) by the remark made above.

So the real question posed by Theorems 2.1 and 2.2 is: how can we ensure that (i) a is bounded and (ii) a attains its maximum (or minimum) value on V? The first question would be settled by requiring in principle that V is bounded and that A (and therefore a) is bounded on bounded sets. However, to answer (ii) affirmatively, we need some compactness anyway, and as E is infinite-dimensional—which makes it hard to hope that V be compact—such a property must be demanded of a (or of A).

Definition 2.3. A functional a:E→ℝ is said to be weakly sequentially continuous (wsc for short) if a(un)→a(u) whenever un→u weakly in E, and weakly sequentially lower semicontinuous (wslsc) if (2.15) liminf_{n→∞}a(un)≥a(u) whenever un→u weakly in E. Finally, a is said to be coercive if a(un)→+∞ whenever ∥un∥→+∞.

Theorem 2.4. Let A,B:E→E' satisfy (AB0) and let a, b be their respective potentials. Suppose that (i) a is wsc; (ii) b is coercive and wslsc. Then a is bounded on V. Suppose moreover that A and B are positively homogeneous of the same degree. If M≡sup_{u∈V}a(u)>0 (resp., m≡inf_{u∈V}a(u)<0), then it is attained and is the largest (resp., smallest) eigenvalue of (A,B).

Proof. Suppose by way of contradiction that a is not bounded above on V, and let (un)⊂V be such that a(un)→+∞. As b is coercive, (un) is bounded (in fact, V itself is a bounded set), and therefore, as E is reflexive, we can assume—passing if necessary to a subsequence—that (un) converges weakly to some u0∈E. As a is wsc, it follows that a(un)→a(u0), contradicting the assumption that a(un)→+∞. Thus, M is finite, and we can now let (un)⊂V be a maximizing sequence, that is, a sequence such that a(un)→M.
As before, we can assume that (un) converges weakly to some u0∈E, and the weak sequential continuity of a now implies that a(u0)=M. It remains to prove—under the stated additional assumptions—that u0∈V. To do this, first observe that (as b is wslsc) (2.16) 1=liminf_{n→∞}b(un)≥b(u0). We claim that b(u0)=1. Indeed, suppose by way of contradiction that b(u0)<1, and let t0>0 be such that t0u0∈V; such a t0 is uniquely determined by the condition (2.17) b(t0u0)=t0^{α+1}b(u0)=1, which yields t0=(b(u0))^{-1/(α+1)} and shows that t0>1. But then, as M>0, we would have (2.18) a(t0u0)=t0^{α+1}a(u0)=t0^{α+1}M>M, which contradicts the definition of M and proves our claim. The proof that m is attained if it is strictly negative is entirely similar.

Example 2.5 (the first eigenvalue of the p-Laplace operator). If A=Ap and B=Bp are defined as in Example 1.1, we have (2.19) a(u)=∫Ω|u|^p/p, b(u)=∫Ω|∇u|^p/p (u∈W01,p(Ω)) for their respective potentials (see (2.11)), and therefore (2.20) a(u)/b(u) = 〈Ap(u),u〉/〈Bp(u),u〉 = ∫Ω|u|^p / ∫Ω|∇u|^p = (ϕp(u))^{-1} (u≠0), with ϕp as in (1.11). The compact embedding of W01,p(Ω) into Lp(Ω) implies that a is wsc (see the comments following Definition 2.7); moreover, looking at (1.8), we see that b is coercive, while its weak sequential lower semicontinuity is granted as a property of the norm of any reflexive Banach space [41]. It follows by Theorem 2.4 that (2.21) λ1(p)≡sup ∫Ω|u|^p / ∫Ω|∇u|^p is attained and is the largest eigenvalue of the pair (Ap,Bp), which is the same as to say that μ1(p)≡(λ1(p))^{-1} is the smallest eigenvalue of (1.7). This shows the existence and variational characterization of the first point in the spectral sequence (1.12).

Remark 2.6. Much more can be said about μ1(p): in particular, μ1(p) is isolated and simple (i.e., the corresponding eigenfunctions are multiples of each other), and moreover the eigenfunctions do not change sign in Ω. These fundamental properties (proved, e.g., in [27]) are, among others, at the basis of the global bifurcation results for equations of the form (1.24) due to [36, 37]. For an abstract version of these properties of the first eigenvalue, see [35].

Let us now indicate very briefly some conditions on A, B ensuring the properties required of a, b in Theorem 2.4.

Definition 2.7. A mapping A:E→F (E, F Banach spaces) is said to be strongly sequentially continuous (strongly continuous for short) if it maps weakly convergent sequences of E to strongly convergent sequences of F.

It can be proved (see, e.g., [7]) that if a gradient operator A:E→E' is strongly continuous, then its potential a is wsc. Moreover, it is easy to see that a strongly continuous operator A:E→F is compact, which means by definition that A maps bounded sets of E onto relatively compact sets of F (or equivalently, that any bounded sequence (un) in E contains a subsequence (unk) such that A(unk) converges in F). Moreover, when A is a linear operator, it is strongly continuous if and only if it is compact [42].

Definition 2.8. A mapping A:E→E' is said to be strongly monotone if (2.22) 〈A(u)-A(v),u-v〉≥k∥u-v∥^2 for some k>0 and for all u,v∈E.

It can be proved (see, e.g., [9]) that if a gradient operator A is strongly monotone, then its potential a is coercive and wslsc.

With the help of Definitions 2.7 and 2.8, Theorem 2.4 can easily be restated using hypotheses which only involve the operators A and B. Rather than doing this in general, we wish to highlight the special case in which E=E'=H, a real Hilbert space (whose scalar product will be denoted (·,·)), and B(u)=u.
In fact, this is the situation that we will mainly consider from now on. Note that in this case, if A is positively homogeneous of degree 1, we have by (2.9) (2.23) sup_{b(u)=1}a(u) = sup_{u≠0}(A(u),u)/∥u∥^2 = sup_{u∈S}(A(u),u), where (2.24) S={u∈H:∥u∥=1}.

Corollary 2.9. Let H be a real, infinite-dimensional Hilbert space and let A:H→H be a strongly continuous gradient operator which is positively homogeneous of degree 1. Let (2.25) M=sup_{u∈S}(A(u),u), m=inf_{u∈S}(A(u),u). Then M, m are finite, and moreover if M>0 (resp., m<0), it is attained and is the largest (resp., smallest) eigenvalue of A.

Remark 2.10. The result just stated holds true under the weaker assumption that A be compact; see [43, Theorem 1.2 and Remark 1.2], where also noncompact maps are considered. In this case, however, the condition M>0 must be replaced by M>α(A), with α(A) the measure of noncompactness of A.

Among the 1-positively homogeneous operators, a distinguished subclass is formed by the bounded linear operators acting in H. Denoting such an operator by T, we first recall (see, e.g., [8]) that T is a gradient if and only if it is self-adjoint (or symmetric), that is, (Tu,v)=(u,Tv) for all u,v∈H. Next, a classical result of functional analysis (see, e.g., [42]) states that if a linear operator T:H→H is self-adjoint and compact, then it has at least one eigenvalue. The precise statement is as follows: put (2.26) λ1+(T)≡sup_{u∈S}(Tu,u), λ1-(T)≡inf_{u∈S}(Tu,u). Then λ1+(T)≥0, and if λ1+(T)>0, then it is attained and is the largest eigenvalue of T. Similar statements—with reversed inequalities—hold for λ1-(T). Evidently, these can be proven as particular cases of Corollary 2.9, except for the nonstrict inequalities, which are due to our assumptions that H has infinite dimension and that T is compact. Indeed, if, for instance, we had λ1+(T)<0, then the very definition (2.26) would imply that |(Tu,u)|≥α∥u∥^2 for some α>0 and all u∈H, whence it would follow (by the Schwarz inequality) that ∥Tu∥≥α∥u∥ for all u∈H, implying that T has a bounded inverse T^{-1} and therefore that S=T^{-1}T(S) is compact, which is absurd. Finally, note that λ1+(T)=λ1-(T)=0 can only happen if (Tu,u)=0 for all u∈H, implying that T≡0 [42]. The conclusion is that any compact self-adjoint operator has at least one nonzero eigenvalue, provided that it is not identically zero.

### 2.2. Higher Order Eigenvalues (for Linear and Nonlinear Operators)

Let us remain for a while in the class of bounded linear operators. For these, the use of variational methods in order to study the existence and location of higher order eigenvalues is entirely classical and well represented by the famous minimax principle for the eigenvalues of the Laplacian [31]. From the standpoint of operator theory (see, e.g., [44] or [45], Chapter XI, Theorem 1.2), this consists in characterizing the (positive, say) eigenvalues of a compact self-adjoint operator T in a Hilbert space H as follows. For any integer n≥0, let (2.27) 𝒰n={V⊂H : V subspace of dimension ≤ n}, and for n≥1 set (2.28) cn(T)=inf_{V∈𝒰n-1} sup_{u∈S∩V⊥}(Tu,u), where S={u∈H:∥u∥=1} and V⊥ is the subspace orthogonal to V. Then (2.29) (sup_{u∈S}(Tu,u)=) c1(T)≥c2(T)≥⋯≥cn(T)≥⋯≥0, and if cn(T)>0, T has n eigenvalues above 0; precisely, ci(T)=λi+(T) for i=1,…,n, where (λi+(T)) denotes the (possibly finite) sequence of all such eigenvalues, arranged in decreasing order and counting multiplicities. There is also a “dual” formula for the positive eigenvalues: (2.30) λn(T)=sup_{V∈𝒱n} inf_{u∈S∩V}(Tu,u), where (2.31) 𝒱n≡{V⊂H : V subspace of dimension ≥ n}.
The above formulae (2.28)–(2.30) may appear quite involved at first sight, but the principle on which they are based is simple enough. Suppose we have found the first eigenvalue λ1(T)≡λ1+(T) (>0) as in (2.26). For simplicity, we consider just positive eigenvalues, and so we drop the superscript +. Now, iterate the procedure: let (i) v1∈S be such that Tv1=λ1(T)v1; (ii) V2≡{u∈H:(u,v1)=0}≡v1⊥; (iii) λ2(T)≡sup_{u∈S∩V2}(Tu,u). Then λ1(T)≥λ2(T)≥0, and if λ2(T)>0, then it is attained and is an eigenvalue of T: indeed—due to the symmetry of T—the restriction T2 of T to V2 is an operator in V2, and so one can apply to T2 the same argument used above for T to prove the existence of λ1. Moreover, in this case, if we let (i) v2∈S be such that Tv2=λ2v2, (ii) Z2≡[v1,v2]≡{αv1+βv2:α,β∈ℝ}, then it is immediate to check that (2.32) λ2(T)=inf_{u∈S∩Z2}(Tu,u). Collecting these facts, and using some linear algebra, it is not difficult to see that (2.33) λ2(T)=inf_{V∈𝒰1} sup_{u∈S∩V⊥}(Tu,u)=sup_{V∈𝒱2} inf_{u∈S∩V}(Tu,u), where (2.34) 𝒱2≡{V⊂H : V subspace of dimension ≥ 2}. For a rigorous discussion and complete proofs of the above statements, we refer the reader to [44, 45] or [5], for instance.

Corollary 2.11. If T:H→H is compact, self-adjoint, and positive (i.e., such that (Tu,u)>0 for u≠0), then it has infinitely many eigenvalues λn0: (2.35) (sup_{u∈S}(Tu,u))=λ10≥λ20≥⋯≥λn0≥⋯>0. Moreover, λn0→0 as n→∞.

The last statement is easily proved as follows: suppose instead that λn0≥k>0 for all n∈ℕ. For each n, pick un∈S with Tun=λn0un; we have (un,um)=0 for n≠m because T is self-adjoint. Then T(un/λn0)=un, and the compactness of T would now imply that (un) contains a convergent subsequence, which is absurd since ∥un-um∥^2=2 for all n≠m.

We now finally come to the nonlinear version of the minimax principle, that is, the Lusternik-Schnirelmann (LS) theory of critical points for even functionals on the sphere [17]. There are various excellent accounts of the theory in much greater generality (see, e.g., Amann [7], Berger [8], Browder [9], Palais [10], and Rabinowitz [11, 21]), and so we need just mention a few basic points of it; these will lead us in short to a simple but fundamental statement (Corollary 2.17), a striking proper generalization of Corollary 2.11 that will be used in Section 3.

For R>0, let (2.36) SR≡RS={u∈H:∥u∥=R}. If K⊂SR is symmetric (i.e., u∈K⇒-u∈K), then the genus of K, denoted γ(K), is defined as (2.37) γ(K)=inf{n∈ℕ : there exists a continuous odd mapping of K into ℝn∖{0}}. If V is a subspace of H with dim V=n, then γ(SR∩V)=n. For n∈ℕ, put (2.38) Kn(R)={K⊂SR : K compact and symmetric, γ(K)≥n}. In the search for critical points of a functional, the so-called Palais-Smale condition is of prime importance. For a continuous gradient operator A:H→H (with potential a), and for a given R>0, put (2.39) D(x)≡A(x)-((A(x),x)/R^2)x (x∈H), and call D the gradient of a on SR. Essentially, for a given x∈SR, D(x) is the tangential component of A(x), that is, the component of A(x) on the tangent space to SR at x.

Definition 2.12. Let A:H→H be a continuous gradient operator and let a be its potential. a is said to satisfy the Palais-Smale condition at c∈ℝ ((PS)c for short) on SR if any sequence (xn)⊂SR such that a(xn)→c and D(xn)→0 contains a convergent subsequence.

Lemma 2.13. Let A:H→H be a strongly continuous gradient operator and let a:H→ℝ be its potential. Suppose that a(x)≠0 implies A(x)≠0. Then a satisfies (PS)c on SR for each c≠0.

Proof. It is enough to consider the case R=1. So let (xn)⊂S be a sequence such that a(xn)→c≠0 and (2.40) D(xn)=A(xn)-(A(xn),xn)xn→0.
We can assume—passing if necessary to a subsequence—that (xn) converges weakly to some x0. Therefore, A(xn)→A(x0), and moreover—as a is wsc—a(xn)→a(x0) and similarly (A(xn),xn)→(A(x0),x0). Thus, a(x0)=c≠0, and therefore A(x0)≠0 by assumption. It follows from (2.40) that (A(xn),xn)xn→A(x0)≠0. This first shows that (A(x0),x0)≠0 (otherwise we would have (A(xn),xn)xn→0) and then implies—since xn=(A(xn),xn)^{-1}(A(xn),xn)xn—that (xn) converges strongly to (A(x0),x0)^{-1}A(x0), which is of course x0 itself.

Example 2.14. Here are two simple but important cases in which the assumption mentioned in Lemma 2.13 is satisfied: (i) A is a positive (resp., negative) operator, that is, (A(u),u)>0 (resp., (A(u),u)<0) for u∈H, u≠0; (ii) A is a positively homogeneous operator. Indeed, if A is, for instance, positive, then in particular A(u)≠0 for u≠0, and so the conclusion follows because a(u)≠0 implies that u≠0. While if A is positively homogeneous of degree, say, α, then a(u)=〈A(u),u〉/(α+1), and so the conclusion is immediate.

Theorem 2.15. Suppose that A:H→H is an odd strongly continuous gradient operator, and let a be its potential. Suppose that a(x)≠0 implies A(x)≠0. For n∈ℕ and R>0, put (2.41) Cn(R)≡sup_{Kn(R)} inf_K a(u), where Kn(R) is as in (2.38). Then (2.42) (sup_{SR}a(u))=C1(R)≥⋯≥Cn(R)≥Cn+1(R)≥⋯≥0. Moreover, Cn(R)→0 as n→∞, and if Ck(R)>0 for some k∈ℕ, then for 1≤n≤k, Cn(R) is a critical value of a on SR. Thus, there exist λn(R)∈ℝ, un(R)∈SR (1≤n≤k) such that (2.43) Cn(R)=a(un(R)), (2.44) A(un(R))=λn(R)un(R).

Remark 2.16. A similar assertion holds for the negative minimax levels of a: (2.45) (inf_{SR}a(u))=D1(R)≤⋯≤Dn(R)≤Dn+1(R)≤⋯≤0.

Indication of the Proof of Theorem 2.15. The sequence (Cn(R)) is nonincreasing because for any n∈ℕ, we have Kn(R)⊃Kn+1(R), as shown by (2.38). Also, C1(R)=sup_{SR}a(u) because K1(R) contains all sets of the form {x}∪{-x}, x∈SR [11]. For the proof that Cn(R)→0 as n→∞, we refer to Zeidler [46]. Finally, if Ck(R)>0, since by Lemma 2.13 we know that a satisfies (PS) at the level Ck(R), it follows by standard facts of critical point theory (see any of the cited references) that Ck(R) is attained and is a critical value of a on SR.

Corollary 2.17. Let A:H→H be an odd strongly continuous gradient operator, and suppose moreover that A is positive. Then the numbers Cn(R) defined in (2.41) are all positive. Thus, for each R>0, there exists an infinite sequence of “eigenpairs” (λn(R),un(R))∈ℝ×SR satisfying (2.43)-(2.44).

In conjunction with Corollary 2.17, the following result—for which we refer to [8]—will be used to carry out our estimates in Section 3.

Proposition 2.18. Let A0:H→H satisfy the assumptions of Corollary 2.17. Suppose moreover that A0 is linear (and therefore is a linear compact self-adjoint positive operator in H). Then (2.46) Cn0(R)≡sup_{Kn(R)} inf_K a0(u)=(1/2)λn0R^2, where a0(u)=(1/2)(A0(u),u) and (λn0) is the decreasing sequence of the eigenvalues of A0, as in Corollary 2.11.
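The minimax characterizations (2.28) and (2.30), as well as the decay λn0 → 0 of Corollary 2.11, are easy to inspect numerically on a finite-dimensional surrogate of a compact self-adjoint positive operator. The sketch below (an added illustration, not from the paper) takes T to be the inverse of the discretized 1-D Dirichlet Laplacian and checks that the optimal subspaces in (2.28) and (2.30), namely spans of leading eigenvectors, deliver the ordered eigenvalues.

```python
import numpy as np

# Finite-dimensional surrogate of a compact self-adjoint positive operator:
# T = inverse of the discrete 1-D Dirichlet Laplacian on m interior points.
m = 200
h = 1.0 / (m + 1)
L = (np.diag(2.0 * np.ones(m)) + np.diag(-np.ones(m - 1), 1)
     + np.diag(-np.ones(m - 1), -1)) / h**2
T = np.linalg.inv(L)                      # symmetric positive definite

lam, Q = np.linalg.eigh(T)                # eigenvalues in ascending order
lam, Q = lam[::-1], Q[:, ::-1]            # reorder so lam[0] >= lam[1] >= ... > 0

for n in (1, 2, 5, 20):
    # Dual formula (2.30): with V = span of the top n eigenvectors,
    # inf over S ∩ V of (Tu,u) is the least eigenvalue of T restricted to V.
    Vn = Q[:, :n]
    inf_on_V = np.linalg.eigvalsh(Vn.T @ T @ Vn).min()
    # Formula (2.28): with V = span of the top n-1 eigenvectors, the sup of
    # (Tu,u) over S ∩ V-perp is attained at the n-th eigenvector.
    vn = Q[:, n - 1]
    sup_on_Vperp = vn @ T @ vn
    print(f"n={n:2d}: lam_n={lam[n-1]:.3e}  (2.30) gives {inf_on_V:.3e}  (2.28) gives {sup_on_Vperp:.3e}")
```

All three printed columns agree, and the values decay roughly like (nπ)⁻², a finite-dimensional shadow of λn0 → 0 in Corollary 2.11.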
For such operators pairs, it is sufficient to consider a fixed level set (that is, to consider normalized eigenvectors), for instance, (2.7)V≡{u∈E:b(u)=1}.Theorem 2.2. LetA,B:E→E' satisfy (AB0) and let a,b be their respective potentials. Suppose in addition that A, B are positively homogeneous of the same degree. If a is bounded above on V and M=supu∈Va(u) is attained at u0∈V, then (2.8)A(u0)=MB(u0). Moreover, M is the largest eigenvalue of the pair (A,B). Likewise, if a is bounded below and m=infu∈Va(u) is attained, then m is the smallest eigenvalue of the pair (A,B).Let us give the direct easy proof of Theorem2.2, that does not even need Lagrange multipliers. The homogeneity of A and B implies that (2.9)supu∈Va(u)=supu≠0a(u)b(u)=supu≠0〈A(u),u〉〈B(u),u〉. Indeed recall (see [7] or [8]) that a continuous gradient operator A is related to its potential a (normalized so that a(0)=0) by the formula (2.10)a(u)=∫01‍〈A(tu),u〉dt. Thus, if A is homogeneous of degree α we have (2.11)a(u)=〈A(u),u〉α+1, and similarly for b; in particular, a and b are (α+1)-homogeneous. Therefore, if for u≠0 we put t(u)=(b(u))-1/(α+1), we have (2.12)a(u)b(u)=a(t(u)u), and as b(t(u)u)=1 (i.e., t(u)u∈V), the first equality in (2.9), follows immediately, and so does the second by virtue of (2.11). By (2.9) and the definition of M we have (2.13)a(u)-Mb(u)≤0foranyu∈E. Suppose now that M is attained at u0∈V. Then a(u0)-Mb(u0)=0. Thus, u0 is a point of absolute maximum of the map K≡a-Mb:E→ℝ and therefore its derivative K'(u0)=a'(u0)-Mb'(u0) at u0 vanishes, that is, (2.14)A(u0)=MB(u0). This proves (2.8). To prove the final assertion, observe that by (2.9), M is also the maximum value of the Rayleigh quotient, and therefore the largest eigenvalue of (A,B) by the remark made above.So the real question laid by Theorems2.1 and 2.2 is how can we ensure that (i) a is bounded and (ii) a attains its maximum (or minimum) value on V? The first question would be settled by requiring in principle that V is bounded and that A (and therefore a) is bounded on bounded sets. However, to answer affirmatively (ii), we need anyway some compactness, and as E has infinite dimension—which makes hard to hope that V be compact—such property must be demanded to a (or to A).Definition 2.3. A functionala:E→ℝ is said to be weakly sequentially continuous (wsc for short) if a(un)→a(u) whenever un→u weakly in E, and weakly sequentially lower semicontinuous (wslsc) if (2.15)liminfn→∞a(un)≥a(u), whenever un→u weakly in E. Finally, a is said to be coercive if a(un)→+∞ whenever ∥un∥→+∞.Theorem 2.4. LetA,B:E→E' satisfy (AB0) and let a, b be their respective potentials. Suppose that (i) a is wsc;(ii) b is coercive and wslsc. Then a is bounded on V. Suppose moreover that A and B are positively homogeneous of the same degree. If M≡supu∈Va(u)>0 (resp., m≡infu∈Va(u)<0), then it is attained and is the largest (resp., smallest) eigenvalue of (A,B).Proof. Suppose by way of contradiction thata is not bounded above on V, and let (un)⊂V be such that a(un)→+∞. As b is coercive, (un) is bounded (in fact, V itself is a bounded set) and therefore as E is reflexive we can assume—passing if necessary to a subsequence—that (un) converges weakly to some u0∈E. As a is wsc, it follows that a(un)→a(u0), contradicting the assumption that a(un)→+∞. Thus, M is finite, and we can now let (un)⊂V be a maximizing sequence, that is, a sequence such that a(un)→M. As before, we can assume that (un) converges weakly to some u0∈E, and the weak sequential continuity of a now implies that a(u0)=M. 
It remains to prove—under the stated additional assumptions—thatu0∈V. To do this, first observe that (as b is wslsc) (2.16)1=liminfn→∞b(un)≥b(u0). We claim that b(u0)=1. Indeed suppose by way of contradiction that b(u0)<1, and let t0>0 be such that t0u0∈V; such a t0 is uniquely determined by the condition (2.17)b(t0u0)=t0α+1b(u0)=1, which yields t0=((b(u0))-1/(α+1) and shows that t0>1. But then, as M>0, we would have (2.18)a(t0u0)=t0α+1a(u0)=t0α+1M>M, which contradicts the definition of M and proves our claim. The proof that m is attained if it is strictly negative is entirely similar.Example 2.5 (the first eigenvalue of thep-Laplace operator). IfA=Ap and B=Bp are defined as in Example 1.1, we have (2.19)a(u)=∫Ω‍|u|pp,b(u)=∫Ω‍|∇u|pp(u∈W01,p(Ω)), for their respective potentials (see (2.11)), and therefore (2.20)a(u)b(u)=〈Ap(u),u〉〈Bp(u),u〉=∫Ω‍|u|p∫Ω‍|∇u|p=(ϕp(u))-1(u≠0), with ϕp as in (1.11). The compact embedding of W01,p(Ω) into Lp(Ω) implies that a is wsc (see the comments following Definition 2.7); moreover, looking at (1.8) we see that b is coercive, while its weak sequential lower semicontinuity is granted as a property of the norm of any reflexive Banach space [41]. It follows by Theorem 2.4 that (2.21)λ1(p)≡sup∫Ω‍|u|p∫Ω‍|∇u|p is attained and is the largest eigenvalue of Ap, which is the same as to say that μ1(p)≡(λ1(p))-1 is the smallest eigenvalue of (1.7). This shows the existence and variational characterization of the first point in the spectral sequence (1.12).Remark 2.6. Much more can be said aboutμ1(p), in particular, μ1(p) is isolated and simple (i.e., the corresponding eigenfunctions are multiple of each other), and moreover the eigenfunctions do not change sign inΩ. These fundamental properties (proved, e.g., in [27]) are among others at the basis of the global bifurcation results for equations of the form (1.24) due to [36, 37]. For an abstract version of these properties of the first eigenvalue, see [35].Let us now indicate very briefly some conditions onA, B ensuring the properties required upon a, b in Theorem 2.4.Definition 2.7. A mappingA:E→F (E, F Banach spaces) is said to be strongly sequentially continuous (strongly continuous for short) if it maps weakly convergent sequences of E to strongly convergent sequences of F.It can be proved (see, e.g., [7]) that if a gradient operator A:E→E' is strongly continuous, then its potential a is wsc. Moreover, it is easy to see that a strongly continuous operator A:E→F is compact, which means by definition that A maps bounded sets of E onto relatively compact sets of F (or equivalently, that any bounded sequence (un) in E contains a subsequence (unk) such that A(unk) converges in F). Moreover, when A is a linear operator, then it is strongly continuous if and only if it is compact [42].Definition 2.8. A mappingA:E→E' is said to be strongly monotone if (2.22)〈A(u)-A(v),u-v〉≥k∥u-v∥2, for some k>0 and for all u,v∈E.It can be proved (see, e.g., [9]) that if a gradient operator A is strongly monotone, then its potential a is coercive and wslsc.With the help of Definitions2.7 and 2.8, Theorem 2.4 can be easily restated using hypotheses which only involve the operators A and B. Rather than doing this in general, we wish to give evidence to the special case that E=E'=H, a real Hilbert space (whose scalar product will be denoted (·,·)), and that B(u)=u. In fact, this is the situation that we will mainly consider from now on. 
Note that in this case, if A is positively homogeneous of degree 1, we have by (2.9) (2.23)supb(u)=1a(u)=supu≠0(A(u),u)∥u∥2=supu∈S(A(u),u), where (2.24)S={u∈H:∥u∥=1}.Corollary 2.9. LetH be a real, infinite-dimensional Hilbert space and let A:H→H be a strongly continuous gradient operator which is positively homogeneous of degree 1. Let (2.25)M=supu∈S(A(u),u),m=infu∈S(A(u),u). Then M,m are finite and moreover if M>0 (resp., m<0), it is attained and is the largest (resp., smallest) eigenvalue of A.Remark 2.10. The result just stated holds true under the weaker assumption thatA be compact, see [43, Theorem 1.2 and Remark 1.2], where also noncompact maps are considered. In this case, however, the condition M>0 must be replaced by M>α(A), with α(A) the measure of noncompactness of A.Among the 1-positively homogeneous operators, a distinguished subclass is formed by thebounded linear operators acting in H. Denoting such an operator with T, we first recall (see e.g., [8]) that T is a gradient if and only if it is self-adjoint (or symmetric), that is, (Tu,v)=(u,Tv) for all u,v∈H. Next, a classical result of functional analysis (see, e.g., [42]) states that if a linear operator T:H→H is self-adjoint and compact, then it has at least one eigenvalue. The precise statement is as follows: put (2.26)λ1+(T)≡supu∈S(Tu,u),λ1-(T)≡infu∈S(Tu,u).Thenλ1+(T)≥0, and if λ1+(T)>0 then it (is attained) is the largest eigenvalue of T. Similar statements—with reverse inequalities—hold for λ1-(T). Evidently, these can be proven as particular cases of Corollary 2.9, except for the nonstrict inequalities, which are due to our assumptions that H has infinite dimension and that T is compact. Indeed, if for instance, we had λ1+(T)<0, then the very definition (2.26) would imply that |(Tu,u)|≥α∥u∥2 for some α>0 and all u∈H, whence it would follow (by the Schwarz' inequality) that ∥Tu∥≥α∥u∥ for all u∈H, implying that T has a bounded inverse T-1 and therefore that S=T-1T(S) is compact, which is absurd. Finally, note that λ1+(T)=λ1-(T)=0 can only happen if (Tu,u)=0 for all u∈H, implying that T≡0 [42]. The conclusion is that any compact self-adjoint operator has at least one nonzero eigenvalue provided that it is not identically zero. ## 2.2. Higher Order Eigenvalues (for Linear and Nonlinear Operators) Let us remain for a while in the class of bounded linear operators. For these, the use of variational methods in order to study the existence and location ofhigher order eigenvalues is entirely classical and well represented by the famous minimax principle for the eigenvalues of the Laplacian [31]. By the standpoint of operator theory (see, e.g., [44] or [45], Chapter XI, Theorem 1.2), this consists in characterizing the (positive, e.g.) eigenvalues of a compact self-adjoint operator T in a Hilbert space H as follows. For any integer n≥0 let (2.27)𝒰n={V⊂H:Vsubspaceofdimension≤n}, and for n≥1 set (2.28)cn(T)=infV∈𝒰n-1supu∈S∩V⊥(Tu,u), where S={u∈H:∥u∥=1} and V⊥ is the subspace orthogonal to V. Then (2.29)(supu∈S(Tu,u)=)c1(T)≥c2(T)≥⋯≥cn(T)≥⋯≥0, and if cn(T)>0, T has n eigenvalues above 0, precisely, ci(T)=λi+(T) for i=1,…,n where (λi+(T)) denotes the (possibly finite) sequence of all such eigenvalues, arranged in decreasing order and counting multiplicities. There is also a “dual” formula for the positive eigenvalues: (2.30)λn(T)=supV∈𝒱ninfu∈S∩V(Tu,u), where (2.31)𝒱n≡{V⊂H:Vsubspaceofdimension≥n}. The above formulae (2.28)–(2.30) may appear quite involved at first sight, but the principle on which they are based is simple enough. 
Suppose we have found the first eigenvalue λ1(T)≡λ1+(T) (>0) as in (2.26). For simplicity we consider just positive eigenvalues, and so we drop the superscript +. Now iterate the procedure: let (i) v1∈S be such that Tv1=λ1(T)v1; (ii) V2≡{u∈H:(u,v1)=0}≡v1⊥; (iii) λ2(T)≡supu∈S∩V2(Tu,u). Then λ1(T)≥λ2(T)≥0, and if λ2(T)>0, then it is attained and is an eigenvalue of T: indeed—due to the symmetry of T—the restriction T2 of T to V2 is an operator in V2, and so one can apply to T2 the same argument used above for T to prove the existence of λ1. Moreover, in this case, if we let (i) v2∈S be such that Tv2=λ2v2, (ii) Z2≡[v1,v2]≡{αv1+βv2:α,β∈ℝ}, then it is immediate to check that (2.32) λ2(T)=infu∈S∩Z2(Tu,u). Collecting these facts, and using some linear algebra, it is not difficult to see that (2.33) λ2(T)=infV∈𝒰1supu∈S∩V⊥(Tu,u)=supV∈𝒱2infu∈S∩V(Tu,u), where (2.34) 𝒱2≡{V⊂H:V subspace of dimension ≥2}. For a rigorous discussion and complete proofs of the above statements, we refer the reader to [44, 45] or [5], for instance.

Corollary 2.11. If T:H→H is compact, self-adjoint, and positive (i.e., such that (Tu,u)>0 for u≠0), then it has infinitely many eigenvalues λn0: (2.35) (supu∈S(Tu,u)=)λ10≥λ20≥⋯≥λn0≥⋯>0. Moreover, λn0→0 as n→∞.

The last statement is easily proved as follows: suppose instead that λn0≥k>0 for all n∈ℕ. For each n, pick un∈S with Tun=λn0un; we have (un,um)=0 for n≠m because T is self-adjoint. Then T(un/λn0)=un, and as the sequence (un/λn0) is bounded, the compactness of T would now imply that (un) contains a convergent subsequence, which is absurd since ∥un-um∥2=2 for all n≠m.

We now finally come to the nonlinear version of the minimax principle, that is, the Lusternik-Schnirelmann (LS) theory of critical points for even functionals on the sphere [17]. There are various excellent accounts of the theory in much greater generality (see, e.g., Amann [7], Berger [8], Browder [9], Palais [10], and Rabinowitz [11, 21]), and so we need to mention just a few basic points of it; these will lead us shortly to a simple but fundamental statement (Corollary 2.17), a striking proper generalization of Corollary 2.11 that will be used in Section 3.

For R>0, let (2.36) SR≡RS={u∈H:∥u∥=R}. If K⊂SR is symmetric (i.e., u∈K⇒-u∈K), then the genus of K, denoted γ(K), is defined as (2.37) γ(K)=inf{n∈ℕ: there exists a continuous odd mapping of K into ℝn∖{0}}. If V is a subspace of H with dimV=n, then γ(SR∩V)=n. For n∈ℕ put (2.38) Kn(R)={K⊂SR: K compact and symmetric, γ(K)≥n}. In the search for critical points of a functional, the so-called Palais-Smale condition is of prime importance. For a continuous gradient operator A:H→H (with potential a), and for a given R>0, put (2.39) D(x)≡A(x)-((A(x),x)/R2)x (x∈H) and call D the gradient of a on SR. Essentially, for a given x∈SR, D(x) is the tangential component of A(x), that is, the component of A(x) on the tangent space to SR at x.

Definition 2.12. Let A:H→H be a continuous gradient operator and let a be its potential. a is said to satisfy the Palais-Smale condition at c∈ℝ ((PS)c for short) on SR if any sequence (xn)⊂SR such that a(xn)→c and D(xn)→0 contains a convergent subsequence.

Lemma 2.13. Let A:H→H be a strongly continuous gradient operator and let a:H→ℝ be its potential. Suppose that a(x)≠0 implies A(x)≠0. Then a satisfies (PS)c on SR for each c≠0.

Proof. It is enough to consider the case R=1. So let (xn)⊂S be a sequence such that a(xn)→c≠0 and (2.40) D(xn)=A(xn)-(A(xn),xn)xn→0. We can assume—passing if necessary to a subsequence—that (xn) converges weakly to some x0.
Therefore, A(xn)→A(x0) and moreover—as a is wsc—a(xn)→a(x0), and similarly (A(xn),xn)→(A(x0),x0). Thus, a(x0)=c≠0 and therefore A(x0)≠0 by assumption. It follows from (2.40) that (A(xn),xn)xn→A(x0)≠0. This first shows that (A(x0),x0)≠0 (otherwise we would have (A(xn),xn)xn→0) and then implies—since xn=(A(xn),xn)-1(A(xn),xn)xn—that (xn) converges strongly to (A(x0),x0)-1A(x0), which completes the proof.

Example 2.14. Here are two simple but important cases in which the assumption mentioned in Lemma 2.13 is satisfied: (i) A is a positive (resp., negative) operator, that is, (A(u),u)>0 (resp., (A(u),u)<0) for u∈H, u≠0; (ii) A is a positively homogeneous operator. Indeed, if A is, for instance, positive, then in particular A(u)≠0 for u≠0, and so the conclusion follows because a(u)≠0 implies that u≠0. While if A is positively homogeneous of degree, say, α, then a(u)=〈A(u),u〉/(α+1), and so the conclusion is immediate.

Theorem 2.15. Suppose that A:H→H is an odd strongly continuous gradient operator, and let a be its potential. Suppose that a(x)≠0 implies A(x)≠0. For n∈ℕ and R>0 put (2.41) Cn(R)≡supK∈Kn(R)infu∈Ka(u), where Kn(R) is as in (2.38). Then (2.42) (supSRa(u)=)C1(R)≥⋯≥Cn(R)≥Cn+1(R)≥⋯≥0. Moreover, Cn(R)→0 as n→∞, and if Ck(R)>0 for some k∈ℕ, then for 1≤n≤k, Cn(R) is a critical value of a on SR. Thus, there exist λn(R)∈ℝ, un(R)∈SR (1≤n≤k) such that (2.43) Cn(R)=a(un(R)), (2.44) A(un(R))=λn(R)un(R).

Remark 2.16. A similar assertion holds for the negative minimax levels of a: (2.45) (infSRa(u)=)D1(R)≤⋯≤Dn(R)≤Dn+1(R)≤⋯≤0.

Indication of the Proof of Theorem 2.15. The sequence (Cn(R)) is nonincreasing because, for any n∈ℕ, we have Kn(R)⊃Kn+1(R) as shown by (2.38). Also, C1(R)=supSRa(u) because K1(R) contains all sets of the form {x}∪{-x}, x∈SR [11]. For the proof that Cn(R)→0 as n→∞ we refer to Zeidler [46]. Finally, if Ck(R)>0, since by Lemma 2.13 we know that a satisfies (PS) at the level Ck(R), it follows by standard facts of critical point theory (see any of the cited references) that Ck(R) is attained and is a critical value of a on SR.

Corollary 2.17. Let A:H→H be an odd strongly continuous gradient operator, and suppose moreover that A is positive. Then the numbers Cn(R) defined in (2.41) are all positive. Thus, for each R>0, there exists an infinite sequence of "eigenpairs" (λn(R),un(R))∈ℝ×SR satisfying (2.43)-(2.44).

In conjunction with Corollary 2.17, the following result—for which we refer to [8]—will be used to carry out our estimates in Section 3.

Proposition 2.18. Let A0:H→H satisfy the assumptions of Corollary 2.17. Suppose moreover that A0 is linear (and is therefore a compact, self-adjoint, positive linear operator in H). Then (2.46) Cn0(R)≡supK∈Kn(R)infu∈Ka0(u)=(1/2)λn0R2, where a0(u)=(1/2)(A0(u),u) and (λn0) is the decreasing sequence of the eigenvalues of A0, as in Corollary 2.11.

## 3. Nonlinear Gradient Perturbation of a Self-Adjoint Operator

In this section we restrict our attention to equations of the form (3.1) A(u)≡A0(u)+P(u)=λu, in a real Hilbert space H, where (i) A0 is a (linear) bounded self-adjoint operator in H; (ii) P is a continuous gradient operator in H. We suppose moreover that (3.2) P(u)=o(∥u∥) as u→0. Note that—due to the continuity condition on P—this is the same as assuming that P(0)=0 and that P is Fréchet differentiable at 0 with (3.3) P'(0)=0.

Remark 3.1. We are assuming for convenience that P is defined on the whole of H, but it will be clear from the sequel that our conclusions hold true when P is merely defined in a neighborhood of 0.
The only modification would occur in the first statement of Theorem 3.2, where the words "for each R>0" should be replaced by "for each R>0 sufficiently small."

As P(0)=0, (3.1) possesses the trivial solutions {(λ,0)∣λ∈ℝ}. Recall that a point λ0∈ℝ is said to be a bifurcation point for (3.1) if any neighborhood of (λ0,0) in ℝ×H contains nontrivial solutions (i.e., pairs (λ,u) with u≠0) of (3.1). A basic result in this matter states that if P satisfies (3.2), and if moreover A0 is compact and P is strongly continuous (so that A=A0+P is strongly continuous), then each nonzero eigenvalue of A0=A'(0) is a bifurcation point for (3.1), and in particular for any R>0 sufficiently small there exists a solution (λR,uR) such that (3.4) ∥uR∥=R for each R, λR→λ0 as R→0. Essentially, this goes back to Krasnosel'skii [6, Theorem 6.2.2], who used a minimax argument of Lusternik-Schnirelmann type considering deformations of a certain class of compact, noncontractible subsets of the sphere SR. Subsequently, the compactness (resp., strong continuity) conditions on A0 (resp., on P) were removed and replaced by the assumption that P should be of class C1 near u=0 by Böhme [47] and Marino [48], who strengthened the conclusions, showing that in this case bifurcation takes place from every isolated eigenvalue of finite multiplicity of A0 and, moreover, that for R>0 sufficiently small there exist (at least) two distinct solutions (λRi,uRi), i=1,2, satisfying (3.4); "distinct" means here in particular that uR1≠uR2. Proofs of this result can be found also in Rabinowitz [11, Theorem 11.4] or in Stuart [49, Theorem 7.2], for example. Moreover, when P is also odd, the proper critical point theory of Lusternik and Schnirelmann for even functionals (briefly recalled in Section 2) can be further exploited to show that if n is the multiplicity of λ0, then for each R>0 there are at least 2n distinct solutions (λRk,±uRk), k=1,…,n, which satisfy (3.4) for each k; see, for instance, [11, Corollary 11.30]. Each of these sets of assumptions thus guarantees the existence of one or more families (3.5) ℱ={(λR,uR)∣0<R<R0} of solutions of (3.1) satisfying (3.4), that is, parameterized by the norm R of the eigenvector uR for R in an interval ]0,R0[ and bifurcating from (λ0,0). In such a situation, it is natural to study the rate of convergence of the eigenvalues λR to λ0 as R→0, and in order to perform such a quantitative analysis we strengthen and make more precise the condition (3.2) on P. Indeed, throughout this section we consider a P that satisfies the following basic growth assumption near u=0: (3.6) P(u)=O(∥u∥q-1) as u→0 for some q>2, that is, we suppose that there exist (q>2 and) positive constants M and R0 such that (3.7) ∥P(u)∥≤M∥u∥q-1, for all u∈H with ∥u∥≤R0.

We suppose moreover that there exist constants R1>0, 0≤k≤K and β,γ∈[0,α], α≡q/2>1, such that for all u∈H with ∥u∥≤R1, (3.8) k(A0(u),u)β(∥u∥2)α-β≤(P(u),u)≤K(A0(u),u)γ(∥u∥2)α-γ.

Note that as A0 is a bounded linear operator, we have ∥A0(u)∥≤C∥u∥ for some C≥0 and for all u∈H, which implies that |(A0(u),u)|≤C∥u∥2 for all u. Inserting this in (3.8) thus yields (3.9) |(P(u),u)|≤C1∥u∥2α=C1∥u∥q, for some C1≥0. On the other hand, (3.7) also implies—via the Cauchy-Schwarz inequality—a similar bound on (P(u),u). Thus, we see that (3.8) is compatible with (3.7) and is essentially a more specific form of it, carrying a sign condition on P.
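A toy example may help to see (3.8) at work; everything in it is our construction, not the paper's. Take H=ℝ2 (standing in for the Hilbert space of the theory), A0=diag(λ0,λ1), and P(u)=∥u∥q-2u, an odd C1 gradient operator with potential ∥u∥q/q; then (P(u),u)=∥u∥q, so (3.8) holds with k=K=1, β=γ=0, and α=q/2. Every eigenvector u0 of A0 with ∥u0∥=R solves (3.1) with λR=λ0+Rq-2 exactly, so this P saturates the two-sided estimate (3.31) obtained below. A short numerical confirmation:

```python
import numpy as np

# Toy instance of (3.1) in R^2 (our construction): A0 = diag(lam0, lam1),
# P(u) = ||u||^(q-2) u. P is odd, C^1 for q > 2, a gradient (potential
# ||u||^q / q), and satisfies (3.8) with k = K = 1, beta = gamma = 0.
lam0, lam1, q = 2.0, 1.0, 3.0
A0 = np.diag([lam0, lam1])

def P(u):
    return np.linalg.norm(u) ** (q - 2) * u

for R in [0.1, 0.01, 0.001]:
    uR = np.array([R, 0.0])              # eigenvector of A0 with norm R
    lamR = (A0 @ uR + P(uR))[0] / R      # A(uR) = lamR * uR; read off lamR
    print(R, lamR - lam0, R ** (q - 2))  # lamR - lam0 equals R^(q-2) exactly
```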
In our final Example 3.4, we will see that (3.8) is satisfied by the operator associated with the simple power nonlinearities often considered in perturbed eigenvalue problems for the Laplacian. Before this, in the present section we develop the eigenvalue estimates that follow from (3.7) and (3.8) in the general Hilbert space context.

### 3.1. NLEV Estimates via LS Theory

In our first approach, we exploit LS theory in the simple form described in Section 2. We will therefore assume, in addition to the hypotheses already made in this section upon A0 and P, that (i) P is odd; (ii) A0 is compact and P is strongly continuous; (iii) A0 is positive and P is nonnegative (i.e., (P(u),u)≥0 for all u∈H).

Theorem 3.2. (A) Let H be a real Hilbert space and suppose that (i) A0 is a linear, compact, self-adjoint, and positive operator in H; (ii) P is an odd, strongly continuous, gradient, and nonnegative operator in H. Then for each fixed R>0, (3.1) has an infinite sequence (λn(R),un(R)) of eigenvalue-eigenvector pairs with ∥un(R)∥=R. (B) Suppose in addition that P satisfies (3.2). Then for each n∈ℕ, λn(R)→λn0 as R→0, where λn0 is the nth eigenvalue of A0. Thus, each λn0 is a bifurcation point for (3.1). (C) Suppose in addition that P satisfies (3.6). Then (3.10) λn(R)=λn0+O(Rq-2) as R→0. (D) Finally, if in addition P satisfies (3.8), then as R→0 one has (3.11) -K(λn0)γRq-2+o(Rq-2)≤λn(R)-λn0≤K(1/α+1)(λn0)γRq-2+o(Rq-2).

Proof. The conditions in (A) guarantee that A≡A0+P satisfies the assumptions of Corollary 2.17. Therefore, for each R>0, there exist an infinite sequence Cn(R) of critical values and a corresponding sequence (λn(R),un(R)) of eigenvalue-eigenvector pairs satisfying (2.41)–(2.44). We will make use of these formulae to derive our estimates. The statement (B) is essentially due to Berger; see [8, Chapter 6, Section 6.7A]. As the third statement has been essentially proved elsewhere (see, e.g., [33]), it remains only to prove (D). Let a, a0, and p be the potentials of A, A0, and P, respectively. We have from (2.10) (3.12) a=a0+p, a0(u)=(1/2)(A0(u),u), p(u)=∫01〈P(tu),u〉dt. Also let R1>0 be such that (3.8) holds for ∥u∥≤R1. In the derivation of the estimates below, we assume without further mention that ∥u∥≤R1.

Step 1. It follows from (3.8) that (3.13) k1a0(u)β(∥u∥2)α-β≤p(u)≤K1a0(u)γ(∥u∥2)α-γ, where (3.14) k1:=k2β-1/α, K1:=K2γ-1/α. The definition (2.41) of Cn(R) then shows, using (3.13) and (2.46), that (3.15) Cn0(R)+k1Cn0(R)βR2(α-β)≤Cn(R)≤Cn0(R)+K1Cn0(R)γR2(α-γ).

Step 2. Equation (2.44) implies in particular that (3.16) (A(un(R)),un(R))=λn(R)R2. Whence—using (2.43) and (3.12)—we get (3.17) Cn(R)-(1/2)λn(R)R2=a(un(R))-(1/2)(A(un(R)),un(R))=p(un(R))-(1/2)(P(un(R)),un(R)). It also follows from (3.8) that (3.18) k2a0(u)β(∥u∥2)α-β≤(1/2)(P(u),u)≤K2a0(u)γ(∥u∥2)α-γ, where (3.19) k2:=k2β-1, K2:=K2γ-1. We see from (3.13) and (3.18) that both p(u) and (1/2)(P(u),u) vary in the interval with endpoints k1a0(u)β(∥u∥2)α-β and K2a0(u)γ(∥u∥2)α-γ: indeed (as α>1) min{k1,k2}=k1, max{K1,K2}=K2. Therefore, writing for simplicity u for un(R), we have (3.20) |Cn(R)-(1/2)λn(R)R2|≤K2a0(u)γ(∥u∥2)α-γ-k1a0(u)β(∥u∥2)α-β≤K2a0(u)γ(∥u∥2)α-γ (since A0≥0) ≤K2a(u)γ(∥u∥2)α-γ (since P≥0) =K2Cn(R)γR2(α-γ) (by (2.43)).

Step 3. Using the right-hand side of (3.15), we get (3.21) K2Cn(R)γR2(α-γ)≤K2{Cn0(R)+K1Cn0(R)γR2(α-γ)}γR2(α-γ)=K2Cn0(R)γ{1+K1Cn0(R)γ-1R2(α-γ)}γR2(α-γ)=K2(λn0/2)γR2α{1+K1(λn0/2)γ-1Rϵ}γ, where we have replaced Cn0(R) with its value (1/2)λn0R2—see (2.46)—and have put ϵ=2(γ-1)+2(α-γ)=2(α-1)>0, as α>1.
Thus, as R→0, (3.22) {1+K1(λn0/2)γ-1Rϵ}γ=1+O(Rϵ)=1+o(1), so that (3.23) K2Cn(R)γR2(α-γ)≤K2(λn0/2)γR2α(1+o(1)). Therefore, by (3.20), we end this step with the estimate (3.24) |Cn(R)-(1/2)λn(R)R2|≤K2(λn0/2)γR2α(1+o(1)).

Step 4 (upper bound). Using again the right-hand side of (3.15) in (3.24), and then using again (2.46), we obtain (3.25) (1/2)λn(R)R2≤Cn(R)+K2(λn0/2)γR2α(1+o(1))≤Cn0(R)+K1Cn0(R)γR2(α-γ)+K2(λn0/2)γR2α(1+o(1))=(1/2)λn0R2+K1(λn0/2)γRq+K2(λn0/2)γRq(1+o(1))=(1/2)λn0R2+Z(λn0/2)γRq+o(Rq), where (3.26) Z=K1+K2=K2γ-1/α+K2γ-1=K2γ-1(1/α+1). We conclude that, as R→0, (3.27) λn(R)≤λn0+K(1/α+1)(λn0)γRq-2+o(Rq-2).

Step 5 (lower bound). Using now the left-hand side of (3.15) in (3.24), and then using as before (2.46), we get (3.28) (1/2)λn(R)R2≥Cn(R)-K2(λn0/2)γR2α(1+o(1))≥Cn0(R)+k1Cn0(R)βR2(α-β)-K2(λn0/2)γR2α(1+o(1))=(1/2)λn0R2+{k1(λn0/2)β-K2(λn0/2)γ}Rq+o(Rq)≥(1/2)λn0R2-K2(λn0/2)γRq+o(Rq). We conclude that, as R→0, (3.29) λn(R)≥λn0-K(λn0)γRq-2+o(Rq-2), and this, together with (3.27), ends the proof of (3.11).

### 3.2. NLEV Estimates via Bifurcation Theory

As already remarked, LS theory has a truly global character from the standpoint of NLEV, in that for any fixed R>0 it allows for the "simultaneous consideration of an infinite number of eigenvalues" λn(R)—if we may use Kato's words [4] for a situation involving nonlinear operators, though one strictly parallel to that of compact self-adjoint operators, as shown by Corollaries 2.11 and 2.17.

In contrast, Bifurcation theory—at least in the way used here, based on the classical Lyapounov-Schmidt method, see for instance [23]—is (i) local (it yields information for R small) and (ii) built starting from a fixed isolated eigenvalue of finite multiplicity of A0: given such an eigenvalue λ0, one reduces (via the Implicit Function Theorem) the original equation to an equation in the finite-dimensional kernel N(λ0)≡N(A0-λ0I). The use of the Implicit Function Theorem demands C1 regularity of the operators involved, but dispenses with the assumptions made before of (oddness, positivity, and) compactness.

These differences between Theorems 3.2 and 3.3 are stressed by the change of notation (λ0 rather than λn0) in the formulae (3.11) and (3.31) for our estimates. On the other hand, the obvious relation between the two statements is that each nonzero eigenvalue of a compact operator is isolated and of finite multiplicity.

Theorem 3.3. (A) Let A0 be a bounded self-adjoint linear operator in a real Hilbert space H and let λ0 be an isolated eigenvalue of finite multiplicity of A0. Consider (3.1), where P is a C1 gradient map defined in a neighborhood of 0 in H and satisfying (3.6). Then λ0 is a bifurcation point for (3.1), and moreover if ℱ={(λR,uR):0<R<R0} is any family of nontrivial solutions of (3.1) satisfying (3.4), then the eigenvalues λR satisfy the estimate (3.30) λR=λ0+O(Rq-2) as R→0. (B) If, in addition, P satisfies the condition (3.8), then as R→0 one has (3.31) k(λ0)βRq-2+o(Rq-2)≤λR-λ0≤K(λ0)γRq-2+o(Rq-2).

Proof. Theorem 3.3 is merely a variant of Theorem 1.1 in [14]. We report here the main points of the proof of the latter—which makes systematic use of the condition (3.6)—and the improvements deriving from the use of the additional assumption (3.8). Let N=N(λ0)≡N(A0-λ0I) be the eigenspace associated with λ0, and let W be the range of A0-λ0I. Then, by our assumptions on A0 and λ0, H is the orthogonal sum (3.32) H=N⊕W. Set L=A0-λ0I, δ=λ-λ0 and write (3.1) as (3.33) Lu+P(u)=δu.
Let Π1, Π2=I-Π1 be the orthogonal projections onto N and W, respectively; then, writing u=Π1u+Π2u≡v+w according to (3.32) and applying in turn Π1, Π2 to both members of (3.33), the latter is turned into the system (3.34) Π1P(v+w)=δv, Lw+Π2P(v+w)=δw. By the self-adjointness of A0, we have Lw∈W for any w∈W and therefore (Lu,u)=(Lw,w) for any u=v+w∈H. Now let ℱ={(λ0+δR,uR):0<R<R0} be as in the statement of Theorem 3.3. Then from (3.33), (3.35) (LuR,uR)+(P(uR),uR)=δRR2, for 0<R<R0, and writing uR=vR+wR this yields (3.36) (LwR,wR)+(P(uR),uR)=δRR2 (0<R<R0). Under the assumption (3.6), the term (P(uR),uR) in (3.36) is evidently O(Rq). What matters is to estimate the first term (LwR,wR); we claim that the same assumption (3.6) also yields (3.37) (LwR,wR)=o(Rq) as R→0. Then (3.36) will immediately imply that δR=O(Rq-2)—which is (3.30)—and will thus prove the first assertion of Theorem 3.3. To prove our claim, we let (δ,u) be any solution of (3.33) and write u=v+w, v∈N, w∈W; then (δ,v,w) satisfies the system (3.34). The second of these equations is Lw-δw=-Π2P(v+w) or, putting Hδ=-((L-δI)|W)-1, (3.38) w=HδΠ2P(v+w). As P is C1 near u=0 and P'(0)=0, a standard application of the Implicit Function Theorem guarantees the existence of neighborhoods 𝒰 of (0,0) in ℝ×N and 𝒲 of 0 in W such that, for each fixed (δ,v) in 𝒰, there exists a unique solution w=w(δ,v)∈𝒲 of (3.38). Moreover, w depends in a C1 fashion upon δ and v, and (3.39) ∥w(δ,v)∥=o(∥v∥) as v→0, v∈N, uniformly with respect to δ for δ in bounded intervals of ℝ. Our point is that, using again the supplementary assumption (3.6), (3.39) can be improved (see [14]) to (3.40) ∥w(δ,v)∥=O(∥v∥q-1) as v→0, v∈N, uniformly for δ near 0. Now, to prove the claim (3.37), first observe that L|W:W→W is a bounded linear operator, so that ∥Lw∥≤C∥w∥ for some C>0 and for all w∈W. Thus, |(Lw,w)|≤C∥w∥2, and it follows by (3.40) that (3.41) (Lw(δ,v),w(δ,v))=O(∥v∥2(q-1)) as v→0. Returning to the solutions (λ0+δR,uR)∈ℱ, and writing as above uR=vR+wR, we can suppose—diminishing R0 if necessary—that (δR,vR,wR)∈𝒰×𝒲 for all R, 0<R<R0. This implies by uniqueness that wR=w(δR,vR) for all R, 0<R<R0. The estimate (3.40) thus yields in particular that ∥wR∥=O(∥vR∥q-1) as R→0, and in turn (since ∥vR∥≤∥uR∥=R) (3.41) yields that (LwR,wR)=O(R2(q-1)) as R→0. Since 2(q-1)>q (because q>2), this implies (3.37). In order to improve the rudimentary estimate (3.30), one has to look more closely at the term (P(uR),uR) in (3.36). Indeed, as shown in [14], under the stated assumptions on P we also have (3.42) (P(uR),uR)=(P(vR),vR)+o(Rq) as R→0. Using (3.37) and (3.42) in (3.36), we therefore have (3.43) δRR2=(P(vR),vR)+o(Rq) as R→0. To conclude the proof of Theorem 3.3, we introduce as in [14] constants kλ0 and Kλ0 via the formulae (3.44) kλ0≡inf0<∥v∥<R0,v∈N(P(v),v)/∥v∥q, Kλ0≡sup0<∥v∥<R0,v∈N(P(v),v)/∥v∥q. These yield the inequalities (3.45) kλ0∥v∥q≤(P(v),v)≤Kλ0∥v∥q (v∈N, ∥v∥<R0). We know that as v→0, w(δ,v)=o(∥v∥) and so ∥v+w(δ,v)∥=∥v∥+o(∥v∥). It follows that as R→0, ∥vR∥=R+o(R) for the solutions (λ0+δR,vR+wR)∈ℱ; using this in (3.45), we conclude that (3.46) kλ0Rq+o(Rq)≤(P(vR),vR)≤Kλ0Rq+o(Rq) (R→0). Replacing this in (3.43), we obtain the inequalities (3.47) kλ0Rq-2+o(Rq-2)≤λR-λ0≤Kλ0Rq-2+o(Rq-2). Note that these have been derived using merely the assumption (3.6), which implies that |(P(u),u)|≤M∥u∥q for some constant M in a neighborhood of u=0 and thus guarantees that kλ0, Kλ0 are finite.
Suppose now that A0 and P satisfy the additional assumption (3.8), which we report here for the reader's convenience: (3.48) k(A0(u),u)β(∥u∥2)α-β≤(P(u),u)≤K(A0(u),u)γ(∥u∥2)α-γ. As A0v=λ0v for v∈N≡N(A0-λ0I), we have (A0(v),v)=λ0∥v∥2 for such v, and therefore (3.48) yields (3.49) (P(v),v)≤K(λ0)γ∥v∥2γ(∥v∥2)α-γ=K(λ0)γ∥v∥q, v∈N. A similar inequality, based on the left-hand side of (3.48), provides a lower bound for (P(v),v) for v∈N. It follows from the definitions (3.44) of kλ0, Kλ0 that (3.50) kλ0≥k(λ0)β, Kλ0≤K(λ0)γ. Using these in (3.47) yields the desired inequalities (3.31).

Example 3.4. Let us now reconsider Example 1.3, and take in particular the basic example of a nonlinearity satisfying (HF0) and (HF1), namely, (3.51) f(x,s)=|s|q-2s (2<q<2*). In this case, we see from (1.19) that (3.52) (P(u),u)=∫Ω|u|qdx=∥u∥qq. The following inequality for functions of H01(Ω) permits us to estimate (P(u),u).

Proposition 3.5. Let Ω be a bounded open set in ℝN (N>2), let 2* be defined by 1/2*=1/2-1/N, and let q be such that 2≤q≤2*. Then (3.53) C∥u∥2q≤∥u∥qq≤D∥u∥2q-(q-2)N/2∥∇u∥2(q-2)N/2, for all u∈H01(Ω), with (3.54) C=|Ω|-(q-2)/2, D=S(2,N)(q-2)N/2. Here |Ω| stands for the (Lebesgue) measure of Ω in ℝN and S(2,N) for the best constant of the Sobolev embedding of H01(Ω) into L2*(Ω): (3.55) S(2,N)=supu∈W01,2(Ω),u≠0∥u∥2*/∥∇u∥2.

Proof. The proof of the left-hand side of (3.53) is very simple and amounts to verifying the inequality (3.56) ∫Ω|v|qdx≥|Ω|-(q-2)/2(∫Ωv2dx)q/2, which holds true for any q≥2 and for any measurable function v on Ω. To see this, first observe that (3.56) is trivial if q=2, while if q>2, then q/2>1, and so by Hölder's inequality (3.57) ∫Ωv2dx≤(∫Ω|v|qdx)2/q(∫Ωdx)(q-2)/q=|Ω|(q-2)/q(∫Ω|v|qdx)2/q. It follows that (3.58) (∫Ωv2dx)q/2≤|Ω|(q-2)/2(∫Ω|v|qdx), which gives (3.56). The proof of the right-hand side of (3.53) requires more work and is based on an interpolation inequality which makes use of Hölder's and Sobolev's inequalities (see [41], e.g.). A detailed proof can be found in [34].

Consider the operators A0 and P in H≡H01(Ω) defined as in (1.19). If we put (3.59) β=α=q/2, γ=q/2-(q-2)N/4, then (3.53) can be written as (3.60) C(A0(u),u)β≤〈P(u),u〉≤D(A0(u),u)γ(∥u∥2)α-γ, and shows that P satisfies (3.8) with k=C, K=D and α,β,γ as in (3.59). It is straightforward to check (see, e.g., [21] or [11]) that A0 and P satisfy the remaining assumptions of Theorem 3.3. Therefore, we can use the inequality (3.31), which in the present case takes the form (3.61) C(λ0)q/2Rq-2+o(Rq-2)≤λR-λ0≤D(λ0)q/2-(q-2)N/4Rq-2+o(Rq-2). Putting μR=(λR)-1, we then have a corresponding family {(μR,uR)} of solutions of the original problem (1.15) such that, as R→0, (3.62) μ0μR[Cμ0-q/2Rq-2+o(Rq-2)]≤μ0-μR≤μ0μR[Dμ0-q/2+ϵRq-2+o(Rq-2)], where (3.63) ϵ≡(q-2)N/4>0. Since μR=μ0+o(1) anyway, this yields in turn (3.64) Cμ02μ0-q/2Rq-2+o(Rq-2)≤μ0-μR≤Dμ02μ0-q/2+ϵRq-2+o(Rq-2), as R→0, or, putting α=2-q/2=(4-q)/2, (3.65) Cμ0αRq-2+o(Rq-2)≤μ0-μR≤Dμ0α+ϵRq-2+o(Rq-2). We remark that (3.65) can be used for actual computation, in view of the expressions (3.54) for C and D: indeed S(2,N) is explicitly known for any N [50] and can be found, for instance, in [51, page 151]. Some work on the numerical computation of NLEV for equations of the form (1.15) can be found, for instance, in [52].
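As a concrete instance of the "actual computation" just mentioned, the sketch below evaluates C and D of (3.54) and the leading terms of the two-sided bound (3.65) for sample data. Two caveats: the closed-form expression coded for S(2,N) is Talenti's classical formula quoted from memory, an assumption that should be verified against [51, page 151] before serious use; and the sample values of N, q, |Ω|, μ0, and R, as well as the function names, are ours.

```python
import math

def sobolev_S2(N):
    # Assumed sharp constant of ||u||_{2*} <= S(2,N) ||grad u||_2 (Talenti);
    # please verify this formula against [51, page 151].
    return (math.gamma(N) / math.gamma(N / 2)) ** (1.0 / N) \
        / math.sqrt(N * (N - 2) * math.pi)

def nlev_bounds(N, q, omega_measure, mu0, R):
    """Leading terms of the bounds (3.65) on mu_0 - mu_R for f(x,s) = |s|^(q-2)s."""
    C = omega_measure ** (-(q - 2) / 2)        # (3.54), left constant
    D = sobolev_S2(N) ** ((q - 2) * N / 2)     # (3.54), right constant
    alpha = (4 - q) / 2                        # exponent alpha in (3.65)
    eps = (q - 2) * N / 4                      # (3.63)
    lower = C * mu0 ** alpha * R ** (q - 2)
    upper = D * mu0 ** (alpha + eps) * R ** (q - 2)
    return lower, upper

# Sample: a unit-volume domain in R^3, q = 4 (admissible since 2* = 6),
# bifurcation from some Dirichlet eigenvalue mu0, eigenfunctions of norm R.
print(nlev_bounds(N=3, q=4.0, omega_measure=1.0, mu0=9.87, R=0.1))
```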
---

*Source: 102489-2012-11-13.xml*
# Variational Methods for NLEV Approximation Near a Bifurcation Point

**Authors:** Raffaele Chiappinelli
**Journal:** International Journal of Mathematics and Mathematical Sciences (2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/102489
---

## Abstract

We review some more and less recent results concerning bounds on nonlinear eigenvalues (NLEV) for gradient operators. In particular, we discuss the asymptotic behaviour of NLEV (as the norm of the eigenvector tends to zero) in bifurcation problems from the line of trivial solutions, considering perturbations of linear self-adjoint operators in a Hilbert space. The proofs are based on the Lusternik-Schnirelmann theory of critical points on one side and on the Lyapounov-Schmidt reduction to the relevant finite-dimensional kernel on the other side. The results are applied to some semilinear elliptic operators in bounded domains of ℝN. A section reviewing some general facts about eigenvalues of linear and nonlinear operators is included.

---

## Body

## 1. Introduction and Examples

The term "nonlinear eigenvalue" (NLEV) is a frequent shorthand for "eigenvalue of a nonlinear problem"; see, for instance, [1, 3]. While for the estimation of eigenvalues of linear operators there is a wealth of abstract and computational methods (see, e.g., Kato's [4] and Weinberger's [5] monographs), for NLEV the question is relatively new and there is not much literature available. In this paper, we review some abstract methods which allow for the computation of upper and lower bounds of NLEV near a bifurcation point of the linearized problem. Moreover, as one of our aims is to stimulate further research on the subject, we spend some effort in presenting it in a sufficiently general context, and we emphasize the question of the existence of eigenvalues for a nonlinear operator. In fact, Section 2 is entirely devoted to this, and to a parallel consideration of similar facts for linear operators.

Thus, generally speaking, consider two nonlinear (= not necessarily linear) operators A,B:E→F (E, F real Banach spaces) such that A(0)=B(0)=0. If for some λ∈ℝ the equation (1.1) A(u)=λB(u) has a solution u≠0, then we say that λ is an eigenvalue of the pair (A,B) and u is an eigenvector corresponding to λ. This definition is a word-by-word copy of the standard one for pairs of linear operators, where most frequently one takes E=F and B(u)=u, and of course it may be of very little significance in general. However, the demonstration of the importance of this concept for operator equations such as (1.1)—with a view in particular to nonlinear integral equations of Hammerstein or Urysohn type—goes back at least to Krasnosel'skii [6].

In this paper, we consider (1.1) under the following qualitative assumptions: (A) (1.1) possesses infinitely many eigenvalues λn; (B) (1.1) has a linear reference problem A0(u)=λB0(u) which also possesses infinitely many eigenvalues λn0. It is then natural to try to approximate or estimate λn in terms of λn0. In the sequel, we will take F=E', the dual space of E, and assume that all operators involved are continuous gradient operators from E to E'; of course, this is done in order to exploit the full strength of variational methods. We emphasize in particular the case in which E is a Hilbert space, identified with its dual.

Next, we note that two main routes are available to guarantee (A) and (B). The first involves the Lusternik-Schnirelmann (LS) theory of critical points for even functionals on symmetric manifolds (when A and B are odd mappings). The model example is the p-Laplace equation, briefly recalled in Example 1.1, exhibiting infinitely many eigenvalues and having the ordinary Laplace equation (p=2) as linear reference problem.
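Before describing the two routes, it may help to see the definition just given in its simplest incarnation, which is our illustration rather than the paper's: when E=F=ℝ2 and A, B are linear and symmetric (hence gradient operators), (1.1) is the classical generalized symmetric eigenproblem, and the eigenvalues are values of the quotient 〈A(u),u〉/〈B(u),u〉 studied in Section 2.

```python
import numpy as np
from scipy.linalg import eigh

# Finite-dimensional illustration of (1.1): with E = F = R^2 and A, B linear
# and symmetric, (1.1) is the generalized symmetric eigenproblem A u = lambda B u.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])              # positive definite: <Bu, u> != 0 for u != 0
vals, vecs = eigh(A, B)                 # eigenvalues of the pair (A, B), ascending
u = vecs[:, -1]                         # eigenvector of the largest eigenvalue
rayleigh = (A @ u) @ u / ((B @ u) @ u)  # quotient <A(u), u> / <B(u), u>
print(vals[-1], rayleigh)               # the two numbers agree
```

All of Section 2 can be read as the infinite-dimensional, nonlinear extension of this elementary picture.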
From our point of view, a main advantage of LS theory is precisely that it grants—provided that the constraint manifold contains subsets of arbitrary genus and that the Palais-Smale condition is satisfied at all candidate critical levels—the existence of infinitely many distinct eigenvalue/eigenvector pairs of (1.1); see, for instance, Amann [7], Berger [8], Browder [9], Palais [10], and Rabinowitz [11].

The domain of applicability of LS theory embraces, as a particular case of (1.1), NLEV problems of the form (1.2) (A(u)≡)A0(u)+P(u)=λu(≡λB(u)), where the operators act in a real Hilbert space H, A0 is linear and self-adjoint, and P is odd and viewed as a perturbation of A0. Under appropriate compactness and positivity assumptions on A0 and P, (A) and (B) will be satisfied. More general forms of (1.2)—such as A0(u)+P(u)=λB(u), where A0, P, and B are operators of E into its dual E' and A0 behaves as the p-Laplacian—have been considered by Chabrowski [12]; see Example 1.4 in this section.

However, problems of the form (1.2) can be studied in our framework also when P is not necessarily an odd mapping, but rather satisfies the local condition (1.3) P(u)=o(∥u∥) as u→0. Indeed, in this case Bifurcation theory ensures (see, e.g., [11]) that each isolated eigenvalue λ0 of finite multiplicity of A0 is a bifurcation point for (1.2), which roughly speaking means that solutions u≠0 of the unperturbed problem A0u=λ0u (i.e., eigenfunctions associated with λ0) do survive for the perturbed problem (1.2) in a neighborhood of u=0 and for λ near λ0. Therefore, the framework described above at points (A) and (B) is grosso modo respected also in this case, provided that A0 has a countable discrete spectrum.

When applicable, LS theory yields the existence of eigenfunctions of any norm (provided of course that the relevant operators are defined on the whole space), in contrast with Bifurcation theory, which only yields (in this context) information near u=0.

In the main part of this paper (Section 3), we focus our attention upon equations of the form (1.2), having in mind—with a view to the applications—a P that is odd and satisfies (1.3). For such a P, both methods are applicable and can be tested to see which of them yields better quantitative information on the eigenvalues associated with small eigenvectors. More precisely, given an isolated eigenvalue λ0 of finite multiplicity of A0, the assumptions on P guarantee bifurcation at (λ0,0) from the line {(λ,0):λ∈ℝ} of trivial solutions, and in particular ensure the existence, for R>0 sufficiently small, of solutions (λR,uR) of (1.2) such that (1.4) ∥uR∥=R for each R, λR→λ0 as R→0, that is, parameterized by the norm R of the eigenvector uR and bifurcating from (λ0,0). If we qualify the condition P(u)=o(∥u∥) with the more specific requirement that, for some q>2, (1.5) P(u)=O(∥u∥q-1) as u→0, then the information in (1.4) can be made more precise to yield estimates of the form (as R→0) (1.6) C1Rq-2+o(Rq-2)≤λR-λ0≤C2Rq-2+o(Rq-2). We are interested in the evaluation of the constants C1 and C2. It turns out that these can be estimated in terms of λ0 itself and other known constants related to P. We do this in two distinct ways, as indicated before. (i) Using Lusternik-Schnirelmann theory in order to estimate the difference λR-λ0 through the LS "minimax" critical levels. This approach was first used by Berger [8, Chapter 6, Section 6.7A] and then pursued by the author (see [13], e.g.)
and subsequently by Chabrowski [12]. (ii) Using the Lyapounov-Schmidt method to reduce (1.2) to an equation in the finite-dimensional space N≡N(A0-λ0I), and then working carefully on the reduced equation in order to exploit the stronger condition (1.5). We have recently followed this approach in [14].

Our computations in Section 3 show that the second method is both technically and conceptually simpler, requires less of P (P need not be odd), and yields sharper results. We conclude Section 3 and the present work by applying these abstract results to a simple semilinear elliptic equation; see Example 1.3. Let us remark in passing that, in the case of ordinary differential equations, detailed estimates for NLEV near a bifurcation point have recently been proved by Shibata [15]. The techniques employed by him are elementary and straightforward—direct integration and manipulation of the differential equation, series expansion, and so on—but very efficiently used. Some earlier results in this style can be found, for instance, in [16].

The remaining parts of this paper are organised as follows. We complete this introductory section presenting (as a matter of example) some boundary-value problems for nonlinear differential equations, depending on a real parameter λ and admitting the zero solution for all values of λ, that can be cast in the form (1.1) with an appropriate choice of the function space E and of the operators A, B.

Section 2 is intended to recall for the reader's convenience some basic facts from the calculus of variations and critical point theory. We first indicate the reduction of (1.1) to the search for critical points of the potential a of A on the manifold V≡{u∈E:b(u)=const}, b being the potential of B. Some details are spent to show that absolute minima or maxima correspond to the first eigenvalue—we do this for the elementary case of homogeneous operators such as the p-Laplacian—while minimax critical levels correspond to higher order eigenvalues, both for linear and nonlinear operators. In this circle of ideas, we recall a few elements of LS theory that are helpful to state and prove our subsequent results.

Let us finally mention that foundations and inspiration for the study of NLEV problems are to be found in (among many others) Krasnosel'skii [6], Vainberg [17], Fučík et al. [18], Ambrosetti and Prodi [19], Nirenberg [20], Rabinowitz [11, 21, 22], Berger [8], Stackgold [23], and Mawhin [3].

Example 1.1 (the p-Laplace equation). The most famous (and probably most important) example of a nonlinear problem exhibiting the features described in points (A) and (B) above is provided by the p-Laplace equation (p>1): (1.7) -div(|∇u|p-2∇u)=μ|u|p-2u, in a bounded domain Ω⊂ℝN (N≥1), subject to the Dirichlet boundary condition u=0 on the boundary ∂Ω of Ω. Fix p>1, let E be the Sobolev space W01,p(Ω), equipped with the norm (1.8) ∥v∥W01,pp=∫Ω|∇v|p, and let E'=W-1,p'(Ω) be the dual space of E. A (weak) solution of (1.7) is a function u∈E such that (1.9) Ap(u)=λBp(u), where λ=μ-1 (μ≠0) and Ap,Bp:E→E' are defined by duality via the equations (1.10) 〈Bp(u),v〉=∫Ω|∇u|p-2∇u∇vdx, 〈Ap(u),v〉=∫Ω|u|p-2uvdx, where u,v∈E and 〈·,·〉 denotes the duality pairing between E and E'.

Equation (1.7) possesses countably many eigenvalues μn(p) (n∈ℕ), which are values of the real function ϕp defined via (1.11) ϕp(u)=∫Ω|∇u|p/∫Ω|u|p (u∈W01,p(Ω), u≠0), and can be naturally arranged in an increasing sequence (1.12) μ1(p)≤μ2(p)≤⋯≤μn(p)≤⋯, limn→∞μn(p)=+∞.
This relies on the very special nature of (1.7), because Ap and Bp are (i) odd (F:E→E' is said to be odd if F(-u)=-F(u) for u∈E); (ii) positively homogeneous of the same degree p-1>0 (F positively homogeneous of degree α means that F(tu)=tαF(u) for t>0 and u∈E); (iii) gradient operators (F is a gradient means that 〈F(u),v〉=f'(u)v for some functional f on E).

The existence of the sequence (μn(p)) then follows (using the compactness of the embedding of W01,p(Ω) in Lp(Ω)) by the Lusternik-Schnirelmann theory of critical points for even functionals on sphere-like manifolds (see the references cited in Section 1). The eigenvalues μn(p) have been studied in detail, and in particular, as to their asymptotic behaviour, García Azorero and Peral Alonso [24] and Friedlander [25] have proved the two-sided inequality (1.13) A(n/|Ω|)p/N≤μn(p)≤B(n/|Ω|)p/N, to hold for all sufficiently large n and for suitable positive constants A and B depending only on N and p; |Ω| stands for the (Lebesgue) N-dimensional volume of Ω. This generalizes in part the classical result of Weyl [26] for the linear case (corresponding to p=2 in (1.7)), that is, for the eigenvalues μn0 of the Dirichlet Laplacian -Δu=μu, u∈W01,2(Ω): (1.14) μn0≡μn(2)=K(n/|Ω|)2/N+o(n2/N) (n→∞).

Evidently, this and similar questions would be of greater interest should one be able to prove that the μn(p) are the only eigenvalues of (1.7); however, this is demonstrated only for N=1, in which case they can be computed by explicit solution of (1.7). For this, as well as for a general discussion of the features of (1.7), its eigenvalues, and in particular the very special properties enjoyed by the first eigenvalue μ1(p) (corresponding to the minimum of the functional ϕp defined in (1.11)) and the associated eigenfunctions, we refer the reader to the beautiful Lecture Notes by Lindqvist [27]. For an interesting discussion on the existence of eigenvalues outside the LS sequence in problems related to (1.7), we recommend the very recent papers [28, 29].

Remark 1.2. The existence of countably many eigenvalues for (1.7) has recently been proved by means of a completely different method than LS theory, namely, the representation theory of compact linear operators in Banach spaces developed in [30]. Actually, the eigenfunctions associated with these eigenvalues are defined in a weaker sense, and only upper bounds of the type in (1.13) are proved. As emphasized in [30], it is not clear what connection (if any) there is between the higher eigenvalues found by the two procedures, nor whether there are eigenvalues not found by either method.

Example 1.3 (semilinear equations). As a second example, consider the semilinear elliptic eigenvalue problem (1.15) -Δu=μ(u+f(x,u)), u∈H01(Ω)≡W01,2(Ω), again in a bounded domain Ω⊂ℝN, where the nonlinearity is given by a real-valued function f=f(x,s) defined on Ω×ℝ and satisfying the following hypotheses: (HF0) f satisfies Carathéodory conditions (that is to say, f is continuous in s for a.e.
x∈Ω and measurable in x for all s∈ℝ); (HF1) there exist a constant a≥0 and an exponent q with 2<q<2N/(N-2)≡2* if N>2, 2<q<∞ if N≤2, such that (1.16) |f(x,s)|≤a|s|q-1 for x∈Ω (a.e.), s∈ℝ.

Here we take the Hilbert space H=H01(Ω) equipped with the scalar product (1.17) (u,v)=∫Ω∇u∇vdx, and consider again weak solutions of (1.15), defined now as solutions of the equation in H (1.18) A0u+P(u)=λu, where, as before, λ=1/μ (μ≠0), while the operators A0,P:H→H are defined (using the self-duality of H based on (1.17)) by the equations (1.19) (A0u,v)=∫Ωuvdx, (P(u),v)=∫Ωf(x,u)vdx, for u,v∈H (note: we write here and henceforth A0 for A2). Then we see that also (1.15) can be cast in the form (1.1), with B(u)=u and (1.20) A=A0+P.

Despite this formal similarity, the present example is essentially different from Example 1.1. To see this, first note that the basic eigenvalue problem for the Dirichlet Laplacian, -Δu=μu, u∈H01(Ω), takes (in our notations) the form (1.21) A0(u)=λu, and involves—of course—only linear operators, A0 and the identity map. These can be seen as a special type of 1-homogeneous operators; now, while in the former example they are replaced with the (p-1)-homogeneous operators Ap and Bp defined in (1.10), here we deal with an additive perturbation, A0+P, of A0. This new operator A=A0+P is still a gradient, and will be odd if f is taken odd in its dependence upon the second variable, but plainly it is no longer 1-homogeneous (except when f(x,s)=a(x)s, a∈L∞(Ω), in which case of course we would be dealing with a linear perturbation of a linear problem: see for this [31] or [5]).

Nevertheless, the assumptions (HF0)-(HF1) and results from Bifurcation theory ensure all the same (as indicated before in Section 1; see also Section 3 for more details) that eigenvalues μR for (1.15) do exist, associated with eigenfunctions uR of small H01 norm R, near each fixed eigenvalue μ0=μk0 of the Dirichlet Laplacian; we put here and henceforth μk0=(λk0)-1, with λk0 the kth eigenvalue of (1.21).

Two main differences with the former situation must be noted at once. (i) First, the loss of homogeneity means that the eigenvalues "depend on the norm of the eigenfunction," unlike in Example 1.1 where it suffices to consider normalized eigenvectors. Indeed, in general, if the operators A and B appearing in (1.1) are both homogeneous of the same degree, it is clear that if u0 is an eigenvector corresponding, say, to the eigenvalue μ^, then so is tu0 for any t>0. (ii) Second, Bifurcation theory provides in the present "generic" situation only local results, that is, results holding in a neighborhood of (μ0,0)∈ℝ×H01(Ω), and thus concerning eigenfunctions of small norm.
"Generic" means that we here ignore the multiplicity of μ0: for, in case we knew this to be an odd number, global results would be available from the theory [32] to grant the existence of an unbounded "branch" (in ℝ×H01(Ω)) of solution pairs (μ,u) bifurcating from (μ0,0).

Under the assumptions (HF0)-(HF1) we have shown in particular (see [14, 33, 34]) that (1.22) μR=μ0+O(Rq-2) as R→0 (i.e., |μR-μ0|≤KRq-2 for some K≥0 and all sufficiently small R>0), and more precisely that (1.23) CRq-2+o(Rq-2)≤μ0-μR≤DRq-2+o(Rq-2) as R→0, for suitable constants C, D related to f; here o(Rq-2) denotes as usual an unspecified function h=h(R) such that h(R)/Rq-2→0 as R→0.

In Section 3, we explain how estimates like (1.23) follow independently both from LS theory (when f is odd in s) and from Bifurcation theory, and we refine our previous results in the estimate of the constants C and D.

Example 1.4 (quasilinear equations). The results indicated in Examples 1.1 and 1.3 can be partly extended to the problem (1.24) -div(|∇u|p-2∇u)=μ(|u|p-2u+f(x,u)), u∈W01,p(Ω), where p>1 and f=f(x,s) is dominated by |s|q-1 for some p<q<p*, with (1.25) p*≡Np/(N-p) if N>p, p*≡∞ if N≤p. Equation (1.24) reduces to (1.7) if f≡0 and to (1.15) if p=2, and therefore formally provides a common framework for both equations. However, it must be noted—looking at the bifurcation approach indicated in the previous example—that the desired extension can only be partial, because for p≠2, (1.24) is no longer a perturbation of a linear problem, but of the homogeneous problem (1.7). Bifurcation should thus be considered from the eigenvalues of the p-Laplace operator, but to my knowledge there is (in the general case) no abstract result about bifurcation from the eigenvalues of a homogeneous operator (let alone from those of a general nonlinear operator). A fundamental exception is that of the first eigenvalue of a homogeneous operator (see Theorem 2.4 and Remark 2.6 in Section 2), which possesses—under additional assumptions on the operator itself—remarkable properties such as the positivity of the associated eigenfunctions, see [35]. These properties have been extensively used (in [36, 37], e.g.) in order to prove global bifurcation results for (1.24) from the first eigenvalue of the p-Laplacian. Related results can be found in [38, 39].
Thus, the eigenvalues of (A,B)—if any—must be searched among the values of the function R defined on E∖{0} by means of (2.1). R is called the Rayleigh quotient relative to (A,B), and its importance for pairs of linear operators is well established [5]. Well-known simple examples (just think of linear operators) show that without further assumptions, there may be no eigenvalues at all for (A,B). On the other hand, we know that a real symmetric n×n matrix has at least one eigenvalue, and so does any self-adjoint linear operator in an infinite-dimensional real Hilbert space, provided it is compact. The nonlinear analogue of the class of self-adjoint operators is that of gradient operators, which are the natural candidates for the use of variational methods. In their simplest and oldest form, going back to the Calculus of Variations, variational methods consist in finding the minimum or the maximum value of a functional on a given set in order to find a solution of a problem in the set itself. Basically, if we wish to solve the equation (2.2)A(u)=0, u∈E, and A:E→E' is a gradient operator, which means that (2.3)〈A(u),v〉=a′(u)v ∀u,v∈E, for some differentiable functional a:E→ℝ (the potential of A), then we just need to find the critical points of a, that is, the points u∈E where the derivative a'(u) of a vanishes. The images a(u) of these points are by definition the critical values of a, and the simplest of these are evidently the minimum and the maximum values of a (provided of course that they are attained). However, from the standpoint of eigenvalue theory, the relevant equation is (1.1), whose solutions u are—when also B is a gradient—the critical points of a constrained to b(u)=const, where b is the potential of B. To be precise, normalize the potentials assuming that a(0)=b(0)=0 and consider for c≠0 the "surface" (2.4)Vc≡{u∈E:b(u)=c}. Then at a critical point u∈Vc of the restriction of a to Vc, we have (2.5)a′(u)v=λb′(u)v ∀v∈E, for some Lagrange multiplier λ. This is the same as to write A(u)=λB(u), and thus yields an eigenvalue-eigenvector pair (λ,u)∈ℝ×Vc for (1.1); note that 0∉Vc if c≠0. Of course to derive (2.5) we need some regularity of Vc, and this is ensured (if B is continuous) by the assumptions made upon B, which guarantee—since b'(u)u=〈B(u),u〉≠0 for u≠0 and 0∉Vc—that Vc is indeed a C1 submanifold of E of codimension one [40]. Let us collect the above remarks by stating formally the basic assumptions on A, B and the basic fact on the existence of at least one eigenvalue for A, B. (AB0) A,B:E→E' are continuous gradient operators with 〈B(u),u〉≠0 for u≠0. Theorem 2.1. Suppose that A, B satisfy (AB0) and let Vc be as in (2.4). Suppose, moreover, that the potential a of A is bounded above on Vc and let M≡supu∈Vca(u). If M is attained at u0∈Vc, then there exists λ0∈ℝ such that (2.6)A(u0)=λ0B(u0). That is, u0 is an eigenvector of the pair (A,B) corresponding to the eigenvalue λ0. A similar statement holds if a is bounded below, provided that m≡infu∈Vca(u) is attained.

### 2.1. The First Eigenvalue (for Linear and Nonlinear Operators)
Looking at the statement of Theorem 2.1, we remark that in general there may be several points/eigenvectors u∈Vc (if any at all) where M is attained, and consequently different corresponding eigenvalues (the values taken by the Rayleigh quotient (2.1) at such points). However, in a special case, λ0 is uniquely determined by M and plays the role of "first eigenvalue" of (A,B): this is when A and B are positively homogeneous of the same degree.
Recall that A is said to be positively homogeneous of degree α>0 if A(tu)=tαA(u) for u∈E and t>0. For such operator pairs, it is sufficient to consider a fixed level set (that is, to consider normalized eigenvectors), for instance, (2.7)V≡{u∈E:b(u)=1}. Theorem 2.2. Let A,B:E→E' satisfy (AB0) and let a, b be their respective potentials. Suppose in addition that A, B are positively homogeneous of the same degree. If a is bounded above on V and M=supu∈Va(u) is attained at u0∈V, then (2.8)A(u0)=MB(u0). Moreover, M is the largest eigenvalue of the pair (A,B). Likewise, if a is bounded below and m=infu∈Va(u) is attained, then m is the smallest eigenvalue of the pair (A,B). Let us give the direct easy proof of Theorem 2.2, which does not even need Lagrange multipliers. The homogeneity of A and B implies that (2.9)supu∈Va(u)=supu≠0a(u)/b(u)=supu≠0〈A(u),u〉/〈B(u),u〉. Indeed recall (see [7] or [8]) that a continuous gradient operator A is related to its potential a (normalized so that a(0)=0) by the formula (2.10)a(u)=∫01〈A(tu),u〉dt. Thus, if A is homogeneous of degree α we have (2.11)a(u)=〈A(u),u〉/(α+1), and similarly for b; in particular, a and b are (α+1)-homogeneous. Therefore, if for u≠0 we put t(u)=(b(u))-1/(α+1), we have (2.12)a(u)/b(u)=a(t(u)u), and as b(t(u)u)=1 (i.e., t(u)u∈V), the first equality in (2.9) follows immediately, and so does the second by virtue of (2.11). By (2.9) and the definition of M we have (2.13)a(u)-Mb(u)≤0 for any u∈E. Suppose now that M is attained at u0∈V. Then a(u0)-Mb(u0)=0. Thus, u0 is a point of absolute maximum of the map K≡a-Mb:E→ℝ and therefore its derivative K'(u0)=a'(u0)-Mb'(u0) at u0 vanishes, that is, (2.14)A(u0)=MB(u0). This proves (2.8). To prove the final assertion, observe that by (2.9), M is also the maximum value of the Rayleigh quotient, and therefore the largest eigenvalue of (A,B) by the remark made above. So the real question raised by Theorems 2.1 and 2.2 is: how can we ensure that (i) a is bounded and (ii) a attains its maximum (or minimum) value on V? The first question would be settled by requiring in principle that V is bounded and that A (and therefore a) is bounded on bounded sets. However, to answer (ii) affirmatively, we need anyway some compactness, and as E has infinite dimension—which makes it hard to hope that V be compact—such a property must be demanded of a (or of A). Definition 2.3. A functional a:E→ℝ is said to be weakly sequentially continuous (wsc for short) if a(un)→a(u) whenever un→u weakly in E, and weakly sequentially lower semicontinuous (wslsc) if (2.15)liminfn→∞a(un)≥a(u) whenever un→u weakly in E. Finally, a is said to be coercive if a(un)→+∞ whenever ∥un∥→+∞. Theorem 2.4. Let A,B:E→E' satisfy (AB0) and let a, b be their respective potentials. Suppose that (i) a is wsc; (ii) b is coercive and wslsc. Then a is bounded on V. Suppose moreover that A and B are positively homogeneous of the same degree. If M≡supu∈Va(u)>0 (resp., m≡infu∈Va(u)<0), then it is attained and is the largest (resp., smallest) eigenvalue of (A,B). Proof. Suppose by way of contradiction that a is not bounded above on V, and let (un)⊂V be such that a(un)→+∞. As b is coercive, (un) is bounded (in fact, V itself is a bounded set) and therefore, as E is reflexive, we can assume—passing if necessary to a subsequence—that (un) converges weakly to some u0∈E. As a is wsc, it follows that a(un)→a(u0), contradicting the assumption that a(un)→+∞. Thus, M is finite, and we can now let (un)⊂V be a maximizing sequence, that is, a sequence such that a(un)→M.
As before, we can assume that (un) converges weakly to some u0∈E, and the weak sequential continuity of a now implies that a(u0)=M. It remains to prove—under the stated additional assumptions—that u0∈V. To do this, first observe that (as b is wslsc) (2.16)1=liminfn→∞b(un)≥b(u0). We claim that b(u0)=1. Indeed suppose by way of contradiction that b(u0)<1, and let t0>0 be such that t0u0∈V; such a t0 is uniquely determined by the condition (2.17)b(t0u0)=t0α+1b(u0)=1, which yields t0=(b(u0))-1/(α+1) and shows that t0>1. But then, as M>0, we would have (2.18)a(t0u0)=t0α+1a(u0)=t0α+1M>M, which contradicts the definition of M and proves our claim. The proof that m is attained if it is strictly negative is entirely similar. Example 2.5 (the first eigenvalue of the p-Laplace operator). If A=Ap and B=Bp are defined as in Example 1.1, we have (2.19)a(u)=∫Ω|u|p/p, b(u)=∫Ω|∇u|p/p (u∈W01,p(Ω)), for their respective potentials (see (2.11)), and therefore (2.20)a(u)/b(u)=〈Ap(u),u〉/〈Bp(u),u〉=∫Ω|u|p/∫Ω|∇u|p=(ϕp(u))-1 (u≠0), with ϕp as in (1.11). The compact embedding of W01,p(Ω) into Lp(Ω) implies that a is wsc (see the comments following Definition 2.7); moreover, looking at (1.8) we see that b is coercive, while its weak sequential lower semicontinuity is granted as a property of the norm of any reflexive Banach space [41]. It follows by Theorem 2.4 that (2.21)λ1(p)≡sup∫Ω|u|p/∫Ω|∇u|p is attained and is the largest eigenvalue of the pair (Ap,Bp), which is the same as to say that μ1(p)≡(λ1(p))-1 is the smallest eigenvalue of (1.7). This shows the existence and variational characterization of the first point in the spectral sequence (1.12). Remark 2.6. Much more can be said about μ1(p): in particular, μ1(p) is isolated and simple (i.e., the corresponding eigenfunctions are multiples of each other), and moreover the eigenfunctions do not change sign in Ω. These fundamental properties (proved, e.g., in [27]) are, among others, at the basis of the global bifurcation results for equations of the form (1.24) due to [36, 37]. For an abstract version of these properties of the first eigenvalue, see [35]. Let us now indicate very briefly some conditions on A, B ensuring the properties required upon a, b in Theorem 2.4. Definition 2.7. A mapping A:E→F (E, F Banach spaces) is said to be strongly sequentially continuous (strongly continuous for short) if it maps weakly convergent sequences of E to strongly convergent sequences of F. It can be proved (see, e.g., [7]) that if a gradient operator A:E→E' is strongly continuous, then its potential a is wsc. Moreover, it is easy to see that a strongly continuous operator A:E→F is compact, which means by definition that A maps bounded sets of E onto relatively compact sets of F (or equivalently, that any bounded sequence (un) in E contains a subsequence (unk) such that A(unk) converges in F). Finally, when A is a linear operator, it is strongly continuous if and only if it is compact [42]. Definition 2.8. A mapping A:E→E' is said to be strongly monotone if (2.22)〈A(u)-A(v),u-v〉≥k∥u-v∥2, for some k>0 and for all u,v∈E. It can be proved (see, e.g., [9]) that if a gradient operator A is strongly monotone, then its potential a is coercive and wslsc. With the help of Definitions 2.7 and 2.8, Theorem 2.4 can be easily restated using hypotheses which only involve the operators A and B. Rather than doing this in general, we wish to single out the special case in which E=E'=H, a real Hilbert space (whose scalar product will be denoted (·,·)), and B(u)=u.
In fact, this is the situation that we will mainly consider from now on. Note that in this case, if A is positively homogeneous of degree 1, we have by (2.9) (2.23)supb(u)=1a(u)=supu≠0(A(u),u)/∥u∥2=supu∈S(A(u),u), where (2.24)S={u∈H:∥u∥=1}. Corollary 2.9. Let H be a real, infinite-dimensional Hilbert space and let A:H→H be a strongly continuous gradient operator which is positively homogeneous of degree 1. Let (2.25)M=supu∈S(A(u),u), m=infu∈S(A(u),u). Then M, m are finite and moreover if M>0 (resp., m<0), it is attained and is the largest (resp., smallest) eigenvalue of A. Remark 2.10. The result just stated holds true under the weaker assumption that A be compact; see [43, Theorem 1.2 and Remark 1.2], where also noncompact maps are considered. In this case, however, the condition M>0 must be replaced by M>α(A), with α(A) the measure of noncompactness of A. Among the positively 1-homogeneous operators, a distinguished subclass is formed by the bounded linear operators acting in H. Denoting such an operator by T, we first recall (see, e.g., [8]) that T is a gradient if and only if it is self-adjoint (or symmetric), that is, (Tu,v)=(u,Tv) for all u,v∈H. Next, a classical result of functional analysis (see, e.g., [42]) states that if a linear operator T:H→H is self-adjoint and compact, then it has at least one eigenvalue. The precise statement is as follows: put (2.26)λ1+(T)≡supu∈S(Tu,u), λ1-(T)≡infu∈S(Tu,u). Then λ1+(T)≥0, and if λ1+(T)>0 then it is attained and is the largest eigenvalue of T. Similar statements—with reverse inequalities—hold for λ1-(T). Evidently, these can be proven as particular cases of Corollary 2.9, except for the nonstrict inequalities, which are due to our assumptions that H has infinite dimension and that T is compact. Indeed, if, for instance, we had λ1+(T)<0, then the very definition (2.26) would imply that |(Tu,u)|≥α∥u∥2 for some α>0 and all u∈H, whence it would follow (by the Schwarz inequality) that ∥Tu∥≥α∥u∥ for all u∈H, implying that T has a bounded inverse T-1 and therefore that S=T-1T(S) is compact, which is absurd. Finally, note that λ1+(T)=λ1-(T)=0 can only happen if (Tu,u)=0 for all u∈H, implying that T≡0 [42]. The conclusion is that any compact self-adjoint operator has at least one nonzero eigenvalue provided that it is not identically zero.

### 2.2. Higher Order Eigenvalues (for Linear and Nonlinear Operators)
Let us remain for a while in the class of bounded linear operators. For these, the use of variational methods in order to study the existence and location of higher order eigenvalues is entirely classical and well represented by the famous minimax principle for the eigenvalues of the Laplacian [31]. From the standpoint of operator theory (see, e.g., [44] or [45], Chapter XI, Theorem 1.2), this consists in characterizing the (positive, say) eigenvalues of a compact self-adjoint operator T in a Hilbert space H as follows. For any integer n≥0 let (2.27)𝒰n={V⊂H:V subspace of dimension ≤n}, and for n≥1 set (2.28)cn(T)=infV∈𝒰n-1supu∈S∩V⊥(Tu,u), where S={u∈H:∥u∥=1} and V⊥ is the subspace orthogonal to V. Then (2.29)(supu∈S(Tu,u)=)c1(T)≥c2(T)≥⋯≥cn(T)≥⋯≥0, and if cn(T)>0, T has n eigenvalues above 0; precisely, ci(T)=λi+(T) for i=1,…,n, where (λi+(T)) denotes the (possibly finite) sequence of all such eigenvalues, arranged in decreasing order and counting multiplicities. There is also a "dual" formula for the positive eigenvalues: (2.30)λn(T)=supV∈𝒱ninfu∈S∩V(Tu,u), where (2.31)𝒱n≡{V⊂H:V subspace of dimension ≥n}.
The above formulae (2.28)–(2.30) may appear quite involved at first sight, but the principle on which they are based is simple enough. Suppose we have found the first eigenvalue λ1(T)≡λ1+(T)(>0) as in (2.26). For simplicity we consider just positive eigenvalues and so we drop the superscript +. Now, iterate the procedure: let (i) v1∈S be such that Tv1=λ1(T)v1; (ii) V2≡{u∈H:(u,v1)=0}≡v1⊥; (iii) λ2(T)≡supu∈S∩V2(Tu,u). Then λ1(T)≥λ2(T)≥0, and if λ2(T)>0 then it is attained and is an eigenvalue of T: indeed—due to the symmetry of T—the restriction T2 of T to V2 is an operator in V2, and so one can apply to T2 the same argument used above for T to prove the existence of λ1. Moreover, in this case, if we let (i) v2∈S be such that Tv2=λ2v2, (ii) Z2≡[v1,v2]≡{αv1+βv2:α,β∈ℝ}, then it is immediate to check that (2.32)λ2(T)=infu∈S∩Z2(Tu,u). Collecting these facts, and using some linear algebra, it is not difficult to see that (2.33)λ2(T)=infV∈𝒰1supu∈S∩V⊥(Tu,u)=supV∈𝒱2infu∈S∩V(Tu,u), where (2.34)𝒱2≡{V⊂H:V subspace of dimension ≥2}. For a rigorous discussion and complete proofs of the above statements, we refer the reader to [44, 45] or [5], for instance. Corollary 2.11. If T:H→H is compact, self-adjoint, and positive (i.e., such that (Tu,u)>0 for u≠0), then it has infinitely many eigenvalues λn0: (2.35)(supu∈S(Tu,u))=λ10≥λ20≥⋯≥λn0≥⋯>0. Moreover, λn0→0 as n→∞. The last statement is easily proved as follows: suppose instead that λn0≥k>0 for all n∈ℕ. For each n, pick un∈S with Tun=λn0un; we have (un,um)=0 for n≠m because T is self-adjoint. Then T(un/λn0)=un, and the compactness of T would now imply that (un) contains a convergent subsequence, which is absurd since ∥un-um∥2=2 for all n≠m. We now finally come to the nonlinear version of the minimax principle, that is, the Lusternik-Schnirelmann (LS) theory of critical points for even functionals on the sphere [17]. There are various excellent accounts of the theory in much greater generality (see, e.g., Amann [7], Berger [8], Browder [9], Palais [10], and Rabinowitz [11, 21]), and so we need just to mention a few basic points of it; these will lead us shortly to a simple but fundamental statement (Corollary 2.17), a striking proper generalization of Corollary 2.11 that will be used in Section 3. For R>0, let (2.36)SR≡RS={u∈H:∥u∥=R}. If K⊂SR is symmetric (i.e., u∈K⇒-u∈K), then the genus of K, denoted γ(K), is defined as (2.37)γ(K)=inf{n∈ℕ: there exists a continuous odd mapping of K into ℝn∖{0}}. If V is a subspace of H with dimV=n, then γ(SR∩V)=n. For n∈ℕ put (2.38)Kn(R)={K⊂SR:K compact and symmetric, γ(K)≥n}. In the search for critical points of a functional, the so-called Palais-Smale condition is of prime importance. For a continuous gradient operator A:H→H (with potential a), and for a given R>0, put (2.39)D(x)≡A(x)-((A(x),x)/R2)x (x∈H) and call D the gradient of a on SR. Essentially, for a given x∈SR, D(x) is the tangential component of A(x), that is, the component of A(x) on the tangent space to SR at x. Definition 2.12. Let A:H→H be a continuous gradient operator and let a be its potential. a is said to satisfy the Palais-Smale condition at c∈ℝ ((PS)c for short) on SR if any sequence (xn)⊂SR such that a(xn)→c and D(xn)→0 contains a convergent subsequence. Lemma 2.13. Let A:H→H be a strongly continuous gradient operator and let a:H→ℝ be its potential. Suppose that a(x)≠0 implies A(x)≠0. Then a satisfies (PS)c on SR for each c≠0. Proof. It is enough to consider the case R=1. So let (xn)⊂S be a sequence such that a(xn)→c≠0 and (2.40)D(xn)=A(xn)-(A(xn),xn)xn→0.
We can assume—passing if necessary to a subsequence—that (xn) converges weakly to some x0. Therefore, A(xn)→A(x0) and moreover—as a is wsc—a(xn)→a(x0) and similarly (A(xn),xn)→(A(x0),x0). Thus, a(x0)=c≠0 and therefore A(x0)≠0 by assumption. It follows from (2.40) that (A(xn),xn)xn→A(x0)≠0. This first shows that (A(x0),x0)≠0 (otherwise we would have (A(xn),xn)xn→0) and then implies—since xn=(A(xn),xn)-1(A(xn),xn)xn—that (xn) converges (strongly) to (A(x0),x0)-1A(x0). Example 2.14. Here are two simple but important cases in which the assumption mentioned in Lemma 2.13 is satisfied. (i) A is a positive (resp., negative) operator, that is, (A(u),u)>0 (resp., (A(u),u)<0) for u∈H, u≠0. (ii) A is a positively homogeneous operator. Indeed if A is, for instance, positive, then in particular A(u)≠0 for u≠0, and so the conclusion follows because a(u)≠0 implies that u≠0. While if A is positively homogeneous of degree, say, α, then a(u)=〈A(u),u〉/(α+1) and so the conclusion is immediate. Theorem 2.15. Suppose that A:H→H is an odd strongly continuous gradient operator, and let a be its potential. Suppose that a(x)≠0 implies A(x)≠0. For n∈ℕ and R>0 put (2.41)Cn(R)≡supKn(R)infKa(u), where Kn(R) is as in (2.38). Then (2.42)(supSRa(u))=C1(R)≥⋯≥Cn(R)≥Cn+1(R)≥⋯≥0. Moreover, Cn(R)→0 as n→∞, and if Ck(R)>0 for some k∈ℕ, then for 1≤n≤k, Cn(R) is a critical value of a on SR. Thus, there exist λn(R)∈ℝ, un(R)∈SR (1≤n≤k) such that (2.43)Cn(R)=a(un(R)), (2.44)A(un(R))=λn(R)un(R). Remark 2.16. A similar assertion holds for the negative minimax levels of a: (2.45)(infSRa(u))=D1(R)≤⋯≤Dn(R)≤Dn+1(R)≤⋯≤0. Indication of the proof of Theorem 2.15. The sequence (Cn(R)) is nonincreasing because for any n∈ℕ, we have Kn(R)⊃Kn+1(R), as shown by (2.38). Also, C1(R)=supSRa(u) because K1(R) contains all sets of the form {x}∪{-x}, x∈SR [11]. For the proof that Cn(R)→0 as n→∞ we refer to Zeidler [46]. Finally, if Ck(R)>0, since by Lemma 2.13 we know that a satisfies (PS) at the level Ck(R), it follows by standard facts of critical point theory (see any of the cited references) that Ck(R) is attained and is a critical value of a on SR. Corollary 2.17. Let A:H→H be an odd strongly continuous gradient operator, and suppose moreover that A is positive. Then the numbers Cn(R) defined in (2.41) are all positive. Thus, for each R>0, there exists an infinite sequence of "eigenpairs" (λn(R),un(R))∈ℝ×SR satisfying (2.43)-(2.44). In conjunction with Corollary 2.17, the following result—for which we refer to [8]—will be used to carry out our estimates in Section 3. Proposition 2.18. Let A0:H→H satisfy the assumptions of Corollary 2.17. Suppose moreover that A0 is linear (and therefore is a linear compact self-adjoint positive operator in H). Then (2.46)Cn0(R)≡supKn(R)infKa0(u)=(1/2)λn0R2, where a0(u)=(1/2)(A0(u),u) and (λn0) is the decreasing sequence of the eigenvalues of A0, as in Corollary 2.11.
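Before moving on to perturbed problems, it may help to see the objects of this section in the simplest possible setting, H=ℝn with a symmetric positive definite matrix T (trivially compact and self-adjoint). The following minimal Python sketch, added here purely as an illustration (the matrix, step size, and iteration count are arbitrary choices of ours), maximizes the Rayleigh quotient by ascent along the tangential gradient (2.39) and compares the result with the spectrum computed by numpy.

```python
import numpy as np

# Finite-dimensional toy model for Section 2: T is symmetric ("self-adjoint")
# and positive, so sup_{||u||=1} (Tu, u) is its largest eigenvalue lambda_1.
rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n))
T = B @ B.T + n * np.eye(n)                  # symmetric positive definite

lam = np.sort(np.linalg.eigvalsh(T))[::-1]   # eigenvalues in decreasing order

# Ascent along the tangential gradient D(u) = Tu - (Tu, u)u, cf. (2.39)
# with R = 1, projecting back onto the unit sphere S after each step.
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
for _ in range(2000):
    g = T @ u
    u += 0.01 * (g - (g @ u) * u)            # tangential step
    u /= np.linalg.norm(u)                   # back onto S

print("ascent value:", u @ T @ u)            # approximates lambda_1
print("eigh value  :", lam[0])
```

In the same spirit, restricting u to the span of the top two eigenvectors of T and minimizing (Tu,u) over unit vectors reproduces λ2, which is the finite-dimensional content of (2.32) and of the dual formula (2.30).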
## 3. Nonlinear Gradient Perturbation of a Self-Adjoint Operator
In this section we restrict our attention to equations of the form (3.1)A(u)≡A0(u)+P(u)=λu, in a real Hilbert space H, where (i) A0 is a (linear) bounded self-adjoint operator in H; (ii) P is a continuous gradient operator in H. We suppose moreover that (3.2)P(u)=o(∥u∥) as u→0. Note that—due to the continuity condition on P—this is the same as to assume that P(0)=0 and that P is Fréchet differentiable at 0 with (3.3)P'(0)=0. Remark 3.1. We are assuming for convenience that P is defined on the whole of H, but it will be clear from the sequel that our conclusions hold true when P is merely defined in a neighborhood of 0.
The only modification would occur in the first statement of Theorem 3.2, where the words "for each R>0" should be replaced by "for each R>0 sufficiently small." As P(0)=0, (3.1) possesses the trivial solutions {(λ,0)∣λ∈ℝ}. Recall that a point λ0∈ℝ is said to be a bifurcation point for (3.1) if any neighborhood of (λ0,0) in ℝ×H contains nontrivial solutions (i.e., pairs (λ,u) with u≠0) of (3.1). A basic result in this matter states that if P satisfies (3.2), and if moreover A0 is compact and P is strongly continuous (so that A=A0+P is strongly continuous), then each nonzero eigenvalue of A0=A'(0) is a bifurcation point for (3.1); in particular, for any R>0 sufficiently small, there exists a solution (λR,uR) such that (3.4)∥uR∥=R for each R, λR→λ0 as R→0. Essentially, this goes back to Krasnosel'skii [6, Theorem 6.2.2], who used a minimax argument of Lusternik-Schnirelmann type considering deformations of a certain class of compact, noncontractible subsets of the sphere SR. Subsequently, the compactness (resp., strong continuity) conditions on A0 (resp., on P) were removed and replaced by the assumption that P be of class C1 near u=0, by Böhme [47] and Marino [48], who strengthened the conclusions, showing that in this case bifurcation takes place from every isolated eigenvalue of finite multiplicity of A0 and moreover that for R>0 sufficiently small, there exist (at least) two distinct solutions (λRi,uRi) satisfying (3.4) for i=1,2; "distinct" means here in particular that uR1≠uR2. Proofs of this result can be found also in Rabinowitz [11, Theorem 11.4] or in Stuart [49, Theorem 7.2], for example. Moreover, when P is also odd, the proper critical point theory of Lusternik and Schnirelmann for even functionals (briefly recalled in Section 2) can be further exploited to show that if n is the multiplicity of λ0, then for each R>0 there are at least 2n distinct solutions (λRk,±uRk), k=1,…,n, which satisfy (3.4) for each k; see, for instance, [11, Corollary 11.30]. Each of these sets of assumptions thus guarantees the existence of one or more families (3.5)ℱ={(λR,uR)∣0<R<R0} of solutions of (3.1) satisfying (3.4), that is, parameterized by the norm R of the eigenvector uR for R in an interval ]0,R0[ and bifurcating from (λ0,0). In such a situation, it is natural to study the rate of convergence of the eigenvalues λR to λ0 as R→0, and in order to perform such a quantitative analysis we strengthen and make more precise the condition (3.2) on P. Indeed throughout this section we consider a P that satisfies the following basic growth assumption near u=0: (3.6)P(u)=O(∥u∥q-1) as u→0 for some q>2, that is, we suppose that there exist (q>2 and) positive constants M and R0 such that (3.7)∥P(u)∥≤M∥u∥q-1, for all u∈H with ∥u∥≤R0. We suppose moreover that there exist constants R1>0, 0≤k≤K and β,γ∈[0,α], α≡q/2>1, such that for all u∈H with ∥u∥≤R1, (3.8)k(A0(u),u)β(∥u∥2)α-β≤(P(u),u)≤K(A0(u),u)γ(∥u∥2)α-γ. Note that as A0 is a bounded linear operator, we have ∥A0(u)∥≤C∥u∥ for some C≥0 and for all u∈H, which implies that |(A0(u),u)|≤C∥u∥2 for all u. Inserting this in (3.8) thus yields (3.9)|(P(u),u)|≤C1∥u∥2α=C1∥u∥q, for some C1≥0. On the other hand, (3.7) also implies—via the Cauchy-Schwarz inequality—a similar bound on (P(u),u). Thus, we see that (3.8) is compatible with (3.7), and is essentially a more specific form of it carrying a sign condition on P.
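Before turning to the estimates, it may be useful to note a model case, not taken from the text above but consistent with it, where (3.6)–(3.8) can be checked by hand: the radial perturbation P(u)=c∥u∥q-2u, c>0, which is the gradient of p(u)=(c/q)∥u∥q. Here (P(u),u)=c∥u∥q=c(∥u∥2)α, so (3.8) holds with β=γ=0 and k=K=c, and on the sphere ∥u∥=R the equation (3.1) collapses to the linear problem (A0+cRq-2I)u=λu; every eigenvalue branch therefore satisfies λ(R)=λ0+cRq-2 exactly, exhibiting the rate Rq-2 that the theorems below establish in general. The short Python sketch that follows (matrix, parameters, and the power-type iteration are our own illustrative choices) confirms this numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, c = 5, 3.0, 0.7                # q > 2; radial perturbation strength c
B = rng.standard_normal((n, n))
A0 = B @ B.T + n * np.eye(n)         # bounded, self-adjoint, positive
lam0 = np.linalg.eigvalsh(A0)[-1]    # largest eigenvalue lambda_0 of A0

def P(u):
    # Gradient of p(u) = (c/q)||u||^q; satisfies (3.6)-(3.8) with k = K = c.
    return c * np.linalg.norm(u) ** (q - 2) * u

for R in (0.5, 0.25, 0.125):
    u = rng.standard_normal(n)
    u *= R / np.linalg.norm(u)       # start on the sphere S_R
    for _ in range(300):             # power-type iteration on S_R
        v = A0 @ u + P(u)
        u = R * v / np.linalg.norm(v)
    lamR = u @ (A0 @ u + P(u)) / R**2          # Rayleigh quotient (2.1)
    print(R, lamR - lam0, c * R ** (q - 2))    # last two columns agree
```

This degenerate case (β=γ=0) is of course much simpler than the general situation of Theorems 3.2 and 3.3, where P need not act radially; it only serves to make the exponent q-2 visible.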
In our final Example 3.4, we will see that (3.8) is satisfied by the operator associated with simple power nonlinearities often considered in perturbed eigenvalue problems for the Laplacian. Before this, in the present section we develop eigenvalue estimates that follow by (3.7) and (3.8) in the general Hilbert space context.

### 3.1. NLEV Estimates via LS Theory
In our first approach, we exploit LS theory in the simple form described in Section 2. We will therefore assume, in addition to the hypotheses already made in this section upon A0 and P, that (i) P is odd; (ii) A0 is compact and P is strongly continuous; (iii) A0 is positive and P is nonnegative (i.e., (P(u),u)≥0 for all u∈H). Theorem 3.2. (A) Let H be a real Hilbert space and suppose that (i) A0 is a linear, compact, self-adjoint, and positive operator in H; (ii) P is an odd, strongly continuous, gradient, and nonnegative operator in H. Then for each fixed R>0, (3.1) has an infinite sequence (λn(R),un(R)) of eigenvalue-eigenvector pairs with ∥un(R)∥=R. (B) Suppose in addition that P satisfies (3.2). Then for each n∈ℕ, λn(R)→λn0 as R→0, where λn0 is the nth eigenvalue of A0. Thus, each λn0 is a bifurcation point for (3.1). (C) Suppose in addition that P satisfies (3.6). Then (3.10)λn(R)=λn0+O(Rq-2) as R→0. (D) Finally, if in addition P satisfies (3.8), then as R→0 one has (3.11)-K(λn0)γRq-2+o(Rq-2)≤λn(R)-λn0≤K(1/α+1)(λn0)γRq-2+o(Rq-2). Proof. The conditions in (A) guarantee that A≡A0+P satisfies the assumptions of Corollary 2.17. Therefore, for each R>0, there exists an infinite sequence Cn(R) of critical values and a corresponding sequence (λn(R),un(R)) of eigenvalue-eigenvector pairs satisfying (2.41)–(2.44). We will make use of these formulae to derive our estimates. The statement (B) is essentially due to Berger; see [8, Chapter 6, Section 6.7A]. As the third statement has been essentially proved elsewhere (see, e.g., [33]), it remains only to prove (D). Let a, a0, and p be the potentials of A, A0, and P, respectively. We have from (2.10) (3.12)a=a0+p, a0(u)=(1/2)(A0(u),u), p(u)=∫01〈P(tu),u〉dt. Also let R1>0 be such that (3.8) holds for ∥u∥≤R1. In the derivation of the estimates below, we assume without further mention that ∥u∥≤R1. Step 1. It follows from (3.8) that (3.13)k1a0(u)β(∥u∥2)α-β≤p(u)≤K1a0(u)γ(∥u∥2)α-γ, where (3.14)k1:=k·2β-1/α, K1:=K·2γ-1/α. The definition (2.41) of Cn(R) then shows, using (3.13) and (2.46), that (3.15)Cn0(R)+k1Cn0(R)βR2(α-β)≤Cn(R)≤Cn0(R)+K1Cn0(R)γR2(α-γ). Step 2. Equation (2.44) implies in particular that (3.16)(A(un(R)),un(R))=λn(R)R2. Whence—using (2.43) and (3.12)—we get (3.17)Cn(R)-(1/2)λn(R)R2=a(un(R))-(1/2)(A(un(R)),un(R))=p(un(R))-(1/2)(P(un(R)),un(R)). It also follows from (3.8) that (3.18)k2a0(u)β(∥u∥2)α-β≤(1/2)(P(u),u)≤K2a0(u)γ(∥u∥2)α-γ, where (3.19)k2:=k·2β-1, K2:=K·2γ-1. We see from (3.13) and (3.18) that both p(u) and (1/2)(P(u),u) vary in the interval of endpoints k1a0(u)β(∥u∥2)α-β and K2a0(u)γ(∥u∥2)α-γ: indeed (as α>1) min{k1,k2}=k1, max{K1,K2}=K2. Therefore, writing for simplicity u for un(R), we have (3.20)|Cn(R)-(1/2)λn(R)R2|≤K2a0(u)γ(∥u∥2)α-γ-k1a0(u)β(∥u∥2)α-β≤K2a0(u)γ(∥u∥2)α-γ (since A0≥0)≤K2a(u)γ(∥u∥2)α-γ (since P≥0)=K2Cn(R)γR2(α-γ) (by (2.43)). Step 3. Using the right-hand side of (3.15), we get (3.21)K2Cn(R)γR2(α-γ)≤K2{Cn0(R)+K1Cn0(R)γR2(α-γ)}γR2(α-γ)=K2Cn0(R)γ{1+K1Cn0(R)γ-1R2(α-γ)}γR2(α-γ)=K2(λn0/2)γR2α{1+K1(λn0/2)γ-1Rϵ}γ, where we have replaced Cn0(R) with its value (1/2)λn0R2—see (2.46)—and have put ϵ=2(γ-1)+2(α-γ)=2(α-1)>0, as α>1.
Thus, as R→0, (3.22){1+K1(λn0/2)γ-1Rϵ}γ=1+O(Rϵ)=1+o(1), so that (3.23)K2Cn(R)γR2(α-γ)≤K2(λn0/2)γR2α(1+o(1)). Therefore, by (3.20), we end this step with the estimate (3.24)|Cn(R)-(1/2)λn(R)R2|≤K2(λn0/2)γR2α(1+o(1)). Step 4 (upper bound). Using again the right-hand side of (3.15) in (3.24), and then using again (2.46), we obtain (3.25)(1/2)λn(R)R2≤Cn(R)+K2(λn0/2)γR2α(1+o(1))≤Cn0(R)+K1Cn0(R)γR2(α-γ)+K2(λn0/2)γR2α(1+o(1))=(1/2)λn0R2+K1(λn0/2)γRq+K2(λn0/2)γRq(1+o(1))=(1/2)λn0R2+Z(λn0/2)γRq+o(Rq), where (3.26)Z=K1+K2=K·2γ-1/α+K·2γ-1=K·2γ-1(1/α+1). We conclude that as R→0 (3.27)λn(R)≤λn0+K(1/α+1)(λn0)γRq-2+o(Rq-2). Step 5 (lower bound). Using now the left-hand side of (3.15) in (3.24), and then using as before (2.46), we get (3.28)(1/2)λn(R)R2≥Cn(R)-K2(λn0/2)γR2α(1+o(1))≥Cn0(R)+k1Cn0(R)βR2(α-β)-K2(λn0/2)γR2α(1+o(1))=(1/2)λn0R2+{k1(λn0/2)β-K2(λn0/2)γ}Rq+o(Rq)≥(1/2)λn0R2-K2(λn0/2)γRq+o(Rq). We conclude that, as R→0, (3.29)λn(R)≥λn0-K(λn0)γRq-2+o(Rq-2), and this, together with (3.27), ends the proof of (3.11).

### 3.2. NLEV Estimates via Bifurcation Theory
As already remarked, LS theory has a true global character from the standpoint of NLEV, in that for any fixed R>0 it allows for the "simultaneous consideration of an infinite number of eigenvalues" λn(R)—if we may use the words of Kato [4] for a situation involving nonlinear operators, though one strictly parallel to that of compact self-adjoint operators, as shown by Corollaries 2.11 and 2.17. In contrast, Bifurcation theory—at least in the way used here, based on the classical Lyapounov-Schmidt method, see for instance [23]—is (i) local (it yields information for R small) and (ii) built starting from a fixed isolated eigenvalue of finite multiplicity of A0: given such an eigenvalue λ0, one reduces (via the Implicit Function Theorem) the original equation to an equation in the finite-dimensional kernel N(λ0)≡N(A0-λ0I). The use of the Implicit Function Theorem demands C1 regularity of the operators involved, but dispenses with the assumptions made before of (oddness, positivity, and) compactness. These differences between Theorems 3.2 and 3.3 are stressed by the change of notation (λ0 rather than λn0) also in the formulae (3.11) and (3.31) for our estimates. On the other hand, the obvious relation existing between the two statements is that each nonzero eigenvalue of a compact operator is isolated and of finite multiplicity. Theorem 3.3. (A) Let A0 be a bounded self-adjoint linear operator in a real Hilbert space H and let λ0 be an isolated eigenvalue of finite multiplicity of A0. Consider (3.1), where P is a C1 gradient map defined in a neighborhood of 0 in H and satisfying (3.6). Then λ0 is a bifurcation point for (3.1), and moreover if ℱ={(λR,uR):0<R<R0} is any family of nontrivial solutions of (3.1) satisfying (3.4), then the eigenvalues λR satisfy the estimate (3.30)λR=λ0+O(Rq-2) as R→0. (B) If, in addition, P satisfies the condition (3.8), then as R→0 one has (3.31)k(λ0)βRq-2+o(Rq-2)≤λR-λ0≤K(λ0)γRq-2+o(Rq-2). Proof. Theorem 3.3 is merely a variant of Theorem 1.1 in [14]. We report here the main points of the proof of the latter—which makes systematic use of the condition (3.6)—and the improvements deriving from the use of the additional assumption (3.8). Let N=N(λ0)≡N(A0-λ0I) be the eigenspace associated with λ0, and let W be the range of A0-λ0I. Then by our assumptions on A0 and λ0, H is the orthogonal sum (3.32)H=N⊕W. Set L=A0-λ0I, δ=λ-λ0 and write (3.1) as (3.33)Lu+P(u)=δu.
Let Π1, Π2=I-Π1 be the orthogonal projections onto N and W, respectively; then writing u=Π1u+Π2u≡v+w according to (3.32) and applying in turn Π1, Π2 to both members of (3.33), the latter turns into the system (3.34)Π1P(v+w)=δv, Lw+Π2P(v+w)=δw. By the self-adjointness of A0, we have Lw∈W for any w∈W and therefore (Lu,u)=(Lw,w) for any u=v+w∈H. Now let ℱ={(λ0+δR,uR):0<R<R0} be as in the statement of Theorem 3.3. Then from (3.33), (3.35)(LuR,uR)+(P(uR),uR)=δRR2, for 0<R<R0, and writing uR=vR+wR this yields (3.36)(LwR,wR)+(P(uR),uR)=δRR2 (0<R<R0). Under assumption (3.6), the term (P(uR),uR) in (3.36) is evidently O(Rq). What matters is to estimate the first term (LwR,wR); we claim that the same assumption (3.6) also yields (3.37)(LwR,wR)=o(Rq) as R→0. Then (3.36) will immediately imply that δR=O(Rq-2)—which is (3.30)—and will thus prove the first assertion of Theorem 3.3. To prove our claim, we let (δ,u) be any solution of (3.33) and write u=v+w, v∈N, w∈W; then (δ,v,w) satisfies the system (3.34). The second of these equations is Lw-δw=-Π2P(v+w) or, putting Hδ=-((L-δI)|W)-1, (3.38)w=HδΠ2P(v+w). As P is C1 near u=0 and P'(0)=0, a standard application of the Implicit Function Theorem guarantees the existence of neighborhoods 𝒰 of (0,0) in ℝ×N and 𝒲 of 0 in W such that, for each fixed (δ,v) in 𝒰, there exists a unique solution w=w(δ,v)∈𝒲 of (3.38). Moreover, w depends in a C1 fashion upon δ and v and (3.39)∥w(δ,v)∥=o(∥v∥) as v→0, v∈N, uniformly with respect to δ for δ in bounded intervals of ℝ. Our point is that, using again the supplementary assumption (3.6), (3.39) can be improved (see [14]) to (3.40)∥w(δ,v)∥=O(∥v∥q-1) as v→0, v∈N, uniformly for δ near 0. Now to prove the claim (3.37), first observe that L|W:W→W is a bounded linear operator, so that ∥Lw∥≤C∥w∥ for some C>0 and for all w∈W. Thus, |(Lw,w)|≤C∥w∥2, and it follows by (3.40) that (3.41)(Lw(δ,v),w(δ,v))=O(∥v∥2(q-1)) as v→0. Returning to the solutions (λ0+δR,uR)∈ℱ, and writing as above uR=vR+wR, we can suppose—diminishing R0 if necessary—that (δR,vR,wR)∈𝒰×𝒲 for all R: 0<R<R0. This implies by uniqueness that wR=w(δR,vR) for all R: 0<R<R0. The estimate (3.40) thus yields in particular that ∥wR∥=O(∥vR∥q-1) as R→0, and in turn (since ∥vR∥≤∥uR∥=R) (3.41) yields that (LwR,wR)=O(R2(q-1)) as R→0. Since 2(q-1)>q (because q>2), this implies (3.37). In order to improve the rudimentary estimate (3.30), one has to look more closely at the term (P(uR),uR) in (3.36). Indeed as shown in [14], under the stated assumptions on P we also have (3.42)(P(uR),uR)=(P(vR),vR)+o(Rq) as R→0. Using (3.37) and (3.42) in (3.36), we have therefore (3.43)δRR2=(P(vR),vR)+o(Rq) as R→0. To conclude the proof of Theorem 3.3, we introduce as in [14] constants kλ0 and Kλ0 via the formulae (3.44)kλ0≡inf0<∥v∥<R0,v∈N(P(v),v)/∥v∥q, Kλ0≡sup0<∥v∥<R0,v∈N(P(v),v)/∥v∥q. These yield the inequalities (3.45)kλ0∥v∥q≤(P(v),v)≤Kλ0∥v∥q (v∈N, ∥v∥<R0). We know that as v→0, w(δ,v)=o(∥v∥) and so ∥v+w(δ,v)∥=∥v∥+o(∥v∥). It follows that as R→0, ∥vR∥=R+o(R) for the solutions (λ0+δR,vR+wR)∈ℱ; using this in (3.45), we conclude that (3.46)kλ0Rq+o(Rq)≤(P(vR),vR)≤Kλ0Rq+o(Rq) (R→0). Replacing this in (3.43), we obtain the inequalities (3.47)kλ0Rq-2+o(Rq-2)≤λR-λ0≤Kλ0Rq-2+o(Rq-2). Note that these have been derived using merely the assumption (3.6), which implies that |(P(u),u)|≤M∥u∥q for some constant M in a neighborhood of u=0 and thus guarantees that kλ0, Kλ0 are finite.
Suppose now that P satisfies the additional assumption (3.8), which we report here for the reader's convenience: (3.48)k(A0(u),u)β(∥u∥2)α-β≤(P(u),u)≤K(A0(u),u)γ(∥u∥2)α-γ. As A0v=λ0v for v∈N≡N(A0-λ0I), we have (A0(v),v)=λ0∥v∥2 for such v, and therefore (3.48) yields (3.49)(P(v),v)≤K(λ0)γ∥v∥2γ(∥v∥2)α-γ=K(λ0)γ∥v∥q, v∈N. A similar inequality, based on the left-hand side of (3.48), provides a lower bound to (P(v),v) for v∈N. It follows by the definitions (3.44) of kλ0, Kλ0 that (3.50)kλ0≥k(λ0)β, Kλ0≤K(λ0)γ. Using these in (3.47) yields the desired inequalities (3.31). Example 3.4. Let us now reconsider Example 1.3, and take in particular the basic example of a nonlinearity satisfying (HF0) and (HF1), namely, (3.51)f(x,s)=|s|q-2s (2<q<2*). In this case, we see from (1.19) that (3.52)(P(u),u)=∫Ω|u|qdx=∥u∥qq. The following inequality for functions of H01(Ω) permits us to estimate (P(u),u). Proposition 3.5. Let Ω be a bounded open set in ℝN (N>2), let 2* be defined by 1/2*=1/2-1/N, and let q be such that 2≤q≤2*. Then (3.53)C∥u∥2q≤∥u∥qq≤D∥u∥2q-(q-2)N/2∥∇u∥2(q-2)N/2, for all u∈H01(Ω), with (3.54)C=|Ω|-(q-2)/2, D=S(2,N)(q-2)N/2. Here |Ω| stands for the (Lebesgue) measure of Ω in ℝN and S(2,N) for the best constant of the Sobolev embedding of H01(Ω) into L2*(Ω): (3.55)S(2,N)=supu∈W01,2(Ω)∥u∥2*/∥∇u∥2. Proof. The proof of the left-hand side of (3.53) is very simple and amounts to verifying the inequality (3.56)∫Ω|v|qdx≥|Ω|-(q-2)/2(∫Ωv2dx)q/2, which holds true for any q≥2 and for any measurable function v on Ω. To see this, first observe that (3.56) is trivial if q=2. While if q>2, then q/2>1, and so by Hölder's inequality, (3.57)∫Ωv2dx≤(∫Ω|v|qdx)2/q(∫Ωdx)(q-2)/q=|Ω|(q-2)/q(∫Ω|v|qdx)2/q. It follows that (3.58)(∫Ωv2dx)q/2≤|Ω|(q-2)/2(∫Ω|v|qdx), which gives (3.56). The proof of the right-hand side of (3.53) requires more work and is based on an interpolation inequality which makes use of Hölder's and Sobolev's inequalities (see [41], e.g.). A detailed proof can be found in [34]. Consider the operators A0 and P in H≡H01(Ω) defined as in (1.19). If we put (3.59)β=α=q/2, γ=q/2-(q-2)N/4, then (3.53) can be written as (3.60)C(A0(u),u)β≤〈P(u),u〉≤D(A0(u),u)γ(∥u∥2)α-γ, and shows that P satisfies (3.8) with k=C, K=D and α, β, γ as shown in (3.59). It is straightforward to check (see, e.g., [21] or [11]) that A0 and P satisfy the remaining assumptions of Theorem 3.3. Therefore, we can use the inequality (3.31), which in the present case takes the form (3.61)C(λ0)q/2Rq-2+o(Rq-2)≤λR-λ0≤D(λ0)q/2-(q-2)N/4Rq-2+o(Rq-2). Putting μR=(λR)-1, we then have a corresponding family {(μR,uR)} of solutions of the original problem (1.15) such that, as R→0, (3.62)μ0μR[Cμ0-q/2Rq-2+o(Rq-2)]≤μ0-μR≤μ0μR[Dμ0-q/2+ϵRq-2+o(Rq-2)], where (3.63)ϵ≡(q-2)N/4>0. Since μR=μ0+o(1) anyway, this yields in turn (3.64)Cμ02μ0-q/2Rq-2+o(Rq-2)≤μ0-μR≤Dμ02μ0-q/2+ϵRq-2+o(Rq-2), as R→0, or, putting α=2-q/2=(4-q)/2, (3.65)Cμ0αRq-2+o(Rq-2)≤μ0-μR≤Dμ0α+ϵRq-2+o(Rq-2). We remark that (3.65) can be used for actual computation, in view of the expressions (3.54) of C and D: indeed S(2,N) is explicitly known for any N [50] and can be found, for instance, in [51, page 151]. Some work on numerical computation of NLEV for equations of the form (1.15) can be found, for instance, in [52].
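To see the rate Rq-2 emerge from an actual computation, one can discretize the one-dimensional analogue of Example 3.4 on Ω=(0,π), where -u''=μu under Dirichlet conditions has first eigenvalue μ0=1. The following Python sketch is again only an illustration under stated assumptions: the finite-difference discretization, the inverse-iteration-type fixed point (whose convergence we simply observe, not prove), and all parameter values are our own choices. For each small R it computes an approximate solution of -u''=μ(u+|u|q-2u) with H01 norm R and prints (μ0-μR)/Rq-2, which should be roughly constant.

```python
import numpy as np

# -u'' = mu (u + |u|^(q-2) u) on (0, pi), u(0) = u(pi) = 0, via central
# finite differences; mu0 = 1 is the first eigenvalue of -u'' = mu u.
N, q = 400, 3.0
h = np.pi / (N + 1)
L = (np.diag(np.full(N, 2.0))
     + np.diag(np.full(N - 1, -1.0), 1)
     + np.diag(np.full(N - 1, -1.0), -1)) / h**2   # discrete -d^2/dx^2
x = np.linspace(h, np.pi - h, N)
mu0 = 1.0

def solve(R, iters=200):
    u = np.sin(x)                        # shape of the first eigenfunction
    u *= R / np.sqrt(h * u @ (L @ u))    # set the discrete H^1_0 norm to R
    for _ in range(iters):               # inverse-iteration-type fixed point
        g = u + np.abs(u) ** (q - 2) * u
        w = np.linalg.solve(L, g)
        u = R * w / np.sqrt(h * w @ (L @ w))
    g = u + np.abs(u) ** (q - 2) * u
    return (u @ (L @ u)) / (u @ g)       # mu_R, from testing the equation with u

for R in (0.4, 0.2, 0.1):
    muR = solve(R)
    print(R, mu0 - muR, (mu0 - muR) / R ** (q - 2))
```

For q=3 the printed ratio should settle near (4/3)(2/π)3/2≈0.68: by (3.43)-(3.44), the leading constant is (P(v),v)/∥v∥q evaluated along the normalized first eigenfunction v=(2/π)1/2 sin x, and ∫0π((2/π)1/2 sin x)3dx=(4/3)(2/π)3/2. (The one-dimensional setting lies outside the hypothesis N>2 of Proposition 3.5, but the abstract Theorem 3.3 still applies, since for N=1 the growth condition (HF1) allows any q<∞.)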
--- *Source: 102489-2012-11-13.xml*
# The Application of Artificial Neural Networks and Logistic Regression in the Evaluation of Risk for Dry Eye after Vitrectomy

**Authors:** Wan-Ju Yang; Li Wu; Zhong-Ming Mei; Yi Xiang

**Journal:** Journal of Ophthalmology (2020)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2020/1024926

---

## Abstract

Supervised machine-learning (ML) models were employed to predict the occurrence of dry eye disease (DED) after vitrectomy in this study. The clinical data of 217 patients receiving vitrectomy from April 2017 to July 2018 were used as the training dataset; the clinical data of 33 patients receiving vitrectomy from August 2018 to September 2018 were collected as the validation dataset. The input features for ML training were selected based on the Delphi method and univariate logistic regression (LR). LR and artificial neural network (ANN) models were trained and subsequently used to predict the occurrence of DED in patients who underwent vitrectomy for the first time during the period. The area under the receiver operating characteristic curve (AUC-ROC) was used to evaluate the predictive accuracy of the ML models. The AUCs with use of the LR and ANN models were 0.741 and 0.786, respectively, suggesting satisfactory performance in predicting the occurrence of DED. When the two models were compared in terms of predictive power, the fitting effect of the ANN model was slightly superior to that of the LR model. In conclusion, both LR and ANN models may be used to accurately predict the occurrence of DED after vitrectomy.

---

## Body

## 1. Introduction

Vitrectomy, an ocular surgery performed to partially or completely remove the vitreous, is widely used to treat various ocular conditions, such as cloudy vitreous, vitreous haemorrhage, retinal detachment, and proliferative diabetic retinopathy [1–3]. Most vitrectomies are performed to facilitate surgery to address one of a variety of retinal conditions [3]. Some vitrectomies are conducted for diagnostic purposes. With advances in the instrumentation available, vitrectomy has become a well-established procedure. Serious vitrectomy-associated complications are very rare [1, 3]; however, like other types of ocular surgeries, vitrectomy may traumatize the conjunctival tissues, often resulting in the development of secondary dry eye disease (DED) [4–6]. Using the demographic and clinical features of patients to predict risk for vitrectomy-related DED will facilitate decision-making in the management of vitrectomy patients and improve the relationship between doctors and patients. To the best of our knowledge, no previous study has investigated the prediction of risk for secondary DED in patients scheduled to undergo vitrectomy.

In recent years, machine learning has been widely applied to solve real-life problems, including issues related to healthcare [7–9]. In supervised machine learning, the algorithm learns a target function from labelled training data. The outcome value of a case in unlabeled new data can then be calculated based on the target function. The two learning tasks addressed by supervised learning are classification and regression. Although several techniques have been developed for supervised learning, those used most widely in healthcare and medicine are logistic regression (LR) and artificial neural networks (ANNs) [9, 10].

LR is a machine-learning technique borrowed from statistics.
The logistic regression model, based on a logistic function, is used to express the relationship between multiple input features (independent variables) and a categorical dependent variable (outcome variable) and to predict the probability of a given outcome [8].

ANN is a nonlinear adaptive dynamic system that simulates biological nerve structure and consists of many processing units. It has become an important tool for predictive data applications [8]. In this study, we used logistic regression and an ANN to construct models for predicting the risk of secondary dry eye after vitrectomy. We evaluated the performance of these clinical prediction models in assessing this risk and used them to help elucidate the mechanism of secondary dry eye after vitrectomy.

## 2. Materials and Methods

### 2.1. Patients

This study was approved by the Ethics Committee of Tongji Medical College of Huazhong University of Science and Technology. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Written informed consent was obtained from all individual participants included in the study.

We retrospectively reviewed the data of patients who underwent vitrectomy in the Ophthalmology Department of our hospital during the period from January 1, 2014, to July 31, 2018; the data from these patients were used to train the supervised ML models. We also prospectively studied the patients who underwent vitrectomy during the period from January 1, 2018, to September 1, 2018. The datasets from these patients were used to validate the ML models.

The inclusion criteria for enrolment for training and validation of the ML models were as follows: (1) complete clinical data and clear outcomes; (2) age ≥18 years and ability to articulate one’s feelings; (3) initial presentation at our institution; (4) history of complete vitrectomy; and (5) targeted diagnosis and treatment for the initial medical concern.

The exclusion criteria included the following: (1) a voluntary request from the patient to terminate treatment during the perioperative period, followed by early discharge; (2) previous diagnosis with xerophthalmia (with or without treatment); (3) previous history of complications such as acute conjunctivitis, glaucoma, keratitis, ocular trauma, dacryocystitis, and systemic lupus erythematosus; (4) history of contact lens wear; (5) history of laser or other eye operations; (6) history of disease requiring long-term use of atropine, neostigmine, artificial tears, or other drugs that affect tear film stability; and (7) refusal to cooperate with the necessary examinations.

### 2.2. Diagnosis of Dry Eye Disease

DED was diagnosed according to the guidelines provided in the TFOS DEWS II diagnostic methodology report [11]. These were as follows: (1) screening questionnaire scored >5 or OSDI >13, accompanied by noninvasive break-up time (BUT) ≤10 s; (2) osmolarity >308 mOsm/L or interocular difference >8 mOsm/L; and (3) ocular surface staining >5 corneal spots or >9 conjunctival spots, or lid margin staining ≥2 mm in length and ≥25% in width.

### 2.3. Feature Selection for ML

Four representative ophthalmologists screened randomized patients with the Delphi method. Each patient was screened twice.
The potentially relevant factors that were treated as categorical variables were gender, age, history of hypertension, history of diabetes mellitus, history of smoking, indoor work, occupation, daily exposure to computer or mobile phone screens, and driving conditions (driving time per day).

For the preoperative Schirmer I test (SIT), a 5 mm filter paper strip was placed one-third of the way along the lower conjunctival sac from the medial canthus, without ocular surface anesthesia. After the patient had gently closed his/her eyes for 5 minutes, the filter paper was taken out and the length of the wet filter paper was measured from the fold. To quantify preoperative BUT, sodium fluorescein solution was dripped into the conjunctival sac. After the patient had blinked several times, he/she was asked to look straight ahead. The patient was evaluated under wide-angle cobalt blue light from the slit lamp. The time from the last blink to the appearance of the first black spot on the cornea was taken as the tear film rupture time. After repeated measurements, the average value was obtained. For preoperative corneal fluorescein staining, fluorescein solution was dripped into the conjunctival sac of the eye scheduled for surgery, which was then observed under the cobalt blue light from the slit lamp. The presence of any corneal epithelial defect was recorded. The range of corneal staining was scored as follows: no corneal epithelial staining, 0; dispersed fluorescence throughout the cornea, 1; slightly dense corneal staining, 2; and dense or flaky corneal staining, 3. Intraocular pressure (IP) was measured with a noncontact tonometer, with the range of normal values considered to be 10–21 mmHg. Corneal central thickness (CCT) of the operated eye was determined with an Orbscan II anterior segment analyzer.

Use of the Delphi method for analysis dictated exclusion of the following factors: history of hyperlipidemia, drinking history, educational background, correct reading posture, body mass index (BMI), duration of disease, proximity of contaminated buildings, long-term exposure to air-conditioning, and daily use of a mobile phone postoperatively.

### 2.4. Machine-Learning Construction and Testing

#### 2.4.1. LR Model

Univariate analysis was performed to determine the regression coefficient for each potential influencing factor. The variables revealed to have statistical significance after univariate analysis were input to train the multivariate logistic regression model to establish the prediction equation. Stepwise regression analysis was used to eliminate variables for modeling and to observe whether there were statistical differences between variables in the goodness of fit. The Wald χ² test was performed to estimate the logistic regression equation and regression coefficients. The partial regression coefficient (B), standard error (S.E.), Wald statistic, and p value were obtained for the corresponding variables, and the multivariate logistic regression equation was constructed.

#### 2.4.2. ANN Model

The ANN model was used to analyze the relationship between secondary DED, diagnosed 3 months after surgery, and its candidate predictors, and to identify potential risk factors for vitrectomy-associated DED. The model was implemented as a multilayer perceptron neural network. The numbers of hidden layers and network neurons were automatically determined by network optimization. First, factors thought to increase risk for dry eye secondary to vitrectomy were extracted as input layer vectors.
Second, we established a neural network model, which consisted of three parts: input and output layers on both sides and hidden layers in the middle. Each hidden layer comprised multiple neurons. Finally, forward and backward propagation networks were trained. For forward propagation, independent variables were input into the neural network from the input layer and then passed through several hidden layers; the prediction results were finally output to the output layer. For back propagation, an error backpropagation algorithm and the gradient descent optimization method were used to adjust the weights of each network layer. Error information was obtained by comparing the output information with the expected information. The chain rule was employed to obtain the error information for each step. The error was propagated back through each layer in turn, and the weight and bias of each layer were adjusted accordingly.

ANN training and validation were carried out using MATLAB 2012 software. The network type was feedforward backpropagation; the training function was trainlm; the learning function was learngdm; and the error performance function was mse. The tansig function was used to complete the transfer of each layer, and the number of training iterations was set at 1000.

Finally, the predictions obtained with the methods described above were used as test variables. Actual prognosis outcomes were used as state variables; 1 − specificity was used as the abscissa; and sensitivity was used as the ordinate. Then, the receiver operating characteristic (ROC) curve was drawn, and the area under the curve (AUC) and 95% CI were calculated. The binormal model was fitted according to the data; the corresponding parameters were estimated with the maximum likelihood method; and the smooth ROC curve was obtained. The ROC curve was used to test the ML models in order to further clarify the impact of each factor on the outcome.

### 2.5. Statistical Analysis

Data analyses were conducted using the SPSS 21.0 software package, and differences were considered statistically significant when p < 0.05. The Kolmogorov–Smirnov test was used for the measurement data, and measurement data conforming to the normal distribution were expressed as mean (±SD/SEM). The independent-sample t-test was used for between-group comparisons. Single-factor analysis of variance was used for multiple-group comparisons. Medians (M) and quartiles (Q25 and Q75) were used for measurement data that did not conform to the normal distribution.
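To make the model-construction pipeline of Sections 2.4 and 2.5 concrete for readers without MATLAB, here is a minimal Python sketch of a comparable multilayer perceptron, under stated assumptions: scikit-learn's `MLPClassifier` with tanh activation stands in for the tansig transfer function, and since Levenberg–Marquardt (trainlm) is not available in scikit-learn, the lbfgs solver is used in its place; the data arrays are placeholders, not the study's data.

```python
# Minimal sketch of an MLP analogous to the paper's setup. Assumptions:
# tanh ~ tansig; lbfgs replaces the unavailable Levenberg-Marquardt trainlm;
# X and y are random placeholders, not the study's clinical data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(217, 9))        # 9 candidate predictors (illustrative)
y = rng.integers(0, 2, size=217)     # 1 = dry eye after vitrectomy

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(8,),  # layout fixed by hand here;
                    activation="tanh",        # the paper tunes it automatically
                    solver="lbfgs",
                    max_iter=1000)            # cf. "training times ... 1000"
mlp.fit(X_train, y_train)
print("held-out accuracy:", mlp.score(X_test, y_test))
```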
## 3. Results

### 3.1. Patients and Datasets for Training and Validation

After screening, 217 cases of vitrectomy that satisfied the study’s inclusion and exclusion criteria were ultimately obtained, including 57 cases of giant retinal detachment, 42 cases of cataract with retinal detachment, 38 cases of proliferative diabetic retinopathy, 33 cases of idiopathic macular hole, 30 cases of exudative retinal detachment, and 17 cases of acute retinal necrosis syndrome. The average hospitalization time was 7.26 days. On the 10th day, 36 patients were diagnosed with secondary DED.

The validation dataset included clinical data for 33 cases of vitrectomy: 21 cases of giant retinal detachment, 9 cases of cataract complicated with retinal detachment, and 3 cases of endophthalmitis. The average hospitalization time was 8.76 ± 1.28 days. Five cases had vitrectomy-associated DED.

### 3.2. LR Analysis for Predicting Risk for Vitrectomy-Associated DED

Univariate analysis was performed with the chi-square test and the independent-sample t-test. The factors correlated with secondary dry eye after surgery were gender (male), age, history of diabetes mellitus, absence of a smoking history, smoking more than 10 cigarettes per day, indoor work, daily exposure to computer and mobile phone screens preoperatively, preoperative BUT, and preoperative CCT (P < 0.05; Table 1).

**Table 1.** Univariate LR analysis of dry eye disease after vitrectomy.
| Groups | Without dry eye (n = 181) | With dry eye (n = 36) | p value |
| --- | --- | --- | --- |
| Male, n (%) | 119 (65.75) | 16 (44.44) | 0.020∗ |
| Age (years) | 46 (29, 70) | 54 (38, 72) | 0.018∗ |
| Hypertension, n (%) | 47 (25.97) | 9 (25.00) | 0.417 |
| Diabetes, n (%) | 10 (5.52) | 15 (30.56) | 0.009∗ |
| Smoking: none, n (%) | 122 (67.40) | 19 (52.78) | 0.040∗ |
| Smoking: <10, n (%) | 32 (17.68) | 7 (19.44) | 0.203 |
| Smoking: ≥10, n (%) | 27 (14.92) | 10 (27.78) | 0.042∗ |
| Indoor work, n (%) | 113 (62.43) | 30 (83.33) | 0.018∗ |
| Occupation: farmer, n (%) | 9 (4.97) | 2 (5.56) | 0.415 |
| Occupation: worker, n (%) | 23 (12.71) | 4 (11.11) | 0.338 |
| Occupation: white collar, n (%) | 58 (32.04) | 11 (30.56) | 0.316 |
| Occupation: student, n (%) | 20 (11.05) | 4 (11.11) | 0.556 |
| Occupation: others, n (%) | 40 (22.10) | 9 (25.00) | 0.289 |
| Occupation: unemployed, n (%) | 31 (17.13) | 6 (16.67) | 0.334 |
| Preoperative daily use of computer and smartphone (h) | 6 (2, 11) | 10 (5, 14) | 0.018∗ |
| Preoperative daily driving time: none, n (%) | 79 (43.65) | 15 (41.67) | 0.258 |
| Preoperative daily driving time: <1 h, n (%) | 41 (22.65) | 9 (25.00) | 0.189 |
| Preoperative daily driving time: ≥1 h, n (%) | 61 (33.70) | 12 (33.33) | 0.486 |
| Preoperative SIT (mm) | 13.03 ± 4.70 | 12.85 ± 3.28 | 0.181 |
| Preoperative BUT (s) | 11.25 ± 2.24 | 8.72 ± 1.29 | 0.039∗ |
| Preoperative CFS (score) | 2.85 ± 0.77 | 2.48 ± 0.65 | 0.248 |
| Preoperative IP (mmHg) | 9.85 ± 1.74 | 11.05 ± 2.12 | 0.088 |
| Preoperative CCT (mm) | 0.525 ± 0.021 | 0.637 ± 0.025 | 0.017∗ |
| Intraoperative light intensity: low, n (%) | 43 (23.76) | 10 (27.78) | 0.206 |
| Intraoperative light intensity: moderate, n (%) | 111 (61.33) | 21 (58.33) | 0.184 |
| Intraoperative light intensity: high, n (%) | 27 (14.92) | 5 (13.89) | 0.320 |
| Surgical duration (min) | 56.35 ± 12.26 | 77.89 ± 10.78 | 0.021∗ |
| Corneal protection during operation, n (%) | 164 (90.60) | 22 (70.98) | 0.026∗ |

∗P < 0.05; SIT, Schirmer I test; BUT, break-up time; CFS, corneal fluorescein staining; IP, intraocular pressure; CCT, corneal central thickness.

The dependent outcome variable, presence or absence of vitrectomy-associated DED, was binary. Variables associated with significant differences in univariate analysis were used as input features to train the logistic regression model by stepwise regression analysis. Significant differences between variables in goodness of fit were observed. The Wald χ² test was performed to estimate the logistic regression equation and the regression coefficients. The final independent influencing factors (all were risk factors), in order from most to least important, were age, history of diabetes mellitus, smoking more than 10 cigarettes per day, daily exposure to electronic screens preoperatively, preoperative BUT, and duration of surgery (P < 0.05; Table 2). The goodness-of-fit test of the multivariate logistic regression equation showed χ² = 8.083, DF = 7, and P = 0.374, suggesting satisfactory goodness of fit. The equation was as follows:
$$P=\frac{1}{1+0.240^{\,-1.612+0.753X_2+0.623X_3+1.130X_4+1.112X_6+0.286X_7+0.889X_9}},\tag{1}$$
where $X_2$ is age; $X_3$ is history of diabetes mellitus; $X_4$ is smoking more than 10 cigarettes per day; $X_6$ is daily exposure to computer or mobile phone screens; $X_7$ is preoperative BUT; and $X_9$ is duration of surgery.
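For illustration, the following is a direct Python transcription of equation (1) as printed (note the unusual base 0.240, which is reproduced as given; a textbook logistic model would use e). The coding and units of the predictors are not fully specified in the paper, so the example inputs are purely illustrative.

```python
# Direct transcription of equation (1) as printed. The base 0.240 is kept
# exactly as in the paper; the predictor coding (x2..x9) is not fully
# specified there, so the example inputs below are illustrative only.
def dry_eye_probability(x2, x3, x4, x6, x7, x9):
    z = -1.612 + 0.753*x2 + 0.623*x3 + 1.130*x4 + 1.112*x6 + 0.286*x7 + 0.889*x9
    return 1.0 / (1.0 + 0.240**z)

print(dry_eye_probability(x2=1, x3=0, x4=1, x6=1, x7=0, x9=1))
```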
**Table 2.** Multivariate logistic regression analysis of dry eye disease after vitrectomy.

| Groups | B | S.E. | Wald χ² | OR | 95% CI | p value |
| --- | --- | --- | --- | --- | --- | --- |
| Male | −0.202 | 0.009 | 1.871 | 0.595 | 0.289–1.193 | 0.146 |
| Age | 0.753 | 0.081 | 3.659 | 1.254 | 1.081–1.708 | 0.039∗ |
| Diabetes | 0.623 | 0.122 | 4.481 | 2.028 | 1.481–3.289 | 0.018∗ |
| Smoking: no | — | — | 1.157 | — | — | 0.233 |
| Smoking: <10 | −0.890 | 0.198 | 1.002 | 0.894 | 0.449–1.215 | 0.175 |
| Smoking: ≥10 | 1.130 | 0.074 | 3.589 | 1.436 | 1.084–2.158 | 0.041∗ |
| Indoor work | 0.440 | 0.047 | 1.226 | 1.450 | 0.801–2.130 | 0.137 |
| Preoperative daily use of computer and smartphone | 1.122 | 0.320 | 5.701 | 2.156 | 1.707–3.008 | 0.019∗ |
| Preoperative BUT | 0.286 | 0.049 | 3.791 | 1.430 | 1.119–1.951 | 0.036∗ |
| Preoperative CCT | 0.225 | 0.065 | 1.844 | 1.289 | 0.887–1.578 | 0.202 |
| Surgical duration | 0.889 | 0.120 | 4.250 | 1.980 | 1.336–3.265 | 0.012∗ |
| Constant | 0.805 | 0.104 | 5.884 | 1.612 | — | 0.020∗ |

∗P < 0.05.

### 3.3. Predictive Accuracy of the LR Model

We substituted specific values for the independent factors from the validation dataset into the formula presented above. We compared the outcomes predicted by this formula with the actual outcomes and then evaluated the predictive accuracy of the LR model. The performance of the LR model was tested by the ROC curve and showed AUC = 0.741, 95% CI = 0.611–0.870, and P < 0.05. These findings suggest that the prediction model was effective in predicting the occurrence of postoperative DED secondary to vitrectomy (Figure 1).

Figure 1: ROC curve for evaluating the performance of the logistic regression model in predicting the occurrence of dry eye disease.

### 3.4. ANN Model and Its Performance in Predicting Vitrectomy-Associated DED

The multilayer perceptron neural network was then trained. The numbers of layers and neurons in the hidden layers were determined automatically by network optimization. Potential influencing factors related to vitrectomy were used as input variables for the network model, with occurrence of vitrectomy-associated DED as the output variable. As shown above, 217 subjects and 33 subjects were included in the training and test datasets, respectively. Analysis of the artificial neural network identified the following influencing factors (independent variables) correlated with DED secondary to vitrectomy (in order from most to least important): age (100%), daily exposure to computer or mobile phone screens preoperatively (76.93%), preoperative BUT (69.18%), preoperative CCT (65.24%), and daily smoking of more than 10 cigarettes (62.69%). The ANN model is summarized in Table 3.

**Table 3.** Summary of the artificial neural network analysis.

| Dataset | Metric | Value |
| --- | --- | --- |
| Training | SSE | 51.849 |
| Training | Percent error prediction | 15.62 |
| Training | Stopping rule | Error calculation based on test sample |
| Training | Training time (s) | 78 |
| Validating | SSE | 35.817 |
| Validating | Percent error prediction | 13.36 |

The trained ANN model was used to test the validation dataset. The classification of the ANN model is shown in Table 4 via comparison of predicted vs. actual outcomes. As shown in Figure 2, the performance of the ANN model in predicting the occurrence of DED was tested by the ROC curve, with the following parameters: AUC = 0.786, 95% CI = 0.667–0.906, and P < 0.05. The ROC curve showed that the prediction model was effective for the prediction of secondary DED after vitrectomy.

**Table 4.** Classification by the artificial neural network model.

| Dataset | Observed | Predicted: without dry eye | Predicted: with dry eye |
| --- | --- | --- | --- |
| Training | Without dry eye | 161 | 20 |
| Training | With dry eye | 20 | 16 |
| Validation | Without dry eye | 26 | 2 |
| Validation | With dry eye | 2 | 3 |

Figure 2: ROC curve for evaluating the performance of the ANN model in predicting the occurrence of dry eye disease.
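As an illustration of the ROC-based evaluation applied to both models, here is a minimal Python sketch using scikit-learn; the label and score vectors are placeholders, not the study's data.

```python
# Minimal sketch of the ROC/AUC evaluation described above
# (y_true and y_score are illustrative placeholders).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1])   # 1 = dry eye observed
y_score = np.array([0.1, 0.3, 0.2, 0.4, 0.8, 0.3, 0.6, 0.9, 0.2, 0.5])

fpr, tpr, thresholds = roc_curve(y_true, y_score)    # 1 - specificity vs sensitivity
print("AUC =", roc_auc_score(y_true, y_score))
```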
### 3.5. Comparison of the LR and ANN Models in Terms of Predictive Accuracy

The ROC curves for the two models tested showed that the predictive accuracy of the ANN model (AUC = 0.786) was slightly better than that of the LR model (AUC = 0.741). Table 5 shows the detailed parameters of the ROC curves for both ML models.

**Table 5.** Comparison of predictive accuracy between the logistic regression model and the artificial neural network model.

| Groups | AUC | 95% CI (minimum–maximum) | P value | Sensitivity at critical point (%) | Specificity at critical point (%) |
| --- | --- | --- | --- | --- | --- |
| Logistic regression | 0.741 | 0.611–0.870 | 0.006∗ | 81.9 | 62.0 |
| ANN | 0.786 | 0.667–0.906 | 0.001∗ | 83.1 | 72.4 |

∗P < 0.05.
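Table 5 reports sensitivity and specificity at a critical point on each ROC curve. The paper does not state how this point was chosen; a common choice is the threshold maximizing Youden's J (sensitivity + specificity − 1), sketched below under that assumption with placeholder data.

```python
# Picking an operating point on the ROC curve by maximizing Youden's J.
# Assumption: the paper's "critical points" are chosen this way;
# y_true and y_score are illustrative placeholders.
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.3, 0.2, 0.4, 0.8, 0.3, 0.6, 0.9, 0.2, 0.5])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr                                   # Youden's J at each threshold
best = np.argmax(j)
print("threshold:", thresholds[best],
      "sensitivity:", tpr[best],
      "specificity:", 1 - fpr[best])
```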
## 4. Discussion

This is the first study to use supervised ML models to predict the risk of DED after vitrectomy. Both LR and ANN models were trained with the labelled data retrieved from previous cases of vitrectomy. Both models performed similarly in predicting the occurrence of DED, but the ANN model performed slightly better than the LR model.

The results of this study demonstrate that the LR model and the ANN model had similar predictive accuracy. The AUCs of the ROC curves were 0.741 and 0.786, respectively, suggesting that the performance of the ANN model is slightly better than that of the LR model. Notably, both models identified age as the top risk factor for vitrectomy-associated DED. In addition, both the LR and ANN models identified four common independent risk factors for DED after vitrectomy: age, smoking more than 10 cigarettes per day, daily exposure to computer or mobile phone screens preoperatively, and preoperative BUT. History of diabetes mellitus and surgical duration were identified as risk factors only by the LR model, while preoperative CCT was identified as a risk factor for vitrectomy-associated DED only by the ANN model. LR performs better for qualitative and semiquantitative (multiclassification) independent variables, while ANNs accept either categorical or continuous variables as input. These facts may explain the differences between the results provided by the two models, and they indicate that the ANN model has superior predictive adaptability for use in clinical research.

Few studies have investigated the influencing factors that affect the occurrence of DED after vitrectomy. Our results are consistent with previous reports that age and surgical duration are risk factors for vitrectomy-related DED [12]. In addition, Banaee et al. (2008) reported that scleral depression significantly increased risk for DED after vitrectomy [5].

The tear film, which comprises lipid, aqueous, and mucin layers, nourishes the conjunctival epithelium and cornea, supplies lubrication to facilitate opening and closing of the eyelids, and provides a high-quality optical surface for the cornea [13, 14]. The pathogenesis of DED includes inflammation, apoptosis of lacrimal gland cells and conjunctival epithelial cells, and androgen imbalance [13, 14]. The results of this study helped us to identify the mechanisms underlying DED after vitrectomy. First, the corneal epithelium and conjunctiva may be damaged by vitrectomy. After the operation, numerous factors may disturb the ocular surface, including scleral sutures, conjunctival sutures, incisions, and conjunctival edema. Corneal curvature may be affected, resulting in a decrease in tear film stability [15]. Importantly, it has been reported that basic fibroblast growth factor (alone or in combination with cytochrome c peroxidase) accelerates the healing of surgically damaged corneal epithelium [16, 17].
We therefore sought to investigate the benefit of reducing the occurrence of DED in patients who had undergone vitrectomy. Second, vitrectomy-associated congestion and edema of the corneal tissue may affect the adhesion of mucin, allowing for the infiltration of inflammatory factors. This process can cause lacrimal gland damage, which exacerbates any corneal damage [18]. Prolonged surgical time can thus destroy the stability of the tear film and lead to secondary dry eye after vitrectomy. Third, corneal goblet cells are more sensitive and vulnerable to external environmental factors, such as hyperglycemia (in diabetic patients). Metabolic and nutritional disorders shorten the tear film rupture time and destroy the normal corneal morphology, thereby reducing the secretion of mucin. Finally, the eye drops often prescribed for patients after vitrectomy contain preservatives, which can affect corneal epithelial integrity, damage repair functions, and reduce the regularity of the corneal surface. All of these factors decrease tear film stability [19].

The analysis of clinical research data is challenging because of the complexity involved: on one hand, the data need to meet the constraints of analytical models; on the other hand, the characteristics of the data need to be retained as far as possible in order to reflect the clinical situation [20]. It is therefore of great clinical significance to use the limited clinical data available for patients who have undergone vitrectomy for data analysis. Such a data-based approach will improve the data model for predicting secondary DED after vitrectomy and help physicians to identify risks early enough to communicate effectively with patients and to provide pertinent clinical interventions.

Logistic/Cox regression analysis is suitable for discriminating between two or more classified variables, obtaining approximate estimates of relative risk, and calculating their respective probabilities. Logistic/Cox regression analysis can be used to analyze most clinical data, but it lacks the flexibility and ease of use needed for processing multiclass clinical data. With the increasingly close integration of computer science and applied mathematics with clinical medicine, more and more analytical and computational tools have been applied to clinical research, and many of the problems encountered in clinical data analysis have been solved.

As a digital model that imitates the functional structure of a biological neural network, the ANN model offers large-scale nonlinear parallel processing and strong adaptability. The ANN model does not restrict the distribution of the data, allowing researchers to make full use of the information they contain. The ANN model also has strong fault tolerance, so it can be widely used in the fields of prediction and analysis [21], and it achieved a better fit than the LR model. The identification of risk factors for vitrectomy-associated DED and the accurate prediction of secondary DED after vitrectomy by ML models will be helpful for clinical decision-making, as well as for the management of patients who have undergone vitrectomy.

## 5. Conclusions

In conclusion, our study has shown that the LR and ANN models are similarly effective in predicting the occurrence of DED after vitrectomy. However, the ANN model better reflects the true relationship between the input variables and the outcome variable.

--- *Source: 1024926-2020-04-21.xml*
# Evaluation of the Inductive Coupling between Equivalent Emission Sources of Components

**Authors:** Moisés Ferber; Sanâa Zangui; Carlos Sartori; Christian Vollaire; Ronan Perrussel; Laurent Krähenbühl
**Journal:** International Journal of Antennas and Propagation (2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/102495

---

## Abstract

The electromagnetic interference between electronic systems or between their components influences the overall performance. It is thus important to model these interferences in order to optimize the position of the components of an electronic system. In this paper, a methodology to construct the equivalent model of magnetic field sources is proposed. It is based on the multipole expansion, and it represents the radiated emission of generic structures in a spherical reference frame. Experimental results for different kinds of sources are presented, illustrating the method.

---

## Body

## 1. Introduction

The development of semiconductor technology in the last decades has greatly increased the use of power electronics in various applications, such as computer power supplies, voltage converters, electronic ballasts, and variable-speed drives [1]. Recently, new applications of power electronics have also appeared in the vehicle industry, such as electric cars and airplanes. However, the commutation of the switches (rectifiers, SCRs and triacs, BJTs, MOSFETs, and IGBTs) generates high currents with high di/dt, and thus a wide band of unwanted electromagnetic interference (EMI) pollutes the electromagnetic environment [2].

Electromagnetic compatibility (EMC) is the engineering domain responsible for ensuring that systems, equipment, and devices can coexist satisfactorily in the same electromagnetic environment [3]. Electric cars, for instance, may encounter malfunctions in their electronic systems (ESP, ABS, ALS, etc.) if special care is not taken. The EMI between the cables of the power electronics and the cables carrying electronic signals, if they are too close to each other without proper shielding, may prevent the correct operation of certain systems [4].

There are few reliable methods to predict the EMC of a complex system in the design phase [5]; thus, in practice, EMC design is still carried out by trial and error [3], causing high development costs when the prototype is tested and malfunctions due to EMI appear.

To ensure the compatibility of cables, equipment, and systems at the design phase, EMC predictive tools must be improved [5]. To achieve this, frequency-domain simulations can be performed using equivalent models of the EMI sources. For instance, in power electronics, the frequency range analyzed can be restricted to 10 kHz–50 MHz, which comprises the common operating range of the semiconductor switches used in power converters and the frequency harmonics they produce.

The EMI is usually established in different ways, for instance, through near-field coupling between filter components [6–9] or coupling between wires [10].
Each coupling phenomenon is thus best modeled by a different mathematical model. The near-field coupling between filter components can be well modeled by a methodology based on the multipole expansion, which represents the radiated emission in a spherical reference system $(r,\theta,\varphi)$ [6–9], whereas the coupling between wires is usually well modeled by the PEEC method [10].

This paper presents a methodology to determine the first two coefficients of the multipole expansion ($Q_{10}$ and $Q_{20}$) of a generic magnetic field source, by a numerical or an experimental approach, depending on the complexity of the source. The numerical approach is rather limited to simple sources, whereas the experimental approach has no limitations on the geometrical complexity of the source.

The experimental approach uses an antenna consisting of four loops around the magnetic field source. The mutual coupling between the loops must be taken into account when modeling the source, in order to avoid a significant error, which can be up to 40%.

Finally, the methodology is validated by comparing the calculated and measured mutual inductance between a modeled power transformer and a well-known loop.

## 2. Theory of Multipole Expansion

The multipole expansion can be used to represent electromagnetic fields in 3D, assuming that the field is computed outside a sphere of a given radius that contains the equivalent source. Figure 1 shows the reference sphere considered [11].

Figure 1: Reference adopted in the field computation.

In the case of outgoing radiated emission sources, the multipole expansion allows the electric and magnetic fields to be expressed as [12]

$$E(r,\theta,\varphi)=\sum_{n=1}^{\infty}\sum_{m=-n}^{n}\left[Q_{nm}^{TE}\,F_{1nm}(r,\theta,\varphi)+Q_{nm}^{TM}\,F_{2nm}(r,\theta,\varphi)\right],$$
$$H(r,\theta,\varphi)=\frac{j}{\eta}\sum_{n=1}^{\infty}\sum_{m=-n}^{n}\left[Q_{nm}^{TM}\,F_{1nm}(r,\theta,\varphi)+Q_{nm}^{TE}\,F_{2nm}(r,\theta,\varphi)\right], \quad (1)$$

where:
(i) $\eta=\sqrt{\mu/\varepsilon}$ is the intrinsic impedance of the considered environment;
(ii) $Q_{nm}^{TE}$ and $Q_{nm}^{TM}$ are the magnetic and electric coefficients, respectively. The coefficients $Q_{nm}^{TE}$ describe the strength of the transverse-electric (TE) components of the radiated field, while the coefficients $Q_{nm}^{TM}$ describe the strength of the transverse-magnetic (TM) components. Each of them corresponds to the equivalent radiated source; thus, these coefficients are the parameters to be identified that characterize the equivalent model of the radiated field components;
(iii) $F_{1nm}$ and $F_{2nm}$ are the vector spherical harmonics, which are solutions of Maxwell's equations in free space, excluding the sphere that contains the sources;
(iv) $n$ is the degree and $m$ is the azimuthal order.

In our study, only the magnetic source in the near field is considered, that is, $Q_{nm}^{TM}=0$, and it is assumed that the electric field component is low compared with the magnetic field. Thus, the computation of the $Q_{nm}^{TE}$, written $Q_{nm}$ in (2), is carried out from the radial component $H_r$ in the near field [12, 13]:

$$H_r=-\frac{1}{4\pi}\sum_{n=1}^{+\infty}\sum_{m=-n}^{n}Q_{nm}\,\frac{\partial}{\partial r}\left(\frac{1}{r^{n+1}}\right)Y_{nm}(\theta,\varphi), \quad (2)$$

where the $Y_{nm}$ are the normalized spherical harmonics given by

$$Y_{nm}(\theta,\varphi)=\sqrt{\frac{(2n+1)(n-m)!}{4\pi(n+m)!}}\,P_{n}^{m}(\cos\theta)\,e^{jm\varphi}. \quad (3)$$

One of the main properties of the multipole expansion to be emphasized is the decrease of the terms of order $n$ with $r^{-(n+1)}$. This ensures a hierarchy between the orders of the decomposition: the larger the distance to the source, the fewer terms are required to reconstruct the field. Thus, the accuracy of the mutual inductance computation is related to the choice of the maximum order of the description, denoted $N_{max}$. It should be observed that there are $(2n+1)$ components for each order $n$; a source of order $N_{max}$ thus corresponds to $N_{max}(N_{max}+2)$ components, but owing to the hierarchy between orders, $N_{max}$ can be limited to 5, based on the present experience of the authors.
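As a numerical illustration of eq. (2) and the hierarchy property, the sketch below evaluates $H_r$ for a single, arbitrarily chosen dipole coefficient and exhibits the expected $r^{-(n+2)}=r^{-3}$ decay of the radial dipole field. Python with SciPy is an assumption of this example, not a tool used in the paper; note that SciPy's `sph_harm` takes the azimuth before the polar angle.

```python
import numpy as np
from scipy.special import sph_harm

def radial_field(q, r, theta, phi):
    """H_r from eq. (2); the minus sign cancels because
    d/dr (1/r^(n+1)) = -(n+1)/r^(n+2).
    q maps (n, m) -> Q_nm; theta is the polar angle, phi the azimuth."""
    h = 0j
    for (n, m), qnm in q.items():
        h += qnm * (n + 1) / (4 * np.pi * r**(n + 2)) * sph_harm(m, n, phi, theta)
    return h

# A pure z-directed dipole source: only Q_10 is nonzero (arbitrary value).
q = {(1, 0): 1e-3}
for r in (0.5, 1.0, 2.0):
    print(r, abs(radial_field(q, r, np.pi / 4, 0.0)))  # falls off as 1/r^3
```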
## 3. Multipole Identification

### 3.1. Numerical Approach

This approach consists in identifying the source using the software Flux2D, based on the finite element method. The software calculates the radial component of the magnetic induction on a measurement sphere $S_M$ that contains the source, as shown in Figure 2. The computation of the $Q_{nm}$ coefficients is achieved by integrating these components on $S_M$.

Figure 2: Source modeling in Flux2D.

The coefficients of the multipole expansion can be deduced from (2) via the following expression:

$$Q_{nm}=\frac{4\pi\,r_0^{\,n}}{n+1}\int_{0}^{2\pi}\!\!\int_{0}^{\pi}H_r(r_0,\theta,\varphi)\,Y_{nm}(\theta,\varphi)\,dS, \quad (4)$$

where $H_r$ corresponds to the radial component of the magnetic field on the sphere $S_M$ of radius $r_0$. This result follows from the orthogonality of the $Y_{nm}$ basis:

$$\iint_{S_M}Y_{nm}\,Y_{n'm'}\,dS=\begin{cases} r_0^2, & \text{if } (n,m)=(n',m'),\\ 0, & \text{otherwise.}\end{cases} \quad (5)$$

The order of the approximation is not limited for this identification method. However, the computational time increases with the order. Moreover, the discretization of the sphere surface must respect the Shannon sampling theorem in order to avoid spatial aliasing. For instance, with $n=1$, the $\theta$ and $\varphi$ axes require at least two points each, whereas for $n=2$ four points are required; for $n=N_{max}$, $2N_{max}$ points are necessary for each axis.

The numerical approach can be excessively time consuming or require too much memory if the modeled source is geometrically complex. The experimental approach is thus an alternative, and it is suitable for practically any source.
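A sketch of the identification step of eq. (4), assuming Python with NumPy/SciPy rather than Flux2D: $H_r$ is sampled on a $\theta$–$\varphi$ grid on $S_M$ and integrated against the spherical harmonic. The complex conjugate of $Y_{nm}$ is used so that the recovery also works for $m\neq 0$; for the $m=0$ case it coincides with eq. (4) as printed. The synthetic dipole and its coefficient value are illustrative.

```python
import numpy as np
from scipy.special import sph_harm

def identify_qnm(hr_fn, n, m, r0, pts=64):
    """Recover Q_nm from H_r sampled on the sphere of radius r0, per eq. (4);
    the surface element is dS = r0^2 sin(theta) dtheta dphi."""
    theta = np.linspace(0.0, np.pi, pts)        # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, pts)    # azimuth
    T, P = np.meshgrid(theta, phi, indexing="ij")
    integrand = hr_fn(r0, T, P) * np.conj(sph_harm(m, n, P, T)) * np.sin(T) * r0**2
    inner = np.trapz(integrand, phi, axis=1)    # integrate over phi first
    return 4.0 * np.pi * r0**n / (n + 1) * np.trapz(inner, theta)

# Synthetic check: the dipole field of eq. (2) with a known Q_10 is recovered.
Q10 = 2e-3
hr = lambda r, t, p: Q10 * 2 / (4 * np.pi * r**3) * sph_harm(0, 1, p, t)
print(identify_qnm(hr, 1, 0, r0=0.3))           # ~ (2e-3 + 0j)
```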
### 3.2. Experimental Approach

This approach identifies the source using an antenna and measurement equipment. Figure 3 shows the prototype antenna with its loop sensors: two loops for the dipole component $Q_{10}$, two loops for the quadrupole component $Q_{20}$, and the loop from the CISPR 16-1 standard. All of these loops were initially built only in the z-direction. The complete measurement setup is enclosed by a sphere of radius $r_M$ equal to 0.225 m. Short-circuited loops were chosen as sensors because of their flat response within the 9 kHz–30 MHz frequency range. Although short-circuited loops carry high currents and thus provide high sensitivity, the magnetic coupling between them imposes constraints on the measurement and calibration methodology.

Figure 3: Prototype antenna.

Based on the multipole expansion of the magnetic field, the relationship between the fluxes across the surfaces delimited by the sensor set and the $Q_{nm}$ components of the expansion can be obtained directly [14, 15]. The quasi-static approximation was adopted; for the maximum frequency of 30 MHz it is valid for $r_M \le 1.7$ m. In our case, with the expansion limited to the second order and to the z-direction ($m=0$), we have [15]
$$Q_{10}=\frac{10^{8}\,r_{M}^{3}}{2\pi}\bigl(\varphi_{10}^{1}+\varphi_{10}^{2}\bigr),\qquad Q_{20}=\frac{6125\times10^{4}\,r_{M}^{2}}{3\pi\cdot 21}\bigl(\varphi_{20}^{1}-\varphi_{20}^{2}\bigr),\tag{6}$$
where $\varphi_{nm}^{i}$ is the flux through the corresponding loop of the antenna in Figure 3.

For the loop configuration of the same figure, the fluxes through the sensors due to a multipole source are determined from the current measured in each loop, after accounting for the magnetic coupling between the loops and applying the corresponding antenna factors ($AF_{nm}$), for $i=1,2$:
$$\varphi_{10\_i}=AF_{10}\,i_{10\_i},\qquad \varphi_{20\_i}=AF_{20}\,i_{20\_i}.\tag{7}$$

The correction of the magnetic coupling between the loop sensors, which can be considered a postprocessing step in the identification procedure, is treated as follows: the total flux linked by each loop is the sum of the flux produced by the multipole source (desired) and the fluxes produced by all the other antenna loops (undesired). The measured current in loop $n$, denoted $i_{MES}(n)$, obeys
$$i_{MES}(n)=i_{DUT}(n)-\sum_{\substack{k=1\\k\neq n}}^{5}\frac{j\omega M_{kn}\,i_{MES}(k)}{r_{n}+j\omega L_{n}},\tag{8}$$
where $r_n$ is the resistance and $L_n$ the self-inductance of loop $n$, $M_{kn}$ is the mutual inductance between loops $k$ and $n$, $\omega$ is the angular frequency, and $i_{DUT}(n)$ is the current in loop $n$ due to the multipole source only.

Considering the measured currents of all five loops, (8) can be written in matrix form and solved for $i_{DUT}(n)$:
$$[i_{DUT}]_{n\times 1}=[M]_{n\times n}\,[i_{MES}]_{n\times 1}.\tag{9}$$
The elements of $[M]_{n\times n}$ are unity on the diagonal and $j\omega M_{kn}/(r_{n}+j\omega L_{n})$ otherwise. The coefficients $Q_{10}$ and $Q_{20}$ can then be determined from (6) and (7) using the set of currents in (9). This procedure was validated numerically and experimentally, and the results are presented in the following section.
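A minimal sketch of this correction step follows (the loop resistances, self-inductances, mutual inductances, frequency, and measured currents are illustrative assumptions, not the prototype's calibration data):

```python
# Coupling correction of equations (8)-(9): recover the source-only loop
# currents i_DUT from the five measured currents i_MES.
import numpy as np

f = 200e3                        # frequency of interest (Hz), assumed
w = 2 * np.pi * f
r = np.full(5, 50e-3)            # loop resistances (ohm), assumed
L = np.full(5, 1e-6)             # loop self-inductances (H), assumed
M = 50e-9 * (np.ones((5, 5)) - np.eye(5))   # mutual inductances (H), assumed

# Matrix of equation (9): unity on the diagonal,
# j*w*M_kn / (r_n + j*w*L_n) elsewhere.
A = np.eye(5, dtype=complex)
for n in range(5):
    for k in range(5):
        if k != n:
            A[n, k] = 1j * w * M[k, n] / (r[n] + 1j * w * L[n])

i_mes = np.array([1.0, 0.8, 0.3, 0.25, 0.1], dtype=complex)  # measured (A), assumed
i_dut = A @ i_mes                # equation (9): source-only currents
print(np.round(i_dut, 4))
```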
## 4. Multipole Identification Results

A vector network analyzer (VNA) and large-bandwidth current probes were used to measure the current ratio of each sensor loop relative to the source, in dB. The frequency range of all experiments was 20 kHz to 10 MHz.

Three different magnetic field sources were studied: a dipole, a quadrupole, and a generic power transformer. For the first two sources the accuracy of the methodology can easily be verified against the analytical expressions
$$\text{Dipole: } Q_{10}=\pi r^{2} i,\quad Q_{20}=0;\qquad \text{Quadrupole: } Q_{10}=0,\quad Q_{20}=\pi r^{2} h_{0}\, i,\tag{10}$$
where $r$ is the radius of the loop and $h_0$ is the distance between the loops of the quadrupole source. Moreover, for these two sources it is only necessary to present the results for the upper (or lower) loop antennas, owing to the symmetry about the origin along the z-axis.

The accuracy of the coefficients $Q_{10}$ and $Q_{20}$ of the power transformer can be verified indirectly, by calculating [16] and measuring its mutual inductance with a known circular loop and then comparing the results. The VNA and the probes are again used for this measurement.
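The reference values of (10) are straightforward to evaluate; a quick check (assuming an excitation of 1 A rms and $r = h_0 = 5$ cm, as in Sections 4.1 and 4.2) gives the values that the measured entries in Table 1 below can be compared against:

```python
# Reference values of equation (10) for the test sources (assumed i = 1 A rms).
from math import pi

r, h0, i = 0.05, 0.05, 1.0
Q10_dipole = pi * r**2 * i            # ~7.85e-3 A*m^2  (7.85 m*Am^2)
Q20_quadrupole = pi * r**2 * h0 * i   # ~3.93e-4 A*m^3  (0.393 m*Am^3)
print(f"dipole Q10 = {Q10_dipole:.3e} A*m^2")
print(f"quadrupole Q20 = {Q20_quadrupole:.3e} A*m^3")
```

The theoretical dipole value of about 7.85 m·Am², for instance, sits roughly 7% above the 7.30 m·Am² measured in Table 1, consistent with the 7.6% error quoted there.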
### 4.1. Dipole

The measured current ratios (loop/source) for a dipole of radius 5 cm aligned with the z-axis are presented in Figures 4 and 5 for the loop sensors $Q_{10\_1}$ (loop 2) and $Q_{20\_1}$ (loop 1), respectively. Each figure contains six curves, of which the three upper ones correspond to the three lower ones after applying the postprocessing described previously.

Figure 4: Dipole current ratio (dB), sensor $Q_{10\_1}$ (loop 2).

Figure 5: Dipole current ratio (dB), sensor $Q_{20\_1}$ (loop 1).

Assuming a current of 1 A rms in this dipole and exploiting the symmetry about the z-axis, the plots in Figures 4 and 5 yield the four currents of (7) and, finally, the components $Q_{10}$ and $Q_{20}$ through (6).

### 4.2. Quadrupole

The measured current ratios for a quadrupole with parameters $r$ and $h_0$ both equal to 5 cm are presented in Figures 6 and 7, in the same fashion as for the dipole.

Figure 6: Quadrupole current ratio (dB), sensor $Q_{10\_1}$ (loop 2).

Figure 7: Quadrupole current ratio (dB), sensor $Q_{20\_1}$ (loop 1).

### 4.3. Power Transformer

The measured current ratios (loop/source) for a power transformer rated 220 V, 20 A are presented in Figure 8. The experiment was conducted in the same manner as the previous ones, although there is no longer symmetry about the z-axis.

Figure 8: Transformer current ratio (dB) for all sensor loops.

The components of the multipole expansion of the dipole, the quadrupole, and the transformer, together with the errors of the first two sources relative to (10), are presented in Table 1. The frequency considered for these results was 200 kHz, located on the flat part of the measured curves.

Table 1: Components of the multipole expansion.

| Component | $Q_{10}$ (m·Am²) | $Q_{20}$ (m·Am³) | Error (%) |
|---|---|---|---|
| Dipole | 7.30 | 0 | 7.6 ($Q_{10}$) |
| Quadrupole | 0 | 0.63 | 20 ($Q_{20}$) |
| Transformer | 66.5 | 0.82 | — |

It should be noted that there is a −4 dB and a −3.5 dB difference between the current ratios in Figure 4 and in Figure 5, respectively. The effect of the mutual inductances in the antenna therefore corresponds to a difference of 37% and 33% in the calculation of $Q_{10}$ and $Q_{20}$, respectively, for the dipole.

The mutual inductance between this transformer and a loop can be determined from
$$M_{12}\,i_{1}=L_{2}\,i_{2},\tag{11}$$
where $M_{12}$ is the mutual inductance, $i_1$ the current in the transformer, $L_2$ the self-inductance of the loop, and $i_2$ the current in the loop. This was done by measuring the ratio of the currents with the VNA and applying an analytical formula for the self-inductance of a loop [16], as indicated by (11). Figure 9 shows the measurement setup for this case.

Figure 9: Mutual inductance measurement.
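A minimal sketch of this indirect verification follows. The example numbers are assumptions, not the measured data of Figure 9, and the thin-wire expression used for $L_2$ is one common approximation for a circular loop, standing in here for the expression of reference [16]:

```python
# Equation (11): M12 follows from the measured current ratio i2/i1 and the
# loop self-inductance L2.
from math import log, pi

MU0 = 4e-7 * pi

def loop_self_inductance(a: float, rw: float) -> float:
    """Low-frequency self-inductance of a circular loop of radius a and wire
    radius rw (thin-wire approximation)."""
    return MU0 * a * (log(8 * a / rw) - 2.0)

L2 = loop_self_inductance(a=0.05, rw=0.5e-3)   # assumed loop geometry
ratio = 1.2e-5                                  # assumed measured i2/i1
M12 = L2 * ratio                                # equation (11): M12*i1 = L2*i2
print(f"L2 = {L2 * 1e9:.1f} nH, M12 = {M12 * 1e12:.2f} pH")
```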
## 5. Computing the Mutual Inductance

Using the equivalent radiated-field source model, the coupling between two equivalent sources can be determined through the computation of their mutual inductance. Figure 10 illustrates the configuration for the representation of two radiating sources (models 1 and 2).

Figure 10: Representation of two radiating sources.

The mutual impedance between source 1 and source 2 can be expressed in terms of the electric field $\mathbf{E}$ and magnetic field $\mathbf{H}$ of each source [12]:
$$Z_{12}=-\frac{1}{i_{1}i_{2}}\oiint_{\Sigma_{1}}\bigl(\mathbf{E}_{1}\times\mathbf{H}_{2}-\mathbf{E}_{2}\times\mathbf{H}_{1}\bigr)\cdot\mathrm{d}\mathbf{S}.\tag{12}$$

When the spheres that contain the sources do not intersect, the mutual impedance can be expressed in terms of the coefficients of the multipole expansion:
$$Z_{12}=\frac{1}{i_{1}i_{2}}\,\frac{1}{k^{2}}\sqrt{\frac{\varepsilon_{0}}{\mu_{0}}}\sum_{n=1}^{N_{\max}}\sum_{m=-n}^{n}(-1)^{m}\,Q_{1\,n,-m}\,Q_{2\,n,m}.\tag{13}$$
The corresponding mutual inductance is
$$M_{12}=\frac{1}{j\omega\, i_{1}i_{2}}\,\frac{1}{k^{2}}\sqrt{\frac{\varepsilon_{0}}{\mu_{0}}}\sum_{n=1}^{N_{\max}}\sum_{m=-n}^{n}(-1)^{m}\,Q_{1\,n,-m}\,Q_{2\,n,m},\tag{14}$$
where $i_1$ and $i_2$ are the currents flowing in sources 1 and 2, respectively, and $k$ is the phase constant.

The coefficients associated with the magnetic (TE) modes of the multipole expansions of sources 1 and 2 must be expressed in the same reference frame: a translation is required; for example, the coefficients of source 2 can be expressed in the reference frame of source 1.
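A minimal sketch of the sum in (14), as reconstructed above, is given below. The coefficient sets, currents, and frequency are illustrative assumptions, and both coefficient sets are taken to be already expressed in a common reference frame (the rotation and translation needed to achieve this are described next):

```python
# Mutual inductance from equation (14), with coefficient maps (n, m) -> Q.
import numpy as np

EPS0, MU0 = 8.854e-12, 4e-7 * np.pi

def mutual_inductance(Q1, Q2, i1, i2, f):
    """Evaluate the double sum of equation (14) for two coefficient sets."""
    w = 2 * np.pi * f
    k = w * np.sqrt(EPS0 * MU0)                    # phase constant
    s = sum((-1) ** m * Q1.get((n, -m), 0.0) * Q2[(n, m)]
            for (n, m) in Q2)
    return s * np.sqrt(EPS0 / MU0) / (1j * w * i1 * i2 * k**2)

# z-directed (m = 0) dipole and quadrupole coefficients, assumed values:
Q1 = {(1, 0): 7.85e-3, (2, 0): 3.9e-4}
Q2 = {(1, 0): 6.65e-2, (2, 0): 8.2e-4}
print(mutual_inductance(Q1, Q2, i1=1.0, i2=1.0, f=200e3))
```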
The rotation of the coefficients $Q_{nm}$ is expressed using Euler angles; only two angles are necessary because of the spherical symmetry. The details of the methodology for determining the rotation matrices for complex or real coefficients $Q_{nm}$ are presented in [17, 18]. The translation is based on the addition theorem for vector spherical harmonics [18].

The addition theorem links the harmonics evaluated at $\mathbf{r}$ to those evaluated at $\mathbf{r}'$, where $\mathbf{r}$ is measured from the origin of the second spherical basis, whose axes are parallel to those of the first, as shown in Figure 11. The origin of the second spherical basis is located in the first by $\mathbf{r}''$. The three vectors are connected by the relation $\mathbf{r}=\mathbf{r}'+\mathbf{r}''$.

Figure 11: Translation of a spherical basis.

The translated coefficients $Q_{n'm'}$ are given by
$$Q_{n'm'}^{TE}=\sum_{n=1}^{\infty}\sum_{m=-n}^{n}\bigl(Q_{nm}^{TE}A_{n',m',n,m}+Q_{nm}^{TM}B_{n',m',n,m}\bigr),\qquad
Q_{n'm'}^{TM}=\sum_{n=1}^{\infty}\sum_{m=-n}^{n}\bigl(Q_{nm}^{TM}A_{n',m',n,m}+Q_{nm}^{TE}B_{n',m',n,m}\bigr).\tag{15}$$
The coefficients $A_{n',m',n,m}$ and $B_{n',m',n,m}$ involve the computation of the Wigner 3-j symbol known from quantum mechanics [19].

Using the $Q_{10}$ and $Q_{20}$ of the transformer, its mutual inductance with the loop can also be estimated in a simpler manner, by applying an analytical expression for the mutual inductance between two loops [16] and considering that $Q_{10}$ is represented by one loop and $Q_{20}$ by two loops, all on the z-axis. These results are presented in Table 2.

Table 2: Comparison between mutual inductances.

| Height (cm) | Measured (nH) | Estimated (nH) | Error (%) |
|---|---|---|---|
| 29.3 | 3.65 | 3.17 | 13 |
| 35.8 | 1.98 | 1.92 | 3 |

## 6. Conclusion

The presented methodology enables the evaluation of the coupling parameters of components by means of equivalent emission sources. The method consists of two steps. First, the equivalent sources that represent the radiated field in the multipole-expansion representation are identified; this can be done by a numerical or an experimental approach, both of which were discussed in this paper. Second, the equivalent sources are used to compute the coupling between them, expressed here as a mutual inductance as a function of the distance separating them.

Other kinds of multipole expansion, such as the cylindrical one, can be more suitable for modeling components such as tracks or cables, and they will also be considered. For example, in the case of the coupling between a track and a component, the spherical-harmonics method is not well suited and another harmonic expansion should be used.

The proposed method could be helpful when used together with circuit-simulator methods in the evaluation of equivalent circuits (R-L-M-C) of power electronic devices. This yields a considerable saving of memory compared with the full model configuration used in EMC filter numerical simulations.

---
*Source: 102495-2012-01-19.xml*
# Comment on “Unilateral Global Bifurcation from Intervals for Fourth-Order Problems and Its Applications”

**Authors:** Ziyatkhan Aliyev

**Journal:** Discrete Dynamics in Nature and Society (2017)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2017/1024950

---

## Abstract

In recent papers, W. Shen and T. He and, separately, G. Dai and X. Han established a unilateral global bifurcation result for a class of nonlinear fourth-order eigenvalue problems. They show the existence of two families of unbounded continua of nontrivial solutions of these problems, bifurcating from the points and intervals of the line of trivial solutions corresponding to the positive or negative eigenvalues of the linear problem. As applications of this result, these authors study the existence of nodal solutions for a class of nonlinear fourth-order eigenvalue problems with sign-changing weight. Moreover, they also establish a Sturm-type comparison theorem for fourth-order problems with sign-changing weight. In the present comment, we show that the papers of the above authors contain serious errors and that, therefore, unfortunately, the results of these works are not true. Note also that the authors used the results of a recent work by G. Dai, which also contains gaps.

---

## Body

---
*Source: 1024950-2017-04-05.xml*
# Samul-Tang Regulates Cell Cycle and Migration of Vascular Smooth Muscle Cells against TNF-α Stimulation

**Authors:** Eun Sik Choi; Jung Joo Yoon; Byung Hyuk Han; Da Hye Jeong; Hye Yoom Kim; You Mee Ahn; So Young Eun; Yun Jung Lee; Dae Gill Kang; Ho Sub Lee

**Journal:** Evidence-Based Complementary and Alternative Medicine (2018)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2018/1024974

---

## Abstract

Samul-Tang (SMT), consisting of four medicinal herbs, is a well-known herbal prescription for treating symptoms related to hematological disorders. Our previous study demonstrated that SMT attenuated inflammation of vascular endothelial cells. When vascular dysfunction persists, vascular inflammation is initiated and results in activation of smooth muscle cells (SMCs). Activated SMCs lose control of cell cycle regulation and migrate into the intima, resulting in the formation of atheroma. Here, we further investigated whether SMT suppresses the proliferation and migration of SMCs. SMT showed antiproliferative effects on SMCs by suppressing [3H]-thymidine incorporation upon TNF-α stimulation. The underlying mechanism of these antiproliferative effects was found to be cell cycle regulation: SMT downregulated the expression of the cyclin D1-CDK4 and cyclin E-CDK2 complexes and upregulated p21waf1/cip1 and p27kip1. SMT also suppressed the migration of SMCs upon TNF-α stimulation, which is thought to result from the suppression of MMP2 and MMP9 expression and of ROS production. In summary, SMT attenuates abnormal migration of vascular smooth muscle cells by regulating the cell cycle and suppressing MMP expression and ROS production. Our study suggests that SMT, a traditionally used herbal formula, protects vascular smooth muscle cells and might be used as an antiatherosclerotic drug.

---

## Body

## 1. Introduction

Vascular smooth muscle cells (SMCs) are located in the walls of blood vessels and regulate the contraction and relaxation of the vessel. SMCs are also involved in the development of cardiovascular complications such as atherosclerosis, resulting from chronic vascular inflammation or abnormal proliferation and migration of SMCs [1].

In the pathogenesis of atherosclerosis, the proliferation and migration of SMCs are crucial [2], and regulating the SMC cell cycle could be a novel therapeutic strategy. Progression of the cell cycle is positively regulated by cyclin-cyclin-dependent kinase (CDK) complexes and negatively regulated by CDK inhibitors. In G1 phase, cyclin D1 drives the formation of active CDK4 and cyclin E activates CDK2. Consequently, the cyclin D1-CDK4 and cyclin E-CDK2 complexes drive cells to exit G1 phase and initiate DNA replication to enter S phase [3]. In contrast, CDK inhibitors such as p21waf1/cip1 and p27kip1 stabilize cyclin-CDK complexes and negatively regulate the cell cycle [4].

Matrix metalloproteinases (MMPs) are enzymes involved in the degradation and remodeling of the extracellular matrix (ECM). In particular, MMP2 (72 kDa) and MMP9 (92 kDa), also called gelatinases, play an important role in breaking down the structure of the ECM [5] as well as in regulating the proliferation and migration of SMCs [6]. Excessive breakdown of the ECM may lead to plaque rupture and an increased incidence of myocardial infarction [7].
Oxidative stress mediated by reactive oxygen species (ROS) also promotes the vascular inflammation that drives atherosclerosis and activates cell cycle progression and MMPs [8].

Samul-Tang (SMT, Si-Wu-Tang in Chinese and Four-Agent-Decoction in English) is a herbal prescription consisting of four medicinal herbs: Angelicae Gigantis Radix (Angelica gigas Nakai, root), Cnidii Rhizoma (Ligusticum officinale Makino, rhizome), Rehmanniae Radix Preparata (Rehmannia glutinosa Gaertn. DC., rhizome, steamed and dried), and Paeoniae Radix (Paeonia lactiflora Pall., root). Dongui Bogam (Treasured Mirror of Eastern Medicine) and several other formularies contain medical information about SMT. SMT is traditionally used to treat symptoms related to hematological disorders, such as anemia [9], irregular menstruation [10, 11], or postpartum weakness, defined as blood deficiency and blood stasis in traditional Korean medicine. Recent pharmacological studies suggest that SMT exerts hematopoietic [12], anti-inflammatory [13], and antidermatitis [14] effects. Furthermore, our previous study demonstrated that SMT exerts vascular protective effects in human umbilical vein endothelial cells (HUVECs), which are pathologically closely linked to SMCs [15]. Here, we further investigated the effects and mechanisms of SMT on tumor necrosis factor-α (TNF-α)-stimulated SMC proliferation and migration.

## 2. Materials and Methods

### 2.1. Plant Materials of SMT

The four crude herbs forming SMT were purchased from Omniherb (Yeongcheon, Korea) in February 2008. The origin of each herbal medicine was taxonomically identified by Professor Je Hyun Lee, Dongguk University, Gyeongju, Republic of Korea. A voucher specimen (2008-KE25-1~KE25-4) has been deposited at the K-herb Research Center, Korea Institute of Oriental Medicine. SMT extract was prepared as described previously [15].

### 2.2. Chemicals and Reagents

DMEM low glucose, fetal bovine serum, TNF-α, cell culture reagents, and CM-H2DCFDA were purchased from Invitrogen (San Diego, CA). Primary antibodies, including mouse anti-cyclin D1, rabbit anti-CDK4, mouse anti-cyclin E, rabbit anti-CDK2, rabbit anti-p21, mouse anti-p27, rabbit anti-MMP2, and rabbit anti-MMP9, were purchased from Santa Cruz Biotechnology (CA, USA). Goat anti-rabbit IgG and goat anti-mouse IgG were purchased from Enzo (Farmingdale, USA).

### 2.3. Cell Culture

Human aortic smooth muscle cells (SMCs, C-007-5C) were purchased from Invitrogen (Carlsbad, CA). Cells were cultured in DMEM low glucose containing 5% fetal bovine serum and penicillin-streptomycin and maintained in a humidified incubator containing 5% CO2 at 37°C.

### 2.4. [3H]-Thymidine Incorporation Assay

Quiescent cells were treated with 10 ng/ml TNF-α and SMT, respectively, and 1 μCi of [3H]-thymidine was added (methyl-[3H] thymidine, 50 Ci/mmol; Amersham, Oakville, Ontario, Canada). After 24 h of incubation, cells were washed once with 2 ml of ice-cold PBS for 10 min, extracted three times with 2 ml of cold 10% TCA for 5 min each, and solubilized for at least 30 min at room temperature in 0.2 ml of 0.3 N NaOH, 1% SDS. After neutralization with 0.2 ml of 0.3 N HCl, [3H]-thymidine activity was measured in a liquid scintillation counter (Beckman LS 7500, Fullerton, CA). Each experiment was conducted in triplicate or quadruplicate.

### 2.5. Western Blot Analysis

Cell homogenates were separated by 10% SDS-polyacrylamide gel electrophoresis and transferred to nitrocellulose membranes.
Blots were then washed with H2O, blocked with 5% skimmed milk powder in Tris-Buffered Saline with Tween-20 (TBS-T; 10 mM Tris-HCl, pH 7.6, 150 mM NaCl, 0.05% Tween-20) for 1 h, and incubated with the appropriate primary antibody at the dilutions recommended by the supplier. The membrane was then washed, primary antibodies were detected with secondary antibodies conjugated to horseradish peroxidase, and the bands were visualized by enhanced chemiluminescence (Amersham Bioscience, Buckinghamshire, UK). Protein expression levels were determined by analyzing the signals captured on the nitrocellulose membranes using a ChemiDoc image analyzer (Bio-Rad Laboratories, Hercules, CA).

### 2.6. Cell Migration (Scratch) Assay

Cell migration was evaluated by a wound healing assay. Briefly, SMCs were plated in 12-well culture plates and cultured in DMEM containing 10% FBS. Scratches were made with a sterile tip in wells containing cells at 80% confluence. Next, SMCs were pretreated with SMT for 30 min, followed by treatment with TNF-α at 37°C for 24 h. After incubation, microscopic photographs of the migrated cells were taken (Eclipse Ti, Nikon).

### 2.7. Gelatin Zymography

SMCs were pretreated with SMT for 30 min and stimulated with TNF-α for 24 h. The supernatant (conditioned medium) was collected for zymography. For measurement of MMP2 activity, 0.1% gelatin was added to the 10% separating gel used for SDS-PAGE. The gel was washed with renaturation buffer (2.5% Triton X-100 in DW) at room temperature for 1 h and then incubated with development buffer (Invitrogen Corporation, Carlsbad, CA) at 37°C overnight. Next, the gel was stained with 0.2% Coomassie brilliant blue R at room temperature for 1 h. After washing with destaining buffer, the gels were scanned using a ChemiDoc image analyzer (Bio-Rad, USA).

### 2.8. Intracellular ROS Production Assay

The fluorescent probe CM-H2DCFDA (5-(and-6)-chloromethyl-2',7'-dichlorodihydrofluorescein diacetate, acetyl ester) was used to determine the intracellular generation of ROS upon stimulation with TNF-α. Briefly, confluent SMCs in 6-well culture plates were pretreated with or without SMT for 30 min. After removing the SMT from the wells, the cells were incubated with 20 μM CM-H2DCFDA for 1 h. The cells were stimulated with TNF-α, and the fluorescence intensity was measured at excitation and emission wavelengths of 485 nm and 530 nm, respectively, by flow cytometry on a FACSCalibur (BD, San Diego, CA).

### 2.9. Statistical Analysis

All experiments were repeated at least three times. The results are expressed as mean ± SE, and the data were analyzed using one-way ANOVA followed by Student's t-test to determine significant differences; p < 0.05 was considered statistically significant.
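A minimal sketch of this analysis pipeline is shown below (the replicate values are made-up example data, not the study's measurements):

```python
# One-way ANOVA across groups, followed by Student's t-test against the
# TNF-alpha group, as described in Section 2.9.
from scipy import stats

control = [100, 96, 104]            # assumed replicate values (% of control)
tnf     = [168, 175, 160]           # TNF-alpha alone, assumed
tnf_smt = [132, 128, 140]           # TNF-alpha + SMT 50 ug/ml, assumed

f_stat, p_anova = stats.f_oneway(control, tnf, tnf_smt)
t_stat, p_t = stats.ttest_ind(tnf, tnf_smt)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"TNF vs TNF+SMT t-test: p = {p_t:.4f}  (significant if p < 0.05)")
```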
## 3. Results
### 3.1. Effects of SMT on TNF-α-Stimulated SMC Proliferation
To investigate the effects of SMT on the proliferation of TNF-α-stimulated SMCs, a [3H]-thymidine incorporation assay was performed; [3H]-thymidine incorporation serves as an index of DNA synthesis. As shown in Figure 1, stimulation with TNF-α for 24 h significantly increased [3H]-thymidine incorporation compared to the untreated control group (∗∗p < 0.01), whereas SMT at 10 μg/mL (#p < 0.05), 30 μg/mL (#p < 0.05), and 50 μg/mL (##p < 0.01) significantly suppressed [3H]-thymidine incorporation against TNF-α stimulation in a dose-dependent manner.
Figure 1. Effects of SMT on TNF-α-stimulated SMC proliferation. Cells were treated with TNF-α (10 ng/ml) for 24 h with or without pretreatment with SMT (10, 30, and 50 μg/ml) for 30 min and incubated with 1 μCi of [3H]-thymidine. [3H]-Thymidine incorporation was used as an index of DNA synthesis. Bars represent the mean ± SEM of more than 3 independent experiments. ∗∗p < 0.01 versus untreated control group; #p < 0.05 and ##p < 0.01 versus TNF-α-treated group.
### 3.2. Effects of SMT on TNF-α-Stimulated Expression of Cell Cycle Regulators in SMCs
To investigate the mechanisms of the antiproliferative effects of SMT on TNF-α-stimulated SMCs, cyclin-CDK complexes and CDK inhibitors were assessed by western blot. As shown in Figure 2(a), stimulation with TNF-α for 24 h significantly upregulated expression of cyclin D1 (∗∗p < 0.01) and CDK4 (∗p < 0.05) compared to the untreated control group, whereas SMT at 30 μg/mL (#p < 0.05) and 50 μg/mL (##p < 0.01) significantly suppressed cyclin D1 expression in a dose-dependent manner; CDK4 expression was also suppressed by 50 μg/mL SMT (#p < 0.05). As shown in Figure 2(b), TNF-α stimulation for 24 h significantly upregulated expression of cyclin E (∗∗p < 0.01) and CDK2 (∗∗p < 0.01), whereas SMT at 30 μg/mL (#p < 0.05) and 50 μg/mL (##p < 0.01) significantly suppressed cyclin E expression in a dose-dependent manner; CDK2 expression was likewise suppressed by 50 μg/mL SMT (#p < 0.05). As shown in Figure 2(c), expression of the CDK inhibitors p21waf1/cip1 and p27kip1 was significantly downregulated by TNF-α stimulation for 24 h (∗∗p < 0.01), whereas 50 μg/mL SMT significantly prevented this downregulation.
Figure 2. Effect of SMT on TNF-α-stimulated expression of cell cycle regulators. (a) Cyclin D1 and CDK4, (b) cyclin E and CDK2, and (c) p21waf1/cip1 and p27kip1. Cells were treated with TNF-α (10 ng/ml) for 24 h with or without pretreatment with SMT (10, 30, and 50 μg/ml) for 30 min. Bars represent the mean ± SEM of 3 independent experiments. ∗p < 0.05 and ∗∗p < 0.01 versus untreated control group; #p < 0.05 and ##p < 0.01 versus TNF-α-treated group.
These results demonstrate that SMT regulates the SMC cell cycle by inhibiting cyclin and CDK expression and upregulating CDK inhibitors.
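The relative band intensities summarized in Figure 2 are typically obtained by densitometry. The paper does not describe its normalization procedure, so the sketch below shows only the conventional approach, assumed here for illustration: normalize each target band to a loading control (hypothetically β-actin) and scale to the untreated lane.

```python
# Hypothetical densitometry normalization for western blot bands.
# Divide each target band intensity by the loading-control intensity
# in the same lane, then express everything relative to lane 1.
def relative_expression(target, loading_control):
    """Lane-by-lane normalization, scaled so the first lane equals 1.0."""
    normalized = [t / lc for t, lc in zip(target, loading_control)]
    return [n / normalized[0] for n in normalized]

# Made-up intensities for lanes: control, TNF-α, TNF-α + SMT (50 μg/ml)
cyclin_d1  = [1200.0, 2600.0, 1500.0]
beta_actin = [1000.0, 1050.0,  980.0]
print(relative_expression(cyclin_d1, beta_actin))
# -> [1.0, ~2.06, ~1.28]: TNF-α roughly doubles cyclin D1; SMT blunts it.
```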
### 3.3. Effects of SMT on TNF-α-Stimulated SMC Migration
Migration of SMCs is closely related to their proliferation and precedes neointima and plaque formation. The migration assay was evaluated by microscopic examination (40x magnification; scale bar, 500 μm). As shown in Figure 3, TNF-α-stimulated SMCs showed increased migration compared to the untreated control group, whereas SMT-treated SMCs (10–50 μg/mL) showed decreased migration against TNF-α stimulation.
Figure 3. Effect of SMT on TNF-α-stimulated SMC migration. Cells were treated with TNF-α (10 ng/ml) for 24 h with or without pretreatment with SMT (10, 30, and 50 μg/ml) for 30 min. Microscopic photographs of all groups were captured at the same magnification (40x); scale bar, 500 μm.
### 3.4. Effects of SMT on TNF-α-Stimulated Secretion and Expression of MMPs in SMCs
MMP2 and MMP9 are involved in neointima formation. Gelatin zymography and western blotting were performed to measure MMP2 and MMP9 levels; zymography was performed with conditioned medium from SMC cultures. As shown in Figure 4(a), the zymogram shows that secretion of both MMP2 and MMP9 by TNF-α-stimulated SMCs increased compared to the untreated control group, whereas SMT (10–50 μg/mL) suppressed MMP2 and MMP9 secretion against TNF-α stimulation.
Figure 4. Effect of SMT on TNF-α-stimulated secretion and expression of MMPs in SMCs. Cells were treated with TNF-α (10 ng/ml) for 24 h with or without pretreatment with SMT (10, 30, and 50 μg/ml) for 30 min. (a) A separating gel containing 0.1% gelatin was used to measure MMP secretion by zymography; white bands indicate gelatin degraded by MMP9 and MMP2 secreted into the culture medium. (b) Protein expression of MMPs. Bars represent the mean ± SEM of 3 independent experiments. ∗p < 0.05 versus untreated control group; #p < 0.05 versus TNF-α-treated group.
As shown in Figure 4(b), protein expression of both MMP2 and MMP9 was significantly upregulated by TNF-α stimulation compared to the untreated control group (∗p < 0.05), whereas 50 μg/mL SMT significantly suppressed the expression of both proteins (#p < 0.05).
### 3.5. Effects of SMT on TNF-α-Induced Intracellular ROS Production
Oxidative stress is a major cause of endothelial dysfunction and vascular inflammation. To measure intracellular ROS production, SMCs were labeled with CM-H2DCFDA as described in Materials and Methods. As shown in Figure 5, intracellular ROS production by SMCs was increased by TNF-α stimulation (∗∗p < 0.01), whereas the ROS scavenger N-acetyl-L-cysteine (NAC, 10 μM) inhibited it, and SMT (10–50 μg/mL) likewise suppressed intracellular ROS production (##p < 0.01), indicating antioxidant effects.
Figure 5. Effect of SMT on TNF-α-stimulated ROS production in SMCs, measured by (a) FACS analysis and (b) microplate reader.
Cells were labeled with CM-H2DCFDA and treated with TNF-α (10 ng/ml) for 24 h with or without pretreatment with SMT (10, 30, and 50 μg/ml) for 30 min. N-acetyl-L-cysteine (NAC) was used as a positive control. ∗∗p < 0.01 versus untreated control group; ##p < 0.01 versus TNF-α-treated group.
## 4. Discussion
This study demonstrates that SMT suppressed the abnormal increase in SMC proliferation and migration against TNF-α stimulation by regulating the cell cycle, ROS production, and MMP expression. We stimulated SMCs with TNF-α to mimic an atherogenic environment.
Cytokines such as TNF-α act as autocrine and paracrine mediators and are highly expressed in atherosclerotic lesions [16]. Moreover, TNF-α leads to the production of other inflammatory factors that induce proliferation and migration of SMCs [17]. In this study, SMC proliferation was measured with the [3H]-thymidine incorporation assay, an index of DNA synthesis. SMT significantly suppressed [3H]-thymidine incorporation by SMCs against TNF-α stimulation in a dose-dependent manner. We therefore investigated the antiproliferative mechanism of SMT on the cell cycle of TNF-α-stimulated SMCs in more detail.
Phases of the cell cycle are coordinated by CDKs, which have little activity in the absence of cyclins. CDKs bind cyclins to form cyclin-CDK complexes, and their activities depend on CDK phosphorylation and cyclin expression. In the quiescent G0 phase, E2F family members exist in an inactive form bound to retinoblastoma protein. After mitogenic stimulation, however, cyclin D1-CDK4 and cyclin E-CDK2 complexes phosphorylate retinoblastoma protein, releasing E2F to drive expression of other cyclins and CDKs [18]. Cell cycle regulators are key molecules in both cancer and atherosclerosis: they induce cell proliferation and share a similar pathogenic pathway [19]. IBRANCE® (palbociclib), an FDA-approved drug used clinically in cancer therapy [20], acts by inhibiting CDK4 and CDK6. Rapamycin, an antibiotic produced by Streptomyces hygroscopicus [21], is known to inhibit migration of SMCs [22]; its proposed mechanisms are accumulation of p27kip1 and inhibition of CDK activity [23]. Thus, targeting cell cycle regulators could be a reasonable pharmacological approach to treat not only cancer but also atherosclerosis. In this study, SMT significantly suppressed the upregulation of cyclin D1-CDK4 and cyclin E-CDK2 in SMCs against TNF-α stimulation. Expression of the CDK inhibitors p21waf1/cip1 and p27kip1 in SMCs was significantly decreased after TNF-α stimulation, and SMT showed further cell cycle-regulating effects by significantly suppressing this downregulation.
In addition, the microscopic migration assay showed that SMT suppressed SMC migration against TNF-α stimulation. Beyond the cell cycle-regulating effects of SMT, the underlying mechanism of this phenomenon appears to involve its suppression of MMPs. SMCs can produce proteolytic enzymes involved in ECM remodeling, among which MMP2 and MMP9 are representative. Recent knockout studies also suggest that MMP2 [24] and MMP9 [25] are crucial in the development of arterial lesions leading to atherosclerosis. During ECM remodeling, new extracellular matrix components are synthesized and vulnerable lesions are broken down; in this process, activated SMCs abnormally proliferate and migrate toward the luminal side of the blood vessel wall to form thrombus together with foam cells resulting from endothelial dysfunction [26]. In this study, we performed gelatin zymography and western blotting to assay the secretion and expression of MMP2 and MMP9. Zymography showed that SMT suppressed SMC secretion of both MMP2 and MMP9 against TNF-α stimulation, and SMT also significantly suppressed the protein expression of both MMPs.
A relation between oxidative stress and MMP modulation has also been suggested.
Recent studies suggest that MMP-mediated ECM remodeling can be modulated by ROS, and one in vitro study suggests that oxidative stress may enhance MMP expression and activity [27]. In this study, TNF-α stimulation increased ROS production in SMCs, which might influence the overall signaling pathways of proliferation and migration. In this regard, SMT suppressed intracellular ROS production in TNF-α-stimulated SMCs, which might contribute to its cell cycle regulation and MMP suppression.
Our previous study demonstrated that SMT attenuated inflammation of vascular endothelial cells against TNF-α stimulation by inhibiting activation of nuclear factor-κB and inducing heme oxygenase-1 [15]. Under sustained vascular dysfunction, vascular inflammation is initiated and results in activation of SMCs; activated SMCs lose cell cycle control and migrate into the intima, resulting in atheroma formation [8, 28]. Because this succession of pathogenic events is inseparable, we conducted this further study in SMCs to broaden the understanding of the vascular protective effects of SMT. Relating our previous and present studies, SMT not only protects the vascular endothelium against chronic inflammation at an early stage but is also expected to suppress the abnormal migration of vascular smooth muscle cells and the atheroma formation that leads to atherosclerosis.
In summary, SMT attenuated the abnormal migration of vascular smooth muscle cells by regulating the cell cycle and suppressing MMP expression and reactive oxygen species production. We suggest that SMT, a traditionally used herbal formula consisting of four herbs, might act as a vascular protective drug.
---
*Source: 1024974-2018-06-25.xml*
# Dispositional Affect in Unique Subgroups of Patients with Rheumatoid Arthritis

**Authors:** Danielle B. Rice; Swati Mehta; Janet E. Pope; Manfred Harth; Allan Shapiro; Robert W. Teasell
**Journal:** Pain Research and Management (2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1024985

---

## Abstract
Background. Patients with rheumatoid arthritis may experience increased negative outcomes if they exhibit specific patterns of dispositional affect. Objective. To identify subgroups of patients with rheumatoid arthritis based on dispositional affect. The secondary objective was to compare mood, pain catastrophizing, fear of pain, disability, and quality of life between subgroups. Methods. Outpatients from a rheumatology clinic were categorized into subgroups by a cluster analysis based on dispositional affect. Differences in outcomes were compared between subgroups through multivariate analysis of covariance. Results. 227 patients were divided into two subgroups. Cluster 1 (n = 85) included patients reporting significantly higher scores on all dispositional variables (experiential avoidance, anxiety sensitivity, worry, fear of pain, and perfectionism; all p < 0.001) compared to patients in Cluster 2 (n = 142). Patients in Cluster 1 also reported significantly greater mood impairment, pain anxiety sensitivity, and pain catastrophizing (all p < 0.001). Clusters did not differ on quality of life or disability. Conclusions. The present study identifies a subgroup of rheumatoid arthritis patients who score significantly higher on dispositional affect and report increased mood impairment, pain anxiety sensitivity, and pain catastrophizing. Considering dispositional affect within subgroups of patients with RA may help health professionals tailor interventions to the specific stressors that these patients experience.

---

## Body
## 1. Introduction
Rheumatoid arthritis (RA) is an autoimmune disease characterized by joint swelling and tenderness at multiple sites in the body. These symptoms have a disabling effect on an individual's mental and physical health [1]. An international study examining data from 32 countries, the QUEST-RA study, found that more than a third of patients reported work-related disability due to RA [2]. Furthermore, the health care costs of RA management remain high even after major advancements in treatment: Hallert et al. (2014) estimated a mean total cost of EUR 14,768 per patient in the first year after RA diagnosis and EUR 18,438 per year by year six [3].
Individuals with RA experience significant chronic pain that negatively impacts multiple quality of life domains [4]. The related disability has been linked to several psychological contributors, including depression, anxiety, and stress [5]. Epidemiological and clinical studies have consistently revealed a higher prevalence of depressive and anxiety disorders in patients with RA than in the general population [1, 6–8]. The presence of psychiatric symptoms among individuals with RA has been shown to increase the perception of pain, the use of analgesics, and work disability [7]. The comorbidity between chronic pain and depression has been established in several studies, and management strategies have been incorporated into clinical practice guidelines [9].
However, aspects of personality are also increasingly being viewed as important by pain researchers, clinicians, and patients with chronic pain.
Newth and DeLongis (2004) found that personality was a strong moderator of coping with chronic pain in individuals with RA [10]. Specific personality traits such as neuroticism predicted both day-to-day reports of illness symptoms and the accuracy with which those symptoms were subsequently recalled over the same period [11]. A recent review concluded that specific dispositional variables, including neuroticism, anxiety sensitivity, and experiential avoidance, can predispose individuals with chronic pain to use ineffective coping strategies [12]. Other studies of general chronic pain populations have found dispositional variables including maladaptive perfectionism [13], experiential avoidance [12, 14], anxiety sensitivity [15, 16], and psychological inflexibility [17] to be negatively related to patients' clinical outcomes, including mood and disability.
These dispositional variables have also been discussed in a qualitative study of eight overactive chronic pain patients [18]. All patients believed that their tendency to do too much was related to their personality, and five of the eight participants noted that their overactivity resulted in depressed mood, anxiety, and/or irritability. These patients identified aspects of psychological inflexibility, including experiential avoidance, reported being perfectionists and unable to relax, and described themselves as having obsessive personality traits [18]. To the best of the authors' knowledge, these aspects of personality have not yet been studied specifically in patients diagnosed with RA, even though these patients experience unique difficulties in comparison to patients with a diagnosis of chronic soft tissue pain [19]. Examining specific dispositional variables and how they affect clinical outcomes among individuals with RA may be important for screening at-risk patients and developing more optimized management plans.
The present study represents a preliminary step in identifying subgroups among persons with RA based on dispositional affect. The first aim was to use a cluster analysis to identify homogeneous pain behavior subgroups among persons with RA based on dispositional personality variables that have previously been linked to maladaptive coping styles [12]. The secondary aim was to determine whether the identified subgroups differed on measures of mood, pain catastrophizing, fear of pain, disability, and quality of life.
## 2. Methods
### 2.1. Participants
Participants were patients with RA, diagnosed by a rheumatologist using the American College of Rheumatology criteria, who had a scheduled regular outpatient clinic appointment; they were recruited over a 20-month period from an academic rheumatology clinic in London, Ontario (St. Joseph's Health Care London, associated with Western University). Patients at least 18 years of age with a diagnosis of RA and self-reported pain secondary to RA for more than three months were eligible for inclusion. Because the study involved the completion of questionnaire booklets, inability to read and write in English was an exclusion criterion. The study was reviewed and approved by the Office of Research Ethics at the University of Western Ontario in London, Ontario, Canada. All eligible participants signed informed consent prior to completing any questionnaires for the study.
### 2.2. Procedures
Patients who met the inclusion criteria and agreed to participate were referred to the research coordinator by their primary physician.
The research coordinator provided potential participants with the letter of information and consent form. Patients were made aware that their decision to participate in the study would in no way interfere with their standard care at the hospital. All patients received individualized pharmacotherapy and psychotherapy or referrals as seen fit by the multidisciplinary team. Eligible participants were mailed a package introducing the study two weeks prior to their scheduled clinic appointment with their rheumatologist. The package contained the study information letter, a consent form, and the first of two questionnaire booklets. Research assistants followed up with phone calls to all eligible patients to explain the procedures of the study, answer any study-related questions, and confirm that the patient was still experiencing pain secondary to RA. Consenting participants completed the first booklet of questionnaires regarding demographics (age, gender, years of education, and relationship status), time since RA diagnosis, and average pain intensity prior to their clinic appointment. Participants were asked to arrive half an hour early to their clinic appointment to provide research assistants with their first questionnaire booklet and to complete the second booklet of questionnaires, which included measures of dispositional affect, pain catastrophizing, fear of pain, quality of life, and disability. One researcher entered questionnaire responses into an SPSS database, and the entries were then validated by a second researcher. ### 2.3. Demographic Measures Demographic variables including age, sex, years of education, marital status, and years since RA diagnosis were assessed with single straightforward patient-report items. #### 2.3.1. Average Pain Intensity Rating Pain ratings for current, least, average, and worst pain were summed to yield an aggregate pain intensity score. ### 2.4. Cluster Variable Measures #### 2.4.1. Acceptance and Action Questionnaire (AAQ) The AAQ [20] is a 9-item measure of experiential avoidance, that is, an unwillingness to remain in contact with distressing private experiences (body sensations, emotions, and thoughts) and the inclination to alter the form or frequency of these experiences. It yields a single-factor solution and is correlated with a wide range of negative behavioural and physical health outcomes [20]. #### 2.4.2. Anxiety Sensitivity Index (ASI) The ASI [21] is a 16-item measure of the fear of anxiety-related symptoms comprising three factors: fear of the somatic symptoms of anxiety; fear of mental incapacitation (cognitive dyscontrol); and fear of negative social repercussions of anxiety [20]. These factors can be summed for a total score. Each item is rated on a five-point Likert scale ranging from 0 (very little) to 4 (very much). The instrument’s psychometric properties and predictive validity have been well established [22, 23]. #### 2.4.3. Frost Multidimensional Perfectionism Scale (FMPS) The FMPS [24] contains subscales measuring six different dimensions of perfectionism. In the present study, we used the total score with the parental standards and criticism subscales omitted. Research suggests that the concerns about mistakes and doubts about actions subscales are related to negative affectivity and reflect “maladaptive” perfectionism, while the high standards and need for organization subscales are unrelated or negatively related to negative affectivity [25–27].
#### 2.4.4. Penn State Worry Questionnaire (PSWQ) The PSWQ is a 16-item measure of the frequency and intensity of worry that yields a single score [28]. The PSWQ has a single-factor structure and good predictive validity [29]. #### 2.4.5. Reactions to Relaxation and Arousal Questionnaire (RRAQ) The RRAQ is a nine-item, factor-analytically derived measure of fear of relaxation [30]. Participants rate the applicability and accuracy of each item from 1 (not at all) to 5 (very much so). This measure has high test-retest reliability and strong convergent and discriminant validity [31]. ### 2.5. Dependent Outcome Measures #### 2.5.1. Depression Anxiety Stress Scales-Short Form (DASS-SF) The DASS-SF [32] is a 21-item self-report questionnaire yielding separate scores for depression, anxiety, and stress over the previous week. This measure has good to excellent psychometric properties [33]. #### 2.5.2. Health Assessment Questionnaire Disability Index (HAQ-DI) This questionnaire is an assessment for patients with RA in which patients report the amount of difficulty they have performing specific activities (dressing and grooming, arising, eating, walking, hygiene, reach, grip, and common daily activities). Each question is scored from 0 to 3 based on whether the patient has no difficulty with the activity (0) or the activity cannot be done at all (3). The construct, convergent, and predictive validity and sensitivity to change have also been established in numerous observational studies and clinical trials [34]. The HAQ-DI was scored with the standard scoring methods, whereby the highest subcategory score from each category was used, the use of aids/devices or help was adjusted for, and the summed category scores were divided by the number of categories answered (see the scoring sketch at the end of this section). #### 2.5.3. Pain Anxiety Symptom Scale (PASS-20) The PASS-20 is designed to measure fear of pain. This measure includes 4 subscales: avoidance, cognitive anxiety, fearful thinking, and physiological anxiety. The PASS-20 has demonstrated good psychometric properties and is highly correlated with its longer version [35]. #### 2.5.4. Pain Catastrophizing Scale (PCS) The PCS contains 13 items assessing the tendency to misinterpret and exaggerate the threat value of pain sensations. It has good psychometric properties and includes 3 main factors: rumination, magnification, and helplessness [36]. #### 2.5.5. 36-Item Short Form Health Survey (SF-36) The SF-36 is a 36-item self-report measure that assesses eight domains of health-related quality of life. These domains include the following: (1) limitations in physical functioning; (2) social limitations due to emotional or physical problems; (3) role limitations due to physical health problems; (4) role limitations due to emotional health problems; (5) general mental health; (6) bodily pain; (7) vitality; (8) general health perceptions [37]. The SF-36 has acceptable psychometric properties [38]. The SF-36 can also be scored based on physical and mental components; the current study used the individual subscale scores for consistency in using total or subscale scores.
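To make the HAQ-DI scoring rules above concrete, here is a minimal sketch of the standard scoring procedure. The category names, the data layout, and the `haq_di` helper are hypothetical illustrations rather than part of the original study; the published HAQ-DI scoring manual remains the authoritative reference.

```python
# Minimal sketch of standard HAQ-DI scoring, assuming a simple data layout.
# Category and variable names here are hypothetical; the published HAQ-DI
# scoring manual is the authoritative reference.

HAQ_CATEGORIES = [
    "dressing_grooming", "arising", "eating", "walking",
    "hygiene", "reach", "grip", "common_activities",
]

def haq_di(item_scores, aids_or_help=frozenset()):
    """item_scores: {category: [0-3 item ratings]}; unanswered categories omitted.
    aids_or_help: categories where aids/devices or help from others were used."""
    category_scores = []
    for cat in HAQ_CATEGORIES:
        items = item_scores.get(cat)
        if not items:          # skip categories with no answered items
            continue
        score = max(items)     # the highest item score represents the category
        if cat in aids_or_help:
            score = max(score, 2)   # aid/help adjustment raises the score to >= 2
        category_scores.append(score)
    if not category_scores:
        raise ValueError("no categories answered")
    # Standard HAQ-DI: mean of category scores over categories answered (0-3)
    return sum(category_scores) / len(category_scores)

# Example: mild difficulty in most categories, uses a cane for walking
print(haq_di(
    {"dressing_grooming": [1, 0], "walking": [1, 2], "grip": [0, 1]},
    aids_or_help={"walking"},
))  # -> (1 + 2 + 1) / 3 = 1.33
```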
### 2.6. Statistical Analysis A two-step cluster analysis was performed using SPSS 23 to identify and classify observations into two or more mutually exclusive groups, where members of each group share properties in common. Five dispositional trait-like variables were used to cluster the observations: experiential avoidance, fear of relaxation, anxiety sensitivity, perfectionism, and worrying, based on the AAQ, RRAQ, ASI, FMPS, and PSWQ measures, respectively. The log-likelihood distance measure was used to compute the distance between clusters, with subjects assigned to the cluster yielding the largest likelihood. No restrictions were set on the number of clusters, and the Bayesian information criterion was used to judge the adequacy of the final solution. Differences in sample demographic characteristics were compared according to cluster membership, using independent samples t-tests for continuous variables and χ² tests for categorical variables, in order to characterize differences between the resulting clusters. A multivariate analysis of covariance (MANCOVA) was conducted on outcome measures including mood (DASS-SF), pain catastrophizing (PCS), fear of pain (PASS), quality of life (SF-36), and disability (HAQ-DI) according to cluster membership. Any significant difference in demographic characteristics between clusters was entered as a covariate in the MANCOVA. Pairwise comparisons were conducted with Bonferroni adjustment. SPSS version 23.0 (Chicago, IL) was used for all tests, with the significance level set at α = 0.05; all tests were two-tailed.
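For readers who want to reproduce this pipeline outside SPSS, the sketch below shows a rough open-source analog. SPSS’s TwoStep algorithm itself is not available in Python, so a Gaussian mixture model selected by BIC stands in for likelihood-based clustering with BIC model selection; the input file and all column names (`aaq`, `years_since_dx`, and so on) are hypothetical.

```python
# Rough Python analog of the analysis pipeline described above, for readers
# without SPSS. A Gaussian mixture model selected by BIC stands in for the
# TwoStep algorithm's likelihood-based clustering with BIC model selection.
# The CSV file and all column names are hypothetical.
import pandas as pd
from scipy import stats
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("ra_questionnaires.csv")  # hypothetical data file
cluster_vars = ["aaq", "rraq", "asi", "fmps", "pswq"]
X = StandardScaler().fit_transform(df[cluster_vars])

# Choose the number of clusters by BIC (candidate solutions of 2-5 clusters,
# so the downstream two-group comparisons remain well defined).
models = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(2, 6)}
best_k = min(models, key=lambda k: models[k].bic(X))
df["cluster"] = models[best_k].predict(X)

# Demographics: independent-samples t-test (continuous), chi-square (categorical).
g1 = df[df["cluster"] == 0]
g2 = df[df["cluster"] == 1]
print(stats.ttest_ind(g1["years_since_dx"], g2["years_since_dx"]))
print(stats.chi2_contingency(pd.crosstab(df["cluster"], df["sex"]))[1])  # p-value

# MANCOVA: outcome measures by cluster, adjusting for years since diagnosis.
formula = ("dass_dep + dass_anx + dass_stress + pcs_total + pass_total "
           "+ haq_di ~ C(cluster) + years_since_dx")
print(MANOVA.from_formula(formula, data=df).mv_test())
```

Bonferroni-adjusted pairwise comparisons of the individual outcomes would follow as per-outcome ANCOVAs; they are omitted here for brevity, and cluster labels and exact statistics will of course differ from the SPSS TwoStep solution.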
## 3. Results A total of 300 individuals with RA were eligible for inclusion, of whom 227 agreed to participate in the study and completed the questionnaires (Figure 1). The mean age of the sample was 57.8 years (SD = 14.4) and the majority of participants were female (75.7%). Table 1 shows additional sociodemographic and clinical characteristics of the sample.

Table 1: Demographic and clinical characteristics of the study sample and cluster subgroups. Significant values are shown in bold.

| | Study population | Cluster 1 | Cluster 2 | p (between clusters) |
| --- | --- | --- | --- | --- |
| N | 227 | 85 | 142 | |
| Mean age (SD) | 57.8 (14.4) | 58.3 (14.7) | 57.5 (14.4) | 0.672 |
| Sex (male %) | 24.3 | 24.7 | 24.1 | 0.512 |
| Mean years of education (SD) | 13.0 (3.3) | 12.5 (3.6) | 13.3 (3.1) | 0.094 |
| Mean years since RA diagnosis | 13.2 (11.0) | 15.7 (12.2) | 11.6 (9.9) | **0.006** |
| Relationship status (%) | | | | 0.841 |
| Single | 11.1 | 9.5 | 12.1 | |
| Married or in a serious relationship | 74.7 | 76.2 | 73.8 | |
| Divorced, separated, widowed | 14.2 | 14.3 | 14.2 | |
| Average pain intensity (SD) | 3.8 (2.2) | 4.0 (2.0) | 3.7 (2.3) | 0.092 |
| AAQ (SD) | 28.3 (7.5) | 32.7 (6.1) | 25.7 (7.0) | **0.001** |
| RRAQ (SD) | 12.8 (5.4) | 17.0 (5.8) | 10.2 (3.1) | **0.001** |
| ASI total (SD) | 15.2 (10.8) | 23.8 (10.8) | 10.0 (6.9) | **0.001** |
| FMPS total (SD) | 74.0 (16.0) | 86.1 (13.9) | 66.7 (12.5) | **0.001** |
| PSWQ (SD) | 40.9 (12.9) | 51.2 (12.4) | 34.7 (8.9) | **0.001** |

AAQ: Acceptance and Action Questionnaire; ASI: Anxiety Sensitivity Index; FMPS: Frost Multidimensional Perfectionism Scale; HAQ: Health Assessment Questionnaire; PSWQ: Penn State Worry Questionnaire; RRAQ: Reactions to Relaxation and Arousal Questionnaire; SD: standard deviation.

Figure 1: Participant flowchart.

The two-step cluster analysis of the personality questionnaires was conducted with no exclusion of cases. The cluster analysis resulted in an optimal grouping of two clusters (change in Schwartz’s Bayesian criterion = −152.1; distance measures ratio = 3.0). The two clusters significantly differed from each other on all clustering variables (see Table 1). Cluster 1 (n = 85) was characterized by patients scoring significantly higher on experiential avoidance (AAQ), fear of relaxation (RRAQ), anxiety sensitivity (ASI), perfectionism (FMPS), and worrying (PSWQ) as compared to Cluster 2 (n = 142).
Demographic characteristics were compared between the two clusters, and it was found that patients in Cluster 1 had been diagnosed with RA for a significantly greater number of years than patients in Cluster 2 (p = 0.006). The remaining demographic variables (age, sex, education, relationship status, and average pain intensity) were comparable between the two clusters (Table 1). The two clusters were then compared through a MANCOVA controlling for mean time since RA diagnosis, with Bonferroni correction. Pairwise comparisons revealed significant differences between Cluster 1 and Cluster 2 for all mood (DASS-SF), catastrophizing (PCS), and pain anxiety sensitivity (PASS) subscales. Cluster 1 reported significantly higher scores on these measures of distress and cognitive aspects related to pain. There were no significant differences between the clusters for quality of life or disability (see Table 2).

Table 2: MANCOVA adjusted for years since RA diagnosis between cluster subgroups. Significant values are shown in bold.

| | Cluster 1 mean (SE) | Cluster 2 mean (SE) | Mean difference, Cluster 1 − Cluster 2 (SE) | p |
| --- | --- | --- | --- | --- |
| **Disability and quality of life** | | | | |
| HAQ total | 1.08 (0.97) | 1.08 (0.97) | 0.03 (0.10) | 0.749 |
| SF-36 physical functioning | 19.63 (0.62) | 18.95 (0.49) | 0.68 (0.80) | 0.540 |
| SF-36 role physical | 5.63 (0.18) | 5.35 (0.14) | 0.29 (0.23) | 0.123 |
| SF-36 bodily pain | 6.27 (0.23) | 6.25 (0.18) | 0.02 (0.30) | 0.912 |
| SF-36 general health | 15.82 (0.30) | 15.96 (0.24) | −0.13 (0.39) | 0.894 |
| SF-36 vitality | 15.08 (0.26) | 15.59 (0.20) | −0.52 (0.33) | 0.269 |
| SF-36 social function | 5.96 (0.31) | 6.35 (0.24) | −0.39 (0.40) | 0.433 |
| SF-36 role emotional | 4.73 (0.15) | 4.70 (0.11) | 0.03 (0.19) | 0.489 |
| SF-36 mental health | 20.99 (0.25) | 21.33 (0.20) | −0.35 (0.32) | 0.331 |
| SF-36 reported health | 2.78 (0.10) | 2.99 (0.8) | −0.21 (0.12) | 0.209 |
| **Distress and coping** | | | | |
| DASS depression | 4.78 (0.34) | 2.43 (0.26) | 2.4 (0.4) | **0.001** |
| DASS anxiety | 5.88 (0.39) | 3.28 (0.30) | 2.6 (0.5) | **0.001** |
| DASS stress | 5.22 (0.33) | 2.60 (0.25) | 2.6 (0.4) | **0.001** |
| PASS escape avoidance | 11.00 (0.62) | 8.07 (0.48) | 2.9 (0.8) | **0.001** |
| PASS cognitive anxiety | 11.01 (0.60) | 6.74 (0.47) | 4.3 (0.8) | **0.001** |
| PASS fearful thinking | 7.53 (0.61) | 3.19 (0.47) | 4.3 (0.8) | **0.001** |
| PASS physiological anxiety | 5.64 (0.46) | 2.69 (0.36) | 2.9 (0.6) | **0.001** |
| PCS rumination | 11.20 (0.53) | 8.11 (0.41) | 3.1 (0.7) | **0.001** |
| PCS magnification | 6.24 (0.23) | 4.43 (0.17) | 1.8 (0.3) | **0.001** |
| PCS helplessness | 11.15 (0.41) | 7.92 (0.31) | 3.2 (0.5) | **0.001** |

DASS: Depression Anxiety Stress Scale; HAQ-DI: Health Assessment Questionnaire-Disability Index; PASS: Pain Anxiety Sensitivity Scale; PCS: Pain Catastrophizing Scale; RA: rheumatoid arthritis; SE: standard error; SF-36: 36-Item Short Form Health Survey.

## 4. Discussion The present study aimed to determine if patients with RA could be differentiated based on dispositional affect. Our second aim was to determine if mood, pain catastrophizing, fear of pain, disability, and quality of life varied as a function of these patient groupings. Participants were divided into two meaningful clusters: one group (Cluster 1) composed of patients who reported significantly higher scores on all dispositional variables measured, including experiential avoidance, fear of relaxation, anxiety sensitivity, perfectionism, and worrying, and a second cluster of patients (Cluster 2) who scored significantly lower on each of these personality measures.
Results also confirmed that mood, pain catastrophizing, and fear of pain measures varied systematically with the dispositional variables studied: those in Cluster 1 demonstrated significantly worse scores on mood, pain catastrophizing, and fear of pain compared to Cluster 2, while controlling for demographic differences between clusters. There were no significant differences between clusters on disability or quality of life measures. Our findings revealed that the subset of patients with RA in our sample who reported higher scores on a number of dispositional variables experienced worse mood, including increased depressive, anxiety, and stress symptoms, as well as increased cognitions of pain catastrophizing and fear of pain, as shown through higher scores on each pain catastrophizing and pain anxiety symptom subscale. Our results suggest that patients with RA who show increased endorsement of the cluster of dispositional variables measured within our study may represent a group of patients who experience increased distress, pain catastrophizing, and fear of pain when living with their chronic health condition. Notably, the subset of patients reporting increased endorsement of dispositional affect encompassed fewer patients (n = 85) than the cluster of patients who reported levels of these factors (n = 142) more in line with normative means and community samples [39–41]. However, this group of patients endorsing a complex set of dispositional characteristics and increased difficulties in mood, pain catastrophizing, and fear of pain represents a large number of patients with RA experiencing psychological concerns (37% of our sample). This prevalence is comparable to that in other samples of patients with chronic pain, specifically fibromyalgia, where one study found that 32% of patients displayed elevated mood difficulties, increased pain catastrophizing, and low levels of perceived control over pain [42]. Specific trait-like characteristics including experiential avoidance, fear of relaxation, anxiety sensitivity, perfectionism, and worrying have been linked to a variety of negative outcomes in patients with chronic pain [12, 13, 15, 16, 42]. Patients with RA in Cluster 1 of our sample scored significantly higher on each of these dispositional variables, which have been associated with poor mood, catastrophizing, worse functionality, and worse subjective state of health [43–45]. A number of studies have considered aspects of personality in patients with chronic pain, yet no studies have demonstrated how patients with RA experiencing chronic pain can be clustered into subgroups based on scoring patterns across a variety of dispositional variables. Two previous studies have clustered patients with fibromyalgia based on neurobiological, personality, psychological, and cognitive characteristics. In the first study, cluster analyses classified 97 patients based on anxiety, depression, catastrophizing, control over pain, pain threshold, and multiple random-staircase pressure-pain sensitivity determination [46]. Three subsets of patients were identified through cluster analysis. When considering the psychological and cognitive factors from these results, one group was characterized by patients with the highest levels of anxiety, depression, and catastrophizing, and the lowest levels of control over pain.
Of the remaining two clusters, one scored moderately on all variables while the other had the lowest scores on anxiety, depression, and catastrophizing, and the highest control over pain [46]. It was hypothesized that the cluster with the highest levels of anxiety, depression, and catastrophizing and low control over pain may represent the common presentation of fibromyalgia in tertiary care settings. Furthermore, within this study, quality of life (subscales of the SF-36) did not significantly differ between clusters. Our findings are similar to those of Giesecke et al. (2003) in that Cluster 1 of our sample comprised patients who reported significantly greater symptoms of anxiety, depression, and catastrophizing in comparison to Cluster 2. There was also no difference between our clusters of patients on the SF-36 subscales. The SF-36 measures a number of factors; thus, it may not capture differences in patient distress large enough to distinguish subgroups of fibromyalgia [46] or RA patients. Furthermore, the lack of differences in quality of life and in disability between the clusters may be due to the cross-sectional nature of the current study. It may be that time has a strong influence on these two factors, and a longitudinal study is needed to capture this effect. Mehta et al. [16] conducted a longitudinal study examining the effect of dispositional traits such as anxiety sensitivity and experiential avoidance on long-term disability among individuals with chronic pain. The study found that individuals with high levels of these dispositional variables had significantly higher levels of long-term disability compared to those with lower levels of dispositional affect [15]. A second study clustered 774 patients with fibromyalgia, some of whom were experiencing chronic pain and a comorbid rheumatic disorder [42]. Cluster analysis was used to group patients based on personality traits (neuroticism, extraversion, agreeableness, openness to experience, and conscientiousness). This study divided patients into two clusters. The first cluster was characterized by maladaptiveness, whereby patients in this cluster were described as being more likely to experience affective distress and to manage social conflicts poorly. These patients scored significantly higher on neuroticism and lower on extraversion, openness to experience, agreeableness, and conscientiousness in comparison to the second cluster [42]. Multivariate analyses comparing the two clusters found that the first cluster, characterized by maladaptiveness, had significantly higher scores for depression, anxiety, and each pain catastrophizing subscale. These significant between-cluster differences in depression, anxiety, and the pain catastrophizing rumination subscale were also present at six-month follow-up [42]. Our results are generally in line with the findings of Torres et al. (2013), as our study also resulted in two patient groups where the cluster that endorsed higher levels of dispositional affect also exhibited increased distress, pain catastrophizing, and fear of pain. Specifically, Cluster 1 of our sample and Torres et al.
(2013) reported significantly higher scores for depression, anxiety, and all pain catastrophizing subscales, suggesting that, in both our sample of RA patients and the fibromyalgia sample, lower mood co-occurred with ruminative styles that have been associated with magnifying the threat of pain and feeling helpless [12]. Our study contributes to the growing interest among researchers in investigating dispositional affect and trait-like features simultaneously, presenting clusters of personality factors rather than considering variables in isolation from one another. Our results provide an understanding of how mood and cognitions associated with pain (pain catastrophizing and fear of pain) may be impacted by a number of dispositional variables in patients with RA. Subgroups of patients with RA characterized by dispositional affect had not been previously studied, yet specific personality factors have been associated with psychopathology and coping difficulties in other patient samples [12, 47, 48]. While treatment plans are individualized, intervention studies have found that patients with RA experiencing increased distress benefit from psychological interventions [49, 50]. Providing access to these interventions could allow for targeted approaches to manage poor mood and the problematic coping strategies that may be used by patients reporting high scores on the identified dispositional variables. Furthermore, interventions could be developed and targeted to address distinct clusters of patients with RA and within other chronic illnesses. The development of screening tools has been one approach suggested to initiate the assessment and subsequent treatment of psychological comorbidity in patients with RA [12, 50]. Activity pacing is another pain management strategy that may be applied to RA patients who demonstrate specific patterns of dispositional affect. Pacing has been recommended for patients with chronic pain who tend to display obsessive personality traits including psychological inflexibility, fear of relaxation, perfectionism, and experiential avoidance [18]. However, in a small sample of overactive chronic pain patients, applying pacing strategies and enacting behaviour change was difficult when only education about pacing was provided [18]. Specific limitations should be considered when interpreting the findings from this study. First, there are inherent limitations of a cross-sectional design, which prevent causal relationships from being determined. Second, the personality factors considered were based on a number of different outcome measures rather than one specific personality measure such as the NEO Five-Factor Inventory, and thus did not encompass all relevant variables that have previously been studied and linked to mood in chronic pain. Nonetheless, the dispositional affect measures administered allowed for the analysis of a potentially challenging combination of variables. Further, an important limitation to consider when interpreting our findings is the lack of an objective measure of inflammation, which meant that differences in inflammation between patients could not be adjusted for in the analyses. Additionally, though the chronicity of pain was controlled for in the MANCOVA, the study demonstrated a significant difference between the two clusters in chronicity of pain. Hence, it may be that the groups differed from each other not only on the dispositional factors but also on this demographic factor.
Finally, sample selection bias cannot be ruled out, as our sample was recruited from a single-site tertiary RA clinic, which may compromise the generalizability of our findings. ## 5. Conclusions In conclusion, the present study identified subgroups of patients with RA based on a number of dispositional variables. The cluster characterized by significantly greater reports of dispositional affect was comprised of RA patients who experienced significantly more depression, anxiety, and stress symptoms, in addition to heightened pain anxiety/fear of pain and pain catastrophizing. Ensuring that patients have access to qualified providers of appropriate multimodal treatment may be beneficial for patients with RA experiencing specific difficulties associated with their pain or adjustment, including distress, pain catastrophizing, and fear of pain. Clinicians should consider that patients with specific dispositional affect may benefit from referrals for additional social support and programs that target the range of factors included in our study, beginning when they are diagnosed with RA, to promote positive adjustment. Future research replicating our findings in RA patients and other samples of chronic pain patients should be carried out so that management programs can be developed to address specific needs of patients, such as improving mood and decreasing ruminative styles such as pain catastrophizing and fear of pain. --- *Source: 1024985-2016-03-03.xml*
1024985-2016-03-03_1024985-2016-03-03.md
46,371
Dispositional Affect in Unique Subgroups of Patients with Rheumatoid Arthritis
Danielle B. Rice; Swati Mehta; Janet E. Pope; Manfred Harth; Allan Shapiro; Robert W. Teasell
Pain Research and Management (2016)
Other
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2016/1024985
1024985-2016-03-03.xml
--- ## Abstract Background. Patients with rheumatoid arthritis may experience increased negative outcomes if they exhibit specific patterns of dispositional affect.Objective. To identify subgroups of patients with rheumatoid arthritis based on dispositional affect. The secondary objective was to compare mood, pain catastrophizing, fear of pain, disability, and quality of life between subgroups.Methods. Outpatients from a rheumatology clinic were categorized into subgroups by a cluster analysis based on dispositional affect. Differences in outcomes were compared between clusters through multivariate analysis of covariance.Results. 227 patients were divided into two subgroups. Cluster 1 (n = 85) included patients reporting significantly higher scores on all dispositional variables (experiential avoidance, anxiety sensitivity, worry, fear of pain, and perfectionism; all p < 0.001) compared to patients in Cluster 2 (n = 142). Patients in Cluster 1 also reported significantly greater mood impairment, pain anxiety sensitivity, and pain catastrophizing (all p < 0.001). Clusters did not differ on quality of life or disability.Conclusions. The present study identifies a subgroup of rheumatoid arthritis patients who score significantly higher on dispositional affect and report increased mood impairment, pain anxiety sensitivity, and pain catastrophizing. Considering dispositional affect within subgroups of patients with RA may help health professionals tailor interventions for the specific stressors that these patients experience. --- ## Body ## 1. Introduction Rheumatoid arthritis (RA) is an autoimmune disease characterized by joint swelling and tenderness at multiple sites in the body. These symptoms have a disabling effect on an individuals’ mental and physical health [1]. An international study examining data from 32 countries, the QUEST-RA study, found that more than a third of patients reported work related disability due to RA [2]. Furthermore, health care costs of RA management remain high even after major advancements in treatment. Hallert et al. (2014) estimated a mean total cost of EUR 14,768 per patient in their first year of being diagnosed with RA and EUR 18,438 per year by year six [3].Individuals with RA experience significant levels of chronic pain that negatively impacts multiple quality of life domains [4]. The related disability has been linked to several psychological contributors including depression, anxiety, and stress [5]. Epidemiological and clinical studies have consistently revealed a higher prevalence of depressive and anxiety disorders in patients with RA than in the general population [1, 6–8]. Presence of psychiatric symptoms among RA individuals has been shown to increase the perception of pain, use of analgesics, and work disability [7]. The comorbidity between chronic pain and depression has been established among several studies and management strategies have been implemented in clinical practice guidelines [9].However, aspects of personality are also increasingly being viewed as important by pain researchers, clinicians, and patients with chronic pain. Newth and Delongis (2004) found that personality was a strong moderator of coping after chronic pain in RA individuals [10]. Specific personality traits such as neuroticism predicted both day to day reports of illness symptoms and the subsequent accuracy with which symptoms are recalled over the same period [11]. 
A recent review concluded that specific dispositional variables including neuroticism, anxiety sensitivity, and experiential avoidance can predispose individuals with chronic pain to use ineffective strategies in coping [12]. Other studies looking at general chronic pain populations have found dispositional variables including maladaptive perfectionism [13], experiential avoidance [12, 14], anxiety sensitivity [15, 16], and psychological inflexibility [17] negatively related to patients clinical outcomes including mood and disability.These dispositional variables have also been discussed in a qualitative study that included eight overactive chronic pain patients [18]. All patients believed that their tendency to do too much was related to their personality and five of the eight participants noted that their over activity resulted in depressed mood, anxiety, and/or irritability. These patients identified aspects of psychological inflexibility including experiential avoidance and reported being perfectionists and unable to relax and described themselves as having obsessive personality traits [18]. To the best of the authors’ knowledge these aspects of personality have not yet been studied specifically in patients diagnosed with RA, even though these patients experience unique difficulties in comparison to patients with a diagnosis of chronic soft tissue pain [19]. Examination of specific dispositional variables and how they affect clinical outcomes among individuals with RA may be important in screening patients at risk for developing more optimized management plans.The present study represents a preliminary step in identifying subgroups among persons with RA based on dispositional affect. The aims of the study were to use a cluster analysis to identify homogenous pain behavior subgroups among persons with RA through a number of dispositional personality variables that have been previously linked to maladaptive coping styles [12]. The secondary aim was to determine if the subgroups identified differed on measures of mood, pain catastrophizing, fear of pain, disability, and quality of life. ## 2. Methods ### 2.1. Participants Participants included patients with RA diagnosed by a rheumatologist (using the American College of Rheumatology Criteria) that scheduled a regular outpatient clinic appointment and were recruited over a 20-month period from an academic rheumatology clinic in London, Ontario (St. Joseph’s Health Care London, associated with Western University). Patients at least 18 years who had a diagnosis of RA and self-reported pain secondary to RA for greater than three months were eligible for inclusion. Given that this study involved the completion of questionnaire booklets, exclusion criteria included the inability to read and write in English. Ethics was reviewed and approved by the Office of Research Ethics at the University of Western Ontario in London, Ontario, Canada. All eligible participants signed informed consent prior to completing any questionnaires for the study. ### 2.2. Procedures Patients who met the inclusion criteria and agreed to participate were referred to the research coordinator by their primary physician. The research coordinator provided potential participants with the letter of information and consent form. Patients were made aware that their decision to participate in the study would in no way interfere with their standard care at the hospital. 
All patients received individualized pharmacotherapy and psychotherapy or referrals as seen fit by the multidisciplinary team. Eligible participants were mailed a package introducing the study two weeks prior to their scheduled clinic appointment with their rheumatologist. The package contained the study information letter, a consent form, and the first of two questionnaire booklets. Research assistants followed up with phone calls to all eligible patients to explain the procedures of the study, answer any study related questions, and confirm that the patient was still experiencing pain secondary to RA. Consenting participants completed the first booklet of questionnaires regarding demographics (age, gender, years of education, and relationship status), time since RA diagnosis, and average pain intensity prior to their clinic appointment. Participants were asked to arrive half an hour early to their clinic appointment to provide research assistants with their first questionnaire booklet and complete the second booklet questionnaires that included measures of dispositional affect, pain catastrophizing, fear of pain, quality of life, and disability. One researcher independently entered questionnaire responses into a SPSS database which was then validated by a second researcher. ### 2.3. Demographic Measures Demographic variables including age, sex, years of education, marital status, and years since RA diagnosis were assessed with single straightforward patient-report items. #### 2.3.1. Average Pain Intensity Rating Pain ratings for current, least, average, and worst pain were summed to yield an aggregate pain intensity score. ### 2.4. Cluster Variable Measures #### 2.4.1. Acceptance and Action Questionnaire (AAQ) The AAQ [20] is a 9-item measure of experiential avoidance, that is, an unwillingness to remain in contact with distressing private experiences (body sensations, emotions, and thoughts) and the inclination to alter the form or frequency of these experiences. It yields a single factor solution and is correlated with a wide range of negative behavioural and physical health outcomes [20]. #### 2.4.2. Anxiety Sensitivity Index (ASI) The ASI [21] is a 16-item measure of the fear of anxiety-related symptoms comprised of three factors: fear of the somatic symptoms of anxiety; fear of mental incapacitation (cognitive dyscontrol); and fear of negative social repercussions of anxiety [20]. These factors can be summed for a total score. Each item is rated on a five-point Likert scale ranging from 0 (very little) to 4 (very much). The instrument’s psychometric properties and predictive validity have been well established [22, 23]. #### 2.4.3. Frost Multidimensional Perfectionism Scale (FMPS) The FMPS [24] contains subscales measuring six different dimensions of perfectionism. In the present study, we used the total score with the parental standards and criticism subscales omitted. Research suggests that the concerns about mistakes and doubts about actions subscales are related to negative affectivity and reflect “maladaptive” perfectionism, while the high standards and need for organization subscales are unrelated or negatively related to negative affectivity [25–27]. #### 2.4.4. Penn State Worry Questionnaire (PSWQ) The PSWQ is a 16-item measure of the frequency and intensity of worry that yields a single score [28]. The PSWQ is a single factor structure and has good predictive validity [29]. #### 2.4.5. 
Reactions to Relaxation and Arousal Questionnaire (RRAQ) The RRAQ is a nine-item factor analytically derived measure of fear of relaxation [30]. Participants rate the applicableness and accuracy of each item from 1 (not at all) to 5 (very much so). This measure has high retest reliability and strong convergent and discriminant validity [31]. ### 2.5. Dependent Outcome Measures #### 2.5.1. Depression Anxiety Stress Scales-Short Form (DASS-SF) The DASS-SF [32] is a 21-item self-report questionnaire yielding separate scores for depression, anxiety, and stress over the previous week. This measure has good to excellent psychometric properties [33]. #### 2.5.2. Health Assessment Questionnaire Disability Index (HAQ-DI) This questionnaire is an assessment for patients with RA where patients report the amount of difficulty they have performing specific activities (dressing and grooming, arising, eating, walking, hygiene, reach, grip, and common daily activities). Each question is scored from 0 to 3 based on whether the patient has no difficulty with the activity (0) or the activity cannot be done at all (3). The construct, convergent, and predictive validity and sensitivity to change have also been established in numerous observational studies and clinical trials [34]. The HAQ-DI was scored with the standard scoring methods whereby the highest subcategory score from each category was used, the use of aids/devices or help was adjusted for, and the summed category scores were divided by the number of categories answered. #### 2.5.3. Pain Anxiety Symptom Scale (PASS-20) The PASS-20 is designed to measure fear of pain. This measure includes 4 subscales: avoidance, cognitive anxiety, fearful thinking, and physiological anxiety. PASS-20 has demonstrated good psychometric properties and is highly correlated with its longer version [35]. #### 2.5.4. Pain Catastrophizing Scale (PCS) The PCS contains 13 items assessing the tendency to misinterpret and exaggerate the threat value of pain sensations. It has good psychometric properties and includes 3 main factors: rumination, magnification, and helplessness [36]. #### 2.5.5. 36-Item Short Form Health Survey (SF-36) The SF-36 is a 36-item self-report measure that assesses eight domains of health related quality of life. These domains include the following: (1) limitations in physical functioning; (2) social limitations due to emotional or physical troubles; (3) role limitations due to physical health problems; (4) role limitations due to emotional health problems; (5) general mental health; (6) bodily pain; (7) vitality; (8) general health perceptions [37]. The SF-36 has acceptable psychometric properties [38]. The SF-36 can also be scored based on physical and mental components; the current study used individualized scores for each subscale for consistency in using total or subscale scores. ### 2.6. Statistical Analysis A two-step cluster analysis was performed using SPSS 23 to identify and classify observations into two or more mutually exclusive groups, where members of the groups share properties in common. Five dispositional trait-like variables were used to cluster the observations: experiential avoidance, fear of relaxation, anxiety sensitivity, perfectionism, and worrying based on the AAQ, RRAQ, ASI, FMPS, and PSWQ measures, respectively. The log-likelihood distance measure was used to compute likelihood distance between clusters with subjects assigned to the cluster leading to the largest likelihood. 
No restrictions were set for the number of clusters and the Bayesian information criterion was used to judge adequacy of the final solution. Differences in sample demographic characteristics were compared according to cluster membership using independent samplest-tests and χ 2 tests for categorical variables in order to characterize differences between the resulting clusters. A multivariate analysis of covariance (MANCOVA) was conducted on outcome measures including mood (DASS-SF), pain catastrophizing (PCS), fear of pain (PASS), quality of life (SF-36), and disability (HAQ-DI) according to cluster membership. Any significant difference on demographic characteristics between clusters was entered as covariates in the MANCOVA. Pairwise comparisons were conducted with Bonferroni adjustment. SPSS version 23.0 (Chicago, IL) was used for all tests performed, with the significance level set at alpha 0.05 and all tests were two-tailed. ## 2.1. Participants Participants included patients with RA diagnosed by a rheumatologist (using the American College of Rheumatology Criteria) that scheduled a regular outpatient clinic appointment and were recruited over a 20-month period from an academic rheumatology clinic in London, Ontario (St. Joseph’s Health Care London, associated with Western University). Patients at least 18 years who had a diagnosis of RA and self-reported pain secondary to RA for greater than three months were eligible for inclusion. Given that this study involved the completion of questionnaire booklets, exclusion criteria included the inability to read and write in English. Ethics was reviewed and approved by the Office of Research Ethics at the University of Western Ontario in London, Ontario, Canada. All eligible participants signed informed consent prior to completing any questionnaires for the study. ## 2.2. Procedures Patients who met the inclusion criteria and agreed to participate were referred to the research coordinator by their primary physician. The research coordinator provided potential participants with the letter of information and consent form. Patients were made aware that their decision to participate in the study would in no way interfere with their standard care at the hospital. All patients received individualized pharmacotherapy and psychotherapy or referrals as seen fit by the multidisciplinary team. Eligible participants were mailed a package introducing the study two weeks prior to their scheduled clinic appointment with their rheumatologist. The package contained the study information letter, a consent form, and the first of two questionnaire booklets. Research assistants followed up with phone calls to all eligible patients to explain the procedures of the study, answer any study related questions, and confirm that the patient was still experiencing pain secondary to RA. Consenting participants completed the first booklet of questionnaires regarding demographics (age, gender, years of education, and relationship status), time since RA diagnosis, and average pain intensity prior to their clinic appointment. Participants were asked to arrive half an hour early to their clinic appointment to provide research assistants with their first questionnaire booklet and complete the second booklet questionnaires that included measures of dispositional affect, pain catastrophizing, fear of pain, quality of life, and disability. One researcher independently entered questionnaire responses into a SPSS database which was then validated by a second researcher. ## 2.3. 
## 2.3. Demographic Measures

Demographic variables including age, sex, years of education, marital status, and years since RA diagnosis were assessed with single straightforward patient-report items.

### 2.3.1. Average Pain Intensity Rating

Pain ratings for current, least, average, and worst pain were summed to yield an aggregate pain intensity score.

## 2.4. Cluster Variable Measures

### 2.4.1. Acceptance and Action Questionnaire (AAQ)

The AAQ [20] is a 9-item measure of experiential avoidance, that is, an unwillingness to remain in contact with distressing private experiences (body sensations, emotions, and thoughts) and the inclination to alter the form or frequency of these experiences. It yields a single-factor solution and is correlated with a wide range of negative behavioural and physical health outcomes [20].

### 2.4.2. Anxiety Sensitivity Index (ASI)

The ASI [21] is a 16-item measure of the fear of anxiety-related symptoms comprised of three factors: fear of the somatic symptoms of anxiety; fear of mental incapacitation (cognitive dyscontrol); and fear of negative social repercussions of anxiety [20]. These factors can be summed for a total score. Each item is rated on a five-point Likert scale ranging from 0 (very little) to 4 (very much). The instrument's psychometric properties and predictive validity have been well established [22, 23].

### 2.4.3. Frost Multidimensional Perfectionism Scale (FMPS)

The FMPS [24] contains subscales measuring six different dimensions of perfectionism. In the present study, we used the total score with the parental standards and criticism subscales omitted. Research suggests that the concerns about mistakes and doubts about actions subscales are related to negative affectivity and reflect "maladaptive" perfectionism, while the high standards and need for organization subscales are unrelated or negatively related to negative affectivity [25–27].

### 2.4.4. Penn State Worry Questionnaire (PSWQ)

The PSWQ is a 16-item measure of the frequency and intensity of worry that yields a single score [28]. The PSWQ has a single-factor structure and good predictive validity [29].

### 2.4.5. Reactions to Relaxation and Arousal Questionnaire (RRAQ)

The RRAQ is a nine-item, factor-analytically derived measure of fear of relaxation [30]. Participants rate the applicability and accuracy of each item from 1 (not at all) to 5 (very much so). This measure has high retest reliability and strong convergent and discriminant validity [31].
## 2.5. Dependent Outcome Measures

### 2.5.1. Depression Anxiety Stress Scales-Short Form (DASS-SF)

The DASS-SF [32] is a 21-item self-report questionnaire yielding separate scores for depression, anxiety, and stress over the previous week. This measure has good to excellent psychometric properties [33].

### 2.5.2. Health Assessment Questionnaire Disability Index (HAQ-DI)

This questionnaire is an assessment for patients with RA in which patients report the amount of difficulty they have performing specific activities (dressing and grooming, arising, eating, walking, hygiene, reach, grip, and common daily activities). Each question is scored from 0 to 3 based on whether the patient has no difficulty with the activity (0) or the activity cannot be done at all (3). The construct, convergent, and predictive validity and sensitivity to change have been established in numerous observational studies and clinical trials [34]. The HAQ-DI was scored with the standard scoring method whereby the highest subcategory score from each category was used, the use of aids/devices or help was adjusted for, and the summed category scores were divided by the number of categories answered (a schematic implementation of this scoring appears at the end of this section).

### 2.5.3. Pain Anxiety Symptom Scale (PASS-20)

The PASS-20 is designed to measure fear of pain. This measure includes four subscales: avoidance, cognitive anxiety, fearful thinking, and physiological anxiety. The PASS-20 has demonstrated good psychometric properties and is highly correlated with its longer version [35].

### 2.5.4. Pain Catastrophizing Scale (PCS)

The PCS contains 13 items assessing the tendency to misinterpret and exaggerate the threat value of pain sensations. It has good psychometric properties and includes three main factors: rumination, magnification, and helplessness [36].

### 2.5.5. 36-Item Short Form Health Survey (SF-36)

The SF-36 is a 36-item self-report measure that assesses eight domains of health-related quality of life: (1) limitations in physical functioning; (2) social limitations due to emotional or physical troubles; (3) role limitations due to physical health problems; (4) role limitations due to emotional health problems; (5) general mental health; (6) bodily pain; (7) vitality; and (8) general health perceptions [37]. The SF-36 has acceptable psychometric properties [38]. The SF-36 can also be scored based on physical and mental components; the current study used individual scores for each subscale for consistency in using total or subscale scores.
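To make the HAQ-DI scoring rule from Section 2.5.2 concrete, here is a minimal sketch in Python. It assumes a simplified input format (a dict of 0-3 item ratings per category) and uses the commonly described convention of raising a category score to at least 2 when aids/devices or help were used for that category; the category keys and example data are illustrative assumptions, not the questionnaire's exact items.

```python
# Hypothetical sketch of standard HAQ-DI scoring: take the highest item
# score per category, raise a category to at least 2 when aids/devices or
# help were used for it, then average over the categories answered.

HAQ_CATEGORIES = [
    "dressing_grooming", "arising", "eating", "walking",
    "hygiene", "reach", "grip", "common_activities",
]

def score_haq_di(item_scores, aided_categories=()):
    """item_scores: dict mapping category -> list of 0-3 item ratings
    (an empty list or missing key means the category was not answered).
    aided_categories: categories where aids/devices or help were used."""
    category_scores = []
    for cat in HAQ_CATEGORIES:
        answered = [s for s in (item_scores.get(cat) or []) if s is not None]
        if not answered:
            continue  # skip unanswered categories
        score = max(answered)  # highest subcategory score
        if cat in aided_categories:
            score = max(score, 2)  # adjust for use of aids/devices or help
        category_scores.append(score)
    if not category_scores:
        return None
    return sum(category_scores) / len(category_scores)

# Example: mild difficulty overall, with a walking aid in use.
example = {cat: [1, 0] for cat in HAQ_CATEGORIES}
print(round(score_haq_di(example, aided_categories={"walking"}), 3))  # 1.125
```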
## 2.6. Statistical Analysis

A two-step cluster analysis was performed using SPSS 23 to identify and classify observations into two or more mutually exclusive groups whose members share properties in common. Five dispositional trait-like variables were used to cluster the observations: experiential avoidance, fear of relaxation, anxiety sensitivity, perfectionism, and worrying, based on the AAQ, RRAQ, ASI, FMPS, and PSWQ measures, respectively. The log-likelihood distance measure was used to compute the likelihood distance between clusters, with subjects assigned to the cluster leading to the largest likelihood. No restrictions were set on the number of clusters, and the Bayesian information criterion (BIC) was used to judge the adequacy of the final solution. Differences in sample demographic characteristics were compared according to cluster membership using independent-samples t-tests and χ² tests for categorical variables in order to characterize differences between the resulting clusters. A multivariate analysis of covariance (MANCOVA) was conducted on outcome measures including mood (DASS-SF), pain catastrophizing (PCS), fear of pain (PASS), quality of life (SF-36), and disability (HAQ-DI) according to cluster membership. Any significant difference in demographic characteristics between clusters was entered as a covariate in the MANCOVA. Pairwise comparisons were conducted with Bonferroni adjustment. SPSS version 23.0 (Chicago, IL) was used for all tests performed, with the significance level set at alpha 0.05; all tests were two-tailed.
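SPSS's two-step algorithm itself is not available in common open-source libraries, but the model-selection logic described above (likelihood-based clustering with the number of clusters chosen by the lowest BIC) can be sketched with scikit-learn's Gaussian mixture models. The sketch below is an analogous workflow under that assumption, with synthetic placeholder data standing in for the five scale totals.

```python
# Analogous (not identical) to SPSS two-step clustering: fit Gaussian
# mixture models over a range of cluster counts and keep the solution
# with the lowest Bayesian information criterion (BIC).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Placeholder data: rows are patients, columns are the five cluster
# variables (AAQ, RRAQ, ASI, FMPS, PSWQ totals); real scores would be used.
X = np.vstack([
    rng.normal([33, 17, 24, 86, 51], 5.0, size=(85, 5)),
    rng.normal([26, 10, 10, 67, 35], 5.0, size=(142, 5)),
])

models = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(1, 7)}
bics = {k: m.bic(X) for k, m in models.items()}
best_k = min(bics, key=bics.get)    # lowest BIC wins
labels = models[best_k].predict(X)  # cluster membership per patient
print(best_k, np.bincount(labels))
```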
## 3. Results

A total of 300 individuals with RA were eligible for inclusion, of whom 227 agreed to participate in the study and completed the questionnaires (Figure 1). The mean age of the sample was 57.8 (SD = 14.4), and the majority of participants were female (75.7%). Table 1 shows additional sociodemographic and clinical characteristics of the sample.

Table 1: Demographic and clinical characteristics of the study sample and cluster subgroups.

| | Study population | Cluster 1 | Cluster 2 | p (between clusters) |
|---|---|---|---|---|
| N | 227 | 85 | 142 | |
| Mean age (SD) | 57.8 (14.4) | 58.3 (14.7) | 57.5 (14.4) | 0.672 |
| Sex (male %) | 24.3 | 24.7 | 24.1 | 0.512 |
| Mean years of education (SD) | 13.0 (3.3) | 12.5 (3.6) | 13.3 (3.1) | 0.094 |
| Mean years since RA diagnosis | 13.2 (11.0) | 15.7 (12.2) | 11.6 (9.9) | **0.006** |
| Relationship status (%) | | | | 0.841 |
| — Single | 11.1 | 9.5 | 12.1 | |
| — Married or in a serious relationship | 74.7 | 76.2 | 73.8 | |
| — Divorced, separated, widowed | 14.2 | 14.3 | 14.2 | |
| Average pain intensity (SD) | 3.8 (2.2) | 4.0 (2.0) | 3.7 (2.3) | 0.092 |
| AAQ (SD) | 28.3 (7.5) | 32.7 (6.1) | 25.7 (7.0) | **0.001** |
| RRAQ (SD) | 12.8 (5.4) | 17.0 (5.8) | 10.2 (3.1) | **0.001** |
| ASI total (SD) | 15.2 (10.8) | 23.8 (10.8) | 10.0 (6.9) | **0.001** |
| FMPS total (SD) | 74.0 (16.0) | 86.1 (13.9) | 66.7 (12.5) | **0.001** |
| PSWQ (SD) | 40.9 (12.9) | 51.2 (12.4) | 34.7 (8.9) | **0.001** |

Significant values are shown in bold. AAQ: Acceptance and Action Questionnaire; ASI: Anxiety Sensitivity Index; FMPS: Frost Multidimensional Perfectionism Scale; HAQ: Health Assessment Questionnaire; PSWQ: Penn State Worry Questionnaire; RRAQ: Reactions to Relaxation and Arousal Questionnaire; SD: standard deviation.

Figure 1: Participant flowchart.

The two-step cluster analysis of personality questionnaires was conducted with no exclusion of cases. The cluster analysis resulted in an optimal grouping of two clusters (change in Schwarz's Bayesian criterion = −152.1; distance measures ratio = 3.0). The two clusters significantly differed from each other on all clustering variables (see Table 1). Cluster 1 (n = 85) comprised patients scoring significantly higher on experiential avoidance (AAQ), fear of relaxation (RRAQ), anxiety sensitivity (ASI), perfectionism (FMPS), and worrying (PSWQ) as compared to Cluster 2 (n = 142). Demographic characteristics were compared between the two clusters, and patients in Cluster 1 had been diagnosed with RA for a significantly greater number of years than patients in Cluster 2 (p = 0.006). The remaining demographic variables (age, sex, education, relationship status, and average pain intensity) were comparable between the two clusters (Table 1). The two clusters were compared through a MANCOVA while controlling for mean time since RA diagnosis, with Bonferroni correction; a schematic version of this analysis with placeholder data is sketched below.
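As a hedged illustration of this step, the sketch below runs a MANCOVA-style model in statsmodels, regressing a small battery of outcomes on cluster membership with years since diagnosis as a covariate. The variable names and data are invented placeholders, not the study's dataset.

```python
# Hypothetical MANCOVA sketch: multivariate outcomes regressed on
# cluster membership, adjusted for years since RA diagnosis.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n = 227
df = pd.DataFrame({
    "cluster": rng.choice(["c1", "c2"], size=n),     # cluster membership
    "years_dx": rng.gamma(2.0, 6.0, size=n),         # years since diagnosis
    "dass_dep": rng.normal(4.0, 2.0, size=n),        # placeholder outcomes
    "pcs_rum": rng.normal(10.0, 3.0, size=n),
    "pass_cog": rng.normal(9.0, 3.0, size=n),
})

fit = MANOVA.from_formula(
    "dass_dep + pcs_rum + pass_cog ~ cluster + years_dx", data=df)
print(fit.mv_test())  # Wilks' lambda etc. for the adjusted cluster effect
```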
Pairwise comparisons revealed significant differences between Cluster 1 and Cluster 2 for all mood (DASS-SF), catastrophizing (PCS), and pain anxiety (PASS) subscales. Cluster 1 reported significantly higher scores on these measures of distress and cognitive aspects related to pain. There were no significant differences between the clusters for quality of life or disability (see Table 2).

Table 2: MANCOVA comparing cluster subgroups, adjusted for years since RA diagnosis.

| | Cluster 1 mean (SE) | Cluster 2 mean (SE) | Mean difference between Cluster 1 and Cluster 2 (SE) | p |
|---|---|---|---|---|
| *Disability and quality of life* | | | | |
| HAQ total | 1.08 (0.97) | 1.08 (0.97) | 0.03 (0.10) | 0.749 |
| SF-36 physical functioning | 19.63 (0.62) | 18.95 (0.49) | 0.68 (0.80) | 0.540 |
| SF-36 role physical | 5.63 (0.18) | 5.35 (0.14) | 0.29 (0.23) | 0.123 |
| SF-36 bodily pain | 6.27 (0.23) | 6.25 (0.18) | 0.02 (0.30) | 0.912 |
| SF-36 general health | 15.82 (0.30) | 15.96 (0.24) | −0.13 (0.39) | 0.894 |
| SF-36 vitality | 15.08 (0.26) | 15.59 (0.20) | −0.52 (0.33) | 0.269 |
| SF-36 social function | 5.96 (0.31) | 6.35 (0.24) | −0.39 (0.40) | 0.433 |
| SF-36 role emotional | 4.73 (0.15) | 4.70 (0.11) | 0.03 (0.19) | 0.489 |
| SF-36 mental health | 20.99 (0.25) | 21.33 (0.20) | −0.35 (0.32) | 0.331 |
| SF-36 reported health | 2.78 (0.10) | 2.99 (0.8) | −0.21 (0.12) | 0.209 |
| *Distress and coping* | | | | |
| DASS depression | 4.78 (0.34) | 2.43 (0.26) | 2.4 (0.4) | **0.001** |
| DASS anxiety | 5.88 (0.39) | 3.28 (0.30) | 2.6 (0.5) | **0.001** |
| DASS stress | 5.22 (0.33) | 2.60 (0.25) | 2.6 (0.4) | **0.001** |
| PASS escape avoidance | 11.00 (0.62) | 8.07 (0.48) | 2.9 (0.8) | **0.001** |
| PASS cognitive anxiety | 11.01 (0.60) | 6.74 (0.47) | 4.3 (0.8) | **0.001** |
| PASS fearful thinking | 7.53 (0.61) | 3.19 (0.47) | 4.3 (0.8) | **0.001** |
| PASS physiological anxiety | 5.64 (0.46) | 2.69 (0.36) | 2.9 (0.6) | **0.001** |
| PCS rumination | 11.20 (0.53) | 8.11 (0.41) | 3.1 (0.7) | **0.001** |
| PCS magnification | 6.24 (0.23) | 4.43 (0.17) | 1.8 (0.3) | **0.001** |
| PCS helplessness | 11.15 (0.41) | 7.92 (0.31) | 3.2 (0.5) | **0.001** |

Significant values are shown in bold. DASS: Depression Anxiety Stress Scale; HAQ-DI: Health Assessment Questionnaire-Disability Index; PASS: Pain Anxiety Symptom Scale; PCS: Pain Catastrophizing Scale; RA: rheumatoid arthritis; SE: standard error; SF-36: 36-Item Short Form Health Survey.

## 4. Discussion

The present study aimed to determine whether patients with RA could be differentiated based on dispositional affect. Our second aim was to determine whether mood, pain catastrophizing, fear of pain, disability, and quality of life varied as a function of these patient groupings. Participants were divided into two meaningful clusters: one group (Cluster 1) composed of patients who reported significantly higher scores on all dispositional variables measured, including experiential avoidance, fear of relaxation, anxiety sensitivity, perfectionism, and worrying, and a second cluster (Cluster 2) of patients who scored significantly lower on each of these personality measures. Results also confirmed that mood, pain catastrophizing, and fear of pain systematically varied with the dispositional variables studied, with those in Cluster 1 demonstrating significantly worse scores on mood, pain catastrophizing, and fear of pain compared to Cluster 2, while controlling for differences in demographic variables between clusters.
There were no significant differences found between clusters on disability or quality of life measures.

Our findings revealed that the subset of patients with RA in our sample who reported higher scores on a number of dispositional variables experienced worse mood, including increased depressive, anxiety, and stress symptoms, as well as increased pain catastrophizing and fear of pain, as shown through higher scores on each pain catastrophizing and pain anxiety symptom subscale. Our results suggest that patients with RA who strongly endorse the cluster of dispositional variables measured in our study may represent a group of patients who experience increased distress, pain catastrophizing, and fear of pain when living with their chronic health condition. Notably, the subset of patients reporting increased endorsement of dispositional affect encompassed fewer patients (n = 85) than the cluster of patients who reported levels of these factors (n = 142) closer to normative means and community samples [39–41]. Nevertheless, this group of patients endorsing a complex set of dispositional characteristics and increased difficulties in mood, pain catastrophizing, and fear of pain represents a large number of patients with RA experiencing psychological concerns (37% of our sample). This prevalence is comparable to other samples of patients with chronic pain, specifically fibromyalgia, where one study found that 32% of patients displayed elevated mood difficulties, increased pain catastrophizing, and low levels of perceived control over pain [42].

Specific trait-like characteristics including experiential avoidance, fear of relaxation, anxiety sensitivity, perfectionism, and worrying have been linked to a variety of negative outcomes in patients with chronic pain [12, 13, 15, 16, 42]. Patients with RA in Cluster 1 of our sample scored significantly higher on each of these dispositional variables, which have been associated with poor mood, catastrophizing, worse functionality, and worse subjective state of health [43–45].

A number of studies have considered aspects of personality in patients with chronic pain, yet no study has demonstrated how patients diagnosed with RA and experiencing chronic pain can be clustered into subgroups based on their scoring patterns across a variety of dispositional variables. Two previous studies have clustered patients with fibromyalgia based on neurobiological, personality, psychological, and cognitive characteristics. In the first study, cluster analyses classified 97 patients based on anxiety, depression, catastrophizing, control over pain, pain threshold, and multiple random-staircase pressure-pain sensitivity determination [46]. Three subsets of patients were identified through cluster analysis. When considering the psychological and cognitive factors from these results, one group was characterized by patients with the highest levels of anxiety, depression, and catastrophizing and the lowest levels of control over pain. Of the remaining two clusters, one scored moderately on all variables while the other had the lowest scores on anxiety, depression, and catastrophizing and the highest control over pain [46]. It was hypothesized that the cluster with the highest levels of anxiety, depression, and catastrophizing and low control over pain may represent the common presentation of fibromyalgia in tertiary care settings.
Furthermore, within that study, quality of life (subscales of the SF-36) did not significantly differ between clusters. Our findings resemble those of Giesecke et al. (2003) in that Cluster 1 of our sample comprised patients who reported significantly greater symptoms of anxiety, depression, and catastrophizing in comparison to Cluster 2, and there was likewise no difference between our clusters of patients on the SF-36 subscales. The SF-36 measures a number of factors; thus, it may not reflect large enough differences in patient distress to differ between subgroups of fibromyalgia [46] or RA patients. Furthermore, the lack of difference in quality of life, and similarly in disability, between the clusters may be due to the cross-sectional nature of the current study. It may be that time has a strong influence on these two factors, and a longitudinal study is needed to capture this effect. Mehta et al. [16] conducted a longitudinal study examining the effect of dispositional traits such as AS and EA on long-term disability among individuals with chronic pain. The study found that individuals with high levels of these dispositional variables had significantly higher levels of long-term disability compared to those with lower levels of dispositional affect [15].

A second study clustered 774 patients with fibromyalgia, some of whom were experiencing chronic pain and a comorbid rheumatic disorder [42]. Cluster analysis was used to group patients based on personality traits (neuroticism, extraversion, agreeableness, openness to experience, and conscientiousness). This study divided patients into two clusters. The first cluster was characterized by maladaptiveness, whereby patients in this cluster were described as being more likely to experience affective distress and to manage social conflicts poorly. These patients scored significantly higher on neuroticism and lower on extraversion, openness to experience, agreeableness, and conscientiousness in comparison to the second cluster [42]. Multivariate analyses comparing the two clusters found that the first cluster, characterized by maladaptiveness, had significantly higher scores for depression, anxiety, and each pain catastrophizing subscale. These significant differences between clusters in depression, anxiety, and the pain catastrophizing rumination subscale were also present at six-month follow-up [42]. Our results are generally in line with the findings of Torres et al. (2013), as our study also yielded two patient groups in which the cluster that endorsed higher levels of dispositional affect also exhibited increased distress, pain catastrophizing, and fear of pain. Specifically, Cluster 1 of our sample, like the maladaptive cluster of Torres et al. (2013), reported significantly higher scores for depression, anxiety, and all pain catastrophizing subscales, suggesting lower mood and the use of ruminative styles that have been associated with magnifying the threat of pain and feeling helpless [12], in both our sample of RA patients and the fibromyalgia sample.

Our study contributes to the growing interest among researchers in investigating dispositional affect and trait-like features simultaneously, presenting clusters of personality factors rather than considering variables in isolation from one another. Our results provide an understanding of how mood and cognitions associated with pain (pain catastrophizing and fear of pain) may be affected by a number of dispositional variables in patients with RA.
Subgroups of patients with RA characterized by dispositional affect had not previously been studied, although specific personality factors have been associated with psychopathology and difficulties coping in other patient samples [12, 47, 48]. While treatment plans are individualized, intervention studies have found that patients with RA experiencing increased distress benefit from psychological interventions [49, 50]. Providing access to these interventions could allow for targeted approaches to manage poor mood and the problematic coping strategies that may be used by patients reporting high scores on the identified dispositional variables. Furthermore, interventions could be developed and targeted to address distinct clusters of patients with RA and within other chronic illnesses. The development of screening tools has been one approach suggested to initiate the assessment and subsequent treatment of psychological comorbidity in patients with RA [12, 50]. Activity pacing is another pain management strategy that may be applied to RA patients who demonstrate specific patterns of dispositional affect. Pacing has been recommended for patients with chronic pain who tend to display obsessive personality traits including psychological inflexibility, fear of relaxation, perfectionism, and experiential avoidance [18]. However, in a small sample of overactive chronic pain patients, applying pacing strategies and enacting behaviour change was difficult when only education about pacing was provided [18].

Specific limitations should be considered when interpreting the findings from this study. First, there are inherent limitations of a cross-sectional design, which prevents causal relationships from being determined. Second, the personality factors considered were based on a number of different outcome measures rather than one specific personality measure such as the NEO Five-Factor Inventory, and thus did not encompass all relevant variables that have previously been studied and linked to mood in chronic pain. Nonetheless, the dispositional affect measures administered allowed for the analysis of a potentially challenging combination of variables. Further, an important limitation to consider when interpreting our findings is the lack of an objective measure of inflammation and thus the inability to adjust for inflammation differences between patients within the analyses. Additionally, though the chronicity of pain was controlled for in the MANCOVA, the study demonstrated a significant difference between the two clusters in chronicity of pain; hence, the groups may have differed from each other not only on the dispositional factors but also on this demographic factor. Finally, sample selection bias cannot be ruled out, as our sample was recruited from a single-site tertiary RA clinic, which may compromise the generalizability of our findings.

## 5. Conclusions

In conclusion, the present study identified subgroups of patients with RA based on a number of dispositional variables. The cluster characterized by significantly greater reports of dispositional affect was comprised of RA patients who experienced significantly more depression, anxiety, and stress symptoms, in addition to heightened pain anxiety/fear of pain and pain catastrophizing.
Ensuring that patients have access to qualified providers of appropriate multimodal treatment may be beneficial for patients with RA experiencing specific difficulties associated with their pain or adjustment, including distress, pain catastrophizing, and fear of pain. Clinicians should consider that patients with specific dispositional affect may benefit from referrals for additional social support and programs that target the range of factors included in our study, beginning when they are diagnosed with RA, to promote positive adjustment. Future research replicating our findings within RA patients and other samples of chronic pain patients should be carried out so that management programs can be developed to address specific needs of patients, such as improving mood and decreasing ruminative styles such as pain catastrophizing and fear of pain.

---
*Source: 1024985-2016-03-03.xml*
# Simultaneous Association of Variations in the Origin and Diameter of the Left Vertebral Artery in a Patient with a C1 Lateral Mass Tumor

**Authors:** Seyed Reza Mousavi; Majid Reza Farrokhi; Shayan Yousufzai; Maryam Naseh; Fatemeh Karimi
**Journal:** Case Reports in Surgery (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1025019

---

## Abstract

The anomalous origin of a hypoplastic Left Vertebral Artery (LVA) from the aortic arch is a rare anatomic variant. This study discusses the case of a patient with a C1 lateral mass tumor that surrounded a dominant Right Vertebral Artery (RVA) according to preoperative computed tomography angiography, with a hypoplastic LVA originating from the aortic arch. Surgery was performed, and the patient recovered uneventfully. To date, no study has reported the simultaneous association of these two variations (origin and diameter) in the LVA. A deep understanding of abnormalities in the diameter and origin of the LVA is essential for neurosurgeons as well as thoracic and vascular surgeons when conducting surgical procedures.

---

## Body

## 1. Introduction

The branches of the vertebral artery can vary in origin, diameter, and course [1]. The Left Vertebral Artery (LVA) origin represents one substantial variation that surgeons must be aware of [2]. The prevalence of anomalous vertebral artery origins is higher in the LVA (6%) than in the Right Vertebral Artery (RVA) (3.8%) [1, 3]. The LVA can also show variations along its length, including hypoplasia and termination at the Posterior Inferior Cerebellar Artery (PICA) rather than at the basilar artery [2]. Vertebral Artery Hypoplasia (VAH), another common vertebral artery variation, has been considered an anomaly in which the vertebral artery diameter is less than 2 mm [4]. Recent studies have suggested that patients with VAH can show symptoms of vertebrobasilar system insufficiency, especially when they have vascular risk factors or when a dominant vertebral artery fails to supply the posterior circulation [5, 6].

The embryological complexity and extensive anatomy of the vertebrobasilar system are responsible for the development of uncommon variations. Congenital vascular defects can strongly affect the outcomes of angiographic and surgical interventions [7]. To date, no studies have reported the presence of a hypoplastic LVA originating from the aortic arch. This case report presents a rare case of the simultaneous association of variations in both the origin and diameter of the LVA in a patient with a C1 lateral mass tumor.

## 2. Case Presentation

A 33-year-old woman complained of intolerable neck pain commencing one month earlier. On physical examination, the patient showed local tenderness in the upper cervical region without any neurological deficits. Through Magnetic Resonance Imaging (MRI), a lateral mass was found on the right side of her first cervical vertebra. Computed tomography angiography (CTA) was performed to evaluate the cerebral vessels and revealed variations in the origin and diameter of the LVA [8]. The LVA originated from the aortic arch, between the Left Common Carotid Artery (LCCA) and the Left Subclavian Artery (LSCA) (Figure 1). In addition to the abnormal origin, a hypoplastic LVA was observed (Figure 2). The dominant vertebral artery was the RVA. The RVA diameter was 4.1 mm, and the LVA diameter was 1.5 mm; the RVA was thus significantly larger than the LVA. Additionally, the LVA was shorter than the RVA (16.8 vs. 18.4 cm).
The LVA passed through the transverse foramen of the sixth cervical vertebra and formed the basilar artery by joining the RVA. Other LSCA and RSCA branches were normal, and no specific variations were noted. No evidence of dissection or aneurysmal dilation was detected. In addition, the carotid artery bifurcations showed a standard configuration with no filling defects, plaques, or narrowings.

Figure 1: Three-dimensional reconstructed computed tomography angiography shows the carotid, subclavian, and vertebral arteries. A hypoplastic left vertebral artery originates from the aortic arch and is smaller than the right vertebral artery along its entire course. All mentioned vascular structures show normal caliber and course, smooth intima, and no narrowing or obliteration. AA = aortic arch; BCT = brachiocephalic trunk; LCCA = left common carotid artery; LSCA = left subclavian artery; RCCA = right common carotid artery; RSCA = right subclavian artery; LVA = left vertebral artery; RVA = right vertebral artery.

Figure 2: Computed tomographic angiography demonstrates the anterior-posterior (a), posterior-anterior (b), and lateral (c) views. AA = aortic arch; BCT = brachiocephalic trunk; LCCA = left common carotid artery; LSCA = left subclavian artery; RCCA = right common carotid artery; RSCA = right subclavian artery; LVA = left vertebral artery; RVA = right vertebral artery.

## 3. Discussion

The present study discusses the unprecedented simultaneous occurrence of both hypoplasia and an anomalous origin of the LVA in a patient with chronic neck pain due to a C1 lateral mass tumor. The RVA was dominant and was surrounded by the tumor at the craniovertebral junction.

Vertebral arteries are among the major arteries in the cervical area, typically arising from the first part of the subclavian artery on both sides [1]. Previous studies showed that the LVA was the most common site for variations in the origin of vertebral arteries [9]. The LVA can originate from atypical sites such as the aortic arch, common carotid artery, and internal or external carotid arteries. Furthermore, the LVA can have dual origins from the aortic arch and the subclavian artery [1, 10]. An origin from the aortic arch is a common variation, with a prevalence rate of 2.4-6.9%; in most such variants, the LVA origin is situated between the LCCA and the LSCA [11–13].

The variable origin of the LVA carries remarkable importance in surgical and clinical settings. Understanding this issue is necessary for experts involved in the fields of head and neck surgery, cerebral disorders, angiography, arterial dissection, and stent placement in vertebral or carotid arteries [14]. Blumberg et al. reported an LVA originating from the aortic arch in a patient with an acute intramural hematoma [15]. In another study, conducted by Fridah et al., 84 vertebral arteries were evaluated by CTA in a Zambian population, three of which originated from the aortic arch [16]. Yamaki et al. dissected 515 vertebral arteries in Japanese adult cadavers, among which 30 LVAs were noted to originate directly from the aortic arch [17]. When the LVA branches from the aortic arch, its opening is exposed to turbulent blood flow, paving the way for iatrogenic injuries [18].

Changes in the site and pattern of branching, agenesis, perforating branches, and hypoplasia are the most reported variations of the vertebral arteries [19, 20]. The prevalence of vertebral artery hypoplasia appears to be 1.9-11.6% [21], though Ogeng et al.
monitored 346 vertebral arteries for hypoplasia in Kenya, revealing a prevalence of 28.9% [22], a figure higher than those seen in other populations. Researchers have found that vertebral artery hypoplasia is linked to an increased chance of posterior circulation ischemia. This finding was further noted in the PICA territory, where relative hypoperfusion occurs [23]. An interesting study conducted by Harati et al. demonstrated a 52% association of vertebral artery hypoplasia with VA-PICA aneurysms. These researchers also emphasized that blood pressure and blood flow were two major factors affecting vascular morphology [24].

To date, several studies have been conducted to evaluate LVA abnormalities. Nonetheless, no study has reported the simultaneous association of abnormal variants in the diameter and origin (aortic arch) of the LVA. The current report is believed to be the first to indicate the simultaneous presence of these two variants in the LVA, which was hypoplastic (with a diameter of 1.5 mm) and originated from the aortic arch.

### 3.1. Embryological Development

In order to appreciate variations in the vertebral arteries, one must inspect the embryological development and branching patterns of the aortic arch. During embryologic development, the intersegmental arteries branching from the dorsal aorta are responsible for supplying the somites and their derivatives [25]. As the human cervical region develops during the embryonic period, longitudinal anastomosis between the C1 and C7 intersegmental arteries results in the formation of the vertebral arteries. Both the right and left vertebral arteries are derived from the distal portion of the dorsal C7 intersegmental artery [20]. Furthermore, most of the primary connections of the intersegmental arteries with the dorsal aorta disappear; hence, the remaining primary vessels can develop into anatomical variations of the vertebral arteries [26]. In some cases, the anastomosis of the C6 and C7 intersegmental arteries remains incomplete on the left side. Hence, C6 remains free, which causes the LVA to originate from the aortic arch between the LCCA and the LSCA [27]. Patil et al. stated that enhanced embryonic tissue absorption from the LSCA between the vertebral artery origin and the aortic arch could explain this phenomenon [4]. Nonetheless, further research should be undertaken to determine why some arteries persist and others disappear.

The studies conducted on the embryologic reasons for hypoplasia in the vertebral arteries have underlined four carotid-vertebrobasilar anastomosis types in the early embryonic period, namely, the Proatlantal Intersegmental Artery (PIA), hypoglossal artery, otic artery, and trigeminal artery. It is noteworthy that most of these anastomoses vanish within one week as the vertebrobasilar arterial system develops. If the vertebral arteries do not develop and fail to join the basilar artery, the PIA anastomosis continues to persist, and such permanent anastomosis of the PIA is among the important causes of vertebral artery hypoplasia and agenesis. Some studies have demonstrated the association of PIA anastomosis with posterior circulation infarction, transient ischemic attacks, and vertebrobasilar insufficiency [19]. However, no study has reported the simultaneous association of two variations (origin and diameter) in the LVA. Therefore, further research is strongly recommended in this area.

## 4. Conclusion

It is substantially rare to observe the left vertebral artery being hypoplastic and simultaneously originating from the aortic arch. Overall, a deep understanding of developmental anomalies in the diameter and origin of the vertebral arteries is essential for neurosurgeons as well as thoracic and vascular surgeons. Our findings can also guide endovascular interventions.

---
*Source: 1025019-2022-04-28.xml*
# Incidence of Central Venous Catheter-Related Bloodstream Infections: Evaluation of Bundle Prevention in Two Intensive Care Units in Central Brazil

**Authors:** Thais Yoshida; Ana Elisa Bauer de Camargo Silva; Luciana Leite Pineli Simões; Rafael Alves Guimarães
**Journal:** The Scientific World Journal (2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1025032

---

## Abstract

Background. Central venous catheter-associated bloodstream infections (CVC-BSIs) have been associated with increased length of hospital stay, mortality, and healthcare costs, especially in intensive care units (ICUs). The aim of this study was to evaluate the incidence density of CVC-BSIs before and after implementation of the bundle in a hospital of infectious and dermatological diseases in Central Brazil. Methods. A retrospective cohort study was conducted in two ICUs (adult and pediatric) between 2012 and 2015. Two periods were compared to assess the effect of the intervention on the incidence density of CVC-BSIs: the preintervention and postintervention periods, corresponding to the stages before and after implementation of the bundle, respectively. Results. No significant reduction was observed in the incidence density of CVC-BSIs in the adult ICU (incidence rate ratio [IRR]: 0.754; 95.0% CI: 0.349 to 1.621; p-value = 0.469), despite the high bundle application rate in the postintervention period. Similarly, no significant reduction in incidence density was verified in the pediatric ICU after implementation of the bundle (IRR: 1.148; 95.0% CI: 0.314 to 4.193; p-value = 0.834). Conclusion. No significant reduction in the incidence density of CVC-BSIs was observed after bundle implementation in either ICU, suggesting the need to review the care process, as well as continuing education for staff on compliance with and correct application of the bundle. Further studies are needed to evaluate the effect of the bundle on the reduction of the incidence density of CVC-BSIs in Brazil.

---

## Body

## 1. Introduction

Healthcare-associated infections (HAIs) are a serious public health problem and represent significant adverse events in hospitalized patients, especially in intensive care units (ICUs) [1–3]. Central venous catheter-associated bloodstream infections (CVC-BSIs) are among the most serious HAIs and have been associated with increased length of hospital stay, mortality, and healthcare costs [4].

An estimated 80,000 cases of CVC-BSIs occur each year in ICUs in the United States of America (USA) [5]. In addition, in 2013, the National Healthcare Safety Network (NHSN) estimated an incidence of 1.2 BSIs/1,000 CVC-days in medical ICUs in the USA [3]. In European countries, the incidence/1,000 CVC-days ranges from 1.2 in France to 4.2 in England [2]. In developing countries, especially those in Latin America, the dimension of CVC-BSIs is little known. However, studies conducted in Brazil indicate that the incidence density of CVC-BSIs in ICU patients has decreased over the years [6]. In 2014, an incidence of 5.1 CVC-BSIs/1,000 CVC-days was recorded in adult ICUs in Brazil, a rate lower than that in 2011 (5.9 CVC-BSIs/1,000 CVC-days) [6]. In pediatric ICUs, the incidence density of CVC-BSIs/1,000 CVC-days decreased from 7.3 in 2011 to 5.8 in 2014 [6].
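For readers less familiar with these rate conventions, the sketch below shows the underlying arithmetic: an incidence density expressed per 1,000 CVC-days, and an incidence rate ratio (IRR) with a Wald-type 95% confidence interval on the log scale, of the kind reported in the abstract. The counts used here are invented for illustration only.

```python
# Illustrative arithmetic only: incidence density per 1,000 CVC-days and
# an incidence rate ratio (IRR) with a Wald-type 95% CI on the log scale.
import math

def incidence_density(cases, cvc_days):
    """Infections per 1,000 central-venous-catheter days."""
    return 1000.0 * cases / cvc_days

def irr_with_ci(cases_post, days_post, cases_pre, days_pre, z=1.96):
    """Post- vs. preintervention rate ratio with approximate 95% CI."""
    irr = (cases_post / days_post) / (cases_pre / days_pre)
    se_log = math.sqrt(1.0 / cases_post + 1.0 / cases_pre)
    return irr, irr * math.exp(-z * se_log), irr * math.exp(z * se_log)

# Made-up example: 8 infections over 1,900 CVC-days before the bundle,
# 6 infections over 1,950 CVC-days afterwards.
print(round(incidence_density(8, 1900), 2))  # pre-period rate, ~4.21
print(tuple(round(v, 3) for v in irr_with_ci(6, 1950, 8, 1900)))
```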
The application of preventive measures in an integrated, structured, and systematized manner has shown positive results in reducing these infections and helps to increase patient safety in healthcare services [4, 7, 8]. In this context, prevention bundles are defined as sets of evidence-based preventive practices that must be performed collectively. The use of these measures allows the evaluation of programs of care and handling of the CVC, in order to identify potential failures and/or successes that affect the final results. It also enables the calculation of indicators that reflect care practice, called process indicators. Care processes evaluated through the use of bundles are essential to improving quality and safety in patient care [9, 10]. Several studies have shown a decrease in the incidence of CVC-BSIs after bundle implementation [10–15]. A meta-analysis that examined the impact of bundles showed a significant reduction in the median incidence of CVC-BSIs after application of these strategies (6.4/1,000 CVC-days versus 2.5/1,000 CVC-days; p-value < 0.001) [10]. The impact of bundles in reducing the incidence of CVC-BSIs depends on multidisciplinary teamwork, effective communication, setting easily measurable daily care goals, continuous professional training, and auditing of processes [8]. Thus, despite the positive results in decreasing CVC-BSIs reported after bundle implementation in several investigations, some studies have shown no reduction in CVC-BSI rates in places such as the USA, Taiwan, Spain, and Brazil, even after systematic application of these strategies [10]. In Brazil, few studies have investigated the effect of bundles in reducing CVC-BSIs in the ICU, and most of these were conducted in the most developed region of the country (Southeast) [13, 14]. Furthermore, studies in pediatric ICUs are scarce in Brazil. Thus, this research aimed to evaluate the incidence density of CVC-BSIs before and after implementation of the bundle in a hospital of infectious and dermatological diseases in Central Brazil. ## 2. Materials and Methods This is a retrospective cohort study that examined the incidence density of CVC-BSIs before and after implementation of a prevention bundle. The research was conducted in the adult and pediatric ICUs of a hospital of infectious and dermatological diseases in Central Brazil, from January 2012 to December 2015. The hospital provides elective and emergency care of medium and high complexity exclusively to patients of the Unified Health System (Sistema Único de Saúde in Portuguese) in Brazil. This institution has 130 beds distributed across five sectors, two of them in intensive care. The adult ICU has nine beds, four of them for individual isolation of patients, while the pediatric ICU has four beds, two intended for isolation of patients in special care. Overall, both have a 100% occupancy rate in all periods. The service profile in both ICUs is for patients with infectious diseases, including Acquired Immune Deficiency Syndrome (AIDS), tuberculosis, meningitis, and dengue, among others. Patients are mostly immunosuppressed and use antimicrobials for community-acquired, opportunistic, or healthcare-associated infections (HAIs). The data of this research were obtained by searching the electronic files of the Hospital Infection Control Service of the institution, the sector responsible for monitoring CVC-BSIs in the ICUs.
Information was extracted on the prevention bundle (components of the package and the total number of application-days), number of patient-days, number of CVC-days, number of episodes of HAIs, number of cases of CVC-BSIs, number of deaths from CVC-BSIs, and characteristics of patients with CVC-BSIs (age, sex, length of stay, duration of CVC use, diagnosis, and isolated microorganisms). The study included all cases of CVC-BSIs diagnosed in the adult and pediatric ICUs during the analysis period. The case definition was based on criteria established by the National Health Surveillance Agency of Brazil, which in turn are based on the NHSN [16]. The CVC-BSIs were defined based on laboratory criteria, that is, diagnosed using blood cultures. Thus, a CVC-BSI was considered present if one of three criteria was met: (i) Criterion 1: patient with one or more positive blood cultures, collected preferentially from peripheral blood, with the pathogen not related to infection elsewhere; (ii) Criterion 2: at least one of the following signs or symptoms: fever (> 38°C), tremor, oliguria (urinary volume < 20 ml per hour), and hypotension (systolic pressure ≤ 90 mmHg), these symptoms being unrelated to infection elsewhere, together with two or more blood cultures (from different punctures) positive for common skin contaminants (e.g., diphtheroids, Bacillus spp., Propionibacterium spp., coagulase-negative staphylococci, and micrococci); or (iii) Criterion 3: for children > 28 days and < 1 year—at least one of the following signs and symptoms: fever (> 38°C), hypothermia (< 36°C), bradycardia or tachycardia (not related to infection elsewhere), and two or more blood cultures (from different punctures with a maximum interval of 48 hours) positive for common skin contaminants (e.g., diphtheroids, Bacillus spp., Propionibacterium spp., coagulase-negative staphylococci, and micrococci) [16]. The prevention bundle for CVC-BSIs was systematically implemented in the institution from September 2014 (pediatric ICU) to November 2014 (adult ICU). It consisted of actions to be performed in all patients using a CVC, defined from the recommendations of the Institute for Healthcare Improvement (IHI) [17]. This corresponds to an audit tool for the CVC use process consisting of four check items in the form of checklists, which are actions to be performed daily in all patients using a CVC. The bundle includes the following elements. (i) Care in catheter insertion: aseptic technique for catheter insertion (maximum barrier precautions), hand hygiene with 2% chlorhexidine scrub, patient skin antisepsis with 2% chlorhexidine scrub followed by 0.5% alcoholic chlorhexidine, and documentation of catheter insertion in the medical record with a justification statement. (ii) Care in the administration of drugs: disinfection of hubs and connections with 70% alcohol before administering medications, and an exclusive route for the infusion of blood derivatives or parenteral nutrition. (iii) Care in maintaining the catheter: daily medical-record assessment of the insertion site; a clean, dry, adherent, and dated dressing; exchange, within a maximum of 48 hours, of catheters inserted in emergency situations or at other institutions; exchange of the infusion system every 96 hours and/or on suspicion of pyrogenic shock or of visible blood retained inside the system; and a record (date and signature) of the installation of infusion sets.
(iv) Daily assessment for early catheter removal: removal of the catheter as soon as there is no longer an indication for its use, or in the presence of signs and symptoms of catheter-related infection, with documentation of catheter removal in the medical record together with a justification statement. In this study, two periods were compared to assess the effect of the bundle: before intervention (reference period) and after intervention [10]. The preintervention phase encompassed the period from January 2012 to August 2014 in the pediatric ICU and from January 2012 to October 2014 in the adult ICU, and represented the period before the application of the bundle. The postintervention phase encompassed November 2014 to December 2015 in the adult ICU and September 2014 to December 2015 in the pediatric ICU, and reflected the period after implementation of the bundle. The primary outcome was the incidence density of CVC-BSIs in the preintervention phase compared with the postintervention phase. Data analysis for the adult and pediatric ICUs was performed separately. Initially, the process indicators and their respective 95% confidence intervals (95.0% CI) were calculated for each study period. The bundle application rate was calculated with the following formula: (i) total number of bundle application-days in the period divided by the total number of patient CVC-days. For the analysis of outcome indicators, the following formulas were used: (i) Incidence density of HAIs: number of episodes of HAIs in the period divided by the number of patient-days × 1,000. (ii) CVC utilization (%): number of CVC-days divided by the number of patient-days × 100. (iii) Incidence density of CVC-BSIs: number of new cases of CVC-BSIs in the period divided by the number of CVC-days × 1,000. (iv) Lethality of CVC-BSIs: number of deaths from CVC-BSIs in the period divided by the number of patients who developed CVC-BSIs. Analyses were performed using the Stata software, version 14.0 [18]. The indicators found before and after the intervention were compared using the Wald statistic. For the primary outcome (incidence density of CVC-BSIs), the bundle effect was analysed using Poisson regression models with robust variance [19]. The models were adjusted for baseline severity. In addition, we included a dummy variable in the model representing the intervention: “0” in the preintervention period and “1” in the postintervention period [20].
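To make the indicator formulas and the IRR estimation above concrete, the following minimal Python sketch illustrates both steps. It is an assumed reconstruction for illustration only: the authors used Stata, the counts and the monthly series below are hypothetical, and the published model additionally adjusted for baseline severity, which is omitted here for brevity.

```python
# Hypothetical sketch of the indicator formulas and the Poisson/IRR
# analysis described in the Methods; illustrative numbers, not study data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# --- Outcome indicators from aggregate counts ---
hai_episodes, patient_days = 270, 7995
bsi_cases, bsi_deaths, cvc_days = 25, 7, 6847

hai_density = hai_episodes / patient_days * 1000   # HAIs per 1,000 patient-days
cvc_utilization = cvc_days / patient_days * 100    # CVC utilization (%)
bsi_density = bsi_cases / cvc_days * 1000          # CVC-BSIs per 1,000 CVC-days
lethality = bsi_deaths / bsi_cases * 100           # deaths per CVC-BSI case (%)
print(f"HAI density {hai_density:.1f}; CVC use {cvc_utilization:.1f}%; "
      f"BSI density {bsi_density:.2f}; lethality {lethality:.1f}%")

# --- IRR from a Poisson model with robust variance on monthly counts ---
# Hypothetical monthly series: 34 pre- and 14 postintervention months.
rng = np.random.default_rng(42)
months = pd.DataFrame({
    "intervention": [0] * 34 + [1] * 14,           # dummy: 0 = pre, 1 = post
    "cvc_days": rng.integers(150, 260, size=48),
})
months["bsi_cases"] = rng.poisson(0.0035 * months["cvc_days"])

fit = smf.glm(
    "bsi_cases ~ intervention", data=months,
    family=sm.families.Poisson(),
    offset=np.log(months["cvc_days"]),             # exposure: CVC-days
).fit(cov_type="HC0")                              # robust (sandwich) variance

irr = np.exp(fit.params["intervention"])           # incidence rate ratio
lo, hi = np.exp(fit.conf_int().loc["intervention"])
print(f"IRR = {irr:.3f} (95% CI {lo:.3f}-{hi:.3f}); "
      f"p = {fit.pvalues['intervention']:.3f}")

# Dispersion and goodness-of-fit checks analogous to those in the Methods:
# Pearson chi-square / df near 1 suggests equidispersion; a non-significant
# deviance suggests the data fit the Poisson model adequately.
print(f"dispersion = {fit.pearson_chi2 / fit.df_resid:.3f}; "
      f"deviance = {fit.deviance:.2f} (df = {fit.df_resid})")
```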
The following assumptions of the Poisson regression were checked for model validation. (i) Independence of observations: verified by comparing standard-model errors with robust errors to detect large differences—for the adult ICU, the difference in errors between the standard and robust models was 4.5%, and for the pediatric ICU it was 3.4%, suggesting independence. (ii) Distribution following a classical Poisson distribution, verified by checking that observed and expected data were similar—this assumption was tested by predicting the mean values of the dependent variable and comparing them with the observed values by t test [21]; for the adult ICU, the observed and expected values were similar (t = 0.000; df: 47; p value = 1.000), as also observed in the pediatric ICU model (t = 0.000; df: 47; p value = 1.000). (iii) The mean and variance of the model are the same or similar, as assessed by the Pearson chi-square dispersion statistic: 1.067 for the adult ICU model and 1.078 for the pediatric ICU model, indicating slight overdispersion of the data (values less than 1 indicate underdispersion, equal to 1 equidispersion, and greater than 1 overdispersion) and therefore not causing serious problems in the models. In addition, to reinforce the suitability of the data to Poisson models, the deviance goodness-of-fit test was performed: the result for the adult ICU was a chi-square of 53.70 (df: 45; p value = 0.175) and for the pediatric ICU a chi-square of 28.621 (df: 45; p value = 0.973), indicating that both datasets fit the Poisson model well. Thus, the adjusted incidence rate ratio (IRR) and its 95.0% CI were calculated for the difference in incidence density of CVC-BSIs between the investigated periods. P values < 0.05 were considered statistically significant. In addition, a descriptive analysis of the variables related to the patients with CVC-BSIs was carried out (total and by ICU). The study was approved by the Research Ethics Committee of the Dr. Anuar Auad Tropical Diseases Hospital, protocol n. 011/2012, and all ethical and legal principles were considered under Resolution n. 466/2012 of the National Health Council [22]. ## 3. Results The results of this study are presented by ICU. Table 1 shows the variables and indicators related to the bundle in the adult and pediatric ICUs. In the adult ICU, a total of 2,282 application-days of the bundle was observed, resulting in an application rate of 89.8% in the postintervention period. Furthermore, there was an overall compliance of 85.6%. The item with the lowest application in the adult ICU was Item 1. Table 1 Variables and bundle indicators in the adult and pediatric ICUs. Central Brazil. 2014-2015. Variables and bundle indicators Adult ICU Pediatric ICU Application-days 2,282 438 Bundle application rate (%) 89.8 54.1 Item 1 60.2 48.6 Item 2 99.8 99.8 Item 3 99.8 95.9 Item 4 88.1 5.7 Total compliance 85.6 4.3 ICU: Intensive Care Unit. In the pediatric ICU, there was a total of 438 application-days, resulting in a bundle application rate of 54.1% and total compliance of 4.3%. Items 1 and 4 presented a low application rate in this ICU (Table 1). In the adult ICU during the study period, we observed a total of 11,446 patient-days, 9,387 CVC-days, and an overall CVC utilization rate of 82.0%. The CVC utilization rate was higher in the preintervention period than in the postintervention period (85.6% vs. 73.6%; p value < 0.001).
We believe that the change in the patient profile in the adult ICU (with decreased severity in the postintervention period) is responsible for the decrease in the CVC utilization rate. Still, there was a decrease in the number of cases of HAIs, from 270 in the preintervention phase to 77 in the postintervention period. The overall incidence density of HAIs was 30.3 per 1,000 patient-days (95.0% CI: 27.3 to 33.6). A reduction in HAI density per 1,000 patient-days between the periods was found (p value < 0.001) (Table 2). Table 2 Evaluated variables and indicators in the adult ICU. Central Brazil. 2012-2015. Variables and indicators All Preintervention(a) Postintervention(b) p value Number of patient-days 11,446 7,995 3,451 - Number of episodes of HAIs 347 270 77 - Incidence density of HAIs (95.0% CI) 30.3 (27.3-33.6) 33.8 (30.0-38.0) 22.3 (17.9-27.8) < 0.001(c) CL utilization (%) (95.0% CI) 82.0 (81.3-82.7) 85.6 (84.9-86.4) 73.6 (72.1-75.1) < 0.001(c) Number of new cases of CVC-BSIs 32 25 7 - Number of deaths from CVC-BSIs 10 7 3 - Lethality of CVC-BSIs (%) (95.0% CI) 31.3 (18.0-49.6) 28.0 (14.3-47.6) 42.9 (15.8-75.0) 0.459(c) CVC-days 9,387 6,847 2,540 - Incidence density of CVC-BSIs (95.0% CI) 3.40 (2.41-4.80) 3.65 (2.47-5.38) 2.75 (1.33-5.47) 0.469(d) 95.0% CI: 95.0% Confidence Interval; a. Preintervention period: January 2012 to October 2014; b. Postintervention period: November 2014 to December 2015; c. Wald statistic; d. Wald statistic obtained in the fitted Poisson model. CL: Central Line; CVC: Central venous catheter; CVC-BSIs: Central venous catheter-associated bloodstream infections; HAIs: Healthcare-associated infections; ICU: Intensive Care Unit. A total of 32 cases of CVC-BSIs was observed in the adult ICU throughout the investigation period (25 in the preintervention period and 7 in the postintervention period). In the preintervention period, there was an incidence density of CVC-BSIs per 1,000 CVC-days of 3.65 (95.0% CI: 2.47 to 5.38). Despite the reduction in the absolute number of cases of CVC-BSIs after the implementation of the bundle (postintervention), there was no significant reduction in the incidence density (IRR: 0.754; 95.0% CI: 0.349 to 1.621; p value = 0.469) (Table 2). In the pediatric ICU, a total of 3,791 patient-days and 2,078 CVC-days were observed, resulting in an overall CVC utilization rate of 54.8%. The CVC utilization rate significantly increased between the analyzed periods (p < 0.001). In this unit, there were a total of 51 cases of HAIs (density of 13.5 per 1,000 patient-days; 95.0% CI: 10.3 to 17.6). The incidence density of HAIs did not differ significantly between the periods (p value = 0.783) (Table 3). Table 3 Evaluated variables and indicators in the pediatric ICU. Central Brazil. 2012-2015. Variables and indicators All Preintervention(a) Postintervention(b) p value Number of patient-days 3,791 2,575 1,216 - Number of episodes of HAIs 51 35 16 - Incidence density of HAIs (95.0% CI) 13.5 (10.2-17.6) 13.6 (9.8-18.8) 12.2 (8.1-21.3) 0.783(c) CL utilization (%) (95.0% CI) 54.8 (53.2-56.3) 49.6 (47.7-51.6) 66.5 (63.8-69.1) < 0.001(c) Number of new cases of CVC-BSIs 7 4 3 - Number of deaths from CVC-BSIs - - - - Lethality of CVC-BSIs (%) (95.0% CI) - - - - CVC-days 2,078 1,269 809 - Incidence density of CVC-BSIs (95.0% CI) 3.36 (1.63-6.93) 3.15 (1.22-8.07) 3.70 (1.26-10.84) 0.834(d) 95.0% CI: 95.0% Confidence Interval; a. Preintervention period: January 2012 to August 2014; b.
Postintervention period: September 2014 to December 2015; c. Wald statistic; d. Wald statistic obtained in the fitted Poisson model. CL: Central Line; CVC: Central venous catheter; CVC-BSIs: Central venous catheter-associated bloodstream infections; HAIs: Healthcare-associated infections; ICU: Intensive Care Unit. During the study period, there was an overall incidence density of CVC-BSIs of 3.36 per 1,000 CVC-days (95.0% CI: 1.63-6.93) in the pediatric ICU. In the preintervention period, there was an incidence density of CVC-BSIs of 3.15 per 1,000 CVC-days (95.0% CI: 1.22 to 8.07). There was no significant reduction in the incidence density of CVC-BSIs between the periods (IRR: 1.148; 95.0% CI: 0.314 to 4.193; p value = 0.834), as obtained in the fitted Poisson model (Table 3). The characterization of these patients is presented in Table 4. It is noteworthy that most patients who developed CVC-BSIs in the adult ICU were male (78.1%), while in the pediatric ICU most were female. In general, the main diagnosis in patients with CVC-BSIs was AIDS (48.7%), followed by tuberculosis (12.8%). The median length of hospital stay and of CVC use was 39.5 days and 10.5 days, respectively. Table 4 Characterization of patients with CVC-BSIs. Central Brazil. 2012-2015. Variables All (n = 39) Adult ICU (n = 32) Pediatric ICU (n = 7) Age (years) (Median; IQR) 44.5 (23.0) 46.5 (18.0) 5.0 (8.0) Length of stay (days) (Median; IQR) 39.5 (38.0) 42.0 (44.0) 33.0 (30.0) CL usage time (days) (Median; IQR) 10.5 (8.0) 10.0 (8.0) 12.0 (8.0) Sex Male 27 (69.2) 25 (78.1) 2 (28.6) Female 12 (30.8) 7 (21.9) 5 (71.4) Diagnoses AIDS 19 (48.7) 19 (59.4) - Tuberculosis 5 (12.8) 5 (15.6) - Viral hepatitis 1 (2.6) 1 (3.1) - Leishmaniasis 2 (5.1) 2 (6.2) - Dengue 1 (2.6) - 1 (14.3) Leprosy 1 (2.6) 1 (3.1) - Meningitis 2 (5.1) 1 (3.1) 1 (14.3) Other lung infections 2 (5.1) - 2 (28.6) Tetanus 1 (2.6) 1 (3.1) - Others 6 (15.4) 3 (9.4) 3 (42.9) AIDS: Acquired immunodeficiency syndrome; ICU: Intensive Care Unit. Table 5 shows the characterization of the microorganisms identified in culture for the diagnosis of CVC-BSIs. Most (61.8%) of the causative agents of CVC-BSIs in the institution were Gram-negative, with a predominance of Pseudomonas aeruginosa (28.2%). Gram-positive organisms accounted for 30.8% of the isolated microorganisms, with Staphylococcus aureus the most frequent. Fungi accounted for 10.3% of the microorganisms, with non-albicans Candida the most prevalent. Table 5 Characterization of the microorganisms identified in culture for the diagnosis of CVC-BSIs. Microorganisms All (n = 39) Adult ICU (n = 32) Pediatric ICU (n = 7) Gram-positive 12 (30.8) 10 (31.2) 2 (28.6) Staphylococcus aureus 6 (15.4) 4 (12.5) 2 (28.6) Staphylococcus epidermidis 3 (7.7) 3 (9.4) - Coagulase-negative staphylococci 1 (2.6) 1 (3.1) - Enterococcus faecalis 2 (5.1) 2 (6.2) - Streptococcus salivarius 1 (2.6) 1 (3.1) - Gram-negative 24 (61.8) 20 (62.5) 4 (57.1) Pseudomonas aeruginosa 11 (28.2) 8 (25.0) 3 (42.9) Acinetobacter spp. 5 (12.8) 5 (15.6) - Enterobacter spp. 5 (12.8) 3 (9.4) 2 (28.6) Klebsiella pneumoniae 5 (12.8) 5 (15.6) - ESBL Klebsiella 1 (2.6) 1 (3.1) - Achromobacter xylosoxidans 1 (2.6) 1 (3.1) - Fungi 4 (10.3) 3 (9.4) 1 (14.3) Candida albicans 2 (5.1) 1 (3.1) 1 (14.3) Non-albicans Candida 3 (7.7) 3 (9.4) - ICU: Intensive Care Unit. ## 4. Discussion Currently, the control of CVC-BSIs has been the subject of national and international targets [17, 23].
The reduction of these infections is feasible, since their occurrence is directly related to the adoption of safe practices and protocol compliance, including the systematic use of prevention bundles [24]. However, even for well-established practices, the challenges are great, and the quest for professional adherence to best practice is constant. There are few published studies on the evaluation of bundles in reducing CVC-BSIs in Latin America. This research adds to the literature on the effect of these strategies on CVC-BSI rates in Brazil. The results showed that, even after implementation of the prevention bundle, no significant reduction in the incidence density of CVC-BSIs occurred in either unit assessed. This investigation found an incidence density of CVC-BSIs per 1,000 CVC-days of 3.4 in the adult ICU, an index below the 90th percentile of Brazilian ICUs (11.8 CVC-BSIs per 1,000 CVC-days) [6] and higher than that of American ICUs (2.8 CVC-BSIs per 1,000 CVC-days) [3]. Similarly, in the pediatric ICU, the overall incidence density was 3.36 CVC-BSIs per 1,000 CVC-days, a rate below the 90th percentile of pediatric ICUs in Brazil in 2014 (14.2 CVC-BSIs per 1,000 CVC-days) [6] and higher than that found in pediatric ICUs in the USA (2.0 CVC-BSIs per 1,000 CVC-days) [3]. Regarding the CVC utilization rate in the adult ICU, most patients used the device for most of the length of stay, although this rate was lower in the postintervention period. In the pediatric ICU, by contrast, this rate significantly increased in the postintervention period. Regardless of these differences, there was a high utilization rate of this device during the period analyzed in both units. The CVC utilization rates in the adult and pediatric ICUs are above the 75th percentile of American hospitals evaluated by the NHSN [3], probably reflecting the greater severity of patients admitted to the institution under study relative to the hospitals that make up the NHSN system. The CVC utilization rate reveals the degree of exposure to BSIs. Mesiano and Merchan-Hamann [25] point out that maintaining vascular access for a long time and using it with greater frequency result in more infections related to that device. Poisson regression models showed no significant reduction in the incidence density of CVC-BSIs in the adult and pediatric ICUs after implementation of the bundle (postintervention) (p value > 0.05). This corroborates other studies conducted in different geographical locations that have shown no significant reduction of infections after implementation of ICU prevention packages [26–29]. In the USA, a randomized clinical trial in the ICUs of 60 hospitals also found no significant reduction after application of preventive bundles (2.42 to 2.73 CVC-BSIs per 1,000 CVC-days; p value = 0.59) [29]. In Taiwan, a study conducted in two ICUs found a similar rate of BSIs between the preintervention period and the period after systematic implementation of bundles (1.58 to 1.06 CVC-BSIs per 1,000 CVC-days; p value = 0.31) [28]. In Spain, a study conducted in a university hospital found no reduction between pre- and post-application of bundles (5.5 to 3.8 CVC-BSIs per 1,000 CVC-days; p value = 0.49) [27]. In Brazil, a study conducted by Wolf et al.
[26] in an ICU in São Paulo showed that, even after bundle implementation, there was no significant reduction in the incidence density of CVC-BSIs (20 to 11 CVC-BSIs per 1,000 CVC-days; p value = 0.07). The studies that did not identify a reduction in the incidence density of CVC-BSIs after bundle application emphasize that the use of bundles in isolation does not decrease infections; a multidisciplinary approach is required, one that considers the epidemiological profile of the institution and engages active leaders focused on continuous improvement processes [26–29]. In addition, factors such as a high rate of CVC use, low total compliance in bundle application [26], low adherence to the bundle, and lack of constant vigilance can decrease the effectiveness of intervention strategies. In fact, in this study, total compliance in the pediatric ICU, especially for Item 4, “assessment for early catheter removal,” was very low, which likely contributed to the absence of a significant reduction. In the present study, there was a greater proportion of Gram-negative than Gram-positive microorganisms, unlike most studies conducted in North America, which show a higher frequency of Gram-positive organisms in CVC-BSIs [30–32]. However, this corroborates other studies previously published in several countries and regions [32–35]. In fact, studies in Latin America, including Brazil, have shown a higher prevalence of Gram-negative organisms in CLABSIs compared to American studies. Investigations such as SCOPE (Surveillance and Control of Pathogens of Epidemiological Importance) [36] and EPIC II (Extended Prevalence of Infection in Intensive Care) [32] show this difference. These studies discuss the possibility of a climate influence [32, 36]. As Brazil is a tropical country, it has a warmer climate than the USA, and some studies show a higher prevalence of Gram-negative infections in summer/spring than in autumn/winter, when there would be more Gram-positive infections [37]. Another possibility would be a higher proportion of infections secondary to lung and urinary tract infections than in American studies [36]. This study has some limitations. First, retrospective analyses carry the possibility of reporting bias and the inability to control for confounding variables (lack of information). Second, data such as catheter insertion site and other risk factors of patients with CVC-BSIs could not be collected owing to the lack of information in the source data; such data could help explain the lack of reduction in the incidence density of CVC-BSIs. Third, the number of new cases of CVC-BSIs in the postintervention period was very small in both ICUs (adult and pediatric), which may have diminished the power of the study to detect statistical differences. Other studies, with larger samples and in several hospitals, are needed. Fourth, the postintervention analysis period was relatively short for evaluating the long-term effect of the bundle. Finally, the results cannot be generalized to all ICUs because only the units of a single institution were considered. ## 5. Conclusion In conclusion, there was no significant reduction in the incidence density of CVC-BSIs in the adult ICU (p value = 0.469) or the pediatric ICU (p = 0.834) after implementation of the prevention bundle.
There was a high CVC utilization rate in both ICUs and low total bundle compliance in the pediatric ICU in the postintervention period, which indicates flaws in the application of care for CVC-BSI prevention. The results of this study show a need to reassess the strategy, as well as continuous training for bundle application and measurement of compliance, with discussion of process indicators with the care team. It is the multidisciplinary team treating the patient that takes responsibility in this chain of transmission by adhering to the prevention protocols. Managers retain the responsibility to manage the processes, train professionals, and provide favorable conditions for the implementation of preventive measures in healthcare practice. The implications for management deserve attention, since adherence to the bundle in practice is based on actions that require no additional costs but rather the adoption of preventive measures by professionals, given that health institutions are already well structured with respect to human and material resources. The findings of this study suggest that managers should periodically investigate the indicators of the CVC care process (bundles) and the occurrence of CVC-BSIs to identify root causes, implement new preventive measures, and evaluate the prevention bundles. Further studies are needed to evaluate the long-term effect of prevention bundles on CVC-BSIs in Brazil. --- *Source: 1025032-2019-10-07.xml*
# Exploring the Potential Mechanism of Tang-Shen-Ning Decoction against Diabetic Nephropathy Based on the Combination of Network Pharmacology and Experimental Validation **Authors:** Jiajun Liang; Jiaxin He; Yanbin Gao; Zhiyao Zhu **Journal:** Evidence-Based Complementary and Alternative Medicine (2021) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2021/1025053 --- ## Abstract Background. Diabetic nephropathy (DN) has become one of the leading causes of end-stage renal disease (ESRD). Tang-Shen-Ning (TSN) decoction, an effective Traditional Chinese formula for DN, can improve renal function and inhibit renal fibrosis in DN. However, its potential mechanism remains unexplored. Methods. A network pharmacology approach was employed in this study, including screening for differentially expressed genes of DN (DN-DEGs), protein-protein interaction (PPI) network analysis, and GO and KEGG enrichment analysis. In addition, a rat model was established to verify the potential effect of TSN in DN. Results. Twenty-three TSN-related DN-DEG targets were identified. These genes were associated with decreased glomerular filtration rate (GFR) in DN. The enrichment analysis suggested that the inhibition of renal fibrosis and inflammation through growth factors and chemokines is the potential mechanism through which TSN improves DN. TSN reduced renal fibrosis and improved pathological damage in the kidney in vivo through the regulation of GJA1, CTGF, MMP7, and CCL5, genes associated with ECM deposition. Conclusion. This study revealed that TSN improves DN through a multicomponent, multitarget, and multipathway synergy. We provide a scientific basis for potential targets of TSN in treating DN, yet further experimental validation is needed to investigate these targets and mechanisms. --- ## Body ## 1. Introduction Diabetic nephropathy (DN), a severe microvascular complication of long-term diabetes mellitus (DM), is present in 40% of diabetic patients [1]. DN leads to chronic progressive renal damage, accompanied by increased urinary protein and decreased glomerular filtration rate (GFR). Indeed, DN has become a leading cause of progression to end-stage renal disease (ESRD) in DM patients [2, 3]. KDOQI proposed that the term “diabetic nephropathy” be replaced by “diabetic kidney disease” (DKD), which applies to kidney disease caused by diabetes, whereas DN can be diagnosed only after histopathological confirmation. The pathological progression of DN includes glomerular basement membrane thickening, mesangial matrix hyperplasia, and glomerulosclerosis [4]. DN’s pathogenesis is complex and includes alterations in renal hemodynamics, disturbances in glucolipid metabolism, the action of various cytokines, and activation of the Renin-Angiotensin-Aldosterone System (RAAS). RAAS inhibitors have some benefit in the treatment of DN. However, according to clinical studies, dual RAAS blockade is not only less effective in the long term but also exposes patients to adverse events such as acute kidney injury or hyperkalemia [5, 6]. Therefore, it is essential to identify novel therapeutic strategies for DN. In China, Traditional Chinese Medicine (TCM) has a long history of treating DM and DN and provides renal protection in DN through different herbal combinations [7].
Developed by Yanbin Gao, a TCM expert, Tang-Shen-Ning decoction (TSN) is an empirical formula to treat DN, consisting of Astragalus membranaceus (Fisch.) Bunge (Huang Qi in Chinese, HQ, 15 g), Euryale ferox Salisb. ex DC (Qian Shi in Chinese, QS, 15 g), Rosa laevigata Michx (Jin Ying Zi in Chinese, JYZ, 15 g), Rheum officinale Baill (Da Huang in Chinese, DH, 6 g), and Ligusticum chuanxiong Hort (Chuan Xiong in Chinese, CX, 12 g). According to a double-blind controlled clinical trial in DN patients, TSN treatment reduced the 24-hour urine albumin excretion rate (24 h UAER) compared to the control group. Moreover, patients in the TSN treatment group showed improvements in serum SOD, MDA, and hs-CRP compared to the control group, indicating that TSN is a safe and effective medicine for the treatment of early DN that improves the inflammatory and oxidative stress status in DN [8]. It was also shown that TSN treatment in DN mice decreased 24 h UAER, serum creatinine, and blood urea nitrogen. Furthermore, TSN activated the Wnt/β-catenin pathway, reversed the podocyte epithelial-mesenchymal transition (EMT), reduced the expression of fibroblast-specific protein 1 (FSP-1) and collagen I, and alleviated kidney damage in DN mice [9]. However, the effect of TCM is multitargeted and affects multiple pathways; therefore, the potential mechanism of TSN in DN still needs to be investigated. Currently, significant developments are taking place in the field of systems biology, in which network pharmacology is considered an important tool for drug discovery. Network pharmacology aims to describe a biological system using a network structure, switching the research paradigm to “network-targeted, multicomponent therapy.” This research model resembles TCM in the synergistic mechanism of multicomponent therapeutic approaches that affect multiple pathways; therefore, the underlying mechanisms of TCM can be elucidated through network pharmacology [10, 11]. We therefore performed a systems biology-based approach to explore the underlying mechanisms of TSN in DN (Figure 1) as guidance for further research.

Figure 1: The schematic diagram of the network pharmacological study of TSN for DN.

## 2. Materials and Methods

### 2.1. Screening of TSN Compounds with Potential Biological Activity

The TCMSP database (http://tcmspw.com/tcmsp.php) [12] was searched to obtain the candidate ingredients of the five herbs contained in TSN. Compounds with oral bioavailability (OB) ≥30% and drug-likeness (DL) ≥0.18 were then considered potential bioactive ingredients [11, 13].

### 2.2. Target Screening

The TCMSP, PubChem database (https://pubchem.ncbi.nlm.nih.gov/) [14], and STITCH platform (http://stitch.embl.de/cgi/network.pl) [15] were used to search for the targets of the potential bioactive ingredients in TSN. In the PubChem database, targets were collected from the following four sections: Chemical-Gene Co-Occurrences in Literature, Protein Bound 3D Structures, Drug-Gene Interactions, and BioAssay Results. The threshold score was set to 0.9 to filter targets predicted by STITCH.

### 2.3. Identification of Differentially Expressed Genes (DEGs)

Gene expression data from glomeruli of 9 DN patients and 13 normal human controls in the GSE30528 dataset, first contributed by Woroniecka et al. [16], were downloaded from the GEO database (https://www.ncbi.nlm.nih.gov/geo/). DEGs between glomeruli from DN patients and normal samples were obtained using the GEO2R online tool by setting |logFC| > 1.5 and p<0.05; a gene was considered downregulated when logFC < 0 and upregulated when logFC > 0.
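As a concrete illustration of the selection rule above, the following sketch filters a GEO2R-style result table with the same thresholds; the file name and the column labels are assumptions based on GEO2R's usual limma output, not files from the paper.

```python
# Hedged sketch of the DEG selection rule (|logFC| > 1.5 and p < 0.05) applied
# to a GEO2R-style export. "geo2r_GSE30528.tsv" and the columns logFC and
# P.Value are assumed, not taken from the paper.
import pandas as pd

tbl = pd.read_csv("geo2r_GSE30528.tsv", sep="\t")

degs = tbl[(tbl["logFC"].abs() > 1.5) & (tbl["P.Value"] < 0.05)].copy()
degs["regulation"] = degs["logFC"].apply(
    lambda fc: "Upregulated" if fc > 0 else "Downregulated")

print(degs["regulation"].value_counts())   # the paper reports 67 up, 207 down
```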
### 2.4. Network Construction

Networks were constructed using Cytoscape v3.8.0 [17] as follows: (1) component-target network, (2) protein-protein interaction (PPI) network for DEGs, (3) component-DEGs network, and (4) target-signaling pathway network.

### 2.5. PPI Network and Topological Analysis

TSN-related DEGs in DN (DN-DEGs) were identified as genes up- or downregulated in both TSN treatment and DN. The PPI network of TSN-related DN-DEGs was constructed using Bisogenet in Cytoscape. Three topology parameters, namely, degree, betweenness centrality, and closeness centrality, were chosen to analyze the topology of the network, reflecting the topological importance of the nodes. Nodes with the corresponding parameters greater than 2 times the median were selected to finally obtain the core targets of the PPI network [18]. “Degree” indicates how many edges are connected to a node; “betweenness centrality” is the number of times a node appears on the shortest paths between other nodes relative to the total number of such paths; “closeness centrality” measures the closeness of a node by calculating the length of the shortest paths from that node to all others [19].
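A minimal sketch of this style of topological screening follows, under stated assumptions: the paper performed the step with Bisogenet inside Cytoscape and describes two screening rounds (its Figure 6), so the exact second-round cutoff below is an assumption; networkx and a random scale-free graph stand in purely to illustrate the rule.

```python
# Hedged sketch of the topological screening in 2.5. First pass: keep nodes
# with degree above twice the median. Second pass (assumed): keep nodes whose
# betweenness and closeness centrality lie above the subnetwork median.
import networkx as nx
from statistics import median

def screen(g):
    deg = dict(g.degree())
    cut = 2 * median(deg.values())
    sub = g.subgraph([n for n, d in deg.items() if d > cut])
    btw = nx.betweenness_centrality(sub)
    cls = nx.closeness_centrality(sub)
    return [n for n in sub
            if btw[n] > median(btw.values()) and cls[n] > median(cls.values())]

g = nx.barabasi_albert_graph(883, 5)   # stand-in for the 883-node PPI network
print(len(screen(g)), "core nodes")    # the paper's core network kept 116 nodes
```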
### 2.6. Gene Ontology (GO) and KEGG Pathway Enrichment Analysis

The hub targets in the PPI network were analyzed using Metascape (http://metascape.org/gp/index.html) [20]. The enrichment results with p<0.05 were ranked, and those with p<0.01 were considered significantly enriched. Finally, the biological processes (BP), molecular functions (MF), cellular components (CC), and signaling pathways were identified.

### 2.7. Correlation of TSN-Related DN-DEGs with GFR

Nephroseq v5 (http://v5.nephroseq.org) is an online platform to mine comprehensive nephropathy gene expression datasets and identify markers of disease progression by correlating renal gene expression with known disease phenotypes [21]. The Nephroseq v5 platform was applied to analyze the correlation between TSN-related DN-DEGs and GFR. TSN-related DN-DEGs were searched, and the dataset Woroniecka Diabetes Glom, which is linked to GSE30528, was selected.

### 2.8. Experimental Validation

#### 2.8.1. Reagents

Streptozocin (STZ) was purchased from Sigma. The RANTES/CCL5 ELISA kit (CSB-E07398r) was purchased from Cusabio Biotech. Mouse monoclonal anti-CTGF antibody (Cat#: SC-365970) was purchased from Santa Cruz Biotechnology. Rabbit polyclonal anti-MMP7 (Cat#: 3801) and anti-GJA1 (Cat#: 3512) antibodies were purchased from Cell Signaling Technology. Mouse monoclonal anti-β-Tubulin antibody (Cat#: C1340) was purchased from APPLYGEN. Donkey Anti-Mouse IgG was purchased from Proteintech. Goat Anti-Rabbit IgG was purchased from LabLead.

#### 2.8.2. Animals and Models

Eighteen male Sprague Dawley rats (180–220 g) were purchased from Weitonglihua (Beijing, China). Rats were fed and given water ad libitum in an SPF environment. All procedures were approved by the Animal Experiments and Experimental Animal Welfare Committee of Capital Medical University. After one week of acclimatization, rats were randomly distributed into normal control (NC, n = 6), DN (n = 6), and TSN (n = 6) groups. The DN rat model used in both the DN and TSN groups was established as described previously [22]. Briefly, rats received intraperitoneal injections of streptozotocin (STZ, 55 mg/kg) and a high-fat diet (10% lard, 20% sucrose, 2.5% cholesterol, 0.5% sodium cholate, and 67% basic feed). Rats in the NC group were injected with the same dose of sodium citrate buffer and fed a normal diet (12% fat, 28% protein, and 60% carbohydrate). After 7 days, random blood glucose (RBG) was measured in all rats in the diabetic group, and rats with RBG levels above 16.7 mmol/L for three consecutive days were considered diabetic. Diabetic rats were randomly distributed into the DN (n = 6) and TSN (n = 6) groups. Rats in the TSN group were administered 20 g/kg TSN orally every day, while rats in the NC and DN groups were given the same volume of normal saline instead. The rats were euthanized after 12 weeks of treatment. Fresh kidneys were dissected and preserved at −80°C for further experiments.

#### 2.8.3. Histological Analysis

Formalin-fixed kidney tissues were embedded in paraffin and stained with hematoxylin-eosin (HE), Periodic Acid-Schiff (PAS), and Masson stains. The slices were analyzed and captured using a light microscope (Leica, DM4B) under ×400 magnification.

#### 2.8.4. ELISA Assay

CCL5 levels in the rat kidneys were measured using an ELISA kit (Cusabio, Wuhan, China) according to the instructions provided.

#### 2.8.5. Western Blot Analysis

Kidney tissues were lysed, and the supernatant was collected after centrifugation. After homogenization, protein samples were separated by 10% SDS-PAGE and electrotransferred to PVDF membranes. Membranes were blocked with 5% non-fat milk and then incubated overnight at 4°C with the appropriate primary antibody against GJA1 (Cell Signaling Technology, USA), CTGF (Santa Cruz Biotechnology, USA), MMP7 (Cell Signaling Technology, USA), or β-Tubulin (APPLYGEN, China). The membranes were then incubated at room temperature with the corresponding secondary antibodies, Donkey Anti-Mouse IgG (Proteintech, USA) or Goat Anti-Rabbit IgG (LabLead, China). Antigen-antibody immunoreactivity was visualized using enhanced chemiluminescence (ECL) reagents (Millipore, USA). Protein expression was normalized to the intensity of β-Tubulin and analyzed with Gelpro Analyzer software.

### 2.9. Statistical Analyses

Pearson’s correlations between GFR and TSN-related DN-DEGs were computed using Nephroseq v5. Experimental data are presented as mean ± SD and were analyzed using GraphPad Prism 9.00. One-way ANOVA was applied for comparisons among more than two groups. p<0.05 was considered statistically significant.
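As a small worked example of the analysis described in 2.9, the sketch below runs a one-way ANOVA across the three groups (n = 6 each); the values are simulated placeholders, not the study's measurements.

```python
# Hedged sketch of the group comparison in 2.9: one-way ANOVA across the
# NC, DN, and TSN groups. The values are simulated placeholders standing in
# for beta-Tubulin-normalized band intensities.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nc  = rng.normal(1.0, 0.10, 6)   # normal control
dn  = rng.normal(1.8, 0.15, 6)   # DN model
tsn = rng.normal(1.3, 0.12, 6)   # TSN-treated

f, p = stats.f_oneway(nc, dn, tsn)
print(f"F = {f:.2f}, p = {p:.4f}")   # p < 0.05 indicates a group difference
```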
## 3. Results

### 3.1. Potential Bioactive Components and Targets in TSN

After screening by OB and DL values, 52 potential bioactive compounds were collected: 20 from Huang Qi, 2 from Qian Shi, 16 from Da Huang, 7 from Jin Ying Zi, and 7 from Chuan Xiong; after removing duplicates, 47 unique bioactive compounds were obtained (Table 1). By searching these 47 components of TSN in the TCMSP, PubChem, and STITCH databases, we obtained 858 corresponding potential targets.
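The following sketch mirrors this screen-and-deduplicate step under stated assumptions: the input file tcmsp_candidates.csv and its columns (MOL_ID, OB, DL, herb) are hypothetical stand-ins for a TCMSP export, one row per compound-herb pair.

```python
# Hedged sketch of the ADME screen in Section 2.1 followed by deduplication,
# mirroring the 52 -> 47 count reported above. File and column names are
# assumptions, not the authors' actual files.
import pandas as pd

cand = pd.read_csv("tcmsp_candidates.csv")

hits = cand[(cand["OB"] >= 30.0) & (cand["DL"] >= 0.18)]
print(hits.groupby("herb")["MOL_ID"].nunique())      # e.g. HQ 20, QS 2, DH 16, ...

unique_hits = hits.drop_duplicates(subset="MOL_ID")  # shared compounds counted once
print(len(unique_hits), "unique bioactive compounds")  # 47 in the paper
```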
The Compound-Target network of TSN suggested potential synergistic effects of these herbs at shared targets (Figure 2).

Table 1: The potential bioactive compounds in TSN.

| Mol ID | Compound | OB (%) | DL | Herb |
|---|---|---|---|---|
| MOL000211 | Mairin | 55.38 | 0.78 | HQ |
| MOL000239 | Jaranol | 50.83 | 0.29 | HQ |
| MOL000296 | Hederagenin | 36.91 | 0.75 | HQ |
| MOL000033 | (3S,8S,9S,10R,13R,14S,17R)-10,13-Dimethyl-17-[(2R,5S)-5-propan-2-yloctan-2-yl]-2,3,4,7,8,9,11,12,14,15,16,17-dodecahydro-1H-cyclopenta[a]phenanthren-3-ol | 36.23 | 0.78 | HQ |
| MOL000354 | Isorhamnetin | 49.6 | 0.31 | HQ |
| MOL000371 | 3,9-Di-O-methylnissolin | 53.74 | 0.48 | HQ |
| MOL000374 | 5′-Hydroxyiso-muronulatol-2′,5′-di-O-glucoside | 41.72 | 0.69 | HQ |
| MOL000378 | 7-O-Methylisomucronulatol | 74.69 | 0.3 | HQ |
| MOL000379 | 9,10-Dimethoxypterocarpan-3-O-β-D-glucoside | 36.74 | 0.92 | HQ |
| MOL000380 | (6aR,11aR)-9,10-Dimethoxy-6a,11a-dihydro-6H-benzofurano[3,2-c]chromen-3-ol | 64.26 | 0.42 | HQ |
| MOL000387 | Bifendate | 31.1 | 0.67 | HQ |
| MOL000392 | Formononetin | 69.67 | 0.21 | HQ |
| MOL000398 | Isoflavanone | 109.99 | 0.3 | HQ |
| MOL000417 | Calycosin | 47.75 | 0.24 | HQ |
| MOL000422 | Kaempferol | 41.88 | 0.24 | HQ, JYZ |
| MOL000433 | FA | 68.96 | 0.71 | HQ, CX |
| MOL000438 | (3R)-3-(2-Hydroxy-3,4-dimethoxyphenyl)chroman-7-ol | 67.67 | 0.26 | HQ |
| MOL000439 | Isomucronulatol-7,2′-di-O-glucosiole | 49.28 | 0.62 | HQ |
| MOL000442 | 1,7-Dihydroxy-3,9-dimethoxypterocarpene | 39.05 | 0.48 | HQ |
| MOL000098 | Quercetin | 46.43 | 0.28 | HQ, JYZ |
| MOL002773 | Beta-carotene | 37.18 | 0.58 | QS |
| MOL007180 | Vitamin-E | 32.29 | 0.7 | QS |
| MOL001494 | Mandenol | 42 | 0.19 | JYZ, CX |
| MOL000358 | Beta-sitosterol | 36.91 | 0.75 | JYZ, DH |
| MOL005030 | Gondoic acid | 30.7 | 0.2 | JYZ |
| MOL008622 | Methyl trametenolate | 42.88 | 0.82 | JYZ |
| MOL008628 | 4′-Methyl-N-methylcoclaurine | 53.43 | 0.26 | JYZ |
| MOL002280 | Torachrysone-8-O-beta-D-(6′-oxayl)-glucoside | 43.02 | 0.74 | DH |
| MOL002281 | Toralactone | 46.46 | 0.24 | DH |
| MOL002288 | Emodin-1-O-beta-D-glucopyranoside | 44.81 | 0.8 | DH |
| MOL002293 | Sennoside D_qt | 61.06 | 0.61 | DH |
| MOL002297 | Daucosterol_qt | 35.89 | 0.7 | DH |
| MOL002303 | Palmidin A | 32.45 | 0.65 | DH |
| MOL000471 | Aloe-emodin | 83.38 | 0.24 | DH |
| MOL000554 | Gallic acid-3-O-(6′-O-galloyl)-glucoside | 30.25 | 0.67 | DH |
| MOL000096 | (−)-Catechin | 49.68 | 0.24 | DH |
| MOL002135 | Myricanone | 40.6 | 0.51 | CX |
| MOL002140 | Perlolyrine | 65.95 | 0.27 | CX |
| MOL002151 | Senkyunone | 47.66 | 0.24 | CX |
| MOL002157 | Wallichilide | 42.31 | 0.71 | CX |
| MOL000359 | Sitosterol | 36.91 | 0.75 | CX |
| MOL002235 | Eupatin | 50.8 | 0.41 | DH |
| MOL002251 | Mutatochrome | 48.64 | 0.61 | DH |
| MOL002259 | Physciondiglucoside | 41.65 | 0.63 | DH |
| MOL002260 | Procyanidin B-5,3′-O-gallate | 31.99 | 0.32 | DH |
| MOL002268 | Rhein | 47.07 | 0.28 | DH |
| MOL002276 | Sennoside E_qt | 50.69 | 0.61 | DH |

Figure 2: The Compound-Target network of TSN, consisting of 910 nodes and 2,912 edges. Circle and round square nodes denote compounds and targets, respectively.

### 3.2. DEGs Identified in DN

Glomerular gene expression data from GSE30528 were analyzed, resulting in 274 DEGs (Figure 3 and Supplementary Table S1), including 67 upregulated genes (red plots) and 207 downregulated genes (blue plots).

Figure 3: Volcano plot of DEGs associated with diabetic nephropathy from the GSE30528 dataset and the 23 hub proteins targeted by TSN; blue plots represent downregulated DEGs; red plots represent upregulated genes.

### 3.3. DN-DEGs Related to TSN

After mapping the DN-DEGs to the 858 potential TSN targets, 23 common genes were collected as critical effector targets of TSN in DN (Figure 4); their details are shown in Table 2. The expression levels of the common targets, as provided by the matrix file of GSE30528, are shown in Supplementary Fig. S1. A disease network including the TCM compounds was constructed based on the common targets (Figure 5), in which HQ contributed the most active components associated with DN-DEGs, with a total of 7 targets, suggesting that HQ may be the most effective herb in TSN. In addition, mairin (MOL000211) was related to the highest number of targets (7 in total), followed by quercetin (MOL000098) and hederagenin (MOL000296) (both with 5).

Figure 4: Venn diagram showing the 23 TSN-related DN-DEGs.

Table 2: The 23 TSN-related DN-DEGs.

| Gene | Description | logFC | p value | Regulation |
|---|---|---|---|---|
| LPL | Lipoprotein lipase | −3.19928 | 7.54E−07 | Downregulated |
| IGF1 | Insulin-like growth factor 1 | −2.57456 | 7.11E−05 | Downregulated |
| GPRC5A | G protein-coupled receptor class C group 5 member A | −2.27982 | 3.09E−07 | Downregulated |
| PLAT | Tissue-type plasminogen activator | −2.24543 | 0.000584 | Downregulated |
| SNCA | Synuclein alpha | −2.1809 | 1.15E−06 | Downregulated |
| F3 | Tissue factor | −2.14994 | 8.01E−06 | Downregulated |
| HPGD | 15-Hydroxyprostaglandin dehydrogenase (NAD(+)) | −2.06642 | 0.000075 | Downregulated |
| CTGF | CCN family member 2 | −1.98966 | 2.17E−05 | Downregulated |
| GJA1 | Gap junction alpha-1 protein | −1.93855 | 3.39E−08 | Downregulated |
| BMP2 | Bone morphogenetic protein 2 | −1.82795 | 4.52E−05 | Downregulated |
| ALB | Albumin | −1.78623 | 0.00896 | Downregulated |
| CLDN5 | Claudin 5 | −1.75802 | 5.85E−06 | Downregulated |
| GADD45B | Growth arrest and DNA damage inducible beta | −1.65127 | 2.25E−05 | Downregulated |
| VEGFA | Vascular endothelial growth factor A | −1.58624 | 5.92E−07 | Downregulated |
| LYZ | Lysozyme | 1.529532 | 0.00258 | Upregulated |
| IRF8 | Interferon regulatory factor 8 | 1.583247 | 0.000131 | Upregulated |
| ALOX5 | Arachidonate 5-lipoxygenase | 1.673735 | 0.00516 | Upregulated |
| LCK | Tyrosine-protein kinase Lck | 1.694745 | 2.47E−06 | Upregulated |
| AKR1B10 | Aldo-keto reductase family 1 member B10 | 1.705447 | 0.00508 | Upregulated |
| CCL5 | C-C motif chemokine ligand 5 | 1.720493 | 0.000615 | Upregulated |
| MOXD1 | Monooxygenase DBH-like 1 | 1.905562 | 1.47E−05 | Upregulated |
| MMP7 | Matrix metallopeptidase 7 | 2.08645 | 0.000339 | Upregulated |
| ADH1B | Alcohol dehydrogenase 1B | 2.183966 | 2.28E−05 | Upregulated |

Figure 5: The network of compounds and the 23 TSN-related DEGs. Circle and round square nodes indicate targets and compounds, respectively.
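The mapping step itself is a plain set intersection; in the sketch below the gene sets are truncated stand-ins (the full lists contain 858 targets and 274 DEGs, respectively).

```python
# Hedged sketch of the mapping in Section 3.3: the 23 TSN-related DN-DEGs
# are the intersection of the predicted TSN target set and the DN DEG set.
# The sets below are truncated stand-ins, not the full gene lists.
tsn_targets = {"VEGFA", "CTGF", "MMP7", "CCL5", "GJA1", "ALB", "ESR1"}  # 858 in total
dn_degs     = {"VEGFA", "CTGF", "MMP7", "CCL5", "GJA1", "ALB", "LPL"}   # 274 in total

common = sorted(tsn_targets & dn_degs)
print(len(common), common)   # the paper obtains 23 common genes (Figure 4)
```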
### 3.4. PPI Network Targets and Analysis

TSN-related DN-DEGs were imported into Bisogenet in Cytoscape to generate the PPI network, which consisted of 883 nodes and 9,507 edges. The targets with “degree,” “betweenness centrality,” and “closeness centrality” above the median values were selected as key targets. Details of the two screening rounds and threshold settings are shown in Figure 6 and Supplementary Table S2. The screening results pointed to key underlying pathways of TSN in DN. A core PPI network consisting of 116 nodes and 1,894 edges was constructed.

Figure 6: The construction and topological analysis of the PPI network. (a) The PPI network of TSN-related DN-DEGs constructed using Bisogenet; red nodes represent TSN-related DN-DEGs. (b) The process of topological analysis for the PPI network.

### 3.5. GO and KEGG Pathway Enrichment Analysis

Enrichment analysis of the core network was performed using the Metascape platform to elucidate the BP, CC, MF, and signaling pathways involved. GO enrichment analysis showed that the BPs involved in the effects of TSN mainly included the transmembrane receptor protein tyrosine kinase signaling pathway, regulation of protein catabolic process, Fc receptor signaling pathway, immune response-regulating signaling pathway, cellular response to growth factor stimulus, and regulation of apoptotic signaling pathway. In addition, the above BPs were associated with related MFs, including protein domain specific binding, kinase binding, ubiquitin-like protein ligase binding, protein kinase binding, phosphoprotein binding, and protein phosphorylated amino acid binding.
The major CCs involved focal adhesion, cell-substrate adherens junction, cell-substrate junction, perinuclear region of the cytoplasm, membrane region, and vesicle lumen (Figure 7).

Figure 7: The GO enrichment of the 116 key genes in the PPI network; the top 10 items for each section are listed separately.

A total of 165 signaling pathways were enriched, and the top 25 are shown in Figure 8. The main related pathways included the PI3K-Akt, chemokine, MAPK, focal adhesion, ErbB, estrogen, Ras, AGE-RAGE, HIF-1, endocrine resistance, and adherens junction signaling pathways. A network was constructed using Cytoscape to visualize the relationship between targets and pathways (Figure 9).

Figure 8: The KEGG enrichment analysis of the top 25 pathways.

Figure 9: The network of targets involved in the major KEGG pathways. Circle and round square nodes denote targets and signaling pathways, respectively.

### 3.6. Association between TSN-Related DN-DEGs and Clinical Features of DN

The correlation between TSN-related DN-DEGs and GFR, the main clinical indicator in DN, was investigated on Nephroseq v5 (Supplementary Figure S2 and Table 3). There were no data for LYZ; detailed information on the other DEGs is shown in Supplementary Table S2. ALB, GPRC5A, PLAT, SNCA, F3, HPGD, CTGF, GJA1, BMP2, CLDN5, LPL, GADD45B, and VEGFA were positively correlated with GFR, suggesting that these genes contribute to renal protection; of these, the correlation between ALB and GFR was relatively weak (R = 0.454). Furthermore, IRF8, ALOX5, LCK, AKR1B10, CCL5, MOXD1, MMP7, and ADH1B were negatively correlated with GFR, implying that these genes are involved in the progression of DN. The correlation between IGF1 and GFR did not reach statistical significance (p=0.057).

Table 3: Pearson’s correlations between GFR and TSN-related DN-DEGs.

| Target | p value | R | R² |
|---|---|---|---|
| MMP7 | 3.11E−04 | −0.697 | 0.485809 |
| ADH1B | 3.53E−05 | −0.764 | 0.583696 |
| MOXD1 | 1.59E−04 | −0.72 | 0.5184 |
| CCL5 | 0.001 | −0.641 | 0.410881 |
| AKR1B10 | 9.48E−04 | −0.655 | 0.429025 |
| LCK | 6.46E−04 | −0.67 | 0.4489 |
| ALOX5 | 0.007 | −0.557 | 0.310249 |
| IRF8 | 0.014 | −0.515 | 0.265225 |
| VEGFA | 2.00E−05 | 0.778 | 0.605284 |
| GADD45B | 5.32E−05 | 0.753 | 0.567009 |
| CLDN5 | 2.06E−05 | 0.778 | 0.605284 |
| ALB | 0.034 | 0.454 | 0.206116 |
| BMP2 | 0.001 | 0.644 | 0.414736 |
| GJA1 | 4.91E−07 | 0.852 | 0.725904 |
| CTGF | 3.49E−04 | 0.693 | 0.480249 |
| HPGD | 1.59E−04 | 0.72 | 0.5184 |
| F3 | 3.52E−06 | 0.817 | 0.667489 |
| SNCA | 5.30E−05 | 0.753 | 0.567009 |
| PLAT | 0.011 | 0.529 | 0.279841 |
| GPRC5A | 8.32E−07 | 0.843 | 0.710649 |
| IGF1 | 0.057 | 0.412 | 0.169744 |
| LPL | 1.37E−05 | 0.787 | 0.619369 |
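For illustration, the sketch below reproduces the kind of computation behind Table 3: Pearson's r between glomerular expression and GFR, with R² = r². The expression and GFR values are simulated to mimic a negative correlate such as MMP7, not Nephroseq data.

```python
# Hedged sketch behind Table 3: Pearson's r between a gene's glomerular
# expression and GFR, with R^2 = r^2. Data are simulated, not Nephroseq values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
gfr = rng.uniform(15, 90, 22)                  # mL/min/1.73 m^2, simulated
expr = -0.02 * gfr + rng.normal(0, 0.4, 22)    # expression rises as GFR falls

r, p = stats.pearsonr(expr, gfr)
print(f"r = {r:.3f}, R^2 = {r * r:.3f}, p = {p:.4g}")
```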
### 3.7. Experimental Validation

#### 3.7.1. Kidney Histological Observations

HE, PAS, and Masson staining revealed that glomeruli from the NC group had a regular, clear, and complete structure, and the mesangial membrane was not significantly thickened. In the DN group, there were hyperplasia and structural disorganization of glomerular cells, with enlarged glomeruli and a broadened mesangial matrix, accompanied by extensive collagen fibril formation and glycogen deposition. Histopathological damage in the kidney was improved in the TSN-treated rats, which showed a relatively intact glomerular structure and reduced fibrous deposition compared to the DN animals (Figure 10).

Figure 10: Renal histopathological changes in each group (magnification ×400).

#### 3.7.2. TSN Regulated the Expression of GJA1, CTGF, MMP7, and CCL5

To confirm the effect of TSN treatment on TSN-related DN-DEGs, the protein levels of GJA1, CTGF, and MMP7 were evaluated by western blot, and renal CCL5 levels were measured by ELISA. The results showed that the levels of CTGF, MMP7, and CCL5 were increased in the DN group, while GJA1 was decreased, compared to the NC group. TSN treatment attenuated the increases in CCL5, MMP7, and CTGF while it upregulated GJA1 (Figure 11).

Figure 11: Effect of TSN on related DEGs in DN rats. Data are presented as mean ± SD (∗P<0.05; ∗∗P<0.01). (a) Representative immunoblots for the CX43 (GJA1), CTGF, MMP7, and β-Tubulin proteins. (b) Relative expression levels of CX43/β-Tubulin, CTGF/β-Tubulin, and MMP7/β-Tubulin; quantified protein expression was normalized to β-Tubulin (fold change relative to NC). (c) CCL5 expression in all groups.
## 4. Discussion

DN is an important microvascular complication of DM and also the main cause of ESRD.
Indeed, DN is closely associated with increased mortality in DM patients. DN is multifactorial and is characterized by decreased GFR, proteinuria, and renal ultrastructural changes. Typical pathological changes in DN include tubular and glomerular basement membrane thickening, interstitial ECM expansion, glomerulosclerosis, and tubulointerstitial fibrosis, which ultimately cause renal hypofunction [23]. The mainstay of DN therapy is the control of blood glucose, blood lipids, and blood pressure [24]. However, the progressive decline of renal function in DN cannot be effectively prevented by glucose-lowering strategies alone, so the identification of novel treatment strategies for DN is an important topic. In this regard, TCM has been used for many years to treat DM and its complications [7, 24] and is characterized as a multicomponent, multitarget strategy acting on multiple pathways. TSN is an empirical formula for DN developed by the Chinese medicine expert Yanbin Gao and has been shown to be effective in both humans and mice. In this study, the molecular network mechanism of TSN was explored with network pharmacology to investigate the potential mechanisms underlying the use of this formula in DN. We predicted the key herbs, active compounds, and potential effector targets in TSN. TSN-related DN-DEGs were obtained and found to be strongly correlated with decreases in GFR. GFR is an indicator of the extent of glomerular disease and the level of renal function; therefore, these genes were identified as potential key targets of TSN treatment in DN. Besides, Huang Qi and the bioactive components mairin (degree = 7), quercetin (degree = 5), and hederagenin (degree = 5) were considered key contributors to the treatment, as they were highly associated with the 23 targets. Mairin, also known as betulinic acid, can reduce glucose uptake and decrease endogenous glucose production [25, 26], inhibiting alpha-glucosidase activity by competitively binding to alpha-glucosidase [27]. Mairin can also inhibit NF-κB activation by preventing the degradation of IκB in DN rats, resulting in reduced fibrosis in DN [28]. Quercetin not only improves renal function but also counteracts hyperglycemia and insulin resistance. Furthermore, quercetin reactivates the Hippo pathway to inhibit the proliferation of glomerular mesangial cells (MCs) in DN rats and in MCs treated with high glucose, and it also improves renal fibrosis and renal function in DN [29]. In addition, quercetin was shown to improve renal function in DN rats by downregulating TGF-β1 and CTGF [30, 31]. Research on hederagenin, however, is relatively scarce. As a dietary component, it can potentially reduce lipid synthesis and lipid absorption in the intestine and promote the excretion of bile acids and triglycerides [32]. Although hyperlipidemia is an important factor in DM and DN, the effects of hederagenin in DM remain unclear. This network exploration reveals the multicomponent synergistic effect of TSN against DN. To investigate the intrinsic mechanisms of TSN, we constructed a regulatory network of TSN-related DN-DEGs and performed GO enrichment analysis, which suggested that TSN may influence DN progression through multiple biological processes, including growth factor regulation, immune response, cellular stress response, and apoptosis.
TSN may also have a regulatory effect on focal adhesion, cell-substrate adherens junctions, and cell-substrate junctions, components that mainly participate in extracellular matrix (ECM) generation. The enriched pathways are related to ECM production, cell proliferation and migration, inflammation, and endocrine disruption, suggesting that the therapeutic efficacy of TSN is mediated by the synergy of multiple pathways. For example, AGEs can upregulate RAGE expression, activate multiple signaling pathways, including JAK-STAT, MAPK/ERK, PI3K/Akt/mTOR, and NF-κB, and increase oxidative stress, inflammation, and renal fibrosis, causing structural and functional disorders in the kidneys of patients with DM [33]. An active PI3K/AKT pathway in diabetic patients upregulates CTGF [34], which is involved in ECM deposition and promotes EMT in DN [35, 36]. Hyperglycemia also causes MAPK phosphorylation and activates the MAPK signaling pathway, whose activation increases apoptosis, inflammatory cell invasion, and ECM synthesis [37]. Focal adhesion is involved in cell migration and ECM synthesis during EMT in DN [38]. Long-term overactivation of HIF-1α can induce ECM deposition, causing glomerulosclerosis and renal interstitial fibrosis [39, 40]. Previous studies have reported that TSN exerts a nephroprotective effect in DN mice by reversing podocyte EMT [9]. Combining these observations with the results of the GO and KEGG enrichment analyses, we presume that a potential mechanism of TSN treatment in DN is the reduction of ECM deposition in the kidneys by modulating levels of growth factors and chemokines, alleviating cellular stress and inflammation, and inhibiting EMT-induced cell proliferation. EMT is an induced transformation of damaged cells into mesenchymal cells with migratory and fiber-generating capacity. The EMT process generates cytokines and chemokines, accompanied by the recruitment and proliferation of fibroblasts, causing the production and deposition of ECM, which participates in the progression of glomerulosclerosis and renal fibrosis [41–43]. Pathologically, DN presents with ECM accumulation and renal fibrosis: extensive protein deposition in the basement membrane leads to thickening of the glomerular basement membrane, inducing interstitial fibrosis, glomerulosclerosis, and ultimately renal failure [41, 44]. Therefore, therapies directed at EMT and renal fibrosis are expected to be effective strategies for the treatment of DN. In the DN rat model established here, histopathological observation revealed that TSN effectively attenuated the extent of glomerular damage and reduced fibrous deposition, demonstrating a protective effect against renal fibrosis in DN rats. The intrinsic mechanisms and related targets involved were explored further. Among the TSN-related DN-DEGs, GJA1, CTGF, MMP7, and CCL5 were our main targets of interest. Gap junction protein alpha 1 (GJA1), also called CX43, is an essential component of the cellular junctional structures that transport small molecules between cells. GJA1 prevents the progression of renal fibrosis in DN by improving oxidative stress [45] and by downregulating TGF-β1 levels [46, 47]. The growth factor CTGF amplifies the fibrogenic activity of TGF-β1 and induces the accumulation of extracellular matrix [48, 49].
MMP7, a member of the MMP family, can cleave a series of matrix components and protein fragments and release growth factors from the extracellular matrix, thereby promoting renal fibrosis [23, 50, 51]. In addition, the chemokine signaling pathway was enriched in the KEGG analysis. CCL5, a member of the C-C chemokine family, exerts a powerful chemotactic effect on immune cells and is involved in the chronic inflammation seen in the progression of DN [4, 52]. Therefore, the levels of CCL5 in each group were measured. As expected, GJA1 was upregulated and CTGF, MMP7, and CCL5 were downregulated by TSN, alleviating kidney fibrosis in DN and demonstrating that these targets are engaged by the treatment. We investigated the role of TSN in DN therapy at the molecular level using a network pharmacology method and explored the possible mechanisms of its effect. The inhibition of renal fibrosis and inflammation by TSN is predicted to be a potential mechanism for the treatment of DN, and growth factors and chemokines may be key mediators through which TSN exerts its nephroprotective effects.

## 5. Conclusion

In conclusion, potential active compounds, genes, and signaling pathways involved in the nephroprotection afforded by TSN treatment were identified. TSN appears to be a promising strategy to treat DN through the synergistic effect of its “multicomponent, multitarget, and multipathway” composition. Furthermore, it was verified that TSN can regulate the levels of GJA1, CTGF, MMP7, and CCL5 and alleviate renal fibrosis in DN. This work provides a science-based foundation for DN treatment with TSN. Additional experimental validation is essential to investigate other mechanisms involved in renal fibrosis and the roles of the other TSN-related DEGs. --- *Source: 1025053-2021-09-09.xml*
# Exploring the Potential Mechanism of Tang-Shen-Ning Decoction against Diabetic Nephropathy Based on the Combination of Network Pharmacology and Experimental Validation

**Authors:** Jiajun Liang; Jiaxin He; Yanbin Gao; Zhiyao Zhu

**Journal:** Evidence-Based Complementary and Alternative Medicine (2021)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2021/1025053
---

## Abstract

Background. Diabetic nephropathy (DN) has become one of the leading causes of end-stage renal disease (ESRD). Tang-Shen-Ning (TSN) decoction, an effective Traditional Chinese formula for DN, can improve renal function and inhibit renal fibrosis in DN. However, its potential mechanism is still unexplored. Methods. A network pharmacology approach was employed in this study, including screening for differentially expressed genes of DN (DN-DEGs), protein-protein interaction (PPI) network analysis, and GO and KEGG enrichment analysis. In addition, a rat model was established to verify the potential effect of TSN in DN. Results. Twenty-three TSN-related DN-DEG targets were identified. These genes were associated with decreased glomerular filtration rate (GFR) in DN. The enrichment analysis suggested that the inhibition of renal fibrosis and inflammation through growth factors and chemokines is the potential mechanism through which TSN improves DN. TSN reduced renal fibrosis and improved pathological damage in the kidney in vivo through the regulation of GJA1, CTGF, MMP7, and CCL5, which are genes associated with ECM deposition. Conclusion. This study revealed that TSN improves DN through a multicomponent, multitarget, and multipathway synergy. We provide a scientific basis for potential targets for TSN use to treat DN, yet further experimental validation is needed to investigate these targets and mechanisms.

---

## Body

## 1. Introduction

Diabetic nephropathy (DN), a severe microvascular complication of long-term diabetes mellitus (DM), is present in 40% of diabetic patients [1]. DN leads to chronic progressive renal damage, accompanied by increased urinary protein and decreased glomerular filtration rate (GFR). Indeed, DN has become a leading cause of progression to end-stage renal disease (ESRD) in DM patients [2, 3]. KDOQI proposed that the term “diabetic nephropathy” be replaced by “diabetic kidney disease” (DKD), which applies to any kidney disease caused by diabetes, whereas DN can be diagnosed only after histopathological confirmation. The pathological progression of DN includes glomerular basement membrane thickening, mesangial matrix hyperplasia, and glomerulosclerosis [4]. DN's pathogenesis is complex and includes alterations in renal hemodynamics, disturbances in glucolipid metabolism, the action of various cytokines, and activation of the Renin-Angiotensin-Aldosterone System (RAAS). RAAS inhibitors have some benefit in the treatment of DN. However, clinical studies showed that dual RAAS blockade is not only less effective in the long term but also exposes patients to risky events such as acute kidney injury or hyperkalemia [5, 6]. Therefore, it is essential to identify novel therapeutic strategies for DN.

In China, Traditional Chinese Medicine (TCM) has a long history of treating DM and DN and provides renal protection in DN through different herbal combinations [7]. Developed by Yanbin Gao, a TCM expert, Tang-Shen-Ning decoction (TSN) is an empirical formula to treat DN, consisting of Astragalus membranaceus (Fisch.) Bunge (Huang Qi in Chinese, HQ, 15 g), Euryale ferox Salisb. ex DC (Qian Shi in Chinese, QS, 15 g), Rosa laevigata Michx (Jin Ying Zi in Chinese, JYZ, 15 g), Rheum officinale Baill (Da Huang in Chinese, DH, 6 g), and Ligusticum chuanxiong Hort (Chuan Xiong in Chinese, CX, 12 g).
According to a double-blinded controlled clinical trial in DN patients, TSN reduced the 24-hour urine albumin excretion rate (24 h UAER) compared to the control group. Patients in the TSN treatment group also showed improvements in serum SOD, MDA, and hs-CRP compared to the control group, indicating that TSN is a safe and effective medicine for early DN that improves the inflammatory and oxidative stress status in DN [8]. It was also shown that TSN treatment in DN mice decreased 24 h UAER, serum creatinine, and blood urea nitrogen. Furthermore, TSN activated the Wnt/β-catenin pathway, reversed podocyte epithelial-mesenchymal transition (EMT), reduced the expression of fibroblast-specific protein 1 (FSP-1) and collagen I, and alleviated kidney damage in DN mice [9]. However, TCM acts on multiple targets and pathways; therefore, the potential mechanism of TSN in DN still needs to be investigated.

Currently, significant developments are taking place in the field of systems biology, in which network pharmacology is considered an important tool for drug discovery. Network pharmacology aims to describe a biological system using a network structure, switching the research paradigm to “network-targeted, multicomponent therapy.” This research model has similarities with TCM: both rest on the synergy of multicomponent therapeutic approaches that affect multiple pathways, so the underlying mechanisms of TCM can be elucidated through network pharmacology [10, 11]. Therefore, we performed a systems biology-based study to explore the underlying mechanisms of TSN in DN (Figure 1) as guidance for further research.

Figure 1 The schematic diagram of the network pharmacological study of TSN for DN.

## 2. Materials and Methods

### 2.1. Screening of TSN Compounds with Potential Biological Activity

The TCMSP database (http://tcmspw.com/tcmsp.php) [12] was searched to obtain the candidate ingredients of the five herbs contained in TSN. Compounds with oral bioavailability (OB) ≥ 30% and drug-likeness (DL) ≥ 0.18 were considered potential bioactive ingredients [11, 13].

### 2.2. Target Screening

The TCMSP, PubChem database (https://pubchem.ncbi.nlm.nih.gov/) [14], and STITCH platform (http://stitch.embl.de/cgi/network.pl) [15] were used to search the targets of the potential bioactive ingredients in TSN. In the PubChem database, targets were collected from the following four sections: Chemical-Gene Co-Occurrences in Literature, Protein Bound 3D Structures, Drug-Gene Interactions, and BioAssay Results. The threshold score was set to 0.9 to filter targets predicted by STITCH.

### 2.3. Identification of Differentially Expressed Genes (DEGs)

Gene expression data from 9 DN and 13 normal human glomeruli in the GSE30528 dataset, first contributed by Woroniecka et al. [16], were downloaded from the GEO database (https://www.ncbi.nlm.nih.gov/geo/). DEGs between glomeruli from DN patients and normal samples were obtained using the GEO2R online tool by setting |logFC| > 1.5 and p < 0.05. Genes with logFC < 0 were considered downregulated and genes with logFC > 0 upregulated.
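For readers who want to rerun this screen offline, a minimal sketch is given below. It assumes the GEO2R result table for GSE30528 has been exported to a CSV; the file name and the column names `logFC` and `P.Value` (limma's defaults) are assumptions, not part of the original workflow.

```python
import pandas as pd

# Hypothetical export of the GEO2R (limma) result table for GSE30528.
deg_table = pd.read_csv("GSE30528_geo2r_results.csv")

# Thresholds used in the paper: |logFC| > 1.5 and p < 0.05.
degs = deg_table[(deg_table["logFC"].abs() > 1.5)
                 & (deg_table["P.Value"] < 0.05)].copy()

# logFC < 0 marks a gene as downregulated in DN glomeruli, logFC > 0 as upregulated.
degs["regulation"] = degs["logFC"].map(lambda fc: "Up" if fc > 0 else "Down")

# The paper reports 274 DEGs in total: 67 up- and 207 downregulated.
print(degs["regulation"].value_counts())
```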
### 2.4. Network Construction

Networks were constructed using Cytoscape v3.8.0 [17] as follows: (1) component-target network, (2) protein-protein interaction (PPI) network for DEGs, (3) component-DEG network, and (4) target-signaling pathway network.

### 2.5. PPI Network and Topological Analysis

TSN-related DEGs in DN (TSN-related DN-DEGs) were identified as DN DEGs that overlap with the predicted targets of TSN. The PPI network of the TSN-related DN-DEGs was constructed using Bisogenet in Cytoscape. Three topology parameters, namely, degree, betweenness centrality, and closeness centrality, were chosen to analyze the topology of the network diagram, reflecting the topological importance of the nodes. Nodes with all corresponding parameters greater than 2 times the median were retained, yielding the core targets of the PPI network [18]. “Degree” indicates how many edges are connected to a node; “betweenness centrality” is the number of times a node appears on the shortest paths between other nodes relative to the total number of such paths; “closeness centrality” measures the closeness of a node by calculating the lengths of the shortest paths from that node to the others [19].
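This screen is easy to reproduce outside Cytoscape. The sketch below uses networkx on a hypothetical exported edge list and applies the 2× median rule once; the published workflow applied the screen in two successive rounds, which corresponds to calling the function twice.

```python
import networkx as nx
import numpy as np

def topological_core(graph):
    """Keep nodes whose degree, betweenness centrality, and closeness
    centrality all exceed twice the corresponding network-wide median."""
    metrics = [
        dict(graph.degree()),
        nx.betweenness_centrality(graph),
        nx.closeness_centrality(graph),
    ]
    keep = set(graph.nodes())
    for values in metrics:
        cutoff = 2 * np.median(list(values.values()))
        keep &= {node for node, v in values.items() if v > cutoff}
    return graph.subgraph(keep).copy()

# Hypothetical edge list exported from the Bisogenet PPI network.
ppi = nx.read_edgelist("ppi_edges.txt")
core = topological_core(ppi)  # repeat for a second screening round if needed
print(core.number_of_nodes(), core.number_of_edges())
```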
### 2.6. Gene Ontology (GO) and KEGG Pathway Enrichment Analysis

The hub targets in the PPI network were analyzed using Metascape (http://metascape.org/gp/index.html) [20]. The enrichment results with p < 0.05 were ranked, and those with p < 0.01 were considered significantly enriched. Finally, the biological processes (BP), molecular functions (MF), cellular components (CC), and signaling pathways were identified.

### 2.7. Correlation of TSN-Related DN-DEGs with GFR

Nephroseq v5 (http://v5.nephroseq.org) is an online platform for mining comprehensive nephropathy gene expression datasets to identify markers of disease progression by correlating renal gene expression with known disease phenotypes [21]. The Nephroseq v5 platform was applied to analyze the correlation between the TSN-related DN-DEGs and GFR. The TSN-related DN-DEGs were searched and the dataset Woroniecka Diabetes Glom, which is linked to GSE30528, was selected.
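Nephroseq performs this correlation server-side; the underlying computation is a plain Pearson correlation, sketched here on a hypothetical per-sample table holding one expression column per gene plus a GFR column.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical export mirroring the Woroniecka Diabetes Glom dataset.
samples = pd.read_csv("woroniecka_glom_expression_gfr.csv")

for gene in ["GJA1", "CTGF", "MMP7", "CCL5"]:
    r, p = pearsonr(samples[gene], samples["GFR"])
    print(f"{gene}: R = {r:.3f}, R^2 = {r * r:.3f}, p = {p:.2e}")
```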
### 2.8. Experimental Validation

#### 2.8.1. Reagents

Streptozocin (STZ) was purchased from Sigma. A RANTES/CCL5 ELISA kit (CSB-E07398r) was purchased from Cusabio Biotech. Mouse monoclonal anti-CTGF antibody (Cat#: SC-365970) was purchased from Santa Cruz Biotechnology. Rabbit polyclonal anti-MMP7 antibody (Cat#: 3801) and rabbit polyclonal anti-GJA1 antibody (Cat#: 3512) were purchased from Cell Signaling Technology. Mouse monoclonal anti-β-Tubulin antibody (Cat#: C1340) was purchased from APPLYGEN. Donkey anti-mouse IgG was purchased from Proteintech. Goat anti-rabbit IgG was purchased from LabLead.

#### 2.8.2. Animals and Models

Eighteen male Sprague Dawley rats (180–220 g) were purchased from Weitonglihua (Beijing, China). Rats were fed and given water ad libitum in an SPF environment. All procedures were approved by the Animal Experiments and Experimental Animal Welfare Committee of Capital Medical University.

After one week of acclimatization, rats were randomly distributed into a normal control group (NC, n = 6), a DN group (DN, n = 6), and a TSN group (TSN, n = 6). The DN rat model used in both the DN and TSN groups was established as described previously [22]. Briefly, rats received an intraperitoneal injection of streptozotocin (STZ, 55 mg/kg) and a high-fat diet (10% lard, 20% sucrose, 2.5% cholesterol, 0.5% sodium cholate, and 67% basic feed). Rats in the NC group were injected with the same dose of sodium citrate buffer and fed a normal diet (12% fat, 28% protein, and 60% carbohydrate). After 7 days, random blood glucose (RBG) was measured in all rats in the diabetic group, and rats with RBG levels above 16.7 mmol/L for three consecutive days were considered diabetic. Diabetic rats were randomly distributed into the DN (n = 6) and TSN (n = 6) groups. Rats in the TSN group were administered 20 g/kg TSN orally every day, while rats in the NC and DN groups were given the same volume of normal saline instead. The rats were euthanized after 12 weeks of treatment. Fresh kidneys were dissected and preserved at −80°C for further experiments.

#### 2.8.3. Histological Analysis

Formalin-fixed kidney tissues were embedded in paraffin and stained with hematoxylin-eosin (HE), Periodic Acid-Schiff (PAS), and Masson's trichrome. The slices were analyzed and captured using a light microscope (Leica, DM4B) under ×400 magnification.

#### 2.8.4. ELISA Assay

CCL5 levels in the rat kidneys were measured using an ELISA kit (Cusabio, Wuhan, China) according to the instructions provided.

#### 2.8.5. Western Blot Analysis

Kidney tissues were lysed and the supernatant was collected after centrifugation. After homogenization, proteins were separated using 10% SDS-PAGE electrophoresis and electrotransferred to PVDF membranes. Membranes were blocked with 5% non-fat milk and then incubated overnight at 4°C with the appropriate primary antibody against GJA1 (Cell Signaling Technology, USA), CTGF (Santa Cruz Biotechnology, USA), MMP7 (Cell Signaling Technology, USA), or β-Tubulin (APPLYGEN, China). After that, the membranes were incubated with the corresponding secondary antibodies, donkey anti-mouse IgG (Proteintech, USA) or goat anti-rabbit IgG (LabLead, China), at room temperature. Antigen-antibody immunoreactivity was visualized using enhanced chemiluminescence (ECL) reagents (Millipore, USA). Protein expression was normalized to the intensity of β-Tubulin and analyzed with Gelpro Analyzer software.
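Band quantification was performed in Gelpro Analyzer, but the normalization arithmetic reduces to a ratio against the loading control followed by scaling to the NC mean. A sketch with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical densitometry readout (arbitrary units): one row per animal,
# with one column per target band plus the beta-Tubulin loading control.
bands = pd.read_csv("blot_densitometry.csv")  # columns: group, GJA1, CTGF, MMP7, Tubulin

for target in ["GJA1", "CTGF", "MMP7"]:
    ratio = bands[target] / bands["Tubulin"]        # loading-control normalization
    nc_mean = ratio[bands["group"] == "NC"].mean()  # reference level of the NC group
    bands[f"{target}_fold"] = ratio / nc_mean       # fold change relative to NC

print(bands.groupby("group")[["GJA1_fold", "CTGF_fold", "MMP7_fold"]].mean())
```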
### 2.9. Statistical Analyses

Pearson's correlations between GFR and the TSN-related DN-DEGs were computed in Nephroseq v5. Experimental data are presented as mean ± SD and were analyzed using GraphPad Prism 9.0. One-way ANOVA was applied for comparisons among more than two groups. p < 0.05 was considered statistically significant.
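As a sketch of this comparison with scipy (the group values below are invented placeholders, not measurements from the study):

```python
from scipy.stats import f_oneway

# Hypothetical renal CCL5 ELISA readings (pg/mL), n = 6 per group.
nc = [52, 48, 55, 50, 47, 53]
dn = [95, 102, 88, 110, 97, 105]
tsn = [70, 66, 75, 72, 68, 71]

f_stat, p_value = f_oneway(nc, dn, tsn)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")  # p < 0.05: group means differ
```

In practice a post hoc test (e.g., Tukey's HSD) would follow a significant ANOVA to identify which pairs of groups differ.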
## 3. Results

### 3.1. Potential Bioactive Components and Targets in TSN

After screening by OB and DL values, 52 potential bioactive compounds were collected, including 20 from Huang Qi, 2 from Qian Shi, 16 from Da Huang, 7 from Jin Ying Zi, and 7 from Chuan Xiong; 47 unique bioactive compounds were obtained after removing duplicates (Table 1). By searching these 47 components of TSN in the TCMSP, PubChem, and STITCH databases, we obtained 858 corresponding potential targets. The Compound-Target network of TSN suggested potential synergistic effects of these herbs at shared targets (Figure 2).
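A minimal sketch of the OB/DL screen and duplicate removal described in Section 2.1, assuming the TCMSP records for the five herbs have been exported to a CSV with hypothetical column names:

```python
import pandas as pd

# Hypothetical TCMSP export: columns mol_id, compound, ob (%), dl, herb.
ingredients = pd.read_csv("tcmsp_tsn_ingredients.csv")

bioactive = ingredients[(ingredients["ob"] >= 30) & (ingredients["dl"] >= 0.18)]

# The 52 herb-level hits collapse to 47 unique compounds; compounds shared
# between herbs (e.g., kaempferol in HQ and JYZ) are merged by Mol ID.
unique = (bioactive.groupby(["mol_id", "compound", "ob", "dl"])["herb"]
          .agg(lambda herbs: ", ".join(sorted(set(herbs))))
          .reset_index())
print(len(bioactive), len(unique))
```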
Table 1 The potential bioactive compounds in TSN.

| Mol ID | Compound | OB (%) | DL | Herb |
| --- | --- | --- | --- | --- |
| MOL000211 | Mairin | 55.38 | 0.78 | HQ |
| MOL000239 | Jaranol | 50.83 | 0.29 | HQ |
| MOL000296 | Hederagenin | 36.91 | 0.75 | HQ |
| MOL000033 | (3S,8S,9S,10R,13R,14S,17R)-10,13-Dimethyl-17-[(2R,5S)-5-propan-2-yloctan-2-yl]-2,3,4,7,8,9,11,12,14,15,16,17-dodecahydro-1H-cyclopenta[a]phenanthren-3-ol | 36.23 | 0.78 | HQ |
| MOL000354 | Isorhamnetin | 49.6 | 0.31 | HQ |
| MOL000371 | 3,9-Di-O-methylnissolin | 53.74 | 0.48 | HQ |
| MOL000374 | 5′-Hydroxyiso-muronulatol-2′,5′-di-O-glucoside | 41.72 | 0.69 | HQ |
| MOL000378 | 7-O-Methylisomucronulatol | 74.69 | 0.3 | HQ |
| MOL000379 | 9,10-Dimethoxypterocarpan-3-O-β-D-glucoside | 36.74 | 0.92 | HQ |
| MOL000380 | (6aR,11aR)-9,10-Dimethoxy-6a,11a-dihydro-6H-benzofurano[3,2-c]chromen-3-ol | 64.26 | 0.42 | HQ |
| MOL000387 | Bifendate | 31.1 | 0.67 | HQ |
| MOL000392 | Formononetin | 69.67 | 0.21 | HQ |
| MOL000398 | Isoflavanone | 109.99 | 0.3 | HQ |
| MOL000417 | Calycosin | 47.75 | 0.24 | HQ |
| MOL000422 | Kaempferol | 41.88 | 0.24 | HQ, JYZ |
| MOL000433 | FA | 68.96 | 0.71 | HQ, CX |
| MOL000438 | (3R)-3-(2-Hydroxy-3,4-dimethoxyphenyl)chroman-7-ol | 67.67 | 0.26 | HQ |
| MOL000439 | Isomucronulatol-7,2′-di-O-glucosiole | 49.28 | 0.62 | HQ |
| MOL000442 | 1,7-Dihydroxy-3,9-dimethoxy pterocarpene | 39.05 | 0.48 | HQ |
| MOL000098 | Quercetin | 46.43 | 0.28 | HQ, JYZ |
| MOL002773 | Beta-carotene | 37.18 | 0.58 | QS |
| MOL007180 | Vitamin-E | 32.29 | 0.7 | QS |
| MOL001494 | Mandenol | 42 | 0.19 | JYZ, CX |
| MOL000358 | Beta-sitosterol | 36.91 | 0.75 | JYZ, DH |
| MOL005030 | Gondoic acid | 30.7 | 0.2 | JYZ |
| MOL008622 | Methyl trametenolate | 42.88 | 0.82 | JYZ |
| MOL008628 | 4′-Methyl-N-methylcoclaurine | 53.43 | 0.26 | JYZ |
| MOL002280 | Torachrysone-8-O-beta-D-(6′-oxayl)-glucoside | 43.02 | 0.74 | DH |
| MOL002281 | Toralactone | 46.46 | 0.24 | DH |
| MOL002288 | Emodin-1-O-beta-D-glucopyranoside | 44.81 | 0.8 | DH |
| MOL002293 | Sennoside D_qt | 61.06 | 0.61 | DH |
| MOL002297 | Daucosterol_qt | 35.89 | 0.7 | DH |
| MOL002303 | Palmidin A | 32.45 | 0.65 | DH |
| MOL000471 | Aloe-emodin | 83.38 | 0.24 | DH |
| MOL000554 | Gallic acid-3-O-(6′-O-galloyl)-glucoside | 30.25 | 0.67 | DH |
| MOL000096 | (−)-Catechin | 49.68 | 0.24 | DH |
| MOL002135 | Myricanone | 40.6 | 0.51 | CX |
| MOL002140 | Perlolyrine | 65.95 | 0.27 | CX |
| MOL002151 | Senkyunone | 47.66 | 0.24 | CX |
| MOL002157 | Wallichilide | 42.31 | 0.71 | CX |
| MOL000359 | Sitosterol | 36.91 | 0.75 | CX |
| MOL002235 | Eupatin | 50.8 | 0.41 | DH |
| MOL002251 | Mutatochrome | 48.64 | 0.61 | DH |
| MOL002259 | Physciondiglucoside | 41.65 | 0.63 | DH |
| MOL002260 | Procyanidin B-5,3′-O-gallate | 31.99 | 0.32 | DH |
| MOL002268 | Rhein | 47.07 | 0.28 | DH |
| MOL002276 | Sennoside E_qt | 50.69 | 0.61 | DH |

Figure 2 The Compound-Target network of TSN, consisting of 910 nodes and 2912 edges. Circle and round square nodes denote the compounds and targets, respectively.

### 3.2. DEGs Identified in DN

Glomerular gene expression data from GSE30528 were analyzed, resulting in 274 DEGs (Figure 3 and Supplementary Table S1), including 67 upregulated (red dots) and 207 downregulated (blue dots) genes.

Figure 3 Volcano plot of DEGs associated with diabetic nephropathy from the GSE30528 dataset and the 23 hub proteins targeted by TSN; blue dots represent downregulated DEGs; red dots represent upregulated genes.

### 3.3. DN-DEGs Related to TSN

After mapping the DN-DEGs to the 858 potential TSN targets, 23 common genes were collected as critical effect targets of TSN in DN (Figure 4); their information is shown in Table 2. The expression levels of the common targets provided by the matrix file of GSE30528 are shown in Supplementary Fig. S1. A disease network including the TCM compounds was constructed based on the common targets (Figure 5), in which HQ contained the most active components associated with the DN-DEGs, with a total of 7 targets, suggesting that HQ may be the most effective herb in TSN. In addition, mairin (MOL000211) was related to the highest number of targets (7 in total), followed by quercetin (MOL000098) and hederagenin (MOL000296) (both with 5).

Figure 4 Venn diagram showing the 23 TSN-related DN-DEGs.
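The mapping itself is a plain set intersection; a sketch with hypothetical gene-list files:

```python
# Intersect the 274 DN DEGs with the 858 predicted TSN targets.
with open("dn_degs.txt") as fh:
    dn_degs = {line.strip() for line in fh if line.strip()}
with open("tsn_targets.txt") as fh:
    tsn_targets = {line.strip() for line in fh if line.strip()}

common = sorted(dn_degs & tsn_targets)
print(len(common), common)  # the paper reports 23 TSN-related DN-DEGs
```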
Table 2 The information of the 23 TSN-related DN-DEGs.

| Gene | Description | logFC | p value | Regulation |
| --- | --- | --- | --- | --- |
| LPL | Lipoprotein lipase | −3.19928 | 7.54E−07 | Downregulated |
| IGF1 | Insulin-like growth factor 1 | −2.57456 | 7.11E−05 | Downregulated |
| GPRC5A | G protein-coupled receptor class C group 5 member A | −2.27982 | 3.09E−07 | Downregulated |
| PLAT | Tissue-type plasminogen activator | −2.24543 | 0.000584 | Downregulated |
| SNCA | Synuclein alpha | −2.1809 | 1.15E−06 | Downregulated |
| F3 | Tissue factor | −2.14994 | 8.01E−06 | Downregulated |
| HPGD | 15-Hydroxyprostaglandin dehydrogenase (NAD(+)) | −2.06642 | 0.000075 | Downregulated |
| CTGF | CCN family member 2 | −1.98966 | 2.17E−05 | Downregulated |
| GJA1 | Gap junction alpha-1 protein | −1.93855 | 3.39E−08 | Downregulated |
| BMP2 | Bone morphogenetic protein 2 | −1.82795 | 4.52E−05 | Downregulated |
| ALB | Albumin | −1.78623 | 0.00896 | Downregulated |
| CLDN5 | Claudin 5 | −1.75802 | 5.85E−06 | Downregulated |
| GADD45B | Growth arrest and DNA damage inducible beta | −1.65127 | 2.25E−05 | Downregulated |
| VEGFA | Vascular endothelial growth factor A | −1.58624 | 5.92E−07 | Downregulated |
| LYZ | Lysozyme | 1.529532 | 0.00258 | Upregulated |
| IRF8 | Interferon regulatory factor 8 | 1.583247 | 0.000131 | Upregulated |
| ALOX5 | Arachidonate 5-lipoxygenase | 1.673735 | 0.00516 | Upregulated |
| LCK | Tyrosine-protein kinase Lck | 1.694745 | 2.47E−06 | Upregulated |
| AKR1B10 | Aldo-keto reductase family 1 member B10 | 1.705447 | 0.00508 | Upregulated |
| CCL5 | C-C motif chemokine ligand 5 | 1.720493 | 0.000615 | Upregulated |
| MOXD1 | Monooxygenase 1 | 1.905562 | 1.47E−05 | Upregulated |
| MMP7 | Matrix metallopeptidase 7 | 2.08645 | 0.000339 | Upregulated |
| ADH1B | Alcohol dehydrogenase 1B | 2.183966 | 2.28E−05 | Upregulated |

Figure 5 The network of compounds and the 23 TSN-related DEGs. The circle and round square nodes indicate targets and compounds, respectively.

### 3.4. PPI Network Targets and Analysis

The TSN-related DN-DEGs were imported into Bisogenet in Cytoscape to generate the PPI network. The network consisted of 883 nodes and 9507 edges. Targets with degree, betweenness centrality, and closeness centrality above the median-based thresholds were selected as key targets; details of the two screening rounds and the threshold settings are shown in Figure 6 and Supplementary Table S2. The screening highlighted the key targets underlying the action of TSN in DN. A core PPI network consisting of 116 nodes and 1894 edges was constructed.

Figure 6 The construction and topological analysis of the PPI network. (a) The PPI network of TSN-related DN-DEGs constructed using Bisogenet; the red nodes represent TSN-related DN-DEGs. (b) The process of topological analysis for the PPI network.

### 3.5. GO and KEGG Pathway Enrichment Analysis

Enrichment analysis of the core network was performed using the Metascape platform to elucidate the BP, CC, MF, and signaling pathways involved. GO enrichment analysis showed that the BPs involved in TSN mainly included the transmembrane receptor protein tyrosine kinase signaling pathway, regulation of protein catabolic process, Fc receptor signaling pathway, immune response-regulating signaling pathway, cellular response to growth factor stimulus, and regulation of apoptotic signaling pathway. These BPs were associated with related MFs, including protein domain specific binding, kinase binding, ubiquitin-like protein ligase binding, protein kinase binding, phosphoprotein binding, and protein phosphorylated amino acid binding. The major CCs involved focal adhesion, cell-substrate adherens junction, cell-substrate junction, perinuclear region of the cytoplasm, membrane region, and vesicle lumen (Figure 7).

Figure 7 The GO enrichment of the 116 key genes in the PPI network. The top 10 items for each section are listed separately.
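Metascape computes these enrichments server-side; at their core such analyses rest on a one-sided hypergeometric test, sketched here with invented counts purely for illustration:

```python
from scipy.stats import hypergeom

def enrichment_p(hits, list_size, term_size, background):
    """P(at least `hits` of a term's genes appear in the submitted list),
    given `term_size` term genes among `background` genes overall."""
    return hypergeom.sf(hits - 1, background, term_size, list_size)

# Invented example: 12 of the 116 core genes fall in a 200-gene pathway,
# against a 20,000-gene background.
print(f"p = {enrichment_p(12, 116, 200, 20000):.2e}")
```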
A total of 165 signaling pathways were enriched, and the top 25 are shown in Figure 8. The main related pathways included the PI3K-Akt, chemokine, MAPK, focal adhesion, ErbB, estrogen, Ras, AGE-RAGE, HIF-1, endocrine resistance, and adherens junction signaling pathways. A network was constructed using Cytoscape to visualize the relationship between targets and pathways (Figure 9).

Figure 8 The KEGG enrichment analysis of the top 25 pathways.

Figure 9 The network of targets involved in the major KEGG pathways. Circle and round square nodes denote the targets and signaling pathways, respectively.

### 3.6. Association between DN-DEGs Related to TSN and Clinical Features of DN

The correlation between the TSN-related DN-DEGs and GFR, the main clinical indicator in DN, was investigated with Nephroseq v5 (Supplementary Figure S2 and Table 3). There were no data for LYZ; detailed information on the other DEGs is shown in Supplementary Table S2. ALB, GPRC5A, PLAT, SNCA, F3, HPGD, CTGF, GJA1, BMP2, CLDN5, LPL, GADD45B, and VEGFA were positively correlated with GFR, suggesting that these genes contribute to renal protection; among them, the correlation between ALB and GFR was relatively weak (R = 0.454). Furthermore, IRF8, ALOX5, LCK, AKR1B10, CCL5, MOXD1, MMP7, and ADH1B were negatively correlated with GFR, implying that these genes are involved in the progression of DN. The correlation between IGF1 and GFR did not reach statistical significance (p = 0.057).

Table 3 Pearson's correlations between GFR and the TSN-related DN-DEGs.

| Target | p value | R | R² |
| --- | --- | --- | --- |
| MMP7 | 3.11E−04 | −0.697 | 0.485809 |
| ADH1B | 3.53E−05 | −0.764 | 0.583696 |
| MOXD1 | 1.59E−04 | −0.72 | 0.5184 |
| CCL5 | 0.001 | −0.641 | 0.410881 |
| AKR1B10 | 9.48E−04 | −0.655 | 0.429025 |
| LCK | 6.46E−04 | −0.67 | 0.4489 |
| ALOX5 | 0.007 | −0.557 | 0.310249 |
| IRF8 | 0.014 | −0.515 | 0.265225 |
| VEGFA | 2.00E−05 | 0.778 | 0.605284 |
| GADD45B | 5.32E−05 | 0.753 | 0.567009 |
| CLDN5 | 2.06E−05 | 0.778 | 0.605284 |
| ALB | 0.034 | 0.454 | 0.206116 |
| BMP2 | 0.001 | 0.644 | 0.414736 |
| GJA1 | 4.91E−07 | 0.852 | 0.725904 |
| CTGF | 3.49E−04 | 0.693 | 0.480249 |
| HPGD | 1.59E−04 | 0.72 | 0.5184 |
| F3 | 3.52E−06 | 0.817 | 0.667489 |
| SNCA | 5.30E−05 | 0.753 | 0.567009 |
| PLAT | 0.011 | 0.529 | 0.279841 |
| GPRC5A | 8.32E−07 | 0.843 | 0.710649 |
| IGF1 | 0.057 | 0.412 | 0.169744 |
| LPL | 0.0000137 | 0.787 | 0.619369 |

### 3.7. Experimental Validation

#### 3.7.1. Kidney Histological Observations

HE, PAS, and Masson's staining revealed that glomeruli from the NC group were of regular morphology, clear, and structurally complete, and the mesangial membrane was not significantly thickened. In the DN group, glomerular cells showed hyperplasia and structural disorganization, with enlarged glomeruli and a broadened mesangial matrix, accompanied by extensive collagen fibril formation and glycogen deposition. The histopathological damage in the kidney was improved in the TSN-treated rats, which showed relatively intact glomerular structure and reduced fibrous deposition compared to the DN animals (Figure 10).

Figure 10 Renal histopathological changes in each group (magnification ×400).

#### 3.7.2. TSN Regulated the Expression of GJA1, CTGF, MMP7, and CCL5

To confirm the effect of TSN treatment on the TSN-related DN-DEGs, the protein levels of GJA1, CTGF, and MMP7 were evaluated by western blot, and renal CCL5 levels were measured by ELISA. The levels of CTGF, MMP7, and CCL5 were increased and GJA1 was decreased in the DN group compared to the NC group.
TSN treatment attenuated the increases in CCL5, MMP7, and CTGF and upregulated GJA1 (Figure 11).

Figure 11 Effect of TSN on related DEGs in DN rats. Data are presented as the mean ± SD (∗p < 0.05; ∗∗p < 0.01). (a) Representative immunoblots for the CX43, CTGF, MMP7, and β-Tubulin proteins. (b) The relative expression levels of CX43/β-Tubulin, CTGF/β-Tubulin, and MMP7/β-Tubulin; quantified protein expression was normalized to the corresponding β-Tubulin signal (fold change of NC). (c) CCL5 expression in all groups.
## 4. Discussion

DN is an important microvascular complication of DM and also the main cause of ESRD.
Indeed, DN is closely associated with increased mortality in DM patients. DN is multifactorial and is characterized by decreased GFR, proteinuria, and renal ultrastructural changes. Typical pathological changes in DN include tubular and glomerular basement membrane thickening, interstitial ECM expansion, glomerulosclerosis, and tubulointerstitial fibrosis, which ultimately cause renal hypofunction [23]. The mainstay of DN therapy is the control of blood glucose, blood lipids, and blood pressure [24]. However, the progressive decline of renal function in DN cannot be effectively prevented by glucose-lowering strategies alone. Therefore, the identification of novel treatment strategies for DN is an important topic. In this regard, TCM has been used for many years in the treatment of DM and its complications [7, 24] and is characterized as a multicomponent, multitarget strategy that acts on multiple pathways. TSN is an empirical formula for DN developed by the Chinese medicine expert Yanbin Gao and has shown efficacy in both humans and mice. In this study, the molecular network of TSN was explored with network pharmacology to investigate the potential mechanisms underlying the use of this formula in DN.

We predicted the key herbs, active compounds, and potential effector targets in TSN. The TSN-related DN-DEGs were obtained and found to be strongly correlated with decreases in GFR. GFR is an indicator of the extent of glomerular disease and the level of renal function. Therefore, these genes were identified as potential key targets of TSN treatment in DN. Besides, Huang Qi and the bioactive components mairin (degree = 7), quercetin (degree = 5), and hederagenin (degree = 5) appeared to be the key contributors to the treatment, as they were highly associated with these 23 targets. Mairin, also known as betulinic acid, can reduce glucose uptake and decrease endogenous glucose production [25, 26] by competitively binding to alpha-glucosidase and inhibiting its activity [27]. Mairin can also inhibit NF-κB activation by preventing the degradation of IκB in DN rats, resulting in reduced fibrosis in DN [28]. Quercetin not only improves renal function but also has antihyperglycemic and insulin-sensitizing effects. Furthermore, quercetin reactivates the Hippo pathway to inhibit the proliferation of glomerular mesangial cells (MCs) in DN rats and in MCs treated with high glucose, and it also improves renal fibrosis and renal function in DN [29]. In addition, quercetin was shown to improve renal function in DN rats by downregulating TGF-β1 and CTGF [30, 31]. Research on hederagenin is relatively scarce; as a dietary fiber, it can potentially reduce lipid synthesis and lipid absorption in the intestine and promote the excretion of bile acids and triglycerides [32]. Although hyperlipidemia is an important factor in DM and DN, the effects of hederagenin in DM are still unclear. Overall, the network exploration reveals a multicomponent synergistic effect of TSN against DN.

To investigate the intrinsic mechanisms of TSN, we constructed a regulatory network of the TSN-related DN-DEGs and performed GO enrichment analysis. The results suggested that TSN may influence DN progression through multiple biological processes, including growth factor regulation, immune response, cellular stress response, and apoptosis.
TSN may have a regulatory effect on focal adhesion, cell-substrate adherens junction, and cell-substrate junction, components that mainly participate in extracellular matrix (ECM) generation.

The enriched pathways are related to ECM production, cell proliferation and migration, inflammation, and endocrine disruption, suggesting that the therapeutic efficacy of TSN is mediated by the synergy of multiple pathways. For example, AGEs can upregulate RAGE expression, activate multiple signaling pathways, including JAK-STAT, MAPK/ERK, PI3K/Akt/mTOR, and NF-κB, and increase oxidative stress, inflammation, and renal fibrosis, causing structural and functional disorders in the kidney of patients with DM [33]. An active PI3K/AKT pathway in diabetic patients upregulates CTGF [34], which is involved in ECM deposition and promotes EMT in DN [35, 36]. Hyperglycemia also causes MAPK phosphorylation and activates the MAPK signaling pathway, the activation of which increases apoptosis, inflammatory cell invasion, and ECM synthesis [37]. Focal adhesion is involved in cell migration and ECM synthesis during EMT in DN [38]. Long-term overactivation of HIF-1α can induce ECM deposition, causing glomerulosclerosis and renal interstitial fibrosis [39, 40]. Previous studies have reported that TSN exerts a nephroprotective effect in DN mice by reversing podocyte EMT [9]. Combining these reports with the results of the GO and KEGG enrichment analyses, we presumed that a potential mechanism of TSN treatment in DN is the reduction of ECM deposition in the kidneys of DN patients by modulating levels of growth factors and chemokines, improving cellular stress and inflammation, and inhibiting EMT-induced cell proliferation.

EMT is the induced transformation of damaged cells into mesenchymal cells with migratory and fiber-generating capacity. The EMT process generates cytokines and chemokines, accompanied by the recruitment and proliferation of fibroblasts, causing the production and deposition of ECM, which participates in the progression of glomerulosclerosis and renal fibrosis [41–43]. Pathologically, DN presents with ECM accumulation and renal fibrosis: extensive protein deposition in the basement membrane leads to thickening of the glomerular basement membrane, inducing interstitial fibrosis, glomerulosclerosis, and ultimately renal failure [41, 44]. Therefore, therapies directed at EMT and renal fibrosis are expected to be effective strategies for the treatment of DN. In the DN rat model established here, histopathological observation revealed that TSN effectively attenuated the extent of glomerular damage and reduced fibrous deposition, demonstrating a protective effect against renal fibrosis in DN rats.

The intrinsic mechanisms and related targets were explored further. Among the TSN-related DN-DEGs, GJA1, CTGF, MMP7, and CCL5 were our main targets of interest. Gap junction protein alpha 1 (GJA1), also called CX43, is an essential component of the gap junction structures through which small molecules are transported between cells. GJA1 prevents the progression of renal fibrosis in DN by alleviating oxidative stress [45] as well as by downregulating TGF-β1 levels [46, 47]. The growth factor CTGF amplifies the fibrillogenic activity of TGF-β1 and induces the accumulation of extracellular matrix [48, 49].
MMP7, a member of the MMP family, can cleave a range of matrix proteins and release growth factors from the extracellular matrix, thereby promoting renal fibrosis [23, 50, 51]. In addition, the chemokine signaling pathway was enriched in the KEGG analysis. CCL5, a member of the C-C chemokine family, exerts a powerful chemotactic effect on immune cells and is involved in the chronic inflammation seen in the progression of DN [4, 52]. Therefore, the levels of CCL5 were measured in each group. As expected, TSN upregulated GJA1 and downregulated CTGF, MMP7, and CCL5, alleviating kidney fibrosis in DN and demonstrating that these targets are engaged by the treatment.

We investigated the role of TSN in DN therapy at the molecular level using a network pharmacology approach and explored the possible mechanisms of its effect. The inhibition of renal fibrosis and inflammation by TSN is predicted to be a potential mechanism for the treatment of DN, and the modulation of growth factors and chemokines may be a key route through which TSN exerts its nephroprotective effects.

## 5. Conclusion

In conclusion, potential active compounds, genes, and signaling pathways involved in the nephroprotection afforded by TSN treatment were identified. TSN is perceived to be a promising strategy to treat DN through the synergistic effect of its “multicomponent, multitarget, and multipathway” compounds. Furthermore, it was verified that TSN could regulate the levels of GJA1, CTGF, MMP7, and CCL5 and alleviate renal fibrosis in DN. This work provides a science-based foundation for DN treatment with TSN. Additional experimental validation is essential to investigate other mechanisms involved in renal fibrosis and the roles of the other TSN-related DEGs.

---

*Source: 1025053-2021-09-09.xml*
# Feasibility of Stabilized Zn and Pb Contaminated Soils as Roadway Subgrade Materials

**Authors:** Mingli Wei; Hao Ni; Shiji Zhou; Yuan Li

**Journal:** Advances in Materials Science and Engineering (2020)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2020/1025056

---

## Abstract

The authors have developed a new binder, KMP, which is made from oxalic acid-activated phosphate rock, monopotassium phosphate (KH2PO4), and reactive magnesia (MgO). This study explores the acid neutralization capacity, strength characteristics, water-soaking durability, resilient modulus, and pore size distribution of KMP stabilized soils with individual Zn, Pb, or coexisting Zn and Pb contaminants. For comparison purposes, Portland cement (PC) is also tested. The results show that KMP stabilized soils have a higher acid buffering capacity than PC stabilized soils, regardless of the soil contamination conditions. The water stability coefficient and resilient modulus of the KMP stabilized soils are found to be higher than those of PC stabilized soils. The reasons for the differences in these properties between KMP and PC stabilized soils are interpreted based on the stability and dissolubility of the main hydration products of the KMP and PC stabilized soils, the soil pore distribution, and the concentrations of Mg or Ca leached from the KMP and PC stabilized soils in the acid neutralization capacity tests. Overall, this study demonstrates that KMP is effective in stabilizing soils contaminated with Zn or Pb alone or with mixed Zn and Pb, and that KMP stabilized soils are better suited as roadway subgrade materials.

---

## Body

## 1. Introduction

Rapid industrialization and urbanization in China over the last several decades have produced large numbers of abandoned industrial sites, which may be contaminated with high concentrations of heavy metals [1]. Heavy metals pose risks to environmental and human health and degrade the mechanical properties of soils, restricting the redevelopment of these abandoned industrial sites [2, 3]. Solidification and stabilization (S/S) is a widely used remediation technology that involves mixing binders with contaminated soils to reduce the mobility of contaminants and to strengthen the mechanical stability of the soils by physical and chemical means [4, 5].

Portland cement (PC), a highly alkaline cementitious material, is widely used to immobilize heavy metals, including Pb, Zn, Cu, and Cd, contained in waste or soil [6–8]. However, heavy metal immobilization by PC depends significantly on the degree of cement hydration and on an alkaline environment. Previous studies have reported that heavy metals in PC stabilized soils leach easily when exposed to long-term external stresses such as acid rain, sulfate attack, freeze-thaw cycling, and carbonation [4, 5]. In addition, the strength and modulus of cement stabilized soils decrease rapidly as Zn concentrations increase [5]. Heavy metals such as Zn and Pb have been shown to strongly retard cement hydration. Du et al. [4] reported that the presence of Pb and Zn in cement stabilized soils hinders the formation of Ca(OH)2/CSH and thereby decreases the soil buffering capacity; the higher the concentration, the more intensive the retardant effect of the heavy metals [4, 5].
Therefore, it is necessary to develop alternative binders to stabilize soils with relatively high concentrations of Zn and Pb. In addition, the treated soils should offer environmental safety and mechanical stability so that they can be reused as materials for civil engineering projects such as roadway subgrade courses.

Recently, the authors developed a new binder, KMP, made from oxalic acid-activated phosphate rock, monopotassium phosphate (KH2PO4), and reactive magnesia (MgO) [3]. Compared with PC, the KMP binder may be more valuable for stabilizing soil contaminated with Zn or Pb alone or with mixed Zn and Pb for the following reasons: (1) the hydration products of KMP with significant strength (e.g., K-struvite (MgKPO4·6H2O) and bobierrite (Mg3(PO4)2·8H2O)) are formed through an acid-base reaction between the MgO and the PO4³⁻ released from the acidified apatite and KH2PO4; this reaction is less affected by the solution pH than the cement hydration reaction is, so the KMP stabilized soils may provide more reliable strength than PC stabilized soils; and (2) the reaction of KMP with Zn and Pb in soils can produce insoluble metal phosphates, such as Zn3(PO4)2·4H2O, CaZn2(PO4)2·2H2O, and fluoropyromorphite (Pb5(PO4)3F), which show outstanding chemical and morphological stability even under strongly acidic or alkaline conditions [9]. It is therefore reasonable to expect the KMP stabilized soils to exhibit lower leachability and higher acid buffering capacity, strength, and water-soaking durability than PC stabilized soils.

The effectiveness of S/S is usually judged by the strength and leaching resistance of the solidified products. The leaching resistance of solidified soils depends on contaminant speciation, which is difficult to characterize. Methods such as the toxicity characteristic leaching procedure (TCLP) and the single batch test (EN 12457-2) [10] are commonly used to evaluate S/S and characterize solidified soils. However, different leaching tests can lead to different characterizations of the same soil and may not represent leaching levels in real environments, because disposal and leaching conditions, such as pH, redox potential (Eh), liquid-to-solid ratio, and leachate type, vary [11]. When stabilized soils are used as roadway subgrade materials, acid rain is a major environmental concern. Stegemann and Zhou [12] pointed out that metal leaching tends to be a function of the pH of the solidified matrix. When the binders are high-alkali cementitious materials, such as PC, quicklime, and pulverized fly ash, the pore water pH of the S/S soils will initially be highly alkaline; this alkalinity is neutralized over time by acidic influences, which gradually dissolve the hydration products and release the metal contaminants. Thus, the acid neutralization capacity (ANC) test [13], which measures leachate pH, helps characterize the chemical immobilization of contaminants as well as the chemical durability of the S/S matrix [12] and can better represent disposal conditions. The unconfined compressive strength test provides basic strength information for stabilized soils and is used for quality control in pavement design.
In addition, the resilient modulus (MR) is a common stiffness measure in roadway mixture design and is often used to determine the structural layer coefficients of subbase layers in pavement design. Although many studies have evaluated the strength of stabilized soils, most consider only dry conditions, and few focus on water-soaking durability [3, 5, 6, 14, 15]. Yet the use of binders in wet environments poses additional challenges because of their solubility in water [16].

Previous studies [3, 17] discussed the leachability and unconfined compressive strength of Zn, Pb, or Zn/Pb contaminated soils stabilized by KMP. Soils initially containing relatively high concentrations of individual or mixed Zn and Pb contaminants displayed low leachability and ecotoxicity and high unconfined compressive strength after stabilization with KMP, demonstrating that KMP stabilized these soils effectively. However, Du et al. [3] did not explore the ANC, water-soaking durability, or resilient modulus of KMP stabilized Zn and Pb contaminated soils. The goal of the present study was to evaluate these properties to further investigate the feasibility of reusing KMP stabilized soils as roadway subgrade materials. For comparison purposes, PC is also tested as a control binder. Several series of tests, including ANC tests, mercury intrusion porosimetry (MIP), water-soaking durability tests, and MR tests, are conducted. The effects of the initial Zn and Pb contamination on the water stability and MR of the soils are discussed. The differences in these properties between the KMP and PC stabilized soils are assessed based on the results of the ANC and MIP tests and on the stability and solubility of the main hydration products of KMP and PC.

## 2. Materials and Methods

### 2.1. Materials

The soil used in this study was collected from Nanjing City, China. Table 1 shows its basic physicochemical properties. The Atterberg limits were measured as per ASTM D4318 [18]. Based on the Unified Soil Classification System [19], the soil is classified as a low plasticity clay. The chemical compositions of the apatite, PC, and soil were measured using an X-ray fluorescence spectrometer, and the values are shown in Table 2. Commercial apatite cores with diameters of approximately 30 cm were crushed and ground to pass through a sieve with an opening size of 0.075 mm. The analytical-grade reagent KH2PO4 was obtained from Sinopharm Chemical Reagent Co. Ltd., China. For the KMP powder, the proportions of acidified phosphate rock, KH2PO4, and reactive MgO were controlled at 1 : 1 : 2 (on a dry weight basis), since this ratio yields relatively low leachability and high unconfined compressive strength for the stabilized soils [3].

Table 1 Properties of the soil used in this study.

| Property | Soil |
| --- | --- |
| Natural water content, wn (%) | 21.4 |
| Plastic limit, wp (%) | 20.6 |
| Liquid limit, wL (%) | 45.3 |
| Specific gravity, Gs | 2.72 |
| Soil pH | 7.43 |

Number of replicates is 3; confidence interval (CI) ≥ 95%; standard deviation (SD) < 5%.

Table 2 Oxide chemistry of cement, phosphate rock, and soil used in this study.
| Oxide chemistry¹ | Apatite (%) | Cement (%) | Soil (%) |
| --- | --- | --- | --- |
| Calcium oxide (CaO) | 45.93 | 49.7 | — |
| Aluminum oxide (Al2O3) | 1.23 | 9.87 | 15.40 |
| Magnesium oxide (MgO) | — | 2.06 | — |
| Phosphorus oxide (P2O5) | 25.10 | — | — |
| Potassium oxide (K2O) | — | 0.75 | 1.89 |
| Silicon oxide (SiO2) | 6.14 | 22.6 | 69.61 |
| Ferric oxide (Fe2O3) | — | 3.50 | 4.90 |
| Sulfate oxide (SO3) | — | 3.84 | — |
| Sodium oxide (Na2O) | — | 0.24 | — |
| Fluorine (F) | 2.35 | — | — |
| Chlorine (Cl) | — | — | — |
| Loss on ignition² | 13.12 | 6.19 | 4.86 |

¹Mineral composition analyzed by the X-ray fluorescence method using ARL9800XP+ XRF spectrometry. ²Loss on ignition is referenced to 950°C.

### 2.2. Specimen Preparation

To prepare soil specimens contaminated with Zn or Pb alone or with mixed Zn and Pb, a predetermined volume of the selected stock solution was mixed with the air-dried clean soil until the water content reached the optimum water content (27%). The solution-soil mixture was thoroughly mixed using an electronic mixer to create a homogeneous paste, which was then sealed in a closed container and left undisturbed for 10 days at a temperature of 20 ± 2°C and a relative humidity of 95%. The KMP or PC powder at a predetermined dosage (4 or 6% by dry weight of soil) was poured into the contaminated soil and mixed thoroughly with the electronic mixer for 15 min to achieve homogeneity. Then, approximately 210 g of the mixture was compacted into a stainless steel cylindrical mold, 50 mm in diameter and 50 mm in height, at the optimum water content (27%) and maximum dry density (1.67 × 10³ kg/m³). The specimen was carefully extruded from the mold using a hydraulic jack, sealed in black polyethylene bags, and cured for 7 or 28 d at 20 ± 2°C and 95% relative humidity. For comparison, stabilized clean soil and untreated contaminated soil were also prepared and cured under the same conditions. The concentrations of individual Zn or Pb and combined Zn and Pb contaminants, binder contents, and curing times are summarized in Table 3. In this study, the symbols “Zni” and “Pbj” denote a specimen with a Zn concentration of i% or a Pb concentration of j% based on the oven-dried soil weight.

Table 3 Zn and Pb concentrations and curing times for the various tests and analyses of the spiked soils.

| Test program | Zn and/or Pb concentration (%) | Binder content (%) | Curing time (d) |
| --- | --- | --- | --- |
| ANC | Zn0Pb0∗, Zn1.5, Zn1.5Pb2, and Pb2 | 6 | 28 |
| MIP | Zn0Pb0∗, Zn0.5, Zn1, Zn1.5, Zn1.5Pb2, and Pb2 | 6 | 28 |
| WSD | Zn0Pb0∗, Zn1.5, Zn1.5Pb2, and Pb2 | 4, 6 | 6, 27 |
| RMT | Zn0Pb0∗ and Zn1.5Pb2 | 6 | 28 |

ANC = acid neutralization capacity; MIP = mercury intrusion porosimetry test; WSD = water-soaking durability test; RMT = resilient modulus test. ∗Untreated clean soil.
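As a quick arithmetic illustration of the batching above, the Python sketch below back-calculates the component masses for one 210 g specimen. It assumes the 27% optimum water content applies to the combined dry mass of soil and binder; the paper does not state this basis explicitly, so the resulting split is indicative only.

```python
# Hypothetical batching calculation for one 210 g specimen; assumes the
# 27% optimum water content applies to the combined dry mass of soil and
# binder (an assumption -- the basis is not stated in the paper).
def batch_masses(total_g: float = 210.0, binder_ratio: float = 0.06, w: float = 0.27):
    """Return (dry soil, binder, water) masses in grams."""
    # total = (m_soil + m_binder) * (1 + w), with m_binder = binder_ratio * m_soil
    dry_solids = total_g / (1.0 + w)             # dry soil + binder
    m_soil = dry_solids / (1.0 + binder_ratio)   # dry soil
    return m_soil, binder_ratio * m_soil, total_g - dry_solids

soil, binder, water = batch_masses()
print(f"dry soil {soil:.1f} g, binder {binder:.1f} g, water {water:.1f} g")
# -> dry soil 156.0 g, binder 9.4 g, water 44.6 g
```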
### 2.3. Testing Methods

The ANC test was performed as per USEPA Method 1313 [20]. A series of batch extraction tests was conducted on 20 g of crushed and sieved (<100 μm) stabilized soils cured for 28 d, using 200 ml of nitric acid solutions of various concentrations as extraction liquids. After tumbling for 18 h, the solution was filtered through a 0.45 μm membrane filter and its pH measured with a HORIBA D-54 pH meter. Measurements were made in triplicate and the average values were recorded. In this study, the ratio of the leached Mg or Ca concentration for the stabilized contaminated soils to that of the stabilized clean soil (RLC) is used to quantify the relative stability of the KMP hydration products (mainly MgKPO4·6H2O and Mg3(PO4)2·8H2O) and the PC hydration products (mainly CSH and Ca(OH)2) in the ANC test at a leachate pH of 7. The concentrations of leached Mg and Ca were measured using an IRIS Advantage inductively coupled plasma atomic emission spectrometer (ICP-AES). RLC is defined by the following equation:

$$\mathrm{RLC} = \frac{M_i}{M_0}, \tag{1}$$

where $M_i$ and $M_0$ are the measured leached Mg or Ca concentrations (mg/L) for the stabilized contaminated soil and the stabilized clean soil, respectively.

For the MIP test, a soil sample of approximately 1 cm³ was retrieved from a carefully hand-broken companion specimen after curing for 28 d. The test soil was frozen in liquid nitrogen (boiling point −195°C) and placed in a freezing unit with a vacuum chamber to be dried by sublimation of the frozen water at −80°C; the freeze-drying apparatus used in this study was a XIANOU-18N freeze-drier. The specimens were then analyzed using an Auto Pore IV 9510 mercury intrusion porosimeter. In this method, the capillary pressure equation is employed to compute the pore diameter, as expressed by Mitchell [21]:

$$d = -\frac{4\tau\cos\theta}{p}, \tag{2}$$

where $d$ is the diameter of the intruded pore, $\tau$ is the surface tension of the intruding mercury (4.84 × 10⁻⁴ N/mm at 25°C), $\theta$ is the contact angle (135° in this study), and $p$ is the applied mercury intrusion pressure (maximum 413 MPa in this study).

The water-soaking durability test was conducted as per the method recommended by Du et al. [14]. Each specimen was fully immersed in 1.5 L of distilled water for 1 d, and the unconfined compressive strength after soaking ($q_u'$) was measured by unconfined compression tests (UCTs). The UCTs were conducted on the stabilized soils at a strain rate of 1%/min, as per ASTM D 4219-08 [22]. Three identical samples were tested and the average value of $q_u'$ was recorded. The water stability coefficient ($K_r$), which characterizes the water stability of the soils, was determined using the following equation:

$$K_r = \frac{q_u'}{q_u}, \tag{3}$$

where $q_u'$ is the measured unconfined compressive strength after the water-soaking test (kPa) and $q_u$ is the measured unconfined compressive strength before soaking (kPa), the latter taken from a previous study [3].

The resilient modulus ($M_R$) of the soil was measured using the plate loading test outlined in the China Test Methods of Soils for Highway Engineering [23]. With the assumption that the stabilized soil layer is homogeneous, isotropic, elastic, and infinite in depth, $M_R$ can be determined from elasticity theory using the following equation [24]:

$$M_R = \frac{\pi \sigma_0 r \left(1 - \mu^2\right)}{2l}, \tag{4}$$

where $M_R$ is the resilient modulus of the material (MPa), $\sigma_0$ is the pressure applied to the surface of the plate (MPa), $r$ is the radius of the plate (50 mm in this study), $l$ is the deflection of the plate under the applied pressure, and $\mu$ is Poisson’s ratio (0.3 in this study). The stresses and displacements were measured using a hydraulic multifunction material testing system (UTM-25). For each case tested, one sample was subjected to the resilient modulus test.
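Equations (1)–(4) are simple closed-form expressions, so they are easy to sanity-check numerically. The following Python sketch evaluates all four; the input values are hypothetical placeholders rather than measurements from this study.

```python
# Minimal numerical sketch of equations (1)-(4); all inputs are
# hypothetical placeholders, not measurements from this study.
import math

def rlc(m_contaminated_mg_l: float, m_clean_mg_l: float) -> float:
    """Eq. (1): ratio of the leached Mg/Ca concentration of the stabilized
    contaminated soil to that of the stabilized clean soil."""
    return m_contaminated_mg_l / m_clean_mg_l

def pore_diameter_um(p_mpa: float, tau_n_mm: float = 4.84e-4,
                     theta_deg: float = 135.0) -> float:
    """Eq. (2): d = -4*tau*cos(theta)/p. With tau in N/mm and p in MPa
    (= N/mm^2), d comes out in mm; converted here to micrometres."""
    d_mm = -4.0 * tau_n_mm * math.cos(math.radians(theta_deg)) / p_mpa
    return d_mm * 1000.0

def water_stability(qu_soaked_kpa: float, qu_kpa: float) -> float:
    """Eq. (3): water stability coefficient Kr = qu' / qu."""
    return qu_soaked_kpa / qu_kpa

def resilient_modulus_mpa(sigma0_mpa: float, l_mm: float,
                          r_mm: float = 50.0, mu: float = 0.3) -> float:
    """Eq. (4): plate loading test, MR = pi*sigma0*r*(1 - mu^2) / (2*l)."""
    return math.pi * sigma0_mpa * r_mm * (1.0 - mu ** 2) / (2.0 * l_mm)

print(rlc(12.4, 7.3))                   # 1.70 -> 170%, more leaching than clean soil
print(pore_diameter_um(413.0))          # ~0.0033 um: smallest intrudable pore
print(water_stability(850.0, 1000.0))   # Kr = 0.85
print(resilient_modulus_mpa(0.5, 0.8))  # ~44.7 MPa for a 0.8 mm deflection
```

As a consistency check, the smallest pore diameter computed for the maximum intrusion pressure of 413 MPa comes out at roughly 0.003 μm, matching the lower bound of the first PSD mode reported in Section 3.4.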
## 3. Results and Discussion

### 3.1. Acid Neutralization Capacity

Figures 1(a) and 1(b) present the titration curves obtained from the ANC tests for the stabilized soils with different initial contaminant conditions, cured for 28 d. The titration curves were obtained by plotting the amount of acid added against the final equilibrium pH of each leachate; the slope of the titration curve reflects the ability to resist a change in leachate pH arising from the dissolution of hydration products in the stabilized soils [25]. The slopes of the titration curves for the KMP stabilized soils are noticeably gentler than those of the PC stabilized soils. A significant turning point appears near an acid addition of 400 cmol/kg for both the KMP and PC stabilized soils. When the acid addition is <400 cmol/kg, the presence of heavy metals causes a rapid drop in pH compared with the stabilized clean soil. At relatively high acid additions (≥400 cmol/kg), the KMP stabilized soils display a significantly gentler slope than the PC stabilized soils.

Figure 1 pH-acid titration curves of the stabilized clean and contaminated soils cured for 28 d: (a) KMP binder; (b) PC binder.

An index denoted by β, representing the moles of strong acid CA (H⁺) that must be added to the soil to cause a unit change in leachate pH, is used to define the acid buffering capacity of the soil and is expressed by the following equation [25]:

$$\beta = -\frac{dC_A}{d\,\mathrm{pH}}. \tag{5}$$

Figure 2 illustrates the variation in β (acid buffer capacity), calculated using equation (5), with the amount of acid added. The KMP stabilized soils display higher values of β regardless of the initial soil contamination conditions.

Figure 2 Buffer capacity (β) of the stabilized contaminated soils cured for 28 d: (a) KMP binder; (b) PC binder.
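Because β in equation (5) is the (negative) derivative of the titration curve, it can be approximated from discrete (acid added, leachate pH) pairs by finite differences. The short Python sketch below illustrates this; the titration data are invented for illustration and are not read off Figure 1.

```python
# Finite-difference approximation of eq. (5), beta = -dC_A/dpH, using
# made-up titration data (acid added in cmol H+/kg vs. leachate pH);
# these numbers are illustrative only, not values from Figure 1.
import numpy as np

c_a = np.array([0.0, 100.0, 200.0, 400.0, 600.0, 800.0])  # acid added (cmol/kg)
ph = np.array([11.8, 10.9, 9.7, 7.9, 7.1, 6.5])           # equilibrium leachate pH

# pH falls as acid is added, so dC_A/dpH < 0 and beta > 0; np.gradient
# handles the non-uniform pH spacing.
beta = -np.gradient(c_a, ph)
for acid, b in zip(c_a, beta):
    print(f"C_A = {acid:5.0f} cmol/kg -> beta ~ {b:6.1f} cmol/kg per pH unit")
```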
Figures 3(a) and 3(b) show the variations in the leached Mg and Ca concentrations and in RLC with the initial soil contamination conditions for the stabilized soils cured for 28 d, obtained from the ANC test at a leachate pH of 7. Generally, the concentrations of Mg and Ca leached from the stabilized contaminated soils are higher than those leached from the stabilized clean soils, and they rise with increasing initial Zn concentration (Figure 3(a)). The leached Mg and Ca concentrations for the mixed Zn and Pb contaminants (Zn1.5Pb2) are much higher than those for the Zn or Pb contaminant alone. The values of RLC of Mg for the KMP stabilized soils are approximately 46 to 170%, notably lower than those of Ca for the PC stabilized soils (50 to 440%) (Figure 3(b)).

Figure 3 Leached amounts of Mg and Ca obtained from the acid neutralization capacity (ANC) test for the stabilized soils cured for 28 d: (a) leached Mg and Ca concentrations and (b) ratio of the leached Mg or Ca concentration (RLC) for the contaminated soil to that for the clean soil.

### 3.2. Water Stability Coefficient

Figures 4(a) and 4(b) present the variations in qu′ with the initial soil contamination conditions for the stabilized clean and contaminated soils subjected to the water-soaking tests. For the uncontaminated condition (Zn0Pb0), the KMP stabilized soils exhibit lower qu′ than the PC stabilized soils at both binder contents (4% and 6%) when the curing time is 6 d (Figure 4). However, for Zn or Pb alone and for mixed Zn and Pb (Zn1.5Pb2) contaminants, the KMP stabilized soils display notably higher qu′ values, irrespective of curing time or binder content.

Figure 4 Variations in the unconfined compressive strength (qu′) with the initial soil contamination conditions for the stabilized soils after the water-soaking tests: (a) 4% binder and (b) 6% binder.

Figure 5 shows that the Kr values of the KMP stabilized clean soils (Zn0Pb0) are about 10 to 35% lower than those of the PC stabilized clean soils. In contrast, the KMP stabilized contaminated soils (Zn1.5, Zn1.5Pb2, and Pb2) display 3 to 465% higher Kr values than the PC stabilized soils, indicating that the former possess a superior water-resistance capacity, consistent with the results presented in Figure 4.

Figure 5 Variations in the water stability coefficient (Kr) with the initial soil contamination conditions for the stabilized soils cured for 6 and 27 d: (a) 4% binder and (b) 6% binder.

Figure 6 compares the apparent surface characteristics of the KMP and PC stabilized soils cured for 27 d and subsequently soaked for 1 d. Macrocracks occurred on the surfaces of all PC stabilized soils, whereas only a few cracks developed on the surface of the KMP stabilized soil with a mixed Zn concentration of 1.5% and a Pb concentration of 2%. These observations further suggest that the KMP stabilized soils have greater water stability than the PC stabilized soils.

Figure 6 Photos showing the cracks developed on the surfaces of the stabilized soils after the water-soaking tests (curing time of 27 d and binder content of 6%): (a) KMP Zn0Pb0; (b) KMP Zn1.5; (c) KMP Pb2; (d) KMP Zn1.5Pb2; (e) PC Zn0Pb0; (f) PC Zn1.5; (g) PC Pb2; (h) PC Zn1.5Pb2.

### 3.3. Resilient Modulus

Figure 7 shows the measured resilient modulus (MR) for the clean soil (Zn0Pb0) and the soil contaminated with a mixed Zn concentration of 1.5% and a Pb concentration of 2% (Zn1.5Pb2), each stabilized with 6% KMP or PC. The MR values of the KMP and PC stabilized clean soils are 155% and 238% higher than those of the corresponding stabilized contaminated soils, respectively, indicating that the presence of heavy metals has a notable impact on the measured MR.
In addition, the MR of the KMP stabilized soil is 6% lower than that of the PC stabilized soil for the Zn0Pb0 case, but 24% higher for the Zn1.5Pb2 case, indicating that the KMP binder is more effective at preserving the MR of the contaminated soils tested in this study.

Figure 7 Measured resilient modulus for the stabilized and untreated soils.

### 3.4. Soil Pore Size Distribution

Figure 8 shows the pore size distributions (PSDs) of the KMP and PC stabilized clean and contaminated soils. The y-axis of the PSD curves is plotted as f(D) (f(D) = dV/dlogD), where V is the volume of mercury intruded at a given pressure increment, corresponding to pores of diameter D, in 1 g of dry soil [5]. The variation of cumulative pore volume with pore diameter is bimodal, a common mode observed for compacted soils [26] as well as for cement stabilized Zn-contaminated soils [5]. The first mode is characterized by pore sizes ranging from 0.003 to 0.1 μm, while the pore sizes of the second mode span the interval 0.1 to 2 μm. For the stabilized clean and contaminated soils, the first pore diameter peaks increase from 0.02 to 0.03 μm and from 0.02 to 0.04 μm with increasing Zn concentration, respectively; the second pore diameter peaks exhibit a similar trend. At a given Zn concentration, the two pore diameter peaks of the KMP stabilized soils are lower than those of the PC stabilized soils (Figure 9), indicating that the KMP stabilized soils have a denser structure.

Figure 8 Pore size distribution of the stabilized soils cured for 28 d: (a) KMP and (b) PC.

Figure 9 Bimodal PSD peak fitting for the KMP stabilized soils cured for 28 d: (a) Zn0Pb0 and (b) Zn1.5Pb2.

In this study, each peak of the PSD curves was fitted by a Gaussian distribution function, as suggested by previous studies [5, 26]:

$$f(D) = \sum_{i=1}^{n} f_i(D) = \sum_{i=1}^{n} a_i \frac{1}{\sqrt{2\pi}\,\sigma_i} \, e^{-\left(\log D - \mu_i\right)^2 / 2\sigma_i^2}, \tag{6}$$

where $n$ is the number of peaks in the PSD curve on a logarithmic scale (2 for bimodal types), $a_i$ is the pore volume in 1 g of dry soil covered by the fitted curve $f_i(D)$ (mL/g), $\sigma_i$ is the standard deviation on a logarithmic scale, and $\mu_i$ is the mean pore diameter of the fitted curve $f_i(D)$ on a logarithmic scale (μm) [26].

Table 4 lists the fitting parameters obtained using equation (6). From Table 4, a1 changes only marginally for both KMP and PC stabilized soils when the Zn concentration increases from 0% to 1.5%; in contrast, a2 increases from 0.028 to 0.044 mL/g and from 0.026 to 0.048 mL/g for the KMP and PC stabilized soils, respectively, while the differences in a1 and a2 between the KMP and PC stabilized soils are small. Both μ1 and μ2 increase with increasing Zn concentration. As the Zn concentration increases from 0% to 1.5%, μ1 of the KMP stabilized soils rises from 0.019 to 0.034 (a 78.9% increase), whereas μ1 of the PC stabilized soils rises from 0.021 to 0.04 (a 90.5% increase); likewise, μ2 of the KMP stabilized soils rises from 0.391 to 0.440 (a 12.5% increase) and μ2 of the PC stabilized soils from 0.411 to 0.485 (an 18% increase).
This further demonstrates that the KMP stabilized soils have a denser structure, as suggested by a previous study [5], which may be attributed to more extensive filling of the soil pores by the KMP hydration products (e.g., K-struvite (MgKPO4·6H2O) and bobierrite (Mg3(PO4)2·8H2O)) than by those of PC (e.g., Ca(OH)2, CSH, and CAH).

Table 4 Parameters obtained from the fitted PSD curves of the KMP and PC stabilized soils (6% binder, 28 d of curing) based on peak analysis.

| Specimen | a1 (mL/g) | a2 (mL/g) | μ1 | μ2 | σ1 | σ2 | R² |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Zn0Pb0 (KMP) | 0.001 | 0.028 | 0.019 | 0.391 | 0.007 | 0.328 | 0.90 |
| Zn0.5 (KMP) | 0.001 | 0.027 | 0.019 | 0.395 | 0.007 | 0.320 | 0.91 |
| Zn1.0 (KMP) | 0.001 | 0.031 | 0.022 | 0.405 | 0.007 | 0.325 | 0.93 |
| Zn1.5 (KMP) | 0.002 | 0.033 | 0.028 | 0.423 | 0.012 | 0.286 | 0.92 |
| Zn1.5Pb2 (KMP) | 0.002 | 0.044 | 0.034 | 0.440 | 0.013 | 0.339 | 0.93 |
| Pb2 (KMP) | 0.001 | 0.033 | 0.022 | 0.416 | 0.008 | 0.327 | 0.93 |
| Zn0Pb0 (PC) | 0.001 | 0.026 | 0.021 | 0.411 | 0.009 | 0.265 | 0.90 |
| Zn0.5 (PC) | 0.001 | 0.028 | 0.024 | 0.435 | 0.010 | 0.279 | 0.94 |
| Zn1.0 (PC) | 0.001 | 0.033 | 0.028 | 0.448 | 0.007 | 0.322 | 0.94 |
| Zn1.5 (PC) | 0.002 | 0.044 | 0.031 | 0.461 | 0.012 | 0.323 | 0.92 |
| Zn1.5Pb2 (PC) | 0.003 | 0.048 | 0.040 | 0.485 | 0.019 | 0.280 | 0.91 |
| Pb2 (PC) | 0.002 | 0.037 | 0.022 | 0.477 | 0.009 | 0.310 | 0.94 |

n = number of peaks in the fitted PSD curves; a1 = intra-aggregate pore volume; a2 = interaggregate pore volume; μ1 = mean intra-aggregate pore diameter; μ2 = mean interaggregate pore diameter; σ1 and σ2 = standard deviations; R = correlation coefficient.

Figure 9 illustrates typical bimodal PSD peak fitting results for the KMP stabilized clean soil (Zn0Pb0) and the contaminated soil with a mixed Zn concentration of 1.5% and a Pb concentration of 2% (Zn1.5Pb2). All the fitted PSD curves have bimodal characteristics (n = 2). The dual peaks in the PSD curves represent intra-aggregate and interaggregate pores, respectively, and the formation of intra-aggregate pores is due to the filling of the large pores by the hydration products of KMP or cement [5].
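To make the peak analysis behind Table 4 and Figure 9 concrete, the following Python sketch fits the bimodal form of equation (6) with scipy.optimize.curve_fit. It assumes the exponent uses log10 D and, for numerical robustness, parametrizes each peak by the log10 of its peak diameter; the "observed" curve is synthetic, generated from the Zn0Pb0 (KMP) row of Table 4, not digitized from the figures.

```python
# Sketch of the bimodal Gaussian fit of eq. (6); assumes log10(D) in the
# exponent and fits lm_i = log10(peak diameter) for numerical robustness.
# The "observed" curve is synthetic (Zn0Pb0 (KMP) row of Table 4).
import numpy as np
from scipy.optimize import curve_fit

def psd_bimodal(logD, a1, lm1, s1, a2, lm2, s2):
    """f(D) = sum_i a_i/(sqrt(2*pi)*s_i) * exp(-(log10 D - lm_i)^2/(2*s_i^2))."""
    g = lambda a, lm, s: a / (np.sqrt(2.0 * np.pi) * s) * np.exp(
        -(logD - lm) ** 2 / (2.0 * s ** 2))
    return g(a1, lm1, s1) + g(a2, lm2, s2)      # intra- + inter-aggregate peaks

logD = np.linspace(-2.5, 1.0, 2000)             # log10 of pore diameter (um)
truth = (0.001, np.log10(0.019), 0.007,         # a1, lm1, s1 (intra-aggregate)
         0.028, np.log10(0.391), 0.328)         # a2, lm2, s2 (interaggregate)
f_obs = psd_bimodal(logD, *truth) \
    + np.random.default_rng(0).normal(0.0, 1e-4, logD.size)

popt, _ = curve_fit(psd_bimodal, logD, f_obs, p0=truth)
a1, lm1, s1, a2, lm2, s2 = popt
print(f"intra-aggregate: a1 = {a1:.3f} mL/g, peak diameter = {10**lm1:.3f} um")
print(f"inter-aggregate: a2 = {a2:.3f} mL/g, peak diameter = {10**lm2:.3f} um")
```

Under these assumptions, each a_i plays the role of the pore volume under the corresponding fitted peak (the a1 and a2 columns of Table 4), and 10^lm_i recovers the peak pore diameters (the μ1 and μ2 columns).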
## 4. Discussion

The ANC test results show that the KMP stabilized soils display a higher acid buffering capacity (Figures 1 and 2). The ratio of leached concentration (RLC) of Mg for the KMP stabilized soil is significantly lower than that of Ca for the PC stabilized soil regardless of the Zn or Pb concentration (Figure 3), indicating that a smaller amount of Mg or Ca has been leached from the stabilized soil; the soil would consequently show a slower reduction in strength from the gradual dissolution of the hydration products, as suggested by previous studies [27]. These findings can be attributed to two reasons: (1) the main alkaline hydration products of KMP, such as MgKPO4·6H2O and Mg3(PO4)2·8H2O, are sparingly soluble in acidic environments and retain stable crystalline morphologies over a wide range of acidic conditions [3, 9, 28].
In contrast, the cement hydration products, such as Ca(OH)2, CSH, and CAH, have a relatively high solubility and dissolve more readily under acidic conditions [4, 29]; and (2) Zn or Pb consumes only a small amount of PO4³⁻ and hydroxyl ions (OH⁻), which has a marginal impact on the formation of MgKPO4·6H2O and Mg3(PO4)2·8H2O in the KMP stabilized soils [3]. For the PC stabilized soils, however, the presence of Zn markedly retards the cement hydration and pozzolanic reactions and reduces the cement hydration products, such as Ca(OH)2 and CSH, which are the primary contributors to the acid buffering capacity [4, 5].

The water-soaking durability and resilient modulus test results show that the KMP stabilized contaminated soils possess higher qu′, Kr, and MR than the PC stabilized contaminated soils, regardless of the initial soil contamination condition (Figures 4, 5, and 7). This can be attributed to three causes: (1) the KMP stabilized contaminated soils have a denser structure and consequently a higher resistance to water adsorption, as illustrated by the PSD curves (Figures 8 and 9) and the PSD fitting results discussed in the previous section (Table 4); (2) the RLC of Mg for the KMP stabilized soil is lower than the RLC of Ca for the PC stabilized soil (Figure 3(b)); and (3) the main KMP hydration products, MgKPO4·6H2O and Mg3(PO4)2·8H2O, provide higher strength than CSH, the main cement hydration product and primary contributor to qu. Qiao et al. [30] indicated that magnesium phosphate cement (MPC) pastes, mainly containing potassium dihydrogen phosphate (KH2PO4) and magnesia (MgO), had remarkably higher bond strength than PC mortars. Since the hardening mechanisms and hydration products of KMP are similar to those of MPC, it is reasonable to infer that the main hydration products of KMP may provide higher bond strength to the soil tested in this study.

The resilient modulus test results show that the MR values of the soils decrease with increasing Zn concentration, particularly for the soils contaminated with Zn1.5Pb2 (Figure 7), and the water-soaking test results show that obvious cracks developed in the KMP/PC stabilized soils with Zn1.5Pb2 (Figure 6). These observations are attributed to two facts: (1) the soils display a looser structure as the Zn concentration increases, as illustrated by the rise in the values of μ1 and μ2 obtained from the PSD fitting (Table 4); and (2) the formation of the KMP and PC hydration products is retarded when Zn or Pb is present in the soils, as suggested by previous studies [3, 5]. Consequently, the bonding strength of the soils decreases as the Zn concentration increases.

## 5. Conclusions

This study presented a detailed investigation of the acid neutralization capacity, pore size distribution, water stability coefficient, and resilient modulus of KMP and PC stabilized soils with Zn or Pb alone and with mixed Zn and Pb contaminants, with a view to reusing the stabilized soils as roadway subgrade course materials. Based on the test results, the following conclusions can be drawn:

(1) The acid neutralization capacity titration curves of the KMP stabilized soils were flatter than those of the PC stabilized soils, and the values of the acid buffer capacity index, β, were higher, indicating that the KMP stabilized soils possess a higher acid buffering capacity.
The values of the ratio of leached concentration of Mg for the KMP stabilized soils were approximately 46 to 170%, notably lower than those of Ca for the PC stabilized soils.

(2) The KMP stabilized contaminated soils exhibited higher qu′ and Kr values than the PC stabilized ones after the water-soaking tests at both binder contents (4% and 6%), irrespective of the curing time, indicating that the former possessed a superior water-resistance capacity. At the 6% binder content, macrocracks occurred on the surfaces of the PC stabilized soils, whereas only a few cracks developed on the surface of the KMP stabilized sample with mixed Zn and Pb contaminants.

(3) At the 6% binder content, the MR values of the KMP and PC stabilized clean soils were 155% and 238% higher, respectively, than those of the corresponding stabilized contaminated soils. The MR of the KMP stabilized soil was 24% higher than that of the PC stabilized one with mixed Zn and Pb contaminants.

(4) The PSD curves of the KMP and PC stabilized soils had bimodal characteristics. The a2, μ1, and μ2 parameters increased noticeably with increasing Zn concentration, whereas the change in a1 was insignificant. At a given Zn concentration, the KMP stabilized soils displayed lower values of a2 and μ2, indicating a denser structure than the PC stabilized soils.

Overall, this study demonstrates that KMP is highly effective in stabilizing soils contaminated with Zn or Pb alone and with mixed Zn and Pb. Additional research is warranted to investigate the leachability, strength, water-soaking durability, and resilient modulus of stabilized contaminated soils at actual field sites. Further quantitative X-ray diffraction and scanning electron microscopy analyses are needed to understand the microscale changes in the properties of the stabilized soils.

---

*Source: 1025056-2020-12-16.xml*
1025056-2020-12-16_1025056-2020-12-16.md
53,101
Feasibility of Stabilized Zn and Pb Contaminated Soils as Roadway Subgrade Materials
Mingli Wei; Hao Ni; Shiji Zhou; Yuan Li
Advances in Materials Science and Engineering (2020)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2020/1025056
1025056-2020-12-16.xml
--- ## Abstract The authors have developed a new binder, KMP, which is made from oxalic acid-activated phosphate rock, monopotassium phosphate (KH2PO4), and reactive magnesia (MgO). This study explores the acid neutralization capacity, strength characteristics, water-soaking durability, resilient modulus, and pore size distribution of KMP stabilized soils with individual Zn, Pb, or coexisting Zn and Pb contaminants. For comparison purpose, Portland cement (PC) is also tested. The results show that KMP stabilized soils have a higher acid buffering capacity than PC stabilized soils, regardless of the soil contamination conditions. The water stability coefficient and resilient modulus of the KMP stabilized soils are found to be higher than PC stabilized soils. The reasons for the differences in these properties between KMP and PC stabilized soils are interpreted based on the stability and dissolubility of the main hydration products of the KMP and PC stabilized soils, the soil pore distribution, and concentration of Mg or Ca leached from the KMP and PC stabilized soils obtained from the acid neutralization capacity tests. Overall, this study demonstrates that the KMP is effective in stabilizing soils that are contaminated with Zn or Pb alone and mixed Zn and Pb contaminants, and the KMP stabilized soils are better suited as roadway subgrade material. --- ## Body ## 1. Introduction Large quantities of abandoned industrial sites, which may be contaminated by high concentrations of heavy metals, have been produced, caused by the rapid industrialization and urbanization in China for the last several decades [1]. Heavy metals are risky to environmental and human health and lead to degrading the mechanical properties of soils, restricting the redevelopment of the abandoned industrial sites [2, 3]. Solidification and stabilization (S/S) remediation technology is one of the widely used technologies that involve mixing binders and contaminated soils to reduce the mobility of contaminants and strengthen the mechanical stability of the soils by physical and chemical methods [4, 5].Portland cement (PC), the high alkali cementitious material, is widely used to immobilize heavy metals, including Pb, Zn, Cu, and Cd contained in waste or soil [6–8]. However, the heavy metal immobilization by PC depends significantly on the degree of cement hydration and alkaline environment. Previous studies have reported that heavy metals in the PC stabilized soils would leach easily when exposed to the long-term external conditions such as acid rain, sulfate attack, freeze-thaw cycling, and carbonation [4, 5]. In addition, the strength and modulus of cement stabilized soils decreased rapidly as Zn concentrations increased [5]. For heavy metals such as Zn and Pb, a tremendous retardant effect on cement hydration has been proved to exist. Du et al. [4] reported that the presence of Pb and Zn in the cement stabilized soils hinders the formation of Ca(OH)2/CSH and thereby decreases the soil buffering capacity. In this case, the higher the concentration, the more intensive the retardant effect of heavy metals [4, 5]. Therefore, it is necessary to develop alternative binders to stabilize soils that have relatively high concentrations of Zn and Pb. 
Besides, the treated contaminated soils should have environmental safety and mechanical stability so that they can be reused as materials for civil engineering projects such as roadway subgrade course materials.Recently, the authors developed a new binder, KMP, which is made from oxalic acid-activated phosphate rock, monopotassium phosphate (KH2PO4), and reactive magnesia (MgO) [3]. Compared with PC, the KMP binder may be more valuable in the attempt to stabilize soil that is contaminated with Zn or Pb alone and mixed Zn and Pb for the following reasons: (1) the hydration products of KMP with significant strength (e.g., bobierrite (MgKPO4·6H2O) and k-struvite (Mg3(PO4)2·8H2O)) are formed through an acid-base reaction between the MgO and PO43− released from the acidified apatite and KH2PO4. This reaction process is less affected by the pH solution relative to the cement hydration reaction. Hence, the KMP stabilized soils may provide more reliable strength relative to PC stabilized soils, and (2) the reaction of KMP with Zn and Pb in soils can produce insoluble metal phosphates, such as Zn3(PO4)2·4H2O, CaZn2(PO4)2·2H2O, and fluoropyromorphite (Pb5(PO4)3F), which have outstanding chemical and morphologic stability despite the strong acid and alkali circumstances [9]. Therefore, it is reasonable to expect that the KMP stabilized soil would exhibit higher leachability, acid buffer capacity, strength, or water-soaking durability relative to the PC stabilized soils.The effectiveness of the S/S is usually defined by the strength and leaching resistance of solidified products. The leaching resistance of solidified soils depends on contaminant speciation, which is difficult to characterize. A series of methods, such as toxicity characteristic leaching procedure (TCLP) tests or single batch test (EN 12457-2) [10], are usually used to evaluate the S/S and characterize the solidified soils. However, different leaching tests can lead to different characterization of the same soil and may not represent the leaching level in the real environments due to the different disposal or leaching conditions of the stabilized soils, such as pH, redox potential (Eh), liquid-to-solid ratio, and leachate type [11]. When the stabilized soils are used as the roadway subgrade materials, the acid rain is a major environmental concern. Stegemann and Zhou [12] put forward that leaching metals tend to be a function of pH value of the solidified matrix. When the used binders are high alkali cementitious materials, such as PC, quicklime, and pulverized fly ash, the pore water pH of all S/S soils will be highly alkaline initially, and this alkalinity will be neutralized over time by acidic influences, which will gradually cause the dissolution of the hydration products and metal contaminants. Thus, using acid neutralization capacity (ANC) test [13], which measures leachate pH, can help to characterize chemical immobilization of contaminants, as well as chemical durability of the S/S matrix [12], and can represent the disposal conditions. The unconfined compressive strength test provides basic information of strength on soil stabilization and is used for quality control in pavement design. In addition, the resilient modulus (MR) is also a common measure of the strength of a mixture design in roadways and is often used to decide the structural layer coefficients of the subbase layers for designing pavements. 
Although plenty of studies have evaluated the strength of stabilized soils, they only refer to the dry environment; few of them focus on the water-soaking durability [3, 5, 6, 14, 15]. Nevertheless, the use of binders in wet environment has more challenges due to the solubility of binders in water [16].A previous study [3, 17] has discussed the leachability and unconfined compressive strength of Zn, Pb, or Zn/Pb contaminated soils stabilized by KMP. It is found that soils that initially contained relatively high concentrations of both individual and mixed Zn and Pb contaminants displayed low leachability or ecotoxicity and high unconfined compressive strength after stabilization with the KMP, which demonstrates that KMP satisfactorily and effectively stabilized these soils. However, Du et al. [3] did not explore the characteristics of ANC, water-soaking durability, and resilient modulus of KMP stabilized Zn and Pb contaminated soils. The goal of the present study was to evaluate these properties to further investigate the feasibility of KMP stabilized soils for potential reuse as roadway subgrade materials. For comparison purpose, PC is also tested as a control binder. Several series of tests, including ANC tests, mercury intrusion porosimetry (MIP), water-soaking durability tests, and MR tests, are conducted. The effects of the initial Zn and Pb contaminations on the water stability and MR of the soils are discussed. The difference of the above properties for the KMP and PC stabilized soils is assessed based on the results of ANC and MIP tests and stability and dissolubility of the main hydration products of the KMP and PC. ## 2. Materials and Methods ### 2.1. Materials The soil used in this study was collected from Nanjing City, China. Table1 shows its basic physicochemical properties. The Atterberg limits were measured as per ASTM D4318 [18]. Based on the Unified Soil Classification System [19], the soil is classified as low plasticity clay. The chemical compositions of the apatite, PC, and soil were measured using an X-ray fluorescence spectrometer and their values are shown in Table 2. Commercial apatite cores with diameters of approximately 30 cm were crushed and ground to pass through a sieve with an opening size of 0.075 mm. The chemical analytical reagent, KH2PO4, was obtained from Sinopharm Chemical Reagent Co. Ltd., China. For the KMP powder, the proportions of the acidified phosphate rock, KH2PO4, and reactive MgO were controlled as 1 : 1 : 2 (on dry weight basis), since this ratio yields relatively low leachability and high unconfined compressive strength for the stabilized soils [3].Table 1 Properties of soil used in this study. PropertySoilNatural water content,wn(%)21.4Plastic limit,wp (%)20.6Liquid limit,wL (%)45.3Specific gravity,Gs2.72Soil pH7.43Number of replicate is 3, confidence interval (CI) is ≥ 95%, and standard deviation (SD) is < 5%.Table 2 Oxide chemistry of cement, phosphate rock, and soil used in this study. Oxide chemistry1Apatite (%)Cement (%)Soil (%)Calcium oxide (CaO)45.9349.7—Aluminum oxide (Al2O3)1.239.8715.40Magnesium oxide (MgO)—2.06—Phosphorus oxide (P2O5)25.10——Potassium oxide (K2O)—0.751.89Silicon oxide (SiO2)6.1422.669.61Ferric oxide (Fe2O3)—3.504.90Sulfate oxide (SO3)—3.84—Sodium oxide (Na2O)—0.24—Fluorine (F)2.35——Chlorine (Cl)———Loss on ignition213.126.194.861Mineral composition is analyzed by X-ray fluorescence method using ARL9800XP + XRF spectrometry. 2Value of loss on ignition is referenced to 950°C. ### 2.2. 
### 2.2. Specimen Preparation

To prepare the Zn-, Pb-, or mixed Zn/Pb-contaminated soil specimens, a predetermined volume of the selected stock solution was mixed with the air-dried clean soil until the water content reached the optimum water content (27%). The solution–soil mixture was thoroughly mixed using an electric mixer to create a homogeneous paste, which was then sealed in a closed container and left undisturbed for 10 days at a temperature of 20 ± 2°C and a relative humidity of 95%. The KMP or PC powder at a predetermined dosage (4% or 6%, on a dry weight of soil basis) was poured into the contaminated soil and mixed thoroughly with the electric mixer for 15 min to achieve homogeneity. Then, approximately 210 g of the mixture was compacted into a stainless steel cylindrical mold of 50 mm diameter and 50 mm height at the optimum water content (27%) and the maximum dry density (1.67 × 10³ kg/m³). Each specimen was carefully extruded from the mold using a hydraulic jack, sealed in black polyethylene bags, and cured for 7 or 28 d at 20 ± 2°C and 95% relative humidity. For comparison, stabilized clean soil and untreated contaminated soil were also prepared and cured under the same conditions. The concentrations of individual Zn or Pb and combined Zn and Pb contaminants, the binder contents, and the curing times are summarized in Table 3. In this study, the symbol "Zni" or "Pbj" denotes a specimen with a Zn concentration of i% or a Pb concentration of j% based on the oven-dried soil weight.

Table 3. Zn and Pb concentrations and curing times for the various tests and analyses of the spiked soils.

| Test program | Zn and/or Pb concentration (%) | Binder content (%) | Curing time (d) |
| --- | --- | --- | --- |
| ANC | Zn0Pb0*, Zn1.5, Zn1.5Pb2, and Pb2 | 6 | 28 |
| MIP | Zn0Pb0*, Zn0.5, Zn1, Zn1.5, Zn1.5Pb2, and Pb2 | 6 | 28 |
| WSD | Zn0Pb0*, Zn1.5, Zn1.5Pb2, and Pb2 | 4, 6 | 6, 27 |
| RMT | Zn0Pb0* and Zn1.5Pb2 | 6 | 28 |

ANC = acid neutralization capacity test; MIP = mercury intrusion porosimetry test; WSD = water-soaking durability test; RMT = resilient modulus test. *Untreated clean soil.

### 2.3. Testing Methods

The ANC test was performed as per USEPA Method 1313 [20]. A series of batch extraction tests were conducted on 20 g of crushed and sieved (<100 μm) stabilized soil cured for 28 d, using 200 mL of nitric acid solutions of various concentrations as extraction liquids. After tumbling for 18 h, the solution was filtered through a 0.45 μm membrane filter and its pH was measured with a HORIBA D-54 pH meter. Measurements were made in triplicate and the average values were recorded. In this study, the ratio of the leached Mg or Ca concentration of a stabilized contaminated soil to that of the stabilized clean soil (RLC) is used to quantify the relative stability of the KMP hydration products (mainly MgKPO4·6H2O and Mg3(PO4)2·8H2O) and the PC hydration products (mainly CSH and Ca(OH)2) at a leachate pH of 7 in the ANC test. The concentrations of leached Mg and Ca were measured using an IRIS Advantage inductively coupled plasma atomic emission spectrometer (ICP-AES). RLC is defined by the following equation:

$$\mathrm{RLC} = \frac{M_i}{M_0} \tag{1}$$

where $M_i$ and $M_0$ are the measured leached Mg or Ca concentrations (mg/L) for the stabilized contaminated soil and the stabilized clean soil, respectively.
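For illustration, equation (1) reduces to a one-line computation. The sketch below is not from the paper: the leached concentrations in it are hypothetical placeholders standing in for ICP-AES readings at a leachate pH of 7, chosen only to echo the upper ends of the RLC ranges reported later in Section 3.1.

```python
# Minimal sketch of equation (1): RLC = M_i / M_0. The concentrations below
# are hypothetical placeholders, not measured values from the study.

def rlc(m_contaminated_mg_per_l: float, m_clean_mg_per_l: float) -> float:
    """Ratio of leached Mg (KMP) or Ca (PC) concentration, eq. (1)."""
    return m_contaminated_mg_per_l / m_clean_mg_per_l

leached = {  # (M_i, M_0) in mg/L at leachate pH = 7 -- hypothetical values
    "Zn1.5Pb2 (KMP, Mg)": (170.0, 100.0),
    "Zn1.5Pb2 (PC, Ca)": (440.0, 100.0),
}
for label, (m_i, m_0) in leached.items():
    print(f"{label}: RLC = {rlc(m_i, m_0):.0%}")
```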
For the MIP test, a soil sample of approximately 1 cm³ was retrieved from a carefully hand-broken companion specimen after curing for 28 d. The test soil was frozen using liquid nitrogen (boiling point −195°C) and placed in a freezing unit with a vacuum chamber, where the frozen pore water was removed by sublimation at a temperature of −80°C; the freeze-drying apparatus used in this study was a XIANOU-18N freeze-drier. The specimens were then analyzed using an AutoPore IV 9510 mercury intrusion porosimeter. In this method, the capillary pressure equation given by Mitchell [21] is employed to compute the pore diameter:

$$d = -\frac{4\tau\cos\theta}{p} \tag{2}$$

where $d$ is the diameter of the intruded pore, $\tau$ is the surface tension of the intruding mercury (4.84 × 10⁻⁴ N/mm at 25°C), $\theta$ is the contact angle (135° in this study), and $p$ is the applied mercury intrusion pressure (maximum 413 MPa in this study).

The water-soaking durability test was conducted as per the method recommended by Du et al. [14]. Each companion specimen was fully immersed in 1.5 L of distilled water. The water-soaking period was 1 d, and the unconfined compressive strength after soaking ($q_u'$) was measured by unconfined compression tests (UCTs) on each specimen. The UCTs were conducted on the stabilized soils at a strain rate of 1%/min, as per ASTM D 4219-08 [22]. Three replicate samples were tested and the average value of $q_u'$ was recorded. The water stability coefficient ($K_r$), which quantifies the water stability of the soils, was determined using the following equation:

$$K_r = \frac{q_u'}{q_u} \tag{3}$$

where $q_u'$ is the measured unconfined compressive strength after the water-soaking test (kPa) and $q_u$ is the measured unconfined compressive strength before the water-soaking test (kPa), the latter taken from a previous study [3].

The resilient modulus ($M_R$) of the soil was measured using the plate loading test outlined in the China Test Methods of Soils for Highway Engineering [23]. With the assumption that the stabilized soil layer is homogeneous, isotropic, elastic, and infinite in depth, $M_R$ can be determined from elasticity theory [24] using the following equation:

$$M_R = \frac{\pi \sigma_0 r \left(1-\mu^2\right)}{2l} \tag{4}$$

where $M_R$ is the resilient modulus of the material (MPa), $\sigma_0$ is the pressure applied to the surface of the plate (MPa), $r$ is the radius of the plate (50 mm in this study), $l$ is the deflection of the plate under that pressure, and $\mu$ is Poisson's ratio (0.3 in this study). The stresses and displacements were measured using a hydraulic multifunction material testing system (UTM-25). For each case tested, one companion sample was subjected to the resilient modulus test.
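To make the three working equations concrete, the following sketch (not from the paper) evaluates equations (2)–(4) with the constants stated above; the intrusion pressure, strengths, and plate deflection fed in are illustrative numbers only.

```python
import math

# Worked sketch of equations (2)-(4) using the constants stated in the text.
# The intrusion pressure, strengths, and deflection are illustrative only.

TAU = 4.84e-4                 # Hg surface tension, N/mm at 25 deg C
THETA = math.radians(135.0)   # Hg contact angle
R_PLATE = 50.0                # plate radius, mm
POISSON = 0.3                 # Poisson's ratio

def pore_diameter_mm(p_mpa: float) -> float:
    """Eq. (2): d = -4*tau*cos(theta)/p; N/mm over N/mm^2 gives mm."""
    return -4.0 * TAU * math.cos(THETA) / p_mpa

def water_stability(qu_soaked_kpa: float, qu_dry_kpa: float) -> float:
    """Eq. (3): K_r = q_u' / q_u."""
    return qu_soaked_kpa / qu_dry_kpa

def resilient_modulus_mpa(sigma0_mpa: float, deflection_mm: float) -> float:
    """Eq. (4): M_R = pi * sigma0 * r * (1 - mu^2) / (2 * l)."""
    return math.pi * sigma0_mpa * R_PLATE * (1.0 - POISSON**2) / (2.0 * deflection_mm)

# At the maximum pressure of 413 MPa, eq. (2) gives the smallest measurable
# pore, about 3.3 nm -- consistent with the ~0.003 um lower bound seen later.
print(pore_diameter_mm(413.0) * 1e6, "nm")
print(water_stability(qu_soaked_kpa=450.0, qu_dry_kpa=600.0))       # K_r = 0.75
print(resilient_modulus_mpa(sigma0_mpa=0.1, deflection_mm=0.2), "MPa")
```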
## 3. Results and Discussion

### 3.1. Acid Neutralization Capacity

Figures 1(a) and 1(b) present the titration curves obtained from the ANC tests for the stabilized soils with different initial contaminant conditions after curing for 28 d. The titration curves were obtained by plotting the amount of acid added against the final equilibrium pH of each leachate; the slope of a titration curve reflects the ability to resist a change in leachate pH through the dissolution of hydration products in the stabilized soils [25]. The slopes of the titration curves for the KMP stabilized soils are noticeably gentler than those of the PC stabilized soils. A significant turning point appears near an acid addition of 400 cmol/kg for both the KMP and PC stabilized soils. When the acid addition is <400 cmol/kg, the presence of heavy metals causes a rapid drop in pH compared with the stabilized clean soil. At relatively high acid additions (≥400 cmol/kg), the KMP stabilized soils display a significantly gentler slope than the PC stabilized soils.

Figure 1. pH–acid titration curves of the stabilized clean and contaminated soils cured for 28 d: (a) KMP binder; (b) PC binder.

An index denoted by $\beta$, representing the moles of strong acid $C_A$ (H⁺) that must be added to the soil to cause a unit change in leachate pH, is used to quantify the acid buffering capacity of the soil [25]:

$$\beta = -\frac{dC_A}{d\,\mathrm{pH}} \tag{5}$$

Figure 2 illustrates the variation of $\beta$ (acid buffer capacity), calculated using equation (5), with the amount of acid added. It can be concluded that the KMP stabilized soils display higher values of $\beta$ regardless of the initial soil contamination conditions.

Figure 2. Buffer capacity ($\beta$) of the stabilized contaminated soils cured for 28 d: (a) KMP binder; (b) PC binder.

Figures 3(a) and 3(b) show the variations in the leached Mg and Ca concentrations and in RLC with the initial soil contamination conditions for the stabilized soils cured for 28 d, obtained from the ANC test at a leachate pH of 7. Generally, the concentrations of Mg and Ca leached from the stabilized contaminated soils are higher than those leached from the stabilized clean soils, and they rise with an increase in the initial Zn concentration (Figure 3(a)). The leached Mg and Ca concentrations for the mixed Zn and Pb contaminants (Zn1.5Pb2) are much higher than those for the Zn or Pb contaminant alone. The RLC values of Mg for the KMP stabilized soils are approximately 46 to 170%, notably lower than those of Ca for the PC stabilized soils (50 to 440%) (Figure 3(b)).

Figure 3. Leached amounts of Mg and Ca obtained from the acid neutralization capacity (ANC) test for the stabilized soils cured for 28 d: (a) leached Mg and Ca concentrations and (b) ratio of the leached Mg or Ca concentration (RLC) of the contaminated soil to that of the clean soil.
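In practice, $\beta$ can be estimated by numerically differentiating the titration curve. The sketch below is illustrative only: the acid additions and equilibrium pH values are hypothetical stand-ins, not values digitized from Figure 1.

```python
import numpy as np

# Sketch of the buffer-capacity index in equation (5), beta = -dC_A/dpH,
# estimated by numerical differentiation of a titration curve. The acid
# additions and equilibrium pH values below are hypothetical.

acid_added = np.array([0.0, 100.0, 200.0, 400.0, 600.0, 800.0])  # cmol H+/kg
leachate_ph = np.array([10.8, 9.6, 8.7, 7.4, 5.1, 3.0])          # equilibrium pH

# beta at each titration point; np.gradient handles the uneven pH spacing
beta = -np.gradient(acid_added, leachate_ph)

for c, ph, b in zip(acid_added, leachate_ph, beta):
    print(f"acid = {c:5.0f} cmol/kg, pH = {ph:4.1f}, beta = {b:6.1f} cmol/(kg*pH)")
```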
### 3.2. Water Stability Coefficient

Figures 4(a) and 4(b) present the variations in $q_u'$ of the stabilized clean and contaminated soils subjected to the water-soaking tests, as a function of the initial soil contamination conditions. For the uncontaminated condition (Zn0Pb0), the KMP stabilized soils exhibit lower $q_u'$ than the PC stabilized soils for both binder contents of 4% and 6% when the curing time is 6 d (Figure 4). However, for the cases of Zn or Pb alone and mixed Zn and Pb (Zn1.5Pb2) contaminants, the KMP stabilized soils display notably higher $q_u'$ values, irrespective of the curing time or binder content.

Figure 4. Variations in the unconfined compressive strength ($q_u'$) with the initial soil contamination conditions for the stabilized soils after the water-soaking tests: (a) 4% binder and (b) 6% binder.

Figure 5 shows that the $K_r$ values of the KMP stabilized clean soils (Zn0Pb0) are about 10 to 35% lower than those of the PC stabilized clean soils. In contrast, the KMP stabilized contaminated soils (Zn1.5, Zn1.5Pb2, and Pb2) display 3 to 465% higher $K_r$ values than the PC stabilized soils, indicating that the former possess superior water resistance; this is consistent with the results presented in Figure 4.

Figure 5. Variations in the water stability coefficient ($K_r$) with the initial soil contamination conditions for the stabilized soils cured for 6 and 27 d: (a) 4% binder and (b) 6% binder.

Figure 6 presents a comparison of the apparent surface characteristics of the KMP and PC stabilized soils that were cured for 27 d and then soaked for 1 d. Macrocracks appeared on the surfaces of all PC stabilized soils, whereas only a few cracks developed on the surface of the KMP stabilized soil with a mixed Zn concentration of 1.5% and Pb concentration of 2%. These observations further suggest that the KMP stabilized soils have greater water stability than the PC stabilized soils.

Figure 6. Photos showing the cracks developed on the surfaces of the stabilized soils after the water-soaking tests (curing time of 27 d and binder content of 6%): (a) KMP Zn0Pb0; (b) KMP Zn1.5; (c) KMP Pb2; (d) KMP Zn1.5Pb2; (e) PC Zn0Pb0; (f) PC Zn1.5; (g) PC Pb2; (h) PC Zn1.5Pb2.

### 3.3. Resilient Modulus

Figure 7 shows the measured resilient modulus ($M_R$) for the clean soil (Zn0Pb0) and the contaminated soil with a mixed Zn concentration of 1.5% and Pb concentration of 2% (Zn1.5Pb2) when stabilized with 6% KMP or PC. The $M_R$ values of the KMP and PC stabilized clean soils are 155% and 238% higher than those of the corresponding stabilized contaminated soils, respectively, indicating that the presence of heavy metals has a notable impact on the measured $M_R$. In addition, the $M_R$ of the KMP stabilized soil is 6% lower than that of the PC stabilized soil for the Zn0Pb0 case, whereas it is 24% higher for the Zn1.5Pb2 case, indicating that the KMP binder is more effective in enhancing the $M_R$ of the contaminated soils tested in this study.

Figure 7. Measured resilient modulus for the stabilized and untreated soils.

### 3.4. Soil Pore Size Distribution

Figure 8 shows the pore size distributions (PSDs) of the KMP and PC stabilized clean and contaminated soils. The y-axis of the PSD curves is plotted as $f(D) = dV/d\log D$, where $V$ is the volume of mercury intruded at a given pressure increment, corresponding to pores of diameter $D$, in 1 g of dry soil [5]. The variation of cumulative pore volume with pore diameter is bimodal, a pattern commonly observed for compacted soils [26] as well as for cement stabilized Zn-contaminated soils [5]. The first mode is characterized by pore sizes ranging from 0.003 to 0.1 μm, while the pore sizes of the second mode span the interval from 0.1 to 2 μm.
For the stabilized clean and contaminated soils, the first pore diameter peak increases from 0.02 to 0.03 μm and from 0.02 to 0.04 μm, respectively, with increasing Zn concentration; the second pore diameter peak exhibits a similar trend. At a given Zn concentration, the two pore diameter peaks of the KMP stabilized soils are lower than those of the PC stabilized soils (Figure 9), indicating that the KMP stabilized soils have a denser structure.

Figure 8. Pore size distributions of the stabilized soils cured for 28 d: (a) KMP and (b) PC.

Figure 9. Bimodal PSD peak fitting for the KMP stabilized soils cured for 28 d: (a) Zn0Pb0 and (b) Zn1.5Pb2.

In this study, each peak of the PSD curves was simulated by a Gaussian distribution function, as suggested by previous studies [5, 26], using the following fitting equation:

$$f(D) = \sum_{i=1}^{n} f_i(D) = \sum_{i=1}^{n} \frac{a_i}{\sqrt{2\pi}\,\sigma_i} \exp\!\left[-\frac{\left(\log D - \log \mu_i\right)^2}{2\sigma_i^2}\right] \tag{6}$$

where $n$ is the number of peaks in the PSD curve on a logarithmic scale (2 for the bimodal type), $a_i$ is the pore volume in 1 g of dry soil covered by the fitted curve $f_i(D)$ (mL/g), $\sigma_i$ is the standard deviation on a logarithmic scale, and $\mu_i$ is the mean pore diameter of the fitted curve $f_i(D)$ (μm) [26].

Table 4 lists the fitting parameters obtained using equation (6). From Table 4, $a_1$ changes only marginally for both the KMP and PC stabilized soils when the Zn concentration increases from 0% to 1.5%; in contrast, $a_2$ increases from 0.028 to 0.044 mL/g and from 0.026 to 0.048 mL/g for the KMP and PC stabilized soils, respectively. There are no notable differences in the values of $a_1$ and $a_2$ between the KMP and PC stabilized soils. Both $\mu_1$ and $\mu_2$ increase with increasing Zn concentration. When the Zn concentration increases from 0% to 1.5%, $\mu_1$ of the KMP stabilized soils varies from 0.019 to 0.034 μm, a 78.9% increase, whereas $\mu_1$ of the PC stabilized soils varies from 0.021 to 0.04 μm, a 90.5% increase. Likewise, $\mu_2$ of the KMP stabilized soils varies from 0.391 to 0.440 μm, a 12.5% increase, and $\mu_2$ of the PC stabilized soils varies from 0.411 to 0.485 μm, an 18% increase. This further demonstrates that the KMP stabilized soils have a denser structure, as suggested by a previous study [5], which may be attributed to the more prominent filling of soil pores by the KMP hydration products (e.g., K-struvite (MgKPO4·6H2O) and bobierrite (Mg3(PO4)2·8H2O)) relative to those of PC (e.g., Ca(OH)2, CSH, and CAH).
Table 4. Parameters obtained from the fitted PSD curves of the KMP and PC stabilized soils (6% binder, 28 d of curing) based on peak analysis.

| Specimen | a1 (mL/g) | a2 (mL/g) | μ1 (μm) | μ2 (μm) | σ1 | σ2 | R² |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Zn0Pb0 (KMP) | 0.001 | 0.028 | 0.019 | 0.391 | 0.007 | 0.328 | 0.90 |
| Zn0.5 (KMP) | 0.001 | 0.027 | 0.019 | 0.395 | 0.007 | 0.320 | 0.91 |
| Zn1.0 (KMP) | 0.001 | 0.031 | 0.022 | 0.405 | 0.007 | 0.325 | 0.93 |
| Zn1.5 (KMP) | 0.002 | 0.033 | 0.028 | 0.423 | 0.012 | 0.286 | 0.92 |
| Zn1.5Pb2 (KMP) | 0.002 | 0.044 | 0.034 | 0.440 | 0.013 | 0.339 | 0.93 |
| Pb2 (KMP) | 0.001 | 0.033 | 0.022 | 0.416 | 0.008 | 0.327 | 0.93 |
| Zn0Pb0 (PC) | 0.001 | 0.026 | 0.021 | 0.411 | 0.009 | 0.265 | 0.90 |
| Zn0.5 (PC) | 0.001 | 0.028 | 0.024 | 0.435 | 0.010 | 0.279 | 0.94 |
| Zn1.0 (PC) | 0.001 | 0.033 | 0.028 | 0.448 | 0.007 | 0.322 | 0.94 |
| Zn1.5 (PC) | 0.002 | 0.044 | 0.031 | 0.461 | 0.012 | 0.323 | 0.92 |
| Zn1.5Pb2 (PC) | 0.003 | 0.048 | 0.040 | 0.485 | 0.019 | 0.280 | 0.91 |
| Pb2 (PC) | 0.002 | 0.037 | 0.022 | 0.477 | 0.009 | 0.310 | 0.94 |

n = number of peaks in the fitted PSD curves; a1 = intra-aggregate pore volume; a2 = interaggregate pore volume; μ1 = mean intra-aggregate pore diameter; μ2 = mean interaggregate pore diameter; σ1 and σ2 = standard deviations; R = correlation coefficient.

Figure 9 illustrates typical bimodal PSD peak-fitting results for the KMP stabilized clean soil (Zn0Pb0) and the contaminated soil with a mixed Zn concentration of 1.5% and Pb concentration of 2% (Zn1.5Pb2). All of the simulated PSD curves have bimodal characteristics (n = 2). The two peaks in the PSD curves represent intra-aggregate and interaggregate pores, respectively, and the formation of intra-aggregate pores results from the filling of the large pores by the hydration products of KMP or cement [5].
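For readers who wish to reproduce this kind of peak analysis, the sketch below fits equation (6) to a synthetic bimodal curve with scipy. It is illustrative only: the "measured" PSD is generated from the Zn0Pb0 (KMP) parameters of Table 4 plus noise, and the fit is seeded with rough initial guesses; real use would replace the synthetic data with MIP measurements of $f(D)$.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch of the bimodal fit in equation (6). The "measured" PSD
# is synthetic, generated from the Zn0Pb0 (KMP) row of Table 4 plus noise.

def psd_model(logD, a1, mu1, s1, a2, mu2, s2):
    def gauss(a, mu, s):  # one Gaussian component f_i(D) on the log-D axis
        return a / (np.sqrt(2.0 * np.pi) * s) * np.exp(
            -(logD - np.log10(mu)) ** 2 / (2.0 * s ** 2))
    return gauss(a1, mu1, s1) + gauss(a2, mu2, s2)

logD = np.linspace(np.log10(0.003), np.log10(2.0), 2000)  # D in micrometers
table4 = (0.001, 0.019, 0.007, 0.028, 0.391, 0.328)       # Zn0Pb0 (KMP) row
fD = psd_model(logD, *table4) + np.random.default_rng(0).normal(0.0, 2e-4, logD.size)

p0 = (0.001, 0.02, 0.01, 0.03, 0.4, 0.3)                  # rough initial guess
popt, _ = curve_fit(psd_model, logD, fD, p0=p0)
print(dict(zip(("a1", "mu1", "s1", "a2", "mu2", "s2"), np.round(popt, 3))))
```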
## 4. Discussion

The ANC test results show that the KMP stabilized soils display a higher acid buffering capacity (Figures 1 and 2). The ratio of leached concentration (RLC) of Mg for the KMP stabilized soils is significantly lower than that of Ca for the PC stabilized soils regardless of the Zn or Pb concentration (Figure 3), indicating that a smaller fraction of Mg or Ca has been leached from the stabilized soil; consequently, the soil should display a slower reduction in strength resulting from the gradual dissolution of the hydration products, as suggested by previous studies [27]. These findings can be attributed to two reasons: (1) the main alkaline hydration products of KMP, MgKPO4·6H2O and Mg3(PO4)2·8H2O, are difficult to dissolve in acidic environments and retain stable mineral crystalline morphologies over a wide range of acidic conditions [3, 9, 28], whereas the cement hydration products, such as Ca(OH)2, CSH, and CAH, have relatively higher solubility and dissolve more readily under acidic conditions [4, 29]; and (2) Zn or Pb consumes only a small amount of PO4³⁻ and hydroxyl ions (OH⁻), which has a marginal impact on the formation of MgKPO4·6H2O and Mg3(PO4)2·8H2O in the KMP stabilized soils [3].
However, for the PC stabilized soils, the presence of Zn remarkably retards the cement hydration and pozzolanic reactions, reducing the amounts of cement hydration products such as Ca(OH)2 and CSH, which are the primary contributors to the acid buffer capacity [4, 5].

The water-soaking durability and resilient modulus test results show that the KMP stabilized contaminated soils possess higher $q_u'$, $K_r$, and $M_R$ than the PC stabilized contaminated soils, regardless of the initial soil contamination condition (Figures 4, 5, and 7). This can be attributed to three causes: (1) the KMP stabilized contaminated soils have a denser structure and consequently a higher resistance to water adsorption, as illustrated by the PSD curves (Figures 8 and 9) and the PSD simulation results discussed in the previous section (Table 4); (2) the RLC of Mg for the KMP stabilized soils is lower than the RLC of Ca for the PC stabilized soils (Figure 3(b)); and (3) the main KMP hydration products, MgKPO4·6H2O and Mg3(PO4)2·8H2O, provide higher strength than CSH, the main cement hydration product and the primary contributor to $q_u$. Qiao et al. [30] showed that magnesium phosphate cement (MPC) pastes, consisting mainly of potassium dihydrogen phosphate (KH2PO4) and magnesia (MgO), exhibit remarkable bond strength relative to PC mortars. Since the hardening mechanisms and hydration products of KMP are similar to those of MPC, it is reasonable to infer that the main hydration products of KMP may provide higher bond strength to the soil tested in this study.

The resilient modulus test results show that the $M_R$ values of the soils decrease with increasing Zn concentration, particularly for the soils contaminated with Zn1.5Pb2 (Figure 7), and the water-soaking tests likewise show that obvious cracks developed in the KMP/PC stabilized soils with Zn1.5Pb2 (Figure 6). These observations are attributed to two facts: (1) the soils display a relatively looser structure as the Zn concentration increases, as illustrated by the rise in the values of $\mu_1$ and $\mu_2$ obtained from the PSD simulations (Table 4); and (2) the formation of the KMP and PC hydration products is retarded when Zn or Pb is present in the soils, as suggested by previous studies [3, 5]. Consequently, the bonding strength of the soils decreases as the Zn concentration increases.

## 5. Conclusions

This study presented a detailed investigation of the acid neutralization capacity, pore size distribution, water stability coefficient, and resilient modulus characteristics of KMP and PC stabilized soils containing Zn or Pb alone and mixed Zn and Pb contaminants; such stabilized soils are potential roadway subgrade course materials. Based on the test results, the following conclusions can be drawn:

(1) The acid neutralization capacity titration curves of the KMP stabilized soils were flatter than those of the PC stabilized soils, and the values of the acid buffer capacity index $\beta$ were higher, indicating that the KMP stabilized soils possess a higher acid buffering capacity.
The values of the ratio of leached concentration (RLC) of Mg for the KMP stabilized soils were approximately 46 to 170%, notably lower than those of Ca for the PC stabilized soils.

(2) The KMP stabilized contaminated soils exhibited higher $q_u'$ and $K_r$ values than the PC stabilized ones after the water-soaking tests for both binder contents of 4% and 6%, irrespective of the curing time, indicating that the former possessed superior water resistance. At the 6% binder content, macrocracks occurred on the surfaces of the PC stabilized soils, whereas only a few cracks developed on the surface of the KMP stabilized sample with mixed Zn and Pb contaminants.

(3) At the 6% binder content, the $M_R$ values of the KMP and PC stabilized clean soils were 155% and 238% higher, respectively, than those of the corresponding stabilized contaminated ones. The $M_R$ of the KMP stabilized soil was 24% higher than that of the PC stabilized one for the soil with mixed Zn and Pb contaminants.

(4) The PSD curves of the KMP and PC stabilized soils had bimodal characteristics. The $a_2$, $\mu_1$, and $\mu_2$ parameters increased noticeably with increasing Zn concentration, whereas the change in $a_1$ was insignificant. For a given Zn concentration, the KMP stabilized soils displayed lower values of $a_2$ or $\mu_2$, indicating that they had a denser structure than the PC stabilized soils.

Overall, this study demonstrates that KMP is highly effective in stabilizing soils contaminated with Zn or Pb alone and with mixed Zn and Pb. Additional research is warranted to investigate the leachability, strength, water-soaking durability, and resilient modulus of stabilized contaminated soils at actual field sites. Further quantitative X-ray diffraction and scanning electron microscopy analyses are needed to understand the microscale changes in the properties of the stabilized soils.

--- *Source: 1025056-2020-12-16.xml*
# Experimental Investigation on Thermoelectric Chiller Driven by Solar Cell

**Authors:** Yen-Lin Chen; Zi-Jie Chien; Wen-Shing Lee; Ching-Song Jwo; Kun-Ching Cho
**Journal:** International Journal of Photoenergy (2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/102510

---

## Abstract

This paper presents an experimental exploration of the cooling performance of a thermoelectric chiller driven by solar cells, together with a comparison against the performance when driven by a fixed direct current. Solar energy is clean and virtually limitless and can be collected by solar cells. We use solar cells to drive a thermoelectric chiller whose cold side is connected to a water tank. It is found that 250 mL of water can be cooled from 18.5°C to 13°C, with a corresponding coefficient of performance (COP) varying between 0.55 and 1.05 as the solar insolation changes between 450 W/m² and 1000 W/m². The experimental results demonstrate that a thermoelectric chiller driven by solar cells is feasible and effective for energy saving.

---

## Body

## 1. Introduction

Renewable energy resources have become important energy sources worldwide. Applying more renewable energy sources and technologies reduces the consumption of traditional fossil fuels; carbon dioxide production can thereby be significantly reduced, and the global warming problem slowed down [1]. Solar chillers can be realized with one of the following refrigeration systems: vapor compression, adsorption refrigeration [2], or thermoelectric refrigeration. The first two require low- and high-pressure sides of working fluids in the refrigeration cycle and are difficult to develop into portable, lightweight solar devices for outdoor use. The thermoelectric refrigeration system offers the advantages of being small, lightweight, reliable, noiseless, and portable, and of low cost in mass production [3]. In our study, solar cells are used to drive the thermoelectric chiller.

Gaur and Tiwari found that the energy efficiency of solar cells is about 10% to 25% [4]. Based on calculations for solar cell modules made in the laboratory, cadmium telluride (CdTe) provides the minimum cost per unit of electrical energy, whereas amorphous silicon (a-Si)/nanocrystalline silicon (nc-Si) provides the minimum cost per unit of electrical energy when commercially available solar modules are concerned. Copper indium gallium diselenide (CIGS) achieves the lowest capitalized cost among all the solar cell technologies. Yang and Yin presented a hybrid solar system that utilizes photovoltaic cells, thermoelectric modules, and hot water [5]; the hybrid system is superior to traditional PV systems, with 30% higher output electric power.

The Peltier and Seebeck effects were first discovered in metals as early as the 1820s–1830s [6, 7]. Despite their low energy efficiency compared with traditional devices, thermoelectric effects offer distinct advantages such as compactness, precision, simplicity, and reliability. Thermoelectric devices are applied in many areas, including military, aerospace, medical, industrial, consumer, and scientific equipment [8]. Solar-driven thermoelectric technologies are of two types: solar-driven thermoelectric refrigeration and solar-driven thermoelectric power generation [9].
One important application is thermoelectric generation from waste and renewable energy. Champier et al. combined a wood stove with a thermoelectric generator: the hot side of the generator is exposed to hot air while the cold side sits beneath a two-liter water tank; this system produced up to 9.5 W using a ten-watt thermoelectric generator prototype [10]. Meng et al. [11] presented a system that uses the flushing water of blast furnace slag; about 0.93 kW of electrical power can be produced per unit area when the flushing water starts at 100°C with a temperature drop of 1.5°C and a conversion efficiency of 2%, and the cost recovery period of the equipment is about 8 years. Attia et al. [12] reported an experimental power generation device that produces electrical energy at the milliwatt level using standard bismuth telluride thermoelectric modules, with a device size of about 10 cm³. Rezania et al. [13] explored the effective pumping power of the cooling system at five temperature differences between the hot and cold sides of a thermoelectric generation device; their experiments demonstrated that, at each temperature difference, there is a unique flow rate that gives the maximum net power of the system. Chávez Urbiola and Vorobiev [14] studied a solar hybrid electric/thermal system using photovoltaic panels combined with a water/air-filled heat-extracting unit and thermoelectric generators; at midday the hot side of the thermoelectric generator reached around 200°C and the cold side approximately 50°C, and the system generated 20 W of electrical energy and 200 W of thermal energy.

Another important application of thermoelectrics is cooling [15]. Dai et al. [3] conducted an experimental investigation of a thermoelectric refrigerator driven by solar cells; the unit maintained the refrigerator temperature at 5–10°C with a COP of about 0.3. He et al. [16] conducted experiments on a thermoelectric cooling and heating system driven by solar power in a model room with a volume of 0.125 m³ during summer; the resulting COP of their thermoelectric device averaged 0.6. Zhou and Yu [17] presented a generalized theoretical model for the optimization of a thermoelectric cooling system; their analysis showed that the maximal COP and the maximal cooling capacity are obtained when the finite thermal conductance is optimally allocated. Abdul-Wahab et al. [18] designed and built an affordable solar thermoelectric refrigerator; the temperature of the refrigerated space was reduced from 27°C to 5°C in approximately 44 minutes, and the COP of their system was about 0.16. Chang et al. [19] investigated a thermoelectric air-cooling module for electronic devices and demonstrated that it performs better under low heat loading.

This study develops a thermoelectric chiller driven by solar cells in the daytime and by a direct current (DC) source on cloudy days or at night. The solar thermoelectric chiller is investigated in terms of COP and thermoelectric temperatures using a specially designed test rig.

## 2. Configuration of the Solar Thermoelectric Chiller

Figure 1 shows the schematic of the thermoelectric chiller driven by solar cells.
The chiller mainly consists of one solar cell module, a thermoelectric element, a controller, and a water tank. The specifications of the proposed solar thermoelectric chiller are given in Table 1. The solar cell module is commercially available: at a solar insolation of 1000 W/m², its maximum output power P_solar is 130 W, its maximum voltage is 17.6 V, and its maximum current is 7.39 A; its area is 0.89 m², and the corresponding energy efficiency of the solar cells, η_PV, is 11%. We adopt a commercially available thermoelectric device of size 40 mm × 40 mm × 4.2 mm, model TEC-127-05, with 127 couples of p–n bismuth tin (BiSn) alloy thermoelements sandwiched between two thin ceramic plates. At a hot-side temperature of 50°C, the maximum cooling capacity is 49 W. The maximum temperature difference between the hot and cold sides is 75°C, the maximum thermoelectric voltage is 16.2 V, the maximum thermoelectric current is 5.3 A, and the electrical resistance is 2.75 Ω. The controller switches the power supply between the solar cells and the DC power source. The water tank is filled with 250 mL of water at ambient temperature.

Table 1. Specifications of the solar thermoelectric chiller.

| Solar cell (at S = 1000 W/m²) | | Thermoelectric element (at T_H = 25°C) | |
| --- | --- | --- | --- |
| P_solar,max | 130 W | Q_c,max | 49 W |
| V_solar,max | 17.6 V | ΔT_max | 75°C |
| I_solar,max | 7.39 A | V_tec,max | 16.2 V |
| A | 0.89 m² | I_tec,max | 5.3 A |
| η_PV | 13% | R_tec | 2.75 Ω |
| | | α | 0.0508 V/K |
| | | K | 0.38 W K⁻¹ |
| | | Z | 2.47 × 10⁻³ 1/K |

Figure 1. Solar thermoelectric chiller.

The hot side of the thermoelectric element was connected to a heat exchanger to improve the cooling efficiency. Thermoelectric cooling occurs when a direct current is passed through one or more pairs of n- and p-type junctions. Peltier cooling increases with the applied current; however, the Joule heating loss is proportional to the square of the current and therefore eventually becomes the dominant factor. In the daytime, the thermoelectric chiller is driven by the solar cells; at night, it is driven by a fixed direct current.

## 3. Experimental Study

### 3.1. Theory Consideration

A solar cell is used to drive the thermoelectric chiller. Its output electric power is calculated by the following equation [3]:

$$P_{\mathrm{solar}} = S A \eta_{PV} \tag{1}$$

where $S$ is the solar insolation rate, $A$ is the area of the solar cell receiving solar irradiation, and $\eta_{PV}$ is the efficiency of energy conversion from solar energy to electric power.

The heat absorption rate at the cold side, that is, the cooling capacity, is obtained from

$$Q_c = \alpha I T_c - 0.5\, I^2 R - K \left(T_h - T_c\right) \tag{2}$$

where $\alpha$ is the Seebeck coefficient, $K$ is the thermal conductance, $R$ is the electrical resistance, and $T_c$ and $T_h$ are the cold-side and hot-side temperatures. The input voltage of the thermoelectric element is given by

$$V = \alpha \left(T_h - T_c\right) + I R \tag{3}$$

The input electrical power of the thermoelectric element is given by

$$P = \alpha I \left(T_h - T_c\right) + I^2 R \tag{4}$$

The coefficient of performance of the thermoelectric element is obtained from

$$\mathrm{COP} = \frac{Q_c}{P} \tag{5}$$
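As a quick numerical illustration of equations (1)–(5) — not a computation from the paper — the sketch below evaluates one hypothetical operating point using the module constants of Table 1 (α = 0.0508 V/K, K = 0.38 W/K, R = 2.75 Ω). Note that the Joule term in equation (2) is written with the resistance R included, as dimensional consistency requires.

```python
# Sketch of equations (1)-(5) using the module constants from Table 1.
# The operating point (I, Tc, Th) below is illustrative, not measured data.

ALPHA, K, R = 0.0508, 0.38, 2.75   # V/K, W/K, ohm (Table 1)

def solar_power(S, A=0.89, eta_pv=0.13):
    """Eq. (1): P_solar = S * A * eta_PV (S in W/m^2, A in m^2)."""
    return S * A * eta_pv

def chiller_point(I, Tc_C, Th_C):
    """Eqs. (2)-(5) at one operating point; temperatures given in deg C."""
    Tc, Th = Tc_C + 273.15, Th_C + 273.15   # absolute temperatures, K
    Qc = ALPHA * I * Tc - 0.5 * I**2 * R - K * (Th - Tc)   # eq. (2), W
    V = ALPHA * (Th - Tc) + I * R                          # eq. (3), V
    P = ALPHA * I * (Th - Tc) + I**2 * R                   # eq. (4), W
    return Qc, V, P, Qc / P                                # eq. (5): COP

Qc, V, P, cop = chiller_point(I=2.0, Tc_C=0.0, Th_C=30.0)
print(f"Qc = {Qc:.1f} W, V = {V:.2f} V, P = {P:.1f} W, COP = {cop:.2f}")
print(f"P_solar at 750 W/m^2: {solar_power(750):.0f} W")
```

At an assumed current of 2 A with Tc = 0°C and Th = 30°C, the sketch returns Qc ≈ 10.9 W and COP ≈ 0.77, which happens to lie close to the solar-driven averages reported later in Table 2 (Qc = 11.2 W and COP = 0.74 at I = 2.0 A).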
### 3.2. Experiment Setup

In order to quantitatively evaluate the cooling performance of the solar thermoelectric chiller, the following quantities must be determined: the solar cell power, the thermoelectric power consumption, the hot-side and cold-side temperatures of the thermoelectric element, and the coefficient of performance (COP). Figure 2 illustrates the schematic organization of our experimental system for determining the cooling performance of the solar thermoelectric chiller.

Figure 2. Experimental setup and measurement of the solar thermoelectric chiller.

All experiments involved cooling 250 mL of water in the water tank, which was connected to the cold side of the thermoelectric element. The thermoelectric chiller was driven by a fixed 9.6 V direct current source and by solar cells, respectively. According to equation (3), the thermoelectric voltage depends on the Seebeck coefficient α, the temperature difference between the hot and cold sides (Th − Tc), the thermoelectric current I, and the thermoelectric resistance R; the voltage therefore varies with these quantities and approaches that of the fixed direct current source. We investigated the cooling capacity of the solar thermoelectric chiller over 180 minutes. A power analyzer was used to measure the power consumption of the thermoelectric element, and a data logger with its associated software was used to record the measurement data. T-type thermocouples were used for continuous measurement of the hot-side, cold-side, ambient, and water tank temperatures; type T (copper–constantan) thermocouples are suitable for measurements in the −200 to 350°C range and have a sensitivity of about 43 μV/°C. A pyranometer was used for continuous measurement of the solar insolation; it is suitable for measurements in the 0 to 2000 W/m² range and has a sensitivity of about 0.1 W/m².

Our research goal is to investigate the solar thermoelectric chiller. We tested it in two stages to establish its feasibility.

(1) First Stage Experiment. Using the DC source to drive the thermoelectric chiller, we measured the voltage, current, power consumption, cold-side and hot-side temperatures, and cooling capacity, and analyzed the COP. The test lasted 180 minutes.

(2) Second Stage Experiment. Using the solar cell to drive the thermoelectric chiller, we measured the voltage, current, power consumption, cold-side and hot-side temperatures, and cooling capacity, and analyzed the COP. The test lasted 180 minutes.
## 4. Results and Discussion

The experiments used twelve halogen lamps to simulate solar light; the lamp voltage was adjusted to vary the light intensity from 450 W/m² to 1000 W/m², as shown in Figure 3. In Figure 4, the thermoelectric voltage is stable when the fixed direct current power supply drives the system, whereas it varies with the solar insolation rate when the solar cells drive it. According to Figures 3 and 4, the light intensity and the thermoelectric voltage are positively correlated: at the lowest light intensity of about 450 W/m², the thermoelectric voltage is about 2.7 V, and at the highest light intensity of about 1000 W/m², it is about 8 V. In Figure 5, the thermoelectric current is likewise stable under the fixed direct current supply and varies with the solar insolation rate under solar cell drive. Figures 6 and 7 show the temperatures when the fixed direct current power drives the thermoelectric system.
## 4. Results and Discussion

Twelve halogen lamps were used to simulate solar light, and the lamp voltage was adjusted to vary the light intensity from 450 W/m² to 1000 W/m², as shown in Figure 3. Figure 4 shows that the voltage of the thermoelectric system is stable when the fixed direct-current supply drives it, whereas under solar drive it varies with the solar insolation rate. Figures 3 and 4 together show that light intensity and module voltage are positively correlated: at the lowest light intensity of about 450 W/m² the module voltage is about 2.7 V, and at the highest intensity of about 1000 W/m² it is about 8 V. Figure 5 shows the same pattern for the electric current: stable under the fixed direct-current supply and varying with the solar insolation rate under solar drive. Figures 6 and 7 compare the hot-side and cold-side temperatures. With the fixed direct-current supply, the hot-side temperature is maintained at about 47°C while the cold-side temperature drops from 0°C to −3.5°C. With the solar cell driving the system, both temperatures vary with the solar insolation rate: the hot-side temperature ranges from 23.5°C to 36°C, and the cold-side temperature from −3.5°C to 6°C.

Figure 3: Light intensity distribution.
Figure 4: Electric voltages of the thermoelectric system driven by the DC source and by solar cells.
Figure 5: Electric currents of the thermoelectric system driven by the DC source and by solar cells.
Figure 6: Hot-side temperatures of the thermoelectric system driven by the DC source and by solar cells.
Figure 7: Cold-side temperatures of the thermoelectric system driven by the DC source and by solar cells.

Figures 8 and 9 show the cold-tank temperatures and cooling capacities of the thermoelectric chiller driven by the fixed direct-current supply and by the solar cells. With the fixed direct-current supply, the cold-tank temperature drops from 20°C to 11°C and the cooling capacity is maintained at 12 W. With the solar cell, the cold-tank temperature drops rapidly from 18.5°C to 14°C while the cooling capacity changes from 10.5 W to 14 W, with an average cooling capacity of 12.2 W; it then drops gently from 14°C to 13°C while the cooling capacity fluctuates between 5 W and 17.5 W, with an average of 10.6 W in this range. As shown in Figures 10 and 11, with the fixed direct-current supply the input electric power of the chiller is maintained at about 35 W and the COP at about 0.35. With the solar cell, the input electric power ranges from 5.5 W to 26 W and the COP from 0.55 to 1.05. The solar-driven thermoelectric system thus provides a significantly better COP while requiring a lower input electric power. As summarized in Table 2, the average cooling capacity of the solar-driven system is 11.2 W, slightly lower than the 11.9 W of the system driven by fixed direct current, but its average COP of 0.74 is much better than the 0.35 obtained with fixed direct current. This is because the solar-driven system operates at an average current of 2 A, a better operating point than that of the fixed direct-current supply.

Table 2: Average measured values of the thermoelectric chiller under the two supplies.

| Supply | S (W/m²) | V (V) | I (A) | P (W) | Qc (W) | COP |
|---|---|---|---|---|---|---|
| DC power source | — | 9.4 | 3.2 | 34.3 | 11.9 | 0.35 |
| Solar cell | 747.9 | 5.9 | 2.0 | 16.7 | 11.2 | 0.74 |

Figure 8: Cold-tank temperatures of the thermoelectric system driven by the DC source and by solar cells.
Figure 9: Cooling capacity of the thermoelectric system driven by the DC source and by solar cells.
Figure 10: Input electric powers of the thermoelectric system driven by the DC source and by solar cells.
Figure 11: COPs of the thermoelectric chiller driven by the DC source and by solar cells.
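As a quick consistency check on Table 2 (our arithmetic, not from the paper): for the DC run, the ratio of the average cooling capacity to the average input power reproduces the reported COP, while for the solar run it comes out slightly lower than the reported 0.74, presumably because the paper averages the pointwise ratio Qc(t)/P(t), and the mean of a ratio need not equal the ratio of the means:

$$\left.\frac{\bar{Q}_c}{\bar{P}}\right|_{DC}=\frac{11.9}{34.3}\approx 0.35, \qquad \left.\frac{\bar{Q}_c}{\bar{P}}\right|_{solar}=\frac{11.2}{16.7}\approx 0.67 < 0.74.$$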
## 5. Conclusions

This study experimentally investigated a thermoelectric chiller driven by solar cells; the test rig was built, and the test results are reported above. Comparing the thermoelectric system driven by the direct-current source with the one driven by the solar cell, the direct-current source yields a cooling capacity of 11.9 W, somewhat higher than that of the solar-driven system (11.2 W). However, the COP of the solar-driven system is 0.74, significantly better than the 0.35 obtained with direct current. Although the thermoelectric system driven by fixed direct current provides a stable cooling capacity, the solar-driven chiller achieves a markedly better COP, ranging between 0.55 and 1.05 with an average of 0.74. The experimental results also show that for solar insolation rates between 450 W/m² and 1000 W/m², the solar thermoelectric chiller provides feasible and effective performance.

--- *Source: 102510-2014-06-16.xml*
# Corrigendum to “The SLC Family Are Candidate Diagnostic and Prognostic Biomarkers in Clear Cell Renal Cell Carcinoma” **Authors:** Weiting Kang; Meng Zhang; Qiang Wang; Da Gu; Zhilong Huang; Hanbo Wang; Yuzhu Xiang; Qinghua Xia; Zilian Cui; Xunbo Jin **Journal:** BioMed Research International (2020) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2020/1025178 --- ## Body --- *Source: 1025178-2020-12-12.xml*
# Recycling Glass Cullet from Waste CRTs for the Production of High Strength Mortars

**Authors:** Stefano Maschio; Gabriele Tonello; Erika Furlani
**Journal:** Journal of Waste Management (2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/102519

---

## Abstract

The present paper reports the results of experiments on the recycling of mixed cathode ray tube (CRT) glass waste in the production of high-strength mortars. The waste CRT glass cullet was first milled and sieved, and only the fine fraction was added to the fresh mortar to replace part of the natural aggregate. The addition of a superplasticizer was also investigated. All hydrated materials displayed high compressive strength after curing. Samples containing mixed CRT glass showed a more rapid increase in strength than the reference compositions, and materials with a superplasticizer content of 1% showed the best overall performance, owing to the favourable influence of the small glass particles, which increase the amount of hydrated silicates produced. The inductively coupled plasma (ICP) analysis of the solutions obtained from the leaching tests confirmed the low elution of hazardous elements from the monolithic materials produced and, consequently, their possible environmental compatibility.

---

## Body

## 1. Introduction

CRT glass waste includes that from TVs, PC monitors, and other monitors used in special applications, together with waste from the original assembly process. Waste glass from PC and TV monitors will begin to decline as a direct consequence of the emerging flat-screen display technology; nevertheless, it seems reasonable to assume that CRTs from all sources will continue to enter the waste stream in the coming years. Studies have shown that when CRTs are disposed of in landfill sites, leaching from the crushed glass cullet may contaminate ground water; this is a major driving force for CRT recycling. Moreover, CRT waste does not contain only glass but also the other materials used in CRT assembly, such as ferrous and nonferrous metals and plastics. The Waste Electrical and Electronic Equipment (WEEE) directive sets strict regulations for the recycling or recovery of materials deriving from equipment containing CRTs. Such norms must be coupled with those of the European Waste Catalogue, which classifies CRTs as hazardous waste and makes landfill disposal of CRT materials costly. The great amount of CRT waste produced all over the world implies that its recycling is now necessary, not only because of the rising cost of landfill disposal, which is reflected in the cost of new CRTs, but also as a consequence of the "zero-waste" objective that must be the final goal of all future human activities. Mixed CRT glass (funnel, neck, and screen glass) contains high amounts of PbO, BaO, and SrO; it follows that cullet with this composition is unsuitable for recycling in applications where metal oxides could leach into food products or ground water. A possible partial recovery of waste CRT glass from the assembly process could follow the path of manufacturing new CRTs, even though the high cost of separating, sorting, and processing the glass to meet the standards required by glass manufacturers strongly limits this option [1].
Other routes for waste CRT glass recycling are the copper-lead smelter, where it substitutes for sand in the smelting process [2], or use as an additive raw material in the ceramic industry for the production of tiles and other monolithic ceramic materials [3, 4]. These options represent, however, only a small number of opportunities for waste CRT glass recovery, and additional proposals are needed to maximise its reuse in the production of other types of materials. In order to propose a new option for waste CRT glass recycling, as independent as possible of the glass composition, in the present research stable mortars were produced using cement, ground mixed waste CRT glass (funnel, neck, and screen glass), natural aggregate, and water; the addition of a superplasticizer was also investigated. Mortars are expected to benefit from ground glass: it is known that the addition of silica fume as a component in the production of mortars or concretes can yield products, namely reactive powder mortars or concretes (RPM, RPC), with low water absorption, high mechanical properties, and modified shrinkage [5, 6]. In a parallel approach, the addition of generic waste glass to mortars or concretes has been widely investigated [7, 8]. It has been demonstrated, for example, that the use of waste glass from crushed containers as concrete aggregate may develop pozzolanic activity, which is affected by the glass fineness and chemical composition [9, 10]. Such properties may also affect workability [11], strength, and durability. In particular, a high alkali content can cause alkali-silica reaction (ASR) and expansion [12, 13]. Conversely, the use of waste E-glass from electronic scraps (low alkali content) improves compressive strength and sulphate resistance and reduces chloride-ion penetration without adverse ASR expansion effects [14]. The addition of CRT glass waste specifically has been studied only recently [15–17], and additional detailed studies are warranted. In the present research, mortars were produced using a fixed cement/aggregate (c/a) ratio (1/3), as often proposed in the literature [18, 19], whereas milled mixed CRT glass was added in different proportions, together with different contents of superplasticizer. The goal of the present research is to demonstrate that, by selecting a proper amount of milled waste CRT glass coupled with an optimal quantity of superplasticizer, it is possible to produce mortars with high compressive strength, low water absorption, and therefore long durability, together with a low elution release of hazardous elements.

## 2. Experimental

### 2.1. Materials

The starting materials were a type I ordinary Portland cement (OPC) and a natural aggregate with a maximum particle dimension of 4.76 mm, a Blaine fineness of 3480 cm² g⁻¹ (EN 933-2), a density of 2.46 g cm⁻³, and a water absorption of 0.37% (measured following the ASTM C127 and C128 norms), which were mixed with different proportions of mixed CRT waste glass. The as-received CRT glass cullet was first milled into a powder and then passed through a 500 μm sieve; only the fraction finer than 500 μm was used in the present research, and larger particles were remilled. Glenium 51 (BASF) was used as a superplasticizer in the preparation of some specimens.
The required amount of water was added to each starting blend. The chemical analysis of the cement, natural aggregate, and CRT glass, determined by a Spectro Mass 2000 ICP mass spectrometer, is reported in terms of oxides in Table 1, which also displays the loss on ignition (LOI) after thermal treatment at 1000°C for 2 h, the density, the water absorption, and the fineness modulus.

Table 1: Composition (oxide wt%), organic carbon, density, water absorption, LOI, and fineness modulus of the milled mixed CRT glass, cement, and aggregate; "undetermined" indicates the cumulative quantity of all oxides present at less than 0.1 wt%.

| Component | CRT glass | Cement | Aggregate |
|---|---|---|---|
| SiO2 | 59.74 | 21.40 | 1.98 |
| Al2O3 | 2.67 | 1.48 | 1.72 |
| CaO | 2.06 | 61.02 | 46.73 |
| MgO | 1.15 | 1.16 | 19.83 |
| Na2O | 6.81 | 0.26 | 1.43 |
| K2O | 6.15 | 0.53 | 0.74 |
| Fe2O3 | 0.11 | 0.35 | 2.1 |
| TiO2 | 0.17 | <0.1 | <0.1 |
| CuO | 0.21 | <0.1 | <0.1 |
| BaO | 6.13 | <0.1 | <0.1 |
| SrO | 4.77 | <0.1 | <0.1 |
| Sb2O3 | 0.28 | <0.1 | <0.1 |
| ZrO2 | 0.39 | <0.1 | <0.1 |
| PbO | 8.33 | <0.1 | <0.1 |
| SO4²⁻ | 0.49 | 2.92 | 0.13 |
| C (organic) | <0.1 | 1.20 | 0.57 |
| Undetermined | 0.93 | 1.38 | 1.92 |
| Density (g cm⁻³) | 2.95 | 3.03 | 2.46 |
| Water abs. (%) | 0.20 | — | 0.37 |
| LOI (%) | 0.82 | 13.14 | 23.55 |
| Fineness modulus | 0.72 | — | 3.48 |

As expected, the CRT glass contains a major quantity of SiO2, together with large fractions of PbO, Na2O, K2O, BaO, and SrO and moderate quantities of Al2O3, CaO, and MgO; other compounds, as well as organic carbon, are present in limited amounts, so that the LOI is limited, as is the water absorption. The aggregate mainly contains calcium and magnesium oxides accompanied by small fractions of silica, iron oxide, and alumina; the organic carbon, density, and LOI are in line with literature data [5, 20]. The OPC conforms to the European Standard EN 197-1. The data in Table 1 are confirmed by the XRD analysis of the starting materials (not reported in the present paper), which revealed alite (84%) and belite (16%) in the cement, whereas dolomite (65%), calcium carbonate (27%), and free quartz (8%) were identified in the aggregate; these numbers must be read with caution, since XRD does not provide an accurate quantitative analysis and only supplies an approximate order of magnitude of the crystallographic composition. It can also be observed that the CRT glass has a density of 2.95, the OPC 3.03, and the aggregate 2.46 g cm⁻³. It is worth pointing out the low fineness modulus of the milled and sieved CRT glass (0.72), confirming the presence of a large fraction of particles below 75 μm, in agreement with the characteristics suggested by other authors [14] for the production of high-performance materials containing milled waste glass.

### 2.2. Methods

#### 2.2.1. X-Ray Diffraction Investigation (XRD)

The crystalline phases of the starting components and of the hydrated materials were investigated by X-ray diffraction (XRD). XRD patterns were recorded on a Philips X'Pert diffractometer operating at 40 kV and 40 mA using Ni-filtered Cu-Kα radiation. Spectra were collected using a step size of 0.02° and a counting time of 40 s per angular abscissa in the range 15–55°. Philips X'Pert High Score software was used for phase identification and semiquantitative analysis (RIR method).
#### 2.2.2. Particle Size Distribution (PSD) Measurements

The particle size distributions (PSD) of the fine fraction of the aggregate (<500 μm), the cement, and the powdered mixed CRT glass were determined with a Horiba LA950 laser-scattering PSD analyser; analyses were made in water after a 3 min sonication, and the PSD curves are represented with a logarithmic abscissa. To assess the PSD of the aggregate's fine fraction, the total as-received product was sieved (500 μm) and the fines were separated from the coarse particles; the fines represent 25% of the total aggregate.

#### 2.2.3. Materials Composition

The ratio between the cement and aggregate quantities (natural aggregate plus glass cullet) was set at 1/3, a frequently used ratio. Reference glass-free compositions, hereafter called R, containing cement, aggregate, superplasticizer, and an optimized amount of water, were also prepared as blanks for comparing the mechanical behaviour of the hydrated materials. The focus of the present research is on materials obtained by replacing part of the natural aggregate with an equivalent mass of 5, 10, or 20 wt% of milled and sieved mixed CRT glass powder. The sample codes, the corresponding aggregate compositions, and the superplasticizer/cement (s/c) and water/cement (w/c) ratios are reported in Table 2; a worked batching example is given after Section 2.2.4.

Table 2: Specimen codes, corresponding aggregate composition, superplasticizer/cement (s/c), and water/cement (w/c) ratios.

| Sample | Natural aggregate (wt%) | CRT glass (wt%) | s/c (%) | w/c |
|---|---|---|---|---|
| R | 100 | 0 | 0 | 0.44 |
| R1 | 100 | 0 | 1 | 0.31 |
| R2 | 100 | 0 | 2 | 0.27 |
| V5 | 95 | 5 | 0 | 0.44 |
| V51 | 95 | 5 | 1 | 0.31 |
| V52 | 95 | 5 | 2 | 0.27 |
| V10 | 90 | 10 | 0 | 0.44 |
| V101 | 90 | 10 | 1 | 0.31 |
| V102 | 90 | 10 | 2 | 0.27 |
| V20 | 80 | 20 | 0 | 0.44 |
| V201 | 80 | 20 | 1 | 0.31 |
| V202 | 80 | 20 | 2 | 0.27 |

#### 2.2.4. Materials Preparation

For the mixture preparation and w/c optimization, a 5 L Hobart planetary mixer conforming to ASTM C305 was used. The optimized amount of water was determined by the ASTM C1437 slump test performed on the reference blend R; the paste is considered to have the right workability if the cake width is 200 (±20) mm. The optimal w/c value identified for the reference blend (R) was 0.44, and this same value was applied to all superplasticizer-free compositions. Blends containing superplasticizer required reduced amounts of water, as displayed in Table 2. The pastes were poured under vibration into moulds of 100 × 100 × 100 mm, sealed with a plastic film to ensure mass curing, and aged 24 h for a first hydration. Samples were then demoulded, sealed again with a plastic film, cured in air for a further 24 h, and then cured in water at room temperature for 3, 7, 28, 90, and 180 d. The ageing water was maintained at a constant temperature of 25°C (±3°C) and replaced with fresh water every 3 d. After curing and before characterisation, the samples were dried with a cloth and kept in the atmosphere for 24 h. Specimens used for release evaluation were not aged in water but sealed with a plastic film for 7 d and then tested.
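As a worked example of the proportioning in Table 2 (our sketch, not part of the original paper), the following Python snippet computes batch masses per kilogram of cement under the stated assumptions: c/a fixed at 1/3, CRT glass replacing natural aggregate by equal mass, and s/c and w/c taken from the table.

```python
def batch_masses(cement_kg: float, glass_wt_pct: float,
                 s_over_c_pct: float, w_over_c: float) -> dict:
    """Batch masses (kg) for the mixes in Table 2.

    Assumptions (ours, for illustration): the cement/aggregate ratio is fixed
    at 1/3, where 'aggregate' = natural aggregate + CRT glass cullet, and the
    glass percentage replaces natural aggregate by equal mass.
    """
    total_aggregate = 3.0 * cement_kg                 # c/a = 1/3
    glass = total_aggregate * glass_wt_pct / 100.0    # CRT glass replacement
    natural = total_aggregate - glass
    water = w_over_c * cement_kg
    superplasticizer = s_over_c_pct / 100.0 * cement_kg
    return {"cement": cement_kg, "natural aggregate": natural,
            "CRT glass": glass, "water": water,
            "superplasticizer": superplasticizer}

# Example: composition V201 (80/20 aggregate/glass, s/c = 1%, w/c = 0.31), per kg of cement.
print(batch_masses(1.0, 20, 1, 0.31))
# -> {'cement': 1.0, 'natural aggregate': 2.4, 'CRT glass': 0.6, 'water': 0.31, 'superplasticizer': 0.01}
```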
#### 2.2.5. Materials Characterization

Compression tests were performed after 3, 7, 28, 90, and 180 d in accordance with the ASTM C469 norm using a Shimadzu AG10 apparatus; data were averaged over 5 measurements. Expansion was measured with a calliper after 28 d of curing in water. Fracture surfaces were examined with an Assing EVO40 scanning electron microscope (SEM) coupled with energy-dispersive X-ray spectroscopy (EDXS). The ASTM C642 norm was used to test the water absorption of the samples after curing for the established numbers of days.

#### 2.2.6. Leaching Evaluation

After ageing for 7 d, the R and V20 samples were submitted to an elution release test in water; V20 was selected in order to test the composition containing the highest amount of mixed CRT glass and displaying a high level of water absorption. For this measurement, the specimens were held in an autoclave at 120°C and 2 kPa for 1 h in 4 L of water. After boiling, the samples were cooled down to room temperature in the water. The resulting solutions (water plus the salts eluted from the samples; the weight ratio between mortar sample and solution is around 0.17) were submitted to the ICP tests. For comparison, the materials aged in water for 28 d and used for the mechanical tests were submitted to a Pb release determination according to the US EPA 1311 TCLP test, which requires crushing the hydrated mortar (28 d), sieving the product through a 10 mm sieve, and extracting a solution with glacial acetic acid.
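For reference (our restatement, not from the paper), the water absorption values reported throughout Section 3 follow the usual ASTM C642-style definition based on the oven-dry mass $m_{dry}$ and the mass after immersion $m_{sat}$:

$$WA(\%) = \frac{m_{sat} - m_{dry}}{m_{dry}} \times 100.$$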
## 3. Results and Discussion

The starting materials have different particle size distributions: the OPC displays a monomodal distribution with a maximum concentration at 12 μm (see Figure 1); the milled and sieved CRT glass has a broad PSD with a very large peak at around 180 μm, with about 60 vol% of coarse particles larger than 75 μm and around 40 vol% of fines below 75 μm; the fine fraction of the natural aggregate displays a bimodal distribution with a low peak at 11 μm and a higher one at around 130 μm. It is known that the most reactive fraction of a component is its finest fraction, which also affects the w/c ratio as well as the alkali-silica reaction in the presence of Na2O and K2O. In this context, it is important that the fine fraction of the milled waste glass extends into the submicron range and overlaps that of the smallest cement particles. The natural aggregate also contains small particles, but their size is always greater than 2 μm and its alkali content is very low. It therefore appears reasonable that waste glass particles below 1 μm can interact with cement particles of the same dimension, developing pozzolanic activity, limiting ASR, and improving the long-term properties of the resulting materials. Conversely, small particles also raise the water demand, reducing the workability of the fresh mortar. However, the absolute amount of glass fines is limited, so their influence on the workability of the fresh mortars, as well as on the properties of the resulting hydrated materials, is expected to be limited.

Figure 1: PSD curves of the OPC (a), the fine fraction of the aggregate (b), and the milled and sieved mixed CRT glass used for sample preparation (c).
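To illustrate how such a fines fraction can be read off a cumulative PSD curve (our sketch; the curve values below are hypothetical, chosen only to be consistent with the roughly 40 vol% below 75 μm quoted for the glass):

```python
import numpy as np

# Hypothetical cumulative PSD of the milled CRT glass (particle size in um,
# cumulative vol%); illustrative values only, consistent with the ~40 vol%
# finer than 75 um reported in the text.
size_um = np.array([0.5, 10.0, 75.0, 180.0, 350.0, 500.0])
cum_vol = np.array([2.0, 15.0, 40.0, 70.0, 95.0, 100.0])

def vol_fraction_below(d_um: float) -> float:
    """Linearly interpolate the cumulative curve at diameter d_um (vol%)."""
    return float(np.interp(d_um, size_um, cum_vol))

print(f"fines < 75 um: {vol_fraction_below(75.0):.0f} vol%")   # -> 40 vol%
```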
This abnormally high compressive strength is reasonably due to the high strength, low porosity (low water absorption), and chemical nature of the aggregate used in the present research [21]. The compressive strength of composition V10 rises from 45 MPa after 3 d of curing to 60 after 7, 83 after 28, 100 after 90, and 105 after 180 d; its water absorption ranges from 7.1% (3 d) to 4.4% (180 d). It can be observed that compositions V5 and V20 show compressive strength curves below that of the reference material over the whole time range, whereas the curve of composition V10 intercepts it between 28 and 90 d of curing, its final strength being higher than that of R; the addition of CRT glass thus improves the long-term strength, which is little affected by the possible ASR.

Figure 2 Trend, as a function of curing time, of compressive strength (solid lines, left axis) and water absorption (dashed lines, right axis) of three sets of materials: (a) superplasticizer free; (b) with a superplasticizer content of 1%; (c) with a superplasticizer content of 2%. Error bars are omitted for clarity.

In fact, Chen et al. demonstrated that E-glass particles can be used as a partial fine aggregate replacement as well as a supplementary binding material, depending on particle size. Particles smaller than 75 μm could possess cementitious capability resulting from hydration or pozzolanic reaction, while the coarser cylindrical particles might act as crack arresters and inhibit internal crack propagation [14]. Such behaviour has been confirmed by other authors [13], who demonstrated that ground waste glass containing a high amount of alkali can perform well against ASR if the fraction of fine particles compensates for the high quantity of Na2O + K2O.

Figure 2(b) shows that the addition of 1% superplasticizer improves the compressive strength of all compositions at any ageing time. The strength of composition R1 increases from 80 MPa after 3 d of curing to 94 after 7, 114 after 28, 120 after 90, and 122 MPa after 180 d. Conversely, compositions containing 5, 10, and 20% of glass have lower strength than the reference for ageing times shorter than 28 d, but their values improve faster and exceed that of R1 after 90 d or more. It must also be pointed out that the curve of composition V101 intercepts that of R1 shortly after 28 d and displays the best long-term mechanical performance, whereas those of V51 and V201 cross that of R1 after longer ageing times. Data obtained from the water absorption tests (also displayed in Figure 2(b)) agree with the corresponding compressive strength data, being low in materials with high strength and high in those with reduced strength levels. In more detail, materials with composition R1 display values between 4.7% (after 3 d of curing) and 3.1% (after 180 d), whereas those with V101 lie between 5.3% (after 3 d) and 2.1% (after 180 d).

Owing to the irregular particle shape, blends containing glass had similar but not identical rheological behaviour to the reference compositions. Paste workability was determined by the slump test, and pastes were considered to have the right workability when the cake width fell into the range of 200 ± 20 mm.
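The crossover of the V10 and R strength curves noted above can be located approximately by interpolating the reported values between the two bracketing ages; the sketch below is a minimal illustration and assumes, as is customary for strength-gain curves, interpolation on a logarithmic time axis (an assumption, since the paper does not specify one):

```python
# Locate the age at which the reported V10 strength curve crosses that
# of R. The reported values (MPa) bracket the crossover between 28 d
# (R ahead by 4 MPa) and 90 d (V10 ahead by 3 MPa).
import math

t1, t2 = 28.0, 90.0   # bracketing ages (d)
diff1 = 83 - 87       # V10 - R at 28 d
diff2 = 100 - 97      # V10 - R at 90 d

# Linear interpolation of the strength difference in log(time).
frac = -diff1 / (diff2 - diff1)
t_cross = math.exp(math.log(t1) + frac * (math.log(t2) - math.log(t1)))
print(f"Estimated crossover at ~{t_cross:.0f} d of curing")
```

With the values above, the estimate falls near 55 d, consistent with the 28 to 90 d bracket stated in the text.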
In practice, however, pastes containing 10 or 20 wt% of glass gave slump cakes with sizes close to the lower limit of this range, since the corresponding slurries had relatively lower fluidity than the reference compositions, in agreement with the results obtained by other authors [13, 14], and the resulting hydrated materials showed a slightly higher residual porosity. The addition of superplasticizer, by improving workability, has a beneficial effect on the rheological behaviour and also on the final residual porosity of compositions containing 10 or 20 wt% of waste glass.

Figure 2(c) reports the curves obtained after testing materials with 2% of superplasticizer. Compositions R2, V52, and V102 have the same strength after 28 d and a similar strength trend, whereas V202 displays a lower 28 d strength but a faster increase; the 180 d strengths of all compositions are concentrated around 137 MPa. In this set of materials too, the water absorption data (also displayed in Figure 2(c)) are in line with the corresponding compressive strength levels. In detail, the values obtained after 3 d of curing range from 5.3% for composition V202 to 3.8% for V102, whereas those acquired after 180 d are all around 2.3%; intermediate curing times give materials with intermediate water absorption levels.

It must be pointed out that water absorption is not porosity but is related to the open porosity: a body contains not only open but also closed porosity, and the water absorption test probes the open porosity only, leaving the closed porosity, and hence the total porosity, undetermined. Nevertheless, the close relationship between the compressive strength and water absorption data means that their trends help to explain the materials' behaviour.

XRD analysis of the hydrated samples acquired after 28, 90, and 180 d did not reveal substantial differences between the reference compositions and the materials containing mixed CRT glass. For comparison, the present article reports (see Figure 3) the patterns acquired on samples R1 and V201, which differ widely in composition and in compressive strength after 90 or more days of curing. The patterns are similar, and the same phases can be identified in both materials, that is, portlandite, Ca(Mg0.67Fe0.33)(CO3)2, calcite, and quartz. Hydrated phases are not documented by this type of investigation, probably owing to their amorphous or cryptocrystalline nature [20, 22, 23]. However, three small peaks clearly appear in R1 and not in V201; in Figure 3 they are indicated by arrows and may be attributed to residual calcium silicates. One may infer that, in R1, this phase is still present after 90 d, whereas in V201 it has been almost completely consumed, thanks to the presence of the small glass particles.
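Phase identification in such patterns rests on converting peak positions into lattice spacings through Bragg's law, d = λ/(2 sin θ), with the Cu-Kα wavelength used here; a minimal sketch with hypothetical peak positions (not values read from Figure 3):

```python
# Convert XRD peak positions (2-theta, degrees) to d-spacings via
# Bragg's law, d = lambda / (2 sin(theta)), for Cu-K-alpha radiation.
import math

WAVELENGTH_A = 1.5406  # Cu-K-alpha wavelength, angstrom

def d_spacing(two_theta_deg: float) -> float:
    """Return the lattice spacing (angstrom) for a peak at 2-theta."""
    theta_rad = math.radians(two_theta_deg / 2)
    return WAVELENGTH_A / (2 * math.sin(theta_rad))

# Hypothetical peak positions within the scanned 15-55 degree window;
# e.g., portlandite's strongest reflection lies near 34.1 degrees.
for two_theta in (18.0, 29.4, 34.1, 47.1):
    print(f"2theta = {two_theta:5.1f} deg -> d = {d_spacing(two_theta):.3f} A")
```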
Figure 3 X-ray diffraction patterns between 15 and 55° of the compositions R1 and V201. The phases are identified by the following symbols: (•) portlandite; (○) Ca(Mg0.67Fe0.33)(CO3)2; (∆) calcite; (□) quartz; peaks due to residual calcium silicate compounds are highlighted by arrows.

The different mechanical behaviour of the glass-containing materials with respect to the reference glass-free compositions may, moreover, be explained by the contribution of the glass particles to the hydration phenomena during mortar curing; such information can be supplied by SEM investigation, starting from the assumption that the coarse glass particles contribute in a different mode than the smaller ones. SEM analysis was performed on all the samples, considering ageing time coupled with material composition, but for brevity only the most representative images are reported in the present paper. Figure 4(a) shows a SEM image of the fracture surface of a sample with composition V201 after 90 d of curing. The large glass particle (dark) is tightly embedded in the cementitious matrix; the interface appears well defined, with no voids. EDXS analysis of the particles showed that the glass mainly contains SiO2 and BaO, although Na2O, K2O, and Al2O3 are also present; the cementitious zone mainly contains CaO accompanied by smaller amounts of SiO2, MgO, Al2O3, and Fe2O3. One may also speculate on the development of a hypothetical hydrated phase containing SiO2 (10 wt%), CaO (38%), CO2 (40%), Al2O3 (3.5%), and BaO (5%), since EDXS analysis revealed a small lump of this composition well stuck to the large glass particle (highlighted by an arrow). This hypothesis is, however, not confirmed by other investigations and must, at this point, be considered speculative.

Figure 4 SEM micrographs showing the fracture surface of a sample with composition V201 after 90 d of curing: (a) a large glass particle (dark) is tightly embedded in the cementitious matrix, with a well-defined, void-free interface; (b) the presence of small glass particles leads to the development of long hydrated silicate crystals.

The effect of the small glass particles on the material microstructure can be observed in Figure 4(b), where well-developed hydrated silicate crystal clusters are visible [10, 11, 18, 24]. EDXS analysis revealed that the smooth particles are CRT glass with composition SiO2 (62.9 wt%), CaO (1.5%), PbO (6.7%), Al2O3 (1.3%), Na2O (5.4%), K2O (6.2%), and BaO (16%), whereas the surrounding matrix contains SiO2 (19.4 mol%), CaO (61.4%), MgO (1%), Al2O3 (3.2%), K2O (8.7%), and Fe2O3 (6.3%), and the elongated crystals contain SiO2 (37.2%), CaO (58.4%), MgO (1.7%), and Al2O3 (2.7%). Such well-developed hydrated silicate crystal clusters were observed only around the small glass particles and not in the reference samples, thus confirming that particles smaller than 75 μm may possess cementitious capability resulting from hydration or pozzolanic reaction, concurring to limit ASR.
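The EDXS figures above mix wt% and mol%; moving between the two scales only requires the oxide molar masses. A minimal sketch, applied to the matrix analysis quoted above:

```python
# Convert an oxide analysis from mol% to wt%:
# wt%_i = (mol%_i * M_i) / sum_j (mol%_j * M_j) * 100
MOLAR_MASS_G_MOL = {
    "SiO2": 60.08, "CaO": 56.08, "MgO": 40.30,
    "Al2O3": 101.96, "K2O": 94.20, "Fe2O3": 159.69,
}

def mol_to_wt(mol_pct: dict) -> dict:
    """Return the wt% composition equivalent to a mol% oxide analysis."""
    masses = {ox: n * MOLAR_MASS_G_MOL[ox] for ox, n in mol_pct.items()}
    total = sum(masses.values())
    return {ox: 100 * m / total for ox, m in masses.items()}

# Matrix analysis quoted above (mol%).
matrix_mol = {"SiO2": 19.4, "CaO": 61.4, "MgO": 1.0,
              "Al2O3": 3.2, "K2O": 8.7, "Fe2O3": 6.3}
for oxide, wt in mol_to_wt(matrix_mol).items():
    print(f"{oxide}: {wt:.1f} wt%")
```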
As a consequence of this pozzolanic activity, provided that the curing time is sufficient, the pores inevitably formed in the c/a matrix during mortar production may be filled with hydrated silicate crystals of high aspect ratio, which interlock the surrounding material, promoting densely packed structures, raising the compressive strength, and reducing the materials' permeability.

Materials aged for 180 d were also submitted to thermogravimetric analysis (TGA), which did not supply any further information about the different behaviour of the reference and glass-containing compositions; consequently, the TGA curves are not reported in the present communication. It can also be pointed out that after 28 d of exposure to a moist environment, the change in length of the samples was always below the normally accepted value of 0.05% suggested in ASTM C33.

The ICP analysis of the solutions obtained from the release tests in water of samples R and V20 is displayed in Table 3, which shows, in accordance with the work of other researchers, low elution of hazardous elements from mortar samples containing waste materials [25]. In Table 3, only the data from the samples containing the highest amount of mixed CRT glass and a high level of water absorption are reported, since all the other compositions released lower quantities of hazardous elements. Some elements, such as Ca, K, Na, and S, are, conversely, present in non-negligible amounts, but their presence is not considered a warning parameter by most standard release tests. The elution release test used in the present study is not a codified test, as there is at present no established leaching test for mortars or concretes containing hazardous elements, but it is indicative of the possible environmental compatibility of the materials produced. However, for safety, the Pb and Ba release from materials with composition V20 aged 28 d was also assessed by the TCLP test, which gave 2.40 and 1.85 mg L−1, respectively; these values are far below the established limits of 5 and 100 mg L−1, respectively, and in sufficiently good agreement with data reported by other authors [15–17]. It must finally be pointed out that TCLP tests are mandatory when hazardous waste materials need to be managed or disposed of to landfill, but equivalent tests on industrial products containing the same waste are presently missing. The authors of the present research therefore suggest the development of standard leaching tests (i.e., ASTM or others for mortars or concretes) to be used with monolithic materials containing hazardous waste components.

Table 3 Most abundant elements (μg kg−1 = parts per billion) revealed by ICP in the solutions obtained from the release tests in water of samples with compositions R and V20. Elements not reported were determined in quantities lower than 25 ppb.

| Sample | Mg | Al | Ca | Si | Na | K | Fe | Ba | Sr | Pb | S |
|---|---|---|---|---|---|---|---|---|---|---|---|
| R | 2311 | 3100 | 22005 | 786 | 16420 | 11003 | 417 | <25 | 114 | <25 | 19990 |
| V20 | 3409 | 3333 | 16871 | 3956 | 20097 | 16294 | 390 | 727 | 326 | 799 | 21388 |
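The comparison against regulatory thresholds quoted above can be encoded as a simple bookkeeping step; a minimal sketch using the TCLP values and limits given in the text:

```python
# Compare the measured TCLP releases (mg/L) for the V20 composition
# against the regulatory limits quoted in the text.
TCLP_LIMIT_MG_L = {"Pb": 5.0, "Ba": 100.0}
measured_mg_l = {"Pb": 2.40, "Ba": 1.85}

for element, value in measured_mg_l.items():
    limit = TCLP_LIMIT_MG_L[element]
    verdict = "PASS" if value < limit else "FAIL"
    print(f"{element}: {value} mg/L vs limit {limit} mg/L -> {verdict}"
          f" ({100 * value / limit:.0f}% of the limit)")
```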
## 4. Conclusions

In the present research, the production of stable mortars was carried out using a commercial OPC, ground waste CRT glass, natural aggregate, and water; the addition of superplasticizer was also investigated. Mortars were produced using a fixed c/a ratio (1/3), whereas the milled CRT glass and the superplasticizer were added in different proportions.

The following conclusions were derived from the study.

(1) All hydrated materials displayed high compressive strength after 3, 7, 28, 90, and 180 d of curing in a moist environment.

(2) Glass-containing samples showed a more rapid increase of strength with respect to the reference compositions under long-term ageing.

(3) Materials with an s/c ratio of 1% showed the best overall behaviour.

(4) Additions of CRT glass powder above 10 wt% did not yield materials with the best mechanical performance.

(5) The results obtained are reasonably due to the favourable influence of the small glass particles, which interact with the hydraulic phases, promoting pozzolanic reaction, limiting ASR, and increasing the amount of hydrated silicates produced during long-term ageing.

(6) The ICP analysis of the solutions obtained from the release tests in water confirms the low elution of hazardous elements from the materials produced and therefore their possible environmental compatibility.

---
*Source: 102519-2013-06-03.xml*
102519-2013-06-03_102519-2013-06-03.md
45,364
Recycling Glass Cullet from Waste CRTs for the Production of High Strength Mortars
Stefano Maschio; Gabriele Tonello; Erika Furlani
Journal of Waste Management (2013)
Other
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2013/102519
102519-2013-06-03.xml
--- ## Abstract The present paper reports on the results of some experiments dealing with the recycling of mixed cathode ray tube (CRT) glass waste in the production of high-strength mortars. Waste CRT glass cullet was previously milled, and sieved, and the only fine fraction was added to the fresh mortar in order to replace part of the natural aggregate. The addition of superplasticizer was also investigated. All hydrated materials displayed high compressive strength after curing. Samples containing CRT mixed glass showed a more rapid increase of strength with respect to the reference compositions, and materials with a superplasticizer content of 1% showed the best overall performance due to the favourable influence of the small glass particles which increase the amount of silicate hydrated produced. The induced coupled plasma (ICP) analysis made on the solutions, obtained from the leaching tests, confirmed the low elution of hazardous elements from the monolithic materials produced and consequently their possible environmental compatibility. --- ## Body ## 1. Introduction CRT glass waste includes that from TVs, PC monitors and other monitors used in special applications, and waste from the original assembly process. Waste glass from PC and TV monitors will begin to decline as a direct consequence of the emerging flat screen display technology; nevertheless, it seems reasonable to assume that an amount of CRTs from all sources is likely to continue to enter into the waste stream in the coming years. Studies have shown that when CRTs are disposed of in landfill sites, leaching processes from the crushed glass cullet may contaminate ground water. This is a major driving force for CRT recycling. Moreover, it must be pointed out that CRT waste does not contain only glass but also other materials which concur with the CRTs assembly, such as ferrous and nonferrous metals and plastics. The Waste Electrical and Electronic Equipment (WEEE) directive sets strict regulations for recycling or recovery when materials derive from equipment containing CRTs. Such norms must obviously be coupled with those reported by the European Waste Catalogue which classifies CRTs as hazardous waste and makes landfill disposal of CRT materials costly. The great amount of CRT waste produced all over the world implies that its recycling is presently necessary not only due to the rising cost of landfill disposal, which is reflected on the cost of new CRTs produced, but also as a consequence of the “zero-waste” objective which must be the final goal of all future human activities. Mixed CRT glass (funnel, neck and screen glass) contains high amounts of PbO, BaO, and SrO; it follows that cullet with this composition is unsuitable for recycling in applications where metal oxides could leach into food products or ground water. A possible partial recovery of waste CRT glass from the assembly process could follow the path of the manufacture of new CRTs, even if the high cost of separating, sorting, and processing the glass to meet the standards required by glass manufacturers strongly limits this option [1]. Other methods for waste CRT glass recycling are the copper-lead smelter where it acts as a substitute for sand in the smelting process [2] or as an additive raw material in the ceramic industry for the production of tiles or other monolithic ceramic materials [3, 4]. 
The above options represent, however, a small number of opportunities for waste CRT glass recovery, and additional proposals are necessary in order to maximise its reuse in the production of other types of materials.In order to propose a new option for waste CRT glass recycling, as independently as possible from glass composition, in the present research the production of stable mortars was carried out using cement, ground-waste-mixed CRT glass (funnel, neck, and screen glass), natural aggregate, and water; the addition of superplasticizer was also investigated. Mortars are expected to take advantage of ground glass; it is known, in fact, that the addition of silica fume as a component material in the production of mortars or concretes can lead to the preparation of products, namely, reaction powders mortars or concretes (RPM, RPC), with low water absorption, high mechanical properties, and modified shrinkage [5, 6]. In a parallel approach, the addition of generic waste glass to mortars or concretes has been widely investigated [7, 8]. It has been demonstrated, for example, that the use of waste glass from crushed containers as concrete aggregate may develop pozzolanic activity which is affected by glass finenesses and chemical composition [9, 10]. Such properties may also affect workability [11], strength, and durability. More in particular a high content of alkalis can cause alkali silica reaction (ASR) and expansion [12, 13]. Conversely, the use of waste E-glass from electronic scraps (low-alkali content) improves compressive strength and sulphate resistance and reduces chloride-ion penetration with no adverse alkali silica reaction-expansion effects [14]. The addition of the specific CRT glass waste has been, on the other hand, recently studied [15–17]; however additional detailed studies are worthy of interest.In the present research, mortars were produced using a fixed cement/aggregate (c/a) ratio (1/3), as it has been often proposed in the literature [18, 19], whereas milled CRT mixed glass was added in different proportions as well as were different contents of superplasticizer. The goal of the present research is to demonstrate that by selecting a proper amount of milled CRT waste glass coupled with an optimal quantity of superplasticizer, it is possible to produce mortars with high compressive strength, low water absorption, and therefore long durability coupled with low elution release of hazardous elements. ## 2. Experimentals ### 2.1. Materials The starting materials used were: a type I ordinary portland cement (OPC) and a natural aggregate with maximum particle dimension of 4.76 mm, Blaine fineness of 3480 cm2 g−1 (EN 933-2), density of 2.46 g cm−3, and water absorption of 0.37% (the measurement was carried out following the ASTM C127 and C128 norms) which were mixed with different proportions of mixed CRT waste glass. The as-received CRT glass cullet was first transformed into a powder by milling and then sieved by a 500 μm sieve. Only the part of powdered glass with size smaller than 500 μm was used for the present research; particles of larger dimensions were remilled. Glenium 51 (BASF) was also used as a superplasticizer in the preparation of some specimens. The required amount of water was obviously added to each starting blend. 
The chemical analysis of cement, natural aggregate, and CRT glass, determined by a Spectro Mass 2000 ICP mass spectrometer, is reported, in terms of oxides, in Table 1 which also displays loss on ignition (LOI), obtained after thermal treatment at 1000°C for 2 h, density, water absorption, and Blaine fineness.Table 1 Composition (oxide wt%), organic carbon, specific gravity, LOI and fineness modulus of milled CRT mixed glass, cement and aggregate; “undetermined” indicates the cumulative quantity of all oxides determined in quantity lower than 0.1 wt%. Component CRT glass Cement Aggregate SiO2 59.74 21.40 1.98 Al2O3 2.67 1.48 1.72 CaO 2.06 61.02 46.73 MgO 1.15 1.16 19.83 Na2O 6.81 0.26 1.43 K2O 6.15 0.53 0.74 Fe2O3 0.11 0.35 2.1 TiO2 0.17 <0.1 <0.1 CuO 0.21 <0.1 <0.1 BaO 6.13 <0.1 <0.1 SrO 4.77 <0.1 <0.1 Sb2O3 0.28 <0.1 <0.1 ZrO2 0.39 <0.1 <0.1 PbO 8.33 <0.1 <0.1 SO 4 = 0.49 2.92 0.13 C (organic) <0.1 1.20 0.57 Undetermined 0.93 1.38 1.92 Density (g cm−3) 2.95 3.03 2.46 Water abs. (%) 0.20 — 0.37 LOI (%) 0.82 13.14 23.55 Fineness modulus 0.72 — 3.48It is observed that CRT glass contains, together with an expected major quantity of SiO2, also great fractions of PbO, Na2O, K2O, BaO, SrO, and moderate quantities of Al2O3, CaO, and MgO; other compounds as well as organic carbon are present in limited amounts so that also LOI is limited as is the water absorption. Aggregate mainly contains calcium and magnesium oxide accompanied by small fractions of silica, iron oxide, and alumina; organic carbon, density, and LOI are in line with the literature data [5, 20]. The OPC conforms to European Standards EN-197/1. Data reported in Table 1 are confirmed by the XRD analysis of the starting materials (not reported in the present paper) which revealed the presence of alite (84%) and belite (16%) in cement, whereas dolomite (65%), calcium carbonate (27%), and free quartz (8%) were identified in the aggregate; numbers must be read with caution since XRD analysis does not provide accurate quantitative analysis of the tested materials, but they only supply an approximate magnitude order of their crystallographic composition. It can be also observed that CRT glass has a density of 2.95, OPC 3.03, and aggregate 2.46 g cm−3. It is worth to point out the low Blaine fineness of the milled and sieved CRT glass (0.72), thus confirming the presence of a large fraction of particles with size below 75 μm and therefore in agreement with the characteristics suggested by other authors [14] when the production of high-performance materials containing milled waste glass is required. ### 2.2. Methods #### 2.2.1. X-Ray Diffraction Investigation (XRD) The crystalline phases of starting components as well as those of the hydrated materials were investigated by X-ray diffraction (XRD). XRD patterns were recorded on a Philips X’Pert diffractometer operating at 40 kV and 40 mA using Ni-filtered Cu-Kα radiation. Spectra were collected using a step size of 0.02° and a counting time of 40 s per angular abscissa in the range of 15–55°. Philips X’Pert High Score software was used for phase identification and semiquantitative analysis (RIR method). #### 2.2.2. Particle Size Distribution (PSD) Measurements The particle size distribution (PSD) of the fine fraction of the aggregate (<500μm), cement, and powdered mixed CRT glass were determined by a Horiba LA950 laser scattering PSD analyser; analyses was made in water after a 3 min sonication; PSD curves are represented with logarithmic abscissa. 
In order to access the PSD of the aggregate’s fine fraction, the total as received product was sieved (500 μm) and fines were separated from coarse particles; the fines represent 25% of the total aggregate. #### 2.2.3. Materials Composition The ratio between cement and aggregate quantity (natural aggregate plus glass cullet) was set at 1/3 as this is a frequently used ratio. Some reference glass free compositions, hereafter called R, containing cement, aggregate, superplasticizer, and an optimized amount of water were also prepared as blank samples in order to compare the mechanical behaviour, after hydration, of the materials produced, bearing in mind that the focus of the present research regards the production of materials obtained by replacing part of the natural aggregate with an equivalent mass of 5, 10, and 20 wt% of milled and sieved glass powder from mixed CRT glass. Samples with symbolic names, corresponding aggregate composition, s/c, and water/cement (w/c) ratios are reported in Table2.Table 2 Specimens symbolic names, corresponding aggregate composition, superplasticizer/cement (s/c) and water cement (w/c) ratios. Sample Natural aggregate(wt%) CRT glass(wt%) s/c(%) w/c R 100 0 0 0.44 R1 100 0 1 0.31 R2 100 0 2 0.27 V5 95 5 0 0.44 V51 95 5 1 0.31 V52 95 5 2 0.27 V10 90 10 0 0.44 V101 90 10 1 0.31 V102 90 10 2 0.27 V20 80 20 0 0.44 V201 80 20 1 0.31 V202 80 20 2 0.27 #### 2.2.4. Materials Preparation For the mixture preparation and w/c optimization, a 5 L Hobart planetary conforming to ASTM C305 standards was used. The optimized amount of water was determined by the ASTM C1437 slump test performed on the reference blend R. The paste is said to have the right workability if the cake width is 200 (±20) mm. The identified optimal w/c value of the reference blend (R) was 0.44; this same value was applied to all the superplasticizer free compositions. Blends containing superplasticizer required reduced amounts of water as displayed in Table2. Pastes were then poured under vibration into moulds with dimensions of 100 × 100 × 100 mm, sealed with a plastic film to ensure mass curing and aged 24 h for a first hydration. Samples were then demoulded, sealed again with a plastic film, and cured again in the air for 24 h and then in water at room temperature for 3, 7, 28, 90, and 180 d. The ageing water was maintained at the constant temperature of 25°C (±3°C) and replaced with fresh water every 3 d of curing. After curing, before their characterisation, samples were dried with a cloth and aged in the atmosphere for 24 h. Specimens used for release evaluation were not aged in water but sealed with a plastic film for 7 d and then tested. #### 2.2.5. Materials Characterization Compression tests were performed after 3, 7, 28, 90, and 180 d in accordance with the ASTM C469 norm using Shimadzu AG10 apparatus; data were averaged over 5 measurements. Expansion was measured by a calliper after 28 d of curing in water. Fracture surfaces were examined by an Assing EVO40 Scanning Electron Microscope (SEM) coupled with the Energy Dispersive X-ray Spectroscopy (EDXS). The ASTM C642 norm was used to test the water absorption of the samples after curing for the established number of days. #### 2.2.6. Leaching Evaluation After ageing for 7 d, the R and V20 samples were submitted to an elution release test in water. V20 was selected in order to test a composition containing the highest amount of mixed CRT glass and displaying a high level of water absorption. 
For this measurement, the above specimens were aged in an autoclave, at 120°C, and 2 kPa for 1 h using 4 L of water. After boiling, samples were cooled down to room temperature (in water). The solutions (water plus several salts eluted from the samples weight ratio between mortar sample and solution is around 0.17) were submitted to the ICP tests. For comparison, materials aged in water for 28 d and used for testing the mechanical strength were submitted to the Pb release which was determined according to the US EPA 1311 TCLP test which establishes to crush the hydrated mortar (28 d), sieve the product through a 10 mm sieve, and extract a solution using glacial acetic acid. ## 2.1. Materials The starting materials used were: a type I ordinary portland cement (OPC) and a natural aggregate with maximum particle dimension of 4.76 mm, Blaine fineness of 3480 cm2 g−1 (EN 933-2), density of 2.46 g cm−3, and water absorption of 0.37% (the measurement was carried out following the ASTM C127 and C128 norms) which were mixed with different proportions of mixed CRT waste glass. The as-received CRT glass cullet was first transformed into a powder by milling and then sieved by a 500 μm sieve. Only the part of powdered glass with size smaller than 500 μm was used for the present research; particles of larger dimensions were remilled. Glenium 51 (BASF) was also used as a superplasticizer in the preparation of some specimens. The required amount of water was obviously added to each starting blend. The chemical analysis of cement, natural aggregate, and CRT glass, determined by a Spectro Mass 2000 ICP mass spectrometer, is reported, in terms of oxides, in Table 1 which also displays loss on ignition (LOI), obtained after thermal treatment at 1000°C for 2 h, density, water absorption, and Blaine fineness.Table 1 Composition (oxide wt%), organic carbon, specific gravity, LOI and fineness modulus of milled CRT mixed glass, cement and aggregate; “undetermined” indicates the cumulative quantity of all oxides determined in quantity lower than 0.1 wt%. Component CRT glass Cement Aggregate SiO2 59.74 21.40 1.98 Al2O3 2.67 1.48 1.72 CaO 2.06 61.02 46.73 MgO 1.15 1.16 19.83 Na2O 6.81 0.26 1.43 K2O 6.15 0.53 0.74 Fe2O3 0.11 0.35 2.1 TiO2 0.17 <0.1 <0.1 CuO 0.21 <0.1 <0.1 BaO 6.13 <0.1 <0.1 SrO 4.77 <0.1 <0.1 Sb2O3 0.28 <0.1 <0.1 ZrO2 0.39 <0.1 <0.1 PbO 8.33 <0.1 <0.1 SO 4 = 0.49 2.92 0.13 C (organic) <0.1 1.20 0.57 Undetermined 0.93 1.38 1.92 Density (g cm−3) 2.95 3.03 2.46 Water abs. (%) 0.20 — 0.37 LOI (%) 0.82 13.14 23.55 Fineness modulus 0.72 — 3.48It is observed that CRT glass contains, together with an expected major quantity of SiO2, also great fractions of PbO, Na2O, K2O, BaO, SrO, and moderate quantities of Al2O3, CaO, and MgO; other compounds as well as organic carbon are present in limited amounts so that also LOI is limited as is the water absorption. Aggregate mainly contains calcium and magnesium oxide accompanied by small fractions of silica, iron oxide, and alumina; organic carbon, density, and LOI are in line with the literature data [5, 20]. The OPC conforms to European Standards EN-197/1. 
Data reported in Table 1 are confirmed by the XRD analysis of the starting materials (not reported in the present paper) which revealed the presence of alite (84%) and belite (16%) in cement, whereas dolomite (65%), calcium carbonate (27%), and free quartz (8%) were identified in the aggregate; numbers must be read with caution since XRD analysis does not provide accurate quantitative analysis of the tested materials, but they only supply an approximate magnitude order of their crystallographic composition. It can be also observed that CRT glass has a density of 2.95, OPC 3.03, and aggregate 2.46 g cm−3. It is worth to point out the low Blaine fineness of the milled and sieved CRT glass (0.72), thus confirming the presence of a large fraction of particles with size below 75 μm and therefore in agreement with the characteristics suggested by other authors [14] when the production of high-performance materials containing milled waste glass is required. ## 2.2. Methods ### 2.2.1. X-Ray Diffraction Investigation (XRD) The crystalline phases of starting components as well as those of the hydrated materials were investigated by X-ray diffraction (XRD). XRD patterns were recorded on a Philips X’Pert diffractometer operating at 40 kV and 40 mA using Ni-filtered Cu-Kα radiation. Spectra were collected using a step size of 0.02° and a counting time of 40 s per angular abscissa in the range of 15–55°. Philips X’Pert High Score software was used for phase identification and semiquantitative analysis (RIR method). ### 2.2.2. Particle Size Distribution (PSD) Measurements The particle size distribution (PSD) of the fine fraction of the aggregate (<500μm), cement, and powdered mixed CRT glass were determined by a Horiba LA950 laser scattering PSD analyser; analyses was made in water after a 3 min sonication; PSD curves are represented with logarithmic abscissa. In order to access the PSD of the aggregate’s fine fraction, the total as received product was sieved (500 μm) and fines were separated from coarse particles; the fines represent 25% of the total aggregate. ### 2.2.3. Materials Composition The ratio between cement and aggregate quantity (natural aggregate plus glass cullet) was set at 1/3 as this is a frequently used ratio. Some reference glass free compositions, hereafter called R, containing cement, aggregate, superplasticizer, and an optimized amount of water were also prepared as blank samples in order to compare the mechanical behaviour, after hydration, of the materials produced, bearing in mind that the focus of the present research regards the production of materials obtained by replacing part of the natural aggregate with an equivalent mass of 5, 10, and 20 wt% of milled and sieved glass powder from mixed CRT glass. Samples with symbolic names, corresponding aggregate composition, s/c, and water/cement (w/c) ratios are reported in Table2.Table 2 Specimens symbolic names, corresponding aggregate composition, superplasticizer/cement (s/c) and water cement (w/c) ratios. Sample Natural aggregate(wt%) CRT glass(wt%) s/c(%) w/c R 100 0 0 0.44 R1 100 0 1 0.31 R2 100 0 2 0.27 V5 95 5 0 0.44 V51 95 5 1 0.31 V52 95 5 2 0.27 V10 90 10 0 0.44 V101 90 10 1 0.31 V102 90 10 2 0.27 V20 80 20 0 0.44 V201 80 20 1 0.31 V202 80 20 2 0.27 ### 2.2.4. Materials Preparation For the mixture preparation and w/c optimization, a 5 L Hobart planetary conforming to ASTM C305 standards was used. The optimized amount of water was determined by the ASTM C1437 slump test performed on the reference blend R. 
The paste is said to have the right workability if the cake width is 200 (±20) mm. The identified optimal w/c value of the reference blend (R) was 0.44; this same value was applied to all the superplasticizer free compositions. Blends containing superplasticizer required reduced amounts of water as displayed in Table2. Pastes were then poured under vibration into moulds with dimensions of 100 × 100 × 100 mm, sealed with a plastic film to ensure mass curing and aged 24 h for a first hydration. Samples were then demoulded, sealed again with a plastic film, and cured again in the air for 24 h and then in water at room temperature for 3, 7, 28, 90, and 180 d. The ageing water was maintained at the constant temperature of 25°C (±3°C) and replaced with fresh water every 3 d of curing. After curing, before their characterisation, samples were dried with a cloth and aged in the atmosphere for 24 h. Specimens used for release evaluation were not aged in water but sealed with a plastic film for 7 d and then tested. ### 2.2.5. Materials Characterization Compression tests were performed after 3, 7, 28, 90, and 180 d in accordance with the ASTM C469 norm using Shimadzu AG10 apparatus; data were averaged over 5 measurements. Expansion was measured by a calliper after 28 d of curing in water. Fracture surfaces were examined by an Assing EVO40 Scanning Electron Microscope (SEM) coupled with the Energy Dispersive X-ray Spectroscopy (EDXS). The ASTM C642 norm was used to test the water absorption of the samples after curing for the established number of days. ### 2.2.6. Leaching Evaluation After ageing for 7 d, the R and V20 samples were submitted to an elution release test in water. V20 was selected in order to test a composition containing the highest amount of mixed CRT glass and displaying a high level of water absorption. For this measurement, the above specimens were aged in an autoclave, at 120°C, and 2 kPa for 1 h using 4 L of water. After boiling, samples were cooled down to room temperature (in water). The solutions (water plus several salts eluted from the samples weight ratio between mortar sample and solution is around 0.17) were submitted to the ICP tests. For comparison, materials aged in water for 28 d and used for testing the mechanical strength were submitted to the Pb release which was determined according to the US EPA 1311 TCLP test which establishes to crush the hydrated mortar (28 d), sieve the product through a 10 mm sieve, and extract a solution using glacial acetic acid. ## 2.2.1. X-Ray Diffraction Investigation (XRD) The crystalline phases of starting components as well as those of the hydrated materials were investigated by X-ray diffraction (XRD). XRD patterns were recorded on a Philips X’Pert diffractometer operating at 40 kV and 40 mA using Ni-filtered Cu-Kα radiation. Spectra were collected using a step size of 0.02° and a counting time of 40 s per angular abscissa in the range of 15–55°. Philips X’Pert High Score software was used for phase identification and semiquantitative analysis (RIR method). ## 2.2.2. Particle Size Distribution (PSD) Measurements The particle size distribution (PSD) of the fine fraction of the aggregate (<500μm), cement, and powdered mixed CRT glass were determined by a Horiba LA950 laser scattering PSD analyser; analyses was made in water after a 3 min sonication; PSD curves are represented with logarithmic abscissa. 
In order to access the PSD of the aggregate’s fine fraction, the total as received product was sieved (500 μm) and fines were separated from coarse particles; the fines represent 25% of the total aggregate. ## 2.2.3. Materials Composition The ratio between cement and aggregate quantity (natural aggregate plus glass cullet) was set at 1/3 as this is a frequently used ratio. Some reference glass free compositions, hereafter called R, containing cement, aggregate, superplasticizer, and an optimized amount of water were also prepared as blank samples in order to compare the mechanical behaviour, after hydration, of the materials produced, bearing in mind that the focus of the present research regards the production of materials obtained by replacing part of the natural aggregate with an equivalent mass of 5, 10, and 20 wt% of milled and sieved glass powder from mixed CRT glass. Samples with symbolic names, corresponding aggregate composition, s/c, and water/cement (w/c) ratios are reported in Table2.Table 2 Specimens symbolic names, corresponding aggregate composition, superplasticizer/cement (s/c) and water cement (w/c) ratios. Sample Natural aggregate(wt%) CRT glass(wt%) s/c(%) w/c R 100 0 0 0.44 R1 100 0 1 0.31 R2 100 0 2 0.27 V5 95 5 0 0.44 V51 95 5 1 0.31 V52 95 5 2 0.27 V10 90 10 0 0.44 V101 90 10 1 0.31 V102 90 10 2 0.27 V20 80 20 0 0.44 V201 80 20 1 0.31 V202 80 20 2 0.27 ## 2.2.4. Materials Preparation For the mixture preparation and w/c optimization, a 5 L Hobart planetary conforming to ASTM C305 standards was used. The optimized amount of water was determined by the ASTM C1437 slump test performed on the reference blend R. The paste is said to have the right workability if the cake width is 200 (±20) mm. The identified optimal w/c value of the reference blend (R) was 0.44; this same value was applied to all the superplasticizer free compositions. Blends containing superplasticizer required reduced amounts of water as displayed in Table2. Pastes were then poured under vibration into moulds with dimensions of 100 × 100 × 100 mm, sealed with a plastic film to ensure mass curing and aged 24 h for a first hydration. Samples were then demoulded, sealed again with a plastic film, and cured again in the air for 24 h and then in water at room temperature for 3, 7, 28, 90, and 180 d. The ageing water was maintained at the constant temperature of 25°C (±3°C) and replaced with fresh water every 3 d of curing. After curing, before their characterisation, samples were dried with a cloth and aged in the atmosphere for 24 h. Specimens used for release evaluation were not aged in water but sealed with a plastic film for 7 d and then tested. ## 2.2.5. Materials Characterization Compression tests were performed after 3, 7, 28, 90, and 180 d in accordance with the ASTM C469 norm using Shimadzu AG10 apparatus; data were averaged over 5 measurements. Expansion was measured by a calliper after 28 d of curing in water. Fracture surfaces were examined by an Assing EVO40 Scanning Electron Microscope (SEM) coupled with the Energy Dispersive X-ray Spectroscopy (EDXS). The ASTM C642 norm was used to test the water absorption of the samples after curing for the established number of days. ## 2.2.6. Leaching Evaluation After ageing for 7 d, the R and V20 samples were submitted to an elution release test in water. V20 was selected in order to test a composition containing the highest amount of mixed CRT glass and displaying a high level of water absorption. 
For this measurement, the above specimens were aged in an autoclave, at 120°C, and 2 kPa for 1 h using 4 L of water. After boiling, samples were cooled down to room temperature (in water). The solutions (water plus several salts eluted from the samples weight ratio between mortar sample and solution is around 0.17) were submitted to the ICP tests. For comparison, materials aged in water for 28 d and used for testing the mechanical strength were submitted to the Pb release which was determined according to the US EPA 1311 TCLP test which establishes to crush the hydrated mortar (28 d), sieve the product through a 10 mm sieve, and extract a solution using glacial acetic acid. ## 3. Results and Discussion The starting materials have different particle size distribution: the OPC displays a monomodal distribution of particles with maximum concentration at 12μm (see Figure 1); milled and sieved CRT glass has a broad PSD with a very large peak at around 180 μm, therefore showing the presence of about 60 vol% of coarse particles with size greater than 75 μm and around 40 vol% of fines with size below 75 μm; the fine fraction of the natural aggregate displays a bimodal distribution of particles with two peaks: one, low at 11 μm, the other, higher, at around 130 μm. It is known that the most reactive fraction of a component is its finest fraction, which also affects the w/c ratio as well as the alkali silica reaction in the presence of Na2O and K2O. In this contest, we have considered it important to show that the fine fraction of the milled waste glass falls in the range of submicronic sized particles and is overlapped to that of the smallest cement particles. Also natural aggregate contains small particles, the amount of alkalis is very low, and their size is always greater than 2 μm. It therefore appears reasonable that waste glass particles with size below 1 μm could easily interact with cement particles of the same dimension developing pozzolanic activity, limiting ASR, and improving long term properties of the resulting materials. Conversely, small sized particles also influence w/c ratio reducing, in this way, mortars workability. However, the absolute amount of glass fines is limited so that their influence on the fresh mortars workability as well as on the properties of the resulting hydrated materials is expected to be limited.Figure 1 PSD curves of the OPC (a), the fine fraction of aggregate (b), and that of the milled and sieved mixed CRT glass used for samples preparation (c).Figures2(a), 2(b) and 2(c) show the trend, displayed as a function of curing time, of compressive strength (solid lines, left axis) and water absorption (dashed lines, right axis) of three sets of materials, respectively: (a) superplasticizer free, (b) with an s content of 1%, and (c) with an s amount of 2%. Error bars are, for clarity, not displayed due to their overlapping and the possible confusion on reading; however data scattering is, for each composition, maintained within the interval ±5% of the reported average value. Figure 2(a) shows that the average compressive strength of the reference R reaches 50 MPa after 3 d of curing, 66 after 7, 87 after 28, and 97 after 90 and rises to 100 after 180 d; the corresponding water absorption is 6.3% after 3 d of curing, 5.7 after 7, and 5.4 after 28 and lowers to 5.1 and 4.9 after 90 and 180 d, respectively. 
Such an abnormally high compressive strength is reasonably due to the high strength, low porosity (low water absorption), and chemical nature of the aggregate used in the present research [21]. Compressive strength of composition V10 raises from 45 MPa after 3 d of curing to 60 after 7, 83 after 28, 100 after 90, and 105 after 180 d; water absorption ranges from 7.1 (3) to 4.4% (180). It can be observed that compositions V5 and V20 show compressive strength curves below that of the reference material over the whole range of time, whereas that of composition V10 intercepts it between 28 and 90 d of curing, the final strength being higher than that of R; the addition of CRT glass improves its long-term strength which is little affected by the possible ASR.Trend, displayed as a function of curing time, of compressive strength (solid lines, left axis) and water absorption (dashed lines, right axis) of three sets of materials: (a) superplasticizer free; (b) with a superplasticizer content of 1%; (c) with superplasticizer content of 2%. Error bars are, for clarity, not displayed due to their overlapping and the possible confusion on reading. (a) (b) (c)In fact, Chen et al. demonstrated that E-glass particle can be used as partial fine aggregate replacement material as well as supplementary binding material depending on its particle size. Particles with size smaller than 75μm could possess cementitious capability resulting from hydration or pozzolanic reaction; the coarser cylindrical might act as a potential crack arrester and inhibits the internal crack propagation [14]. Such behaviour has been confirmed by other authors [13] who demonstrated that ground waste glass containing high amount of alkali could display good performances against ASR if the fraction of fine particles can compensate the high quantity of Na2O+K2O.Figure2(b) shows that the addition of 1% superplasticizer improves compressive strength of all compositions at any ageing time. The strength of composition R1 increases from 80 MPa after 3 d of curing to 94 after 7, 114 after 28, 120 after 90, and 122 MPa after 180 d. Conversely, compositions containing 5, 10, and 20% of glass have lower strength than the reference for ageing times lower than 28 d, but their values improve faster being higher than R1 after 90 d or more. It must be also pointed out that the curve of composition V101 intercepts that of R1 shortly after 28 d and displays the best long time mechanical performance whereas those of V51 and V201 cross R after longer ageing times. Data obtained from water absorption tests (also displayed in Figure 2(b)) are in agreement with the corresponding compressive strength data, being low in materials with high strength and high in those having reduced strength levels. More in detail, it can be observed that materials with composition R1 display values between 4.7% (after 3 d of curing) and 3.1% (after 180 d) whereas these with V101 between 5.3 (after 3 d) and 2.1% (after 180 d).Due to the irregular particle shape, blends containing glass had similar but not the same rheological behaviour as the reference compositions. Pastes workability was determined by the slump test, and pastes were defined of right workability when cake width fell into the range of200±20 mm. 
However, pastes containing 10 or 20 wt% of glass gave slump cakes with size close to the inferior limit, being the corresponding slurries of relatively lower fluidity with respect to the reference compositions, in agreement with the results obtained by other authors [13, 14], and the resulting hydrated materials showed a slightly higher residual porosity. The addition of superplasticizer, by improving workability, has a beneficial effect on rheological behaviour and also on the final residual porosity of compositions containing 10 or 20 wt% of waste glass.Figure2(c) reports the curves obtained after testing materials with 2% of superplasticizer. Compositions R2, V52, and V102 have the same strength after 28 d and similar strength trend, whereas V202 displays a lower 28 d strength, but a faster increase; the 180 d strength of all compositions are concentrated around 137 MPa. Also in this set of materials, data obtained from water absorption tests (also displayed in Figure 2(c)) are in line with the corresponding compressive strength levels. In detail, data obtained after 3 d of curing range from 5.3 for composition V202 to 3.8 for V102, whereas those acquired after 180 d are all around 2.3%; intermediate curing times give rise to materials with intermediate water absorption levels.It must be pointed out that water absorption is not porosity but is related to the open porosity. It means that a body contains not only open porosity but also closed porosity. The water absorption test provides access to the open but not to the closed porosity which remains undetermined together with materials total porosity. However, we would like to point out the strict relationship between compressive strength and water absorption data so that their trend helps to explain material’s behaviour.XRD analysis of the hydrated samples acquired after 28, 90, and 180 d did not reveal substantial differences between reference compositions and mixed CRT glass containing materials. For comparison, the present article reports (see Figure3) the patterns acquired on samples R1 and V201, which have a wide difference of composition and compressive strength after 90 or more days of curing. Patterns are similar, and the same phases can be identified in both materials, that is, portlandite, Ca(Mg0.67Fe0.33)(CO3)2, calcite, and quartz. The presence of hydrated phases is not documented by this type of investigation probably due to their amorphous or cryptocrystalline nature [20, 22, 23]. However, it is possible to emphasize the presence of three small peaks which clearly appear in R and not in V201; in Figure 3 they are indicated by an arrow and may be attributed to the presence of residual calcium silicates. One could be led to infer that, in R, this phase is still present after 90 d, whereas in V201 it is almost completely consumed, thanks to the presence of the glass small particles.Figure 3 X-ray diffraction patterns between 15 and 55° of the compositions R1 and V201. 
The phases are identified by the following symbols: (•) portlandite; (○) Ca(Mg0.67Fe0.33)(CO3)2; (∆) calcite; (□) quartz; peaks due to residual calcium silicates compounds are highlighted by arrows.The different mechanical behaviour of glass containing materials with respect to the reference glass free compositions may, moreover, be explained by the contribution to the hydration phenomena of the glass particles during mortar curing; such information could be supplied by SEM investigation, but it is necessary to start from the assumption that the coarse glass particles should contribute in a different mode with respect to the smaller ones. The SEM analysis has been made over all the samples, considering ageing time coupled with materials composition, but for brevity, only the most representative images have been reported in the present paper. Figure4(a) shows a SEM image of the fracture surface of a sample with composition V201 after 90 d of curing. It is possible to observe that the large glass particle (dark) is strictly entrapped by the cementitious matrix; the interface appears well defined with no voids. The EDXS analysis of the particles showed that the glass mainly contains SiO2 and BaO, but Na2O, K2O, and Al2O3 are also present; the cementitious zone mainly contains CaO accompanied by smaller amounts of SiO2, MgO, Al2O3, and Fe2O3. It is also possible to speculate the development of a hypothetic hydrated phase containing SiO2 (10 wt%), CaO (38%), CO2 (40%), Al2O3 (3.5%), and BaO (5%) since EDXS analysis of this composition has revealed a small lump which appears well stuck to the large glass particle and is highlighted by an arrow. This hypothesis is, however, not confirmed by other investigations and, at this point, must be considered speculative.SEM micrographs showing the fracture surface of a sample with composition V201 after 90 d of curing. (a) a great glass particle (dark) is strictly entrapped by the cementitious matrix: the interface appears well defined with no voids; (b) the presence of small glass particles leads to the development of long hydrated silicates crystals. (a) (b)The effect of small glass particles on the materials microstructure can be observed in Figure4(b) where the presence of well-developed silicate hydrated crystal clusters are visible [10, 11, 18, 24]. The EDXS analysis revealed that smooth particles are CRT glass with compositions SiO2 (62.9 wt%), CaO (1.5%), PbO (6.7%), Al2O3 (1.3%), Na2O (5.4%), K2O (6.2%), and BaO (16%) whereas the surrounding matrix and elongated crystals were detected as containing, respectively, SiO2 (19.4 mol%), CaO (61.4%), MgO (1%), Al2O3 (3.2%), K2O (8.7%), Fe2O3 (6.3%), SiO2 (37.2%), CaO (58.4%), MgO (1.7%), and Al2O3 (2.7%). Such well-developed silicate hydrated crystal clusters were observed only around the small glass particles and not in the reference samples thus confirming that particles with size smaller than 75 μm could possess cementitious capability resulting from hydration or pozzolanic reaction concurring to limit ASR. 
As a consequence, provided that curing time is sufficient, the inevitable pores which are formed in the c/a matrix during mortar production may be filled with silicate hydrated crystals with a high shape ratio which interlock the surrounding material promoting the development of densely packed structures, raising compressive strength and reducing materials’ permeability.Materials aged for 180 d were also submitted to thermo gravimetric analysis (TGA) which did not supply any further information about the different behaviour between reference and glass containing compositions; consequently, TGA graphics are not specified in the present communication. It can also be pointed out that after 28 d of exposure to a moist environment, the change in length of the samples is always below the normally accepted value of 0.05% suggested in ASTM C33.The ICP analysis made on the solutions obtained from the release tests in water of samples R and V20 is displayed in Table3 which shows, in accordance with the work of other researchers, low elution of hazardous elements from the mortar samples containing waste materials [25]. In Table 3, only data resulted from samples containing the highest amount of mixed CRT glass and a high level of water absorption are reported since all the other compositions showed lower quantities of released hazardous elements. Some elements, such as Ca, K, Na, and S are, conversely, present in non negligible amounts, but their presence is not considered a warning parameter by most of the standard release tests. The elution release test used in the present study is not a codified test, as presently is not established leaching test for mortars or concretes containing hazardous elements, but it is indicative of the possible environmental compatibility of the materials produced. However, for safety, the Pb and Ba release from materials with composition V20 aged 28 d was also accessed by the TCLP test which showed 2.40 and 1.85 mg L−1, respectively, which are far from the established limits of 5 and 100 mg L−1, respectively, and in sufficiently good agreement with data reported by other authors [15–17]. It must be finally pointed out that TCLP tests are mandatory when hazardous waste materials need to be managed or disposed of to landfill, but equivalent tests on industrial products containing the same waste are presently missing. The authors of the present research therefore suggest the development of standards leaching tests to be used with monolithic materials containing waste hazardous components (i.e., ASTM or others for mortars or concretes).Table 3 More abundant elements (μg Kg−1 = parts per billion) revealed by the ICP on the solutions obtained from the water absorption test of samples with composition R and V20. Elements not reported were determined in quantity lower than 25 ppb. Sample name Mg Al Ca Si Na K Fe Ba Sr Pb S R 2311 3100 22005 786 16420 11003 417 <25 114 <25 19990 V20 3409 3333 16871 3956 20097 16294 390 727 326 799 21388 ## 4. Conclusions In the present research, the production of stable mortars was carried out using a commercial OPC, ground waste CRT glass, natural aggregate, and water; the addition of superplasticizer was also investigated. 
## 4. Conclusions

In the present research, stable mortars were produced using a commercial OPC, ground waste CRT glass, natural aggregate, and water; the addition of a superplasticizer was also investigated. Mortars were produced using a fixed c/a ratio (1/3), whereas milled CRT glass and superplasticizer were added in different proportions. The following conclusions were derived from the study.

(1) All hydrated materials displayed high compressive strength after 3, 7, 28, 90, and 180 d of curing in a moist environment.
(2) Glass-containing samples showed a more rapid increase of strength with respect to the reference compositions when subjected to long-term ageing.
(3) Materials with an s/c ratio of 1 showed the best overall behaviour.
(4) The addition of more than 10 wt% of CRT glass powder did not lead to the materials with the best mechanical performance.
(5) The results obtained in the present research are reasonably attributable to the favourable influence of the small glass particles, which interact with the hydraulic phases, promoting pozzolanic reaction, limiting ASR, and increasing the amount of hydrated silicates produced during long-term ageing.
(6) The ICP analysis of the solutions obtained from the release tests in water confirms the low elution of hazardous elements from the materials produced and therefore their possible environmental compatibility.

---
*Source: 102519-2013-06-03.xml*
2013
# Palliative Care in Congenital Syndrome of the Zika Virus Associated with Hospitalization and Emergency Consultation: Palliative Care and Congenital Syndrome of Zika

**Authors:** Aline Maria de Oliveira Rocha; Maria Julia Gonçalves de Mello; Juliane Roberta Dias Torres; Natalia de Oliveira Valença; Alessandra Costa de Azevedo Maia; Nara Vasconcelos Cavalcanti

**Journal:** Journal of Tropical Medicine (2018)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2018/1025193

---

## Abstract

Background. Congenital syndrome of Zika virus (CSZV) is associated with neuromotor and cognitive developmental disorders that limit the independence and autonomy of affected children, and with high susceptibility to complications, so palliative care needs to be discussed and applied. Aim. To identify factors associated with emergency visits and hospitalizations of patients with CSZV and the clinical interventions performed, from the perspective of palliative care. Design. This is a cross-sectional study with a bidirectional longitudinal component. Data were collected between May and October 2017 through the review of medical records and interviews with relatives of hospitalized patients. Setting/Participants. The study was developed in a tertiary care hospital involving patients with confirmed CSZV born as of August 2015 and followed up until October 2017. Patients still under diagnostic investigation were excluded. Results. 145 patients were followed up at the specialized outpatient clinic; 92 (63.5%) were seen at least once in the emergency room and 49% had already been hospitalized, the main reason being neurological causes, while 24.1% had never required any emergency visit or hospitalization. No risk factors were associated with the occurrence of consultations or hospitalizations. Such events happened at an early age and were accompanied by a high number of invasive procedures and interventions. A palliative care approach was identified in only two hospitalized patients. Conclusions. For patients with known severe malformations caused by congenital Zika virus infection and an indication for palliative care, this approach could be used to allow life without suffering and without disproportionate invasive methods.

---

## Body

## 1. Introduction

Between August and November 2015, a substantial increase was observed in the number of cases of newborns with microcephaly [1], characterized by the World Health Organization (WHO) as an "anomaly in which the head circumference (HC) is two or more standard deviations below the reference for sex, age or gestational age" [2] and whose main causes are genetic or environmental, such as radiation, drugs, fetal alcohol syndrome, and infections [3]. This clinical condition leads to medicalization, hospitalization, and invasive procedures such as ventriculoperitoneal shunt (VPS) placement and gastrostomy. The innumerable associated morbidities and limited quality of life emphasize the need to discuss palliative care in this group [4–6]. The prevalence of congenital microcephaly in Brazil was estimated at 1.98 per 10,000 live births by October 2015, a number that increased 3 to 12 times during the microcephaly outbreak, depending on the region considered [7, 8].
The majority of these patients were concentrated in the state of Pernambuco [9, 10]. It was possible to link this condition to Zika virus infection during pregnancy, which is transmitted to the fetus and may result in miscarriage, fetal death, or congenital anomalies in several organs, characterized as Congenital Syndrome of Zika virus (CSZV) [11–15]. CSZV has become a clinical condition associated with several complications, disabling and noncurable in nature [16–21]. Among the malformations described in the literature are cerebral atrophy, absence of cerebral gyri, craniofacial disproportion, cerebral calcifications, dysgenesis of the corpus callosum and cerebellar vermis, limb contractures, arthrogryposis, and auditory and ocular abnormalities [7, 17, 18]. From the first semester of life, such clinical manifestations have caused, for example, a high frequency of epileptic seizures; changes in tone, posture, and mobility; irritability; intracranial hypertension secondary to hydrocephalus; and swallowing problems with greater predisposition to gagging and respiratory infections [19–21].

Palliative care is understood as an active and total approach to care, which begins at the diagnosis of life-limiting or life-threatening conditions and continues throughout the child's life and death [6]. Therefore, this type of care is clearly indicated for diseases with severe and incapacitating neurological manifestations, such as CSZV. In pediatrics it should be considered for "complex chronic clinical conditions", defined as situations with at least 12 months of survival that affect one or more organ systems and require specialized pediatric care [4, 5]. The aim is the best quality of life for the patient and family, attending to physical, psychological, spiritual, and social needs. Indications for palliative care in pediatrics may be represented by the groups defined by the Association for Children with Life-Threatening or Terminal Conditions and their Families (ACT) [6]:

(i) Category 1: life-threatening diseases in which curative treatment may be feasible but can fail;
(ii) Category 2: diseases whose treatments are long and strenuous and can prolong life, and the child participates in normal activities, but premature death is inevitable;
(iii) Category 3: progressive conditions without curative treatment options, where treatment is exclusively palliative;
(iv) Category 4: irreversible but nonprogressive conditions, with complications and likelihood of premature death. CSZV is considered to belong to this group.

However, no studies were found in the literature searched (PubMed, SciELO, and LILACS) that discussed the adoption of palliative care in this population, nor articles analyzing data on hospitalizations or the use of emergency care services. The goal of the present study was to determine the frequency of, and factors associated with, hospitalization and emergency care of patients with CSZV, in addition to observing the interventions performed at these moments and analyzing them from the perspective of palliative care.

## 2. Material and Methods

A cross-sectional observational study with an internal comparison group and a bidirectional longitudinal component (concurrent and nonconcurrent) involving patients with CSZV was conducted at the Institute of Integral Medicine Prof. Fernando Figueira (IMIP), one of the main reference centers for children with CSZV and their families, offering outpatient follow-up by a multidisciplinary team, emergency care, and hospitalization.
The research involved patients born with CSZV as of August 2015 and followed up until October 2017; data collection occurred between May and October 2017. Patients with characteristic neurological findings on computed tomography (CT) or magnetic resonance (MR) imaging and/or positive IgM serology for Zika virus in cerebrospinal fluid (CSF) were included. A list of patients with diagnostic confirmation was compiled at the outpatient clinic specialized in CSZV, followed by an analysis of the medical records and a search in the hospital's internal information system. In the prospective component of the research, the inpatients' companions were informed about the purpose of the study and, upon agreeing to participate, signed a free informed consent form; the questionnaire was completed through interviews and analysis of medical records.

The maternal variables analyzed in the study were age, occupation, origin, and schooling. The patient variables evaluated were sex, HC at birth, diagnosis by imaging examination or CSF, frequency of emergency care and hospitalization, age at and reason for consultations/hospitalization, indication of intensive care in an Intensive Care Unit (ICU), invasive procedures, palliative care approach, length of hospitalization, and reason for leaving the hospital.

Data accuracy was ascertained by double entry of all data in Microsoft Excel, exported and compared in order to correct inconsistencies. Epi Info 5.4 and GraphPad Prism 7.0 software were employed for statistical analysis. The association between categorical variables was assessed by calculating the odds ratio and the Chi-square test, or by Fisher's exact test where relevant. For the numerical variables, the Shapiro-Wilk test was first applied. HC at birth was the only variable with a normal distribution and was therefore summarized as mean and standard deviation, while for the other variables the median and the minimum and maximum values were calculated. The nonparametric Mann-Whitney test was used to compare numerical variables with non-normal distribution between two groups, while Student's t-test was used to compare variables with normal distribution. A p value less than 0.05 was considered significant. This project was approved by IMIP's Ethics Committee in Research under protocol no. 54701516.1.0000.5201.
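The test-selection rule described above maps naturally onto a few lines of code. The sketch below uses scipy on synthetic, invented data to illustrate the decision logic (Shapiro-Wilk, then Student's t-test or Mann-Whitney; Fisher's exact test for a 2x2 categorical table); it illustrates the described workflow and is not the authors' analysis script.

```python
# Illustration of the test-selection rule described in the Methods:
# Shapiro-Wilk for normality, then Student's t-test (normal) or
# Mann-Whitney (non-normal); Fisher's exact test for 2x2 categorical data.
# The numeric data below are synthetic placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hc_hospitalized = rng.normal(29.0, 2.0, size=70)      # head circumference (cm)
hc_not_hospitalized = rng.normal(28.9, 2.1, size=74)

def compare_groups(a, b, alpha=0.05):
    """Pick the comparison test according to Shapiro-Wilk normality."""
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney", stats.mannwhitneyu(a, b).pvalue

test, p = compare_groups(hc_hospitalized, hc_not_hospitalized)
print(test, f"p = {p:.3f}")

# 2x2 table (maternal education < 10 years vs hospitalization); counts are
# approximations derived from Table 2 percentages, for illustration only.
table = [[24, 23], [39, 35]]
odds_ratio, p_cat = stats.fisher_exact(table)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_cat:.3f}")
```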
## 3. Results and Discussion

The present study analyzed the characteristics of the patients followed for CSZV and evaluated possible factors associated with a greater number of emergency care consultations and hospitalizations. For the hospitalizations followed prospectively by the principal investigator, it was recorded whether palliative care measures were discussed or adopted together with the family.

A total of 145 patients were identified in our specialized clinic. Of these, 74 (51.0%) maintained regular follow-up, 41 (28.2%) were transferred to follow-up clinics near their home city or to other referral centers in the metropolitan region, 5 (3.5%) died, and 25 (17.2%) were lost to follow-up in this period. The maternal characteristics and those of the patients with CSZV are presented in Table 1.

Table 1: Maternal and CSZV patient characteristics. Institute of Integral Medicine Prof. Fernando Figueira, August 2015 to October 2017.

**Maternal characteristics**

| Characteristic | N (%) |
|---|---|
| Schooling (years): ≤4 | 3 (2.1) |
| Schooling (years): 5–9 | 44 (30.3) |
| Schooling (years): 10–11 | 64 (44.1) |
| Schooling (years): ≥12 | 10 (6.9) |
| Schooling (years): no registry | 24 (16.6) |
| From MRR^a | 68 (46.9) |
| Occupation: housewife | 85 (58.6) |
| Occupation: formal work | 18 (12.4) |
| Occupation: informal work | 14 (9.7) |
| Occupation: student | 6 (4.1) |
| Occupation: no data | 22 (15.2) |
| Age (years): ≤17 | 15 (10.3) |
| Age (years): 18–24 | 62 (42.8) |
| Age (years): 25–34 | 56 (38.6) |
| Age (years): ≥35 | 11 (7.6) |
| Age (years): no registry | 1 (0.7) |

**Patients with CSZV**

| Characteristic | N (%) |
|---|---|
| Female | 78 (53.8) |
| HC at birth (Z score): < −3 | 101 (69.5) |
| HC at birth (Z score): −3 to −2 | 21 (14.5) |
| HC at birth (Z score): −2 to −1 | 13 (9.0) |
| HC at birth (Z score): > −1 | 10 (7.0) |
| Age (months): <12 | 25 (17.3) |
| Age (months): 12–18 | 39 (26.9) |
| Age (months): >18 | 81 (55.8) |
| Consultation in emergency care | 92 (63.5) |
| Hospitalization | 71 (49.0) |
| Cause of hospitalization: neurological | 24 (33.8) |
| Cause of hospitalization: invasive procedures^b | 14 (19.8) |
| Cause of hospitalization: respiratory infections | 6 (8.5) |
| Cause of hospitalization: wheezing | 4 (5.6) |
| Cause of hospitalization: dysphagia | 1 (1.4) |
| Cause of hospitalization: others | 18 (25.3) |
| Cause of hospitalization: no registry | 4 (5.6) |
| Length of hospitalization (days): ≤1 | 12 (16.9) |
| Length of hospitalization (days): 2–7 | 46 (64.8) |
| Length of hospitalization (days): 8–14 | 8 (11.3) |
| Length of hospitalization (days): 15–30 | 4 (5.6) |
| Length of hospitalization (days): >30 | 1 (1.4) |

a: Metropolitan Region of Recife. b: Invasive procedures: ventriculoperitoneal shunt, gastrostomy, herniorrhaphy, posthectomy (circumcision).

Most mothers (58.6%) were housewives, and the review of records and individual interviews showed that many mothers who had formal or informal work outside the home before the birth of these children had to leave their previous jobs to dedicate themselves to looking after the children. The level of maternal schooling was higher than that described in the literature [19], with a minimum of five years of schooling, and none of the mothers were illiterate. The low maternal age was consistent with findings in other studies [4, 8], reflecting the reality of young mothers who must dedicate special attention to children with particular needs [15, 22]. Most mothers were housewives and, despite receiving a government financial benefit, the costs associated with the care required by these children are quite high, including transportation, medical care, medications, and rehabilitation [22].

Among the 145 patients, 53.8% were female and 53.1% came from the interior of the state, outside the Metropolitan Region of Recife. HC at birth was between 22 and 34 cm, with a mean of 28.9 cm (standard deviation, SD ± 2.1). All patients underwent cranial tomography and 110 (75.9%) underwent CSF collection. Most patients (76.6%) had CT changes considered suggestive of CSZV but negative CSF serology for Zika virus; one had no characteristic CT changes but positive CSF serology, and 22.8% had both characteristic tomographic changes and CSF serology positivity.

In the service, 313 consultations involving patients with CSZV were identified. Among the 92 patients seen in the emergency service, consultations occurred between the neonatal period and 20 months of age, with an average age of 7 months, and the maximum number of visits per patient was 22, as shown in Figure 1. About one third (36.5%) of the children in regular follow-up never needed this type of care.

Figure 1: Number of emergency visits and admissions of patients with CSZV up to 25 months of life. IMIP, August 2015 to October 2017.

Despite the lack of data on consultations in other emergency services, the high number of consultations in this group is understood to be associated with the complications of CSZV, such as dysphagia, neurological conditions, and irritability, as described in several studies [13, 19, 23, 24]. Patients from the Metropolitan Region of Recife (MRR) had more consultations in emergency services, which can be attributed to easier access to the service, but this finding did not reach statistical significance.

In total, 143 hospitalizations were analyzed. The first hospitalization occurred from the neonatal period up to 24 full months, at an average age of 7 months. Half of the patients (51%) never needed to be hospitalized, and the number of hospitalizations per patient ranged from one (26.2%) to eight (0.7%). The main cause of the first hospitalization was a neurological condition associated with CSZV (42.2%), described in medical records as microcephaly, hydrocephalus, seizures, or somnolence due to intracranial hypertension [13, 15–18, 23]. Observational studies have shown progression to hydrocephalus in approximately 41% of patients with CSZV in the first 17 months of life, with an indication of VPS in all of these [24]. The pathophysiology of the progression of these cases to hydrocephalus is still not well understood; according to some hypotheses, damage to the cerebral vascular system, especially its venous component, causes thrombosis and cerebral venous hypertension beginning in intrauterine life and persisting after birth, resulting in chronic cerebral venous hypertension [24–26].

Hospitalizations also occurred for invasive procedures (9.7%), such as VPS implantation, gastrostomy, or orthopedic surgeries for complications inherent to the underlying disease. Respiratory infections accounted for 4.1% of the hospitalizations of these children, understood as possible consequences of bronchoaspiration due to neurological and swallowing disorders [13, 15–17]. As observed in other studies, this pattern of complications would be expected for these children; nevertheless, the high number of hospitalizations and invasive procedures stands out [13, 15–17]. The duration of hospitalization was between one and 39 days, with an average of four days. Those discharged on the same day had been admitted for minor surgical procedures, such as herniorrhaphy or posthectomy. Discharge from the hospital occurred by improvement (93%), transfer (6.3%), and death (0.7%). Prolonged admissions, although infrequent, draw attention to the potential for complications, especially healthcare-associated infections, and to the change in family dynamics during this period, compromising the patient's and family's quality of life, in addition to the high cost to the health system [22].

Possible risk factors for hospitalization and emergency consultation were analyzed by comparing patients with and without these events, all with CSZV. Among the variables studied, no factor was associated with a higher occurrence of hospitalizations or emergency room consultations, as can be observed in Table 2.
Table 2: Univariate analysis of possible factors associated with emergency visits and hospitalization in patients with CSZV. Institute of Integral Medicine Prof. Fernando Figueira, August 2015 to October 2017.

**Emergency consultations (yes: 92, 63.4%; no: 53, 36.6%)**

| Variable | Yes N (%) | No N (%) | PR^b (95% CI) | p |
|---|---|---|---|---|
| Maternal education <10 years | 35 (42.2) | 12 (31.6) | 1.18 (0.89–1.57) | 0.36 |
| Gender (female) | 52 (56.5) | 27 (50.9) | 1.17 (0.82–1.66) | 0.48 |
| Maternal occupation (housewife) | 64 (77.1) | 27 (67.5) | 1.41 (0.78–2.57) | 0.35 |
| Maternal age (median) | 24 | 23 | - | 0.57 |
| Origin (MRR^a/interior) | 49/43 | 19/34 | 0.67 (0.44–1.01) | 0.06 |
| Head circumference (mean) | 28.84 (± 0.27) | 29.01 (± 0.23) | 1.30 (0.57–1.91) | 0.64 |

**Hospital admission (yes: 71, 49.0%; no: 74, 51.0%)**

| Variable | Yes N (%) | No N (%) | PR (95% CI) | p |
|---|---|---|---|---|
| Maternal education <10 years | 24 (38.1) | 23 (39.7) | 0.97 (0.73–1.29) | 0.99 |
| Gender (male/female) | 36/35 | 31/43 | 0.82 (0.58–1.17) | 0.36 |
| Maternal occupation (housewife) | 49 (75.4) | 42 (72.4) | 1.12 (0.61–2.03) | 0.86 |
| Maternal age (median) | 24 | 24 | - | 0.72 |
| Origin (MRR/interior) | 36/35 | 32/42 | 0.85 (0.60–1.20) | 0.46 |
| Head circumference (mean) | 28.97 (± 0.24) | 28.93 (± 0.26) | 1.16 (0.66–1.76) | 0.89 |

a: Metropolitan Region of Recife. b: PR: prevalence ratio; p values from Fisher's exact test for categorical variables, the Mann-Whitney test for non-normal continuous variables, and Student's t-test for normal continuous variables (head circumference).

It was possible to observe the hospitalizations of 13 patients prospectively; the data are shown in Table 3. Of these hospitalized patients, eight were female, HC at birth was between 28 and 32 cm (mean 29.3 cm, SD ± 1.0), and age at admission was between 16 and 24 full months (mean 20 months).

Table 3: Characteristics of CSZV patients prospectively observed during hospitalization. Institute of Integral Medicine Prof. Fernando Figueira, May to October 2017.

| Patient | Maternal age (years) | Maternal education (years) | Mother occupation | From | Sex | HC at birth (cm) | Age (months) | Diagnosis^a | Duration (days) | Devices^d | ICU indic. (Y/N) | Palliative care (CCPP) (Y/N) | Outcome | Lansky scale |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 19 | 10 | Housewife | MRR | M | 29 | 20 | Respiratory infection | 4 | NGT, PVP, O2 catheter | N | N | Discharge | 30 |
| 2 | 21 | 11 | Housewife | MRR | M | 30 | 22 | Neurological cause | 10 | GTT, PVP, Venturi mask | N | N | Discharge | 30 |
| 3 | - | - | Housewife | Interior | M | 29 | 16 | Respiratory infection | 12 | NGT, OTT, PVP, CPAP, Venturi mask | Y | Y | Discharge | 20 |
| 4 | 37 | 12 | Housewife | Interior | M | 28.5 | 17 | Neurological cause | 1 | PVP | N | N | Transfer | 20 |
| 5 | 21 | 11 | Housewife | MRR | F | 30 | 20 | Other^b | 6 | - | N | N | Discharge | 20 |
| 6 | 18 | 9 | Housewife | MRR | F | 28 | 22 | Wheezing | 2 | PVP | N | N | Discharge | 40 |
| 7 | 33 | 12 | Housewife | MRR | F | 29 | 20 | Other^c | 4 | PVP | N | N | Discharge | 30–20 |
| 8 | 31 | 12 | Housewife | MRR | F | 30 | 23 | Respiratory infection | 2 | NGT, PVP, Venturi mask | N | N | Discharge | 20 |
| 9 | 23 | 12 | Housewife | Interior | F | 29 | 19 | Respiratory infection | 10 | NGT, DVC, PVP, CVP, CPAP, Venturi mask | Y | Y | Death | 20 |
| 10 | 16 | 8 | Housewife | Interior | F | 32 | 20 | Respiratory infection | 49 | NGT, VPS, PVP, CVP, Venturi mask | N | N | Discharge | 30 |
| 11 | 20 | 12 | Housewife | Interior | M | 29 | 21 | Neurological cause | 3 | NGT, VPS, PVP | N | N | Discharge | 20 |
| 12 | 21 | 10 | Housewife | MRR | F | 30 | 24 | Respiratory infection | 3 | PVP | N | N | Discharge | 40 |
| 13 | 35 | 9 | Housewife | MRR | M | 28 | 21 | Respiratory infection | 5 | PVP | N | N | Discharge | 30 |

a: Diagnoses were grouped into categories: respiratory infections (upper and lower airways); neurological causes (epileptic seizures, irritability, hydrocephalus); wheezing; others. b: Other: conjunctivitis. c: Other: fever without localizing signs. d: Devices: nasogastric tube (NGT), peripheral venous puncture (PVP), central venous puncture (CVP), gastrostomy (GTT), orotracheal tube (OTT), continuous positive airway pressure (CPAP), Venturi mask, indwelling bladder catheter (DVC), ventriculoperitoneal shunt (VPS). e: Other abbreviations: Metropolitan Region of Recife (MRR), male/female (M/F), yes/no (Y/N), indication for intensive care unit (ICU indic.), palliative care (CCPP).

Almost all of these patients (92.3%) underwent invasive procedures during hospitalization, the most common being peripheral venous puncture (92.3%); a nasogastric tube (NGT) was also used in 46.1% (six patients), central venous puncture in 15.4% (two patients), gastrostomy in one patient (7.7%), and oxygen support in 46.1% (six patients), ranging from an oxygen catheter to orotracheal intubation in the emergency room in one patient (7.7%).

With emphasis on aspects related to palliative care, a high frequency of invasive procedures was observed. Such procedures cause pain when performed and discomfort during the period of use. Moreover, invasive devices can facilitate colonization by nosocomial bacteria and cause local and systemic infections [27]. Therefore, procedures that are not indicated may become useless and a needless cause of suffering [20].

Functional assessments according to the Lansky scale [19], based on neurological evaluation and companions' reports, were recorded. The highest score was "40" for two (15.4%) of the patients (participating in quiet activities), four (30.8%) scored "30" (needing assistance even for quiet activities), and the others (53.8%) scored "20" (play entirely limited to very passive activities). This tool is an auxiliary measure in the elaboration of the care plan for these patients [19, 20] and demonstrated their limitations in activities of daily living and their absence of autonomy.

Two (15.4%) of the patients had an ICU bed requested for the first time, both due to acute respiratory failure. After the attending medical team gained a better understanding of the patients' functionality, the clinical picture, and its association with the underlying disease, the ICU request was withdrawn and, after conversation and clarification with the companions, palliative care measures were indicated: patient number three underwent palliative extubation according to the precepts of palliative care, with airway support provided by continuous positive pressure; patient number nine had the indwelling bladder catheter and central venous access removed; for both, a do-not-resuscitate order was documented and symptom control measures (analgesia) were performed. However, most families and caregivers received no palliative care approach during hospitalization, despite the children's underlying conditions.
It is understood that, as an emerging aspect of pediatric practice, professionals postpone such an approach until the end of life or do not use it at all [21, 27]. The need for surgical interventions and long ICU stays is frequent in children born with malformations, which demands reflection on the limits of therapeutic effort in these cases [27]. According to the precepts of palliative care, obstinate therapeutic investment in patients without prospect of cure, such as the cases described above, should be avoided; in practice, however, many ICU beds are occupied, sometimes through judicial action, by patients with no possibility of therapeutic control and with underlying diseases, such as CSZV, that limit survival and greatly interfere with quality of life [21, 27]. During the observation of hospitalizations, it was seen that the indication for ICU treatment can be reversed and that patients can receive measures to minimize suffering and avoid painful interventions while being accompanied by family throughout the hospitalization period, including the evolution to death.

One of these patients died during hospitalization at 19 months of age, and the family was present at all times, including at death. According to a report from the Secretary of Health of Pernambuco, from the beginning of the appearance of these cases until October 2017, 2403 notifications were made, of which 438 were confirmed as CSZV and 28 (6.4%) deaths were documented in the postneonatal period [28]. According to data from the epidemiological bulletin provided by state agencies, postneonatal deaths occurred at an average age of 6.2 months of life (ranging from 32 days to 21.9 months) [28]. The other patient with an indication for palliative measures was discharged from hospital with written guidelines summarizing the care plan drawn up for during and after hospitalization. The remaining patients were discharged with no report or description of a palliative care approach intended for them.

## 4. Conclusions

Despite the short lifespan of children with CSZV, this study showed that they were subjected to a high number of emergency room visits, as well as hospitalizations with an excessive number of invasive procedures. Patients with severe malformations already known to be caused by congenital Zika virus infection could be identified early for care that would allow life without suffering and without disproportionate invasive methods [27].

---
*Source: 1025193-2018-10-11.xml*
2018
# Modeling and Assessment of Long Afterglow Decay Curves

**Authors:** Chi-Yang Tsai; Jeng-Wen Lin; Yih-Ping Huang; Yung-Chieh Huang

**Journal:** The Scientific World Journal (2014)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2014/102524

---

## Abstract

Multiple exponential equations have been successfully fitted to experimental long afterglow decay curve data for some phosphor materials by previous researchers. The calculated decay constants in such equations are used to assess the phosphorescence characteristics of an object. This study generates decay constants from experimental test data and from the existing literature for comparison. It shows that the decay constants of an object are not necessarily invariant: they depend on phosphor material, temperature, irradiation intensity, sample thickness, and the phosphor density of the samples. In addition, using different numbers of exponential components in the interpretation leads to different numerical results for the decay constants. The relationship between the calculated decay constants and the afterglow characteristics of an object is studied and discussed in this paper. The appearance of the luminescence intensity is less correlated with the decay constants than with the time-invariant constants in an equation.

---

## Body

## 1. Introduction

The formation of and mechanism for phosphorescence in strontium aluminate phosphors have been thoroughly investigated, with SrAl2O4:Eu2+,Dy3+ phosphors among the most studied hosts and a major host for long afterglow commercial products. The afterglow attenuates exponentially, and its intensity-time decay behavior follows first-order, second-order, or general-order kinetics [1, 2]:

$$I = \frac{I_0}{1+\gamma t}\ \text{(first-order)},\qquad I = \frac{I_0}{(1+\gamma t)^{2}}\ \text{(second-order)},\qquad I = \frac{I_0}{(1+\gamma t)^{n}}\ \text{(general-order)}, \tag{1}$$

where I is the phosphorescence (PLUM) intensity at any time t after switching off the excitation illumination and I0, n, and γ are constants.

To model the measured afterglow curves, many researchers have adopted multiple (e.g., simple first-order kinetics) exponential equations as follows [3]:

$$I = I_0 + \alpha_1 \exp\!\left(-\frac{t}{\tau_1}\right), \tag{2}$$

$$I = I_0 + \alpha_1 \exp\!\left(-\frac{t}{\tau_1}\right) + \alpha_2 \exp\!\left(-\frac{t}{\tau_2}\right), \tag{3}$$

$$I = I_0 + \alpha_1 \exp\!\left(-\frac{t}{\tau_1}\right) + \alpha_2 \exp\!\left(-\frac{t}{\tau_2}\right) + \alpha_3 \exp\!\left(-\frac{t}{\tau_3}\right), \tag{4}$$

where I, I0, and t are as defined in (1), the αi are time-invariant constants, and the τi are the decay constants (also called decay times) of the exponential components.

The decay constants are a means to determine the decay rate of the rapid, medium, and slow exponential decay components. The single (2), double (3), and triple (4) exponential equations have been widely used in recent research to fit experimental decay curves and may correspond to one, two, or three trap centers in the assumed model. Tsai et al. [4] have dealt with the mathematical modeling problems and have suggested that the slope change of the curve profile is determined by the τi in the equation and has nothing to do with the magnitude of the luminous intensity; they also examined the effect of the offset term (I0) in equations fitted to experimental data and claimed that this term should not occur in a good experimental environment [5].
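To make the fitting task in (2)–(4) concrete, the sketch below generates a synthetic decay curve from the triple exponential model (4) and recovers its parameters with scipy's curve_fit. All parameter values, the noise level, and the time grid are invented for illustration; this is not the authors' fitting code.

```python
# Fit the triple exponential afterglow model of equation (4) to a synthetic
# decay curve. All numeric values here are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def triple_exp(t, I0, a1, tau1, a2, tau2, a3, tau3):
    """I(t) = I0 + a1*exp(-t/tau1) + a2*exp(-t/tau2) + a3*exp(-t/tau3)."""
    return (I0 + a1 * np.exp(-t / tau1)
               + a2 * np.exp(-t / tau2)
               + a3 * np.exp(-t / tau3))

t = np.linspace(0.0, 600.0, 601)          # seconds after excitation switch-off
true = (0.0, 50.0, 10.0, 20.0, 80.0, 5.0, 400.0)   # I0, a1, tau1, ..., tau3
rng = np.random.default_rng(0)
I = triple_exp(t, *true) + rng.normal(0.0, 0.1, t.size)

# A sensible starting guess matters for a 7-parameter nonlinear fit.
p0 = (0.0, 40.0, 5.0, 15.0, 60.0, 4.0, 300.0)
popt, pcov = curve_fit(triple_exp, t, I, p0=p0, maxfev=20000)
print("fitted decay constants (s):", np.round(popt[2::2], 1))
```

With clean synthetic data the recovered τ values land close to the true ones; with real, noisy curves the same fit can wander, which is one reason reported decay constants vary between studies.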
By using the triple exponential equation (4), Wu et al. [6] discovered that increasing the environmental temperature decreases the value of τ3 (the largest decay constant, which dominates the duration of the long afterglow). Kubo et al. [7] adopted the double exponential equation (3) in their analysis and claimed that the PLUM intensity of the slower (larger) decay component decreased with temperature in the range from 305 to 458 K. Wu et al. [8] suggested that a general-order kinetics model provided the best fit for the data in their test while also accommodating retrapping in the thermoluminescence and decay processes. He et al. [9] studied the efficiency of the charging process and assumed for simplicity that the traps have a single depth, making it reasonable to use the single exponential equation (2) in the curve fitting process. Moreover, they pointed out that the PLUM was more intense the more irradiation energy was pumped in after the fluorescence first saturated.

Other studies considered the calculated decay constants as indicative of trap depths when interpreting the physical afterglow behaviors. Zhu et al. [10] calculated the constants and indicated that the decay times of phosphors were prolonged by encapsulation at room temperature. Xie et al. [11] showed that the decay characteristics reflected the fact that phosphors with different structures possess different afterglow times. Chang et al. [12] concluded that the larger the value of the decay constant, the slower the decay speed and the better the afterglow properties. Pedroza-Montero et al. [13] found that the PLUM of SrAl2O4:Eu2+,Dy3+ phosphors was characterized by at least three temporal processes, around 0–25 s, 25–200 s, and 200–650 s; the most intense PLUM came from the fast decay parts (0–25 s and 25–200 s), while the slower part (200–650 s) was the most enhanced at higher doses. Arellano-Tánori et al. [14] reviewed the real PLUM decay process and reported that it is complicated and that an exponential-type decay model is an oversimplification; the PLUM exhibited an intensity-time decay behavior composed of three simple exponential terms with decay constants of 56 s, 180 s, and 1230 s.

Han et al. [15] stressed that the mere fitting of data is not physical evidence for the existence of only two trapping levels of different energies. There may be more than two trapping levels, as suggested by many researchers; nevertheless, they can always be averaged into two types of trapping levels, that is, shallow and deep. Meléndrez et al. [16] suggested that it is reasonable to assume that the PLUM intensity profiles depend on irradiation temperature and irradiation dose.

Based on the aforementioned arguments, four points are raised.

(1) Using single, double, or triple exponential equations in curve fitting for the same case may yield different values for the calculated decay constants (a numerical sketch of this effect follows the list).
(2) Is the lower-order component of the decay constants the most enhanced at higher doses? Does a larger value of this lower component correspond to better afterglow properties?
(3) If the PLUM intensity profiles depend on the irradiation dose, how do the related decay constants respond?
(4) What are the major factors influencing the three types of parameters in the above equations, that is, I0, αi, and τi?

This study intends to answer these four points. Several forms of decay curves have been collected, including those from existing articles, luminescence standards, and experimental tests conducted in the lab.
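Point (1) can be checked numerically: fitting one and the same synthetic curve with one, two, or three exponential components returns noticeably different decay constants. The sketch below is self-contained and, like the previous one, uses invented values only.

```python
# Fit the same synthetic curve with one, two, and three exponential
# components (equations (2)-(4)) and compare the recovered decay constants.
# Synthetic data only; all values are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def make_model(k):
    """Return I(t) = I0 + sum of k exponential components."""
    def model(t, I0, *p):                  # p = (a1, tau1, a2, tau2, ...)
        out = np.full_like(t, I0, dtype=float)
        for a, tau in zip(p[0::2], p[1::2]):
            out += a * np.exp(-t / tau)
        return out
    return model

t = np.linspace(0.0, 600.0, 601)
rng = np.random.default_rng(1)
I = 50*np.exp(-t/10) + 20*np.exp(-t/80) + 5*np.exp(-t/400)
I = I + rng.normal(0.0, 0.1, t.size)

for k, p0 in [(1, [0, 60, 100]),
              (2, [0, 60, 10, 20, 300]),
              (3, [0, 50, 10, 20, 80, 5, 400])]:
    popt, _ = curve_fit(make_model(k), t, I, p0=p0, maxfev=20000)
    print(f"{k} component(s): taus =", np.round(popt[2::2], 1))
```

The single- and double-component fits average the underlying processes into fewer, intermediate τ values, illustrating why the number of components chosen changes the reported decay constants.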
To model and compute the decay constants of the long afterglow decay curves of the materials studied, the exponential equations (2)–(4) were fitted to the experimental data and to data regenerated from referenced articles using a modified least squares method. We focus on the study of physical glow behaviors, the so-called photoluminescence, not luminescence arising from chemical or other forms of reactions.

## 2. Experimental Tests

This study uses PET (polyethylene terephthalate) resin as a matrix combined with strontium aluminate phosphors to form afterglow luminous thick films, with the ultimate aim of increasing their luminous intensity. Besides the phosphor material, the resin material, the forming process, and the physical properties are vitally important to the performance of the afterglow end product. The various specimens investigated in this study were manufactured with the same phosphor density but different thicknesses. The phosphor particles are 0.1 mm in diameter, and the PET-to-phosphor ratio in the mixture is 1 : 1. We used a mesh print technique to create the patches, printing layer by layer on top of a substrate, with each layer about 0.1 mm thick; in other words, obtaining a 0.6 mm thick patch required six runs of the print process.

The patch thickness and the exposure dose are the two major factors considered in this study. The DIN 67510 and JIS 9107-2008 [20] luminescence standards were adopted. Table 1 lists the four test conditions used in this study for comparison. The room temperature in the experimental tests was 25°C.

Table 1: The luminous intensity and exposure duration used in this study.

| Regulation code | Luminous intensity (lx) | Duration (min) |
|---|---|---|
| DIN 67510 | 1000 | 5 |
| DIN 67510 | 50 | 15 |
| JIS Z 9100 | 200 | 20 |
| This study | 3000 | 5 |

### 2.1. Materials

In this study, the investigated specimens were made of SrAl2O4 : Eu2+, Dy3+ phosphor powder mixed with 60% PET resin by weight, which was printed on top of a white PET plastic substrate to form a plastic patch with a thickness between 0.4 mm and 0.6 mm.

### 2.2. Photoluminescence Experimental Test

An oven was designed for this study, inside which four 6500 K, 100 W adjustable-intensity xenon lamps are attached to the top and sidewalls to excite the afterglow specimens. The lamps can be switched on independently to meet the required luminescence intensity. A transparent glass door was installed behind the metal door so that the metal door could be opened to monitor the luminescent behavior of the specimens without disturbing the interior temperature of the oven.

A computer-controlled luminance meter (Konica Minolta LS-100) was used to measure the luminous intensity of the afterglow specimens. To keep the irradiation conditions identical among different tests, before irradiation the samples were heated to 70°C for 6 hours to remove any acquired and residual thermoluminescence and were then cooled to the desired experimental temperature.
## 3. Slow Decay Component

In (4), the terms on the right-hand side with subscripts 1, 2, and 3 are designated the fast, medium, and slow components; that is, they are the terms that dominate different periods of the afterglow luminescence. At large $t$, the dominant term is the one corresponding to the slowest component, which may be extracted as

$$I_L = \alpha_L \exp\left(-\frac{t}{\tau_L}\right), \tag{5}$$

in which $I_L$ is the long-term intensity at time $t$, $\alpha_L$ is the last time-invariant constant, corresponding to the slowest decay component (i.e., $\alpha_2$ in (3) or $\alpha_3$ in (4)), and $\tau_L$ is the corresponding decay constant.

In the afterglow luminescence, the slow component is responsible for the long persistent behavior [19]; the initial intensity of this term is $\alpha_L$, and its persistence is governed by $\tau_L$, so larger values of $\alpha_L$ and $\tau_L$ result in a higher intensity in (5). Figure 1 displays two figures cited from two articles [12, 17]. The right-hand figure shows that the three components (fast, medium, and slow) of a curve intersect at about 1.9 min and that the slow decay component, as in (5), is the major contributor to the luminous light. The left-hand figure shows that the slow component becomes dominant after 1 min compared to the other two components. This is the major reason researchers focus on the slow component term of an equation when dealing with afterglow luminescence topics.

Figure 1: The fast, medium, and slow components in an equation. Data cited from [12, 17].

## 4. Numerical Simulations

Many researchers [2, 3, 6–16] have assessed phosphorescent characteristics according to decay constants calculated by curve fitting techniques: decay curves were fitted to the sum of a few exponential components, each with its own decay constant. Several computer programs were written for this study to calculate the related constants from the experimental data and from data in the referenced articles.

The following equation is a matrix expression of a linear relationship between a set of data and a set of parameters:

$$[A]_{m \times n} \{X\}_{n \times 1} = \{B\}_{m \times 1}, \tag{6}$$

in which $A$ is the coefficient matrix, $n$ is the number of components in an equation, $m$ is the number of data points, $X$ holds the parameters of the components, and $B$ is the known data.

Equations (2) to (4) are nonlinear equations. The three types of parameters to be calculated are $\alpha_i$, $\tau_i$, and $I_0$.
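To make the reduction to (6) concrete: once the $\tau_i$ are fixed, each $\exp(-t/\tau_i)$ becomes a known column of the coefficient matrix, and $I_0$ and the $\alpha_i$ follow from ordinary least squares. Below is a minimal sketch under that assumption; the helper name is ours, and NumPy's `lstsq` stands in for the paper's unspecified solver:

```python
import numpy as np

def solve_linear_part(t, I, taus):
    """With the decay constants tau_i fixed, (2)-(4) are linear in I0 and
    alpha_i, i.e., the system (6) with one constant column for I0 and one
    exp(-t/tau_i) column per component.  Returns (I0, alphas, D), where D
    is the root mean square deviation of (7)."""
    t = np.asarray(t, dtype=float)
    I = np.asarray(I, dtype=float)
    # Design matrix: constant column for I0, one exponential column per term.
    A = np.column_stack([np.ones_like(t)] + [np.exp(-t / tau) for tau in taus])
    x, *_ = np.linalg.lstsq(A, I, rcond=None)   # x = [I0, alpha_1, ..., alpha_n]
    D = np.sqrt(np.mean((I - A @ x) ** 2))      # deviation, as in (7)
    return x[0], x[1:], D
```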
The process and procedures used in this study are as follows:

(1) values were assigned to the $\tau_i$, with the restriction $\tau_i < \tau_{i+1}$, for use in any one of (2)–(4);

(2) with the $\tau_i$ fixed, the exponential equation becomes linear, as in (6); $\alpha_i$ and $I_0$ were then calculated by the least squares method to simulate a set of experimental data;

(3) each predefined set of $\tau_i$ generates an associated deviation $D$, defined by (7), which represents the difference between the computed values and the data set used:

$$D = \sqrt{\frac{\sum_{i=1}^{q} \left[ y_i - f(x_i) \right]^2}{q}}, \tag{7}$$

where the deviation $D$ is the root mean square offset between the experimental data $y_i$ and the values $f(x_i)$ computed from one of (2)–(4);

(4) as the $\tau_i$ sampled a wide range of values, the minimum-deviation case was identified. In addition to the minimum deviation, a minimum total $\sum \tau_i$ was employed to filter out unfit equations.

Two different schemes are proposed in this study to execute the least squares method. They are described in the following two subsections.

### 4.1. Exhaustive Search

This search process sets fixed ranges for the $\tau_i$ values, with a constant increment and the restriction $\tau_i < \tau_{i+1}$. For example, the search range for $\tau_1$ is from 0.001 to 1.0 with a 0.001 increment, that is, 1000 elements in the range; for $\tau_2$ it is 1.5 to 3.0 with a 0.001 increment; and for $\tau_3$ it is 4.0 to 10.0 with a 0.002 increment. The relative ordering $\tau_1 < \tau_2 < \tau_3$ is retained throughout this process.

This straightforward method searches through the predefined ranges for the $\tau_i$ and records the calculated deviation $D$, $I_0$, and $\alpha_i$. It is then possible to choose the set of $\alpha_i$ and $\tau_i$ with the minimum $D$ that meets some predefined criteria.

### 4.2. Branch Search

The exhaustive search method requires searching over wide fixed ranges and is inevitably CPU-intensive. To set the search ranges more specifically, a branch search method is proposed. This study first processes the single exponential equation (2), and the obtained value $\tau_1^{*}$ is used to estimate the search ranges for the double exponential; the values $\tau_1^{**}$ and $\tau_2^{**}$ obtained there are then used to set the ranges for the triple exponential. The three steps of the process are as follows.

(1) *Single Exponential Equation*. Consider

$$I = I_0 + \alpha_1 \exp\left(-\frac{t}{\tau_1}\right). \tag{8}$$

Two boundary values, $\tau_{\min}$ and $\tau_{\max}$, and an increment are assigned:

$$\tau_{\min} < \tau_1 < \tau_{\max}. \tag{9}$$

The least squares method is first used to find the $\tau_1^{*}$ with the minimum deviation. To achieve better results, the search range is narrowed to $\tau_1^{*} \pm$ increment and the search is repeated until the increment is small enough.

(2) *Double Exponential Equation*. Consider

$$I = I_0 + \alpha_1 \exp\left(-\frac{t}{\tau_1}\right) + \alpha_2 \exp\left(-\frac{t}{\tau_2}\right). \tag{10}$$

In this step, the calculated $\tau_1^{*}$, $\tau_{\min}$, and $\tau_{\max}$ are used to set the ranges as

$$\tau_{\min} < \tau_1 < \tau_1^{*}, \qquad \tau_1^{*} < \tau_2 < \tau_{\max}. \tag{11}$$

The least squares method is used to find the pair of values $\tau_1^{**}$ and $\tau_2^{**}$ with the minimum deviation. The parameters can be fine-tuned by using $\tau_1^{**} \pm$ increment and $\tau_2^{**} \pm$ increment as two separate ranges for further convergence.

(3) *Triple Exponential Equation*.
Consider

$$I = I_0 + \alpha_1 \exp\left(-\frac{t}{\tau_1}\right) + \alpha_2 \exp\left(-\frac{t}{\tau_2}\right) + \alpha_3 \exp\left(-\frac{t}{\tau_3}\right). \tag{12}$$

Again, the calculated $\tau_1^{**}$, $\tau_2^{**}$, $\tau_{\min}$, and $\tau_{\max}$ are used to set the ranges as

$$\tau_{\min} < \tau_1 < \tau_1^{**}, \qquad \tau_1^{**} < \tau_2 < \tau_2^{**}, \qquad \tau_2^{**} < \tau_3 < \tau_{\max}. \tag{13}$$

Once again, the least squares method is used to find the set $\tau_1^{***}$, $\tau_2^{***}$, and $\tau_3^{***}$ with the minimum deviation, and the process continues in the same way as in the double exponential case.

In general, the branch search method reduces the search CPU time: it spends about 1 min of CPU time to find optimal single, double, and triple exponential equations for all the cases used in this study. However, it may fail for a few cases, since if the single exponential solution $\tau_1^{*}$ is not appropriate, the search may also fail to converge for the double and triple cases.
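The two search schemes can be sketched on top of `solve_linear_part` from the previous snippet. The grid sizes, increments, and number of refinement passes below are illustrative placeholders, not the exact values used in the paper:

```python
import numpy as np
from itertools import product

def exhaustive_search(t, I, tau_ranges):
    """Section 4.1: try every tau combination from the predefined ranges,
    keep only those with tau_i < tau_{i+1}, and retain the combination
    with the minimum deviation D."""
    best = (np.inf, None, None, None)        # (D, taus, I0, alphas)
    for taus in product(*tau_ranges):
        if any(a >= b for a, b in zip(taus, taus[1:])):
            continue                         # enforce tau_i < tau_{i+1}
        I0, alphas, D = solve_linear_part(t, I, taus)
        if D < best[0]:
            best = (D, taus, I0, alphas)
    return best

def branch_search_single(t, I, tau_min, tau_max, n_grid=50, n_passes=4):
    """Section 4.2, step (1): coarse grid for the single exponential (8),
    then repeatedly narrow the range to tau_1* +/- one increment.  The
    resulting tau_1* would seed the double-exponential ranges in (11)."""
    lo, hi = tau_min, tau_max
    for _ in range(n_passes):
        grid = np.linspace(lo, hi, n_grid)
        D, (tau1,), I0, alphas = exhaustive_search(t, I, [grid])
        step = (hi - lo) / (n_grid - 1)
        lo, hi = max(tau_min, tau1 - step), min(tau_max, tau1 + step)
    return tau1, I0, alphas, D
```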
## 5. Interpretation of Data

Data from various sources were derived by different means, as follows.

(1) Data from existing articles: previous studies have provided sets of well-documented constants in a specified multiple exponential form. This study uses these constants to regenerate point data for the decay curve, and the decay constants of the single, double, and triple exponential equations are then calculated separately.

(2) Data from the DIN standard: these data represent points on the associated decay curves. This study used them to compute the associated decay constants for the double exponential form (3).

(3) Data generated from the experimental tests in this study: to study the effect of the irradiation duration and the dose exposure on the PLUM intensity profile of the phosphorescent material, data from several experimental tests were provided.

### 5.1. Data from Existing Articles

Table 2 contains two different data types from the original sources: results from triple exponential equations and results from double exponential equations. Table 2 shows the deviations resulting from all three types of exponential equations and indicates that the single exponential equations always produced larger deviations than the double or triple ones. The deviations for the triple exponential equations were the smallest, which suggests that the triple exponential equation may be the best of the three models for interpretation. Table 2 also indicates that $\tau_2$ in the double exponential equation is closely associated with $\tau_3$ in the triple exponential equation; similar relationships were found between the $\alpha_2$ and $\alpha_3$ constants. The averages of the three decay constants of the triple exponential equations were 18.45, 91.34, and 360 s, which fall within the ranges suggested by Pedroza-Montero et al. [13].

Table 2: Fitting parameters of the decay curves obtained using the single, double, and triple exponential equations. The suffixes -1, -2, and -3 indicate the use of single, double, and triple exponential equations in the interpretation, respectively.
| Source | Deviation % | $I_0$ | $\alpha_1$ | $\alpha_2$ | $\alpha_3$ | $\tau_1$ (min) | $\tau_2$ (min) | $\tau_3$ (min) |
|---|---|---|---|---|---|---|---|---|
| Sun-1 [18] | 0.1008 | 0.3034 | 3.6530 | | | 0.2333 | | |
| Sun-2 | 0.0020 | 0.2205 | 3.5206 | 0.2985 | | 0.3176 | 4.1248 | |
| Sun-3 (given) | | 0.2200 | 2.8790 | 0.6890 | 0.2520 | 0.1280 | 0.7227 | 4.6237 |
| Chang-1 [12] | 0.0392 | 0.0687 | 4.2653 | | | 0.3657 | | |
| Chang-2 | 0.0019 | 0.0316 | 3.8760 | 0.4269 | | 0.2371 | 1.8048 | |
| Chang-3 (given) | | 0.0315 | 2.0270 | 1.8547 | 0.4212 | 0.0567 | 0.2900 | 1.8200 |
| Sharma-1 [17] | 0.0548 | 0.0 | 2.8928 | | | 0.6900 | | |
| Sharma-2 | 0.0036 | 0.0 | 2.7457 | 0.2431 | | 0.5500 | 4.3567 | |
| Sharma-3 (given) | | 0.0 | 2.6416 | 0.2344 | 0.1153 | 0.5328 | 1.8554 | 7.1918 |
| Han-1 [15] | 0.0664 | 0.0684 | 8.8483 | | | 0.88002 | | |
| Han-2 (given) | | 0.0082 | 7.9885 | 0.6772 | | 0.7789 | 3.2925 | |
| Han-3 | 0.00085 | 0.0071 | 7.9132 | 0.3798 | 0.3733 | 0.7250 | 2.2800 | 3.9900 |
| Xie-1 [11] | 0.1350 | 0.0 | 6.7849 | | | 0.898 | | |
| Xie-2 (given) | | 0.0 | 4.4678 | 2.4138 | | 0.2523 | 2.6139 | |
| Xie-3 | 0.0107 | 0.0 | 4.3009 | 1.3180 | 1.2698 | 0.2400 | 1.700 | 3.4200 |
| Lei-1 [19] | 0.0 | 0.0 | 18.352 | | | 3.1500 | | |
| Lei-2 (given) | | 0.4787 | 3.5735 | 0.5318 | | 0.5735 | 4.74783 | |
| Lei-3 | 0.00043 | 0.4786 | 2.1442 | 1.4295 | 0.5332 | 0.4639 | 0.6943 | 4.7584 |

Figure 2 illustrates the relationship among the luminous intensity, $\tau_3$, and $\alpha_3$ (Table 2). The figure indicates that $\alpha_3$, rather than $\tau_3$, is most strongly correlated with the associated luminous intensity. Note that Chang et al. [12] used Sr3Al2O6 : Eu3+, Dy3+, which may exhibit different decay characteristics; that is, they did not use the same material as the others, so it is reasonable to disregard this point in Figure 2. The remaining data show consistent behavior and a high correlation between the luminous intensity and $\alpha_3$, whereas it is hard to find any relationship between the luminous intensity and $\tau_3$.

Figure 2: Relationships between the luminous intensity (measured at 2 min), $\alpha_3$, and $\tau_3$ for the data listed in Table 2.

### 5.2. Data from the DIN Standard

Figure 3 shows the luminous profiles of the DIN standard, graded from A to G for industrial usage. Table 3 shows the calculated constants corresponding to the profiles in the figure. Again, the use of the single exponential equation in the interpretation results in the largest deviations and the smallest $\tau_1$ value. The table also indicates that the grade with the largest luminous profile (G) does not necessarily have the largest decay constant ($\tau_2$).

Table 3: Calculated constants for the DIN luminescence standard for grades A to G, where the luminous intensity increases from grade A to grade G. A double exponential equation was used for the interpretation.

| Grade | $\alpha_1$ | $\alpha_2$ | $\tau_1$ (min) | $\tau_2$ (min) |
|---|---|---|---|---|
| A | 0.1587 | 0.0162 | 3.7138 | 34.8572 |
| B | 0.2934 | 0.0314 | 4.0907 | 39.0857 |
| C | 1.0415 | 0.1004 | 3.5680 | 36.8189 |
| D | 1.5453 | 0.2046 | 3.7503 | 33.6278 |
| E | 2.5939 | 0.2570 | 3.9084 | 38.4057 |
| F | 3.2869 | 0.3366 | 3.9448 | 37.8302 |
| G | 4.3611 | 0.4439 | 3.8110 | 34.8920 |

Figure 3: Afterglow profiles of the DIN luminous standard, where the luminous intensity is measured at 2 min.

This finding seems inconsistent with the argument of Chang et al. [12], who stated that the larger the decay time, the better the afterglow properties. On the contrary, this study indicates that larger values of the slow-component constant $\alpha_L$ are associated with better afterglow properties, which also holds for the cases in Table 2. The upper right-hand side of Figure 3 displays the curves for the luminous intensity and the slow-component constants, providing evidence that $\alpha_L$ is more directly correlated with the luminous intensity than $\tau_L$.

### 5.3. Data from Experimental Tests

Table 4 lists the results of the experimental tests conducted in this study. The patches come in two thicknesses, 0.4 mm and 0.6 mm.
Table 1 shows the four luminous intensities used to irradiate the specimens. The irradiation conditions follow the DIN and JIS luminance standards.

Table 4: Fitting parameters of the experimental decay curves measured in this study, where a leading 4 denotes 0.4 mm and a leading 6 denotes 0.6 mm thickness, and the values 0050, 0200, 1000, and 3000 denote the luminous intensity of the excitation (lx). A double exponential equation was applied in the interpretation.

| Name | $\alpha_1$ | $\alpha_2$ | $\tau_1$ (min) | $\tau_2$ (min) |
|---|---|---|---|---|
| 4-0050 | 0.5802 | 0.2180 | 1.044 | 12.8277 |
| 4-0200 | 6.4165 | 0.5041 | 0.7377 | 13.8277 |
| 4-1000 | 7.0333 | 0.7574 | 1.0377 | 12.734 |
| 4-3000 | 5.489 | 1.0582 | 1.6433 | 12.414 |
| 6-0050 | 0.2803 | 0.1627 | 2.6577 | 16.948 |
| 6-0200 | 1.2105 | 0.5252 | 2.5777 | 15.6677 |
| 6-1000 | 4.8986 | 1.0896 | 1.351 | 10.961 |
| 6-3000 | 10.3447 | 1.3694 | 1.2643 | 13.068 |

The corresponding luminescence decay curves are shown in Figures 4 and 5 for the 0.4 mm and 0.6 mm thick patches, respectively. Clearly, higher irradiation intensity and thicker specimens yielded higher afterglow luminous intensity after the input illumination ceased. As expected, the thicker the patch, the better the afterglow luminous behavior. For the same patch, better luminous behavior was accompanied by a higher slow-component constant $\alpha_2$, as depicted at the upper right-hand side of Figures 4 and 5. On the other hand, the $\tau_2$ curve behaves differently in the two figures: Figure 4 shows that $\tau_2$ remains almost constant under different irradiation intensities, whereas Figure 5 indicates that $\tau_2$ decreases with increasing irradiation intensity.

Figure 4: Afterglow profiles of the 0.4 mm thick patch, where the luminous intensity is measured at 2 min.

Figure 5: Afterglow profiles of the 0.6 mm thick patch, where the luminous intensity is measured at 2 min.
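The data-regeneration route of Section 5, item (1), is easy to reproduce with the sketches above: evaluate a published triple-exponential fit, then refit with fewer terms and compare the deviations. The grids below are coarse illustrative choices; the aim is only the qualitative Table 2 pattern (fewer terms yield a larger deviation and different decay constants), not the exact tabulated values:

```python
import numpy as np
# Uses decay_model(), solve_linear_part(), and exhaustive_search() from
# the earlier sketches.

# Regenerate point data from the "Sun-3 (given)" constants of Table 2.
t = np.linspace(0.1, 10.0, 200)   # minutes
I = decay_model(t, 0.2200, [2.8790, 0.6890, 0.2520], [0.1280, 0.7227, 4.6237])

# Refit with single and double exponentials over coarse illustrative grids.
single = exhaustive_search(t, I, [np.linspace(0.05, 5.0, 200)])
double = exhaustive_search(t, I, [np.linspace(0.05, 2.0, 60),
                                  np.linspace(0.5, 8.0, 60)])
print("single: tau =", single[1], " D =", single[0])
print("double: taus =", double[1], " D =", double[0])
```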
## 6. Conclusions

For an afterglow decay profile, it is important to adopt an appropriate sum of single exponential terms in the associated numerical simulation. Single, double, and triple exponential equations were applied in turn to approximate each data set, and the one with the minimum deviation and minimum total $\sum \tau_i$ was chosen. To solve the nonlinear equations involved, the exhaustive search and branch search methods were used for all the cases mentioned. Different decay constants were obtained depending on whether single, double, or triple exponentials were used to fit the curves for each case. When more exponential terms (say $n$ terms) are used in an equation to simulate a set of data, smaller $\alpha_n$ and larger $\tau_n$ values are found in the results.

For the commonly used strontium aluminate phosphors, the data from previous and present experimental work indicate that the last $\alpha_i$, rather than the last $\tau_i$, in an exponential equation is most strongly correlated with the afterglow characteristics of an object. The last $\alpha_i$ value is a good index of the amount of irradiation dose exposure and the quality of the afterglow properties (Figures 3, 4, and 5). However, the consistent behavior between the luminous intensity and the $\alpha_3$ and $\tau_3$ values within the same afterglow phosphor does not hold across different phosphors. These findings contradict other studies [12, 14, 15], which correlated the luminous intensity with the value of $\tau_3$ alone. We therefore conclude that further studies of this nature are needed to verify this finding.
As shown in Figure 2, our approach may not be appropriate for cross-linking parameters between different systems; this needs further detailed study. Moreover, other afterglow characteristics, such as the temperature effect or different phosphor materials, also need to be studied.
102524-2014-09-14_102524-2014-09-14.md
37,573
Modeling and Assessment of Long Afterglow Decay Curves
Chi-Yang Tsai; Jeng-Wen Lin; Yih-Ping Huang; Yung-Chieh Huang
The Scientific World Journal (2014)
Medical & Health Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2014/102524
102524-2014-09-14.xml
--- ## Abstract Multiple exponential equations have been successfully fitted to experimental long afterglow decay curve data for some phosphor materials by previous researchers. The calculated decay constants in such equations are used to assess the phosphorescence characteristics of an object. This study generates decay constants from experimental test data and from existing literature for comparison. It shows that the decay constants of an object may not be invariant and that they are dependent on phosphor material, temperature, irradiation intensity, sample thickness, and phosphor density for samples. In addition, the use of different numbers of exponential components in interpretation leads to different numerical results for decay constants. The relationship between the calculated decay constants and the afterglow characteristics of an object is studied and discussed in this paper. The appearance of the luminescence intensity is less correlated to the decay constants than to the time-invariant constants in an equation. --- ## Body ## 1. Introduction The formation of and mechanism for phosphorescence in strontium aluminate phosphors have been thoroughly investigated, with SrAl2O4 : Eu2+, Dy3+ phosphors as one of the most studied hosts and a major host for long afterglow commercial products. The afterglow attenuates exponentially, and its intensity time decay behavior follows a first-order, second-order, or general-order kinetic behavior [1, 2]: (1) I = I 0 ( 1 + γ t ) first-order , I = I 0 ( 1 + γ t ) 2 second-order , I = I 0 ( 1 + γ t ) n general-order, where I is the phosphorescence (PLUM) intensity at any time t after switching off the excitation illumination and I 0, n, and γ are constants.To model the measured afterglow curves, many researchers have adopted multiple (e.g., simple first-order kinetics) exponential equations as follows [3] (2) I = I 0 + α 1 exp ⁡ ( - t τ 1 ) , (3) I = I 0 + α 1 exp ⁡ ( - t τ 1 ) + α 2 exp ⁡ ( - t τ 2 ) , (4) I = I 0 + α 1 exp ⁡ ( - t τ 1 ) + α 2 exp ⁡ ( - t τ 2 ) + α 3 exp ⁡ ( - t τ 3 ) , where I, I 0, and t are as defined in (1), α i are time-invariant constants, and τ i are the decay constants (also called decay times) of the exponential components.The decay constants are a means to determine the decay rate of the rapid, medium, and slow exponential decay components. The single (2), double (3), or triple (4) exponential equations have been widely used in recent research that fitted the experimental decay curves and may correspond to one, two, or three trap centers in the assumed model. Tsai et al. [4] have dealt with the mathematical modeling problems and have suggested that the slope change of the curve profile is determined by τ i in the equation and that it has nothing to do with the magnitude of luminous intensity; they also examined the effect of the offset term (I 0) in an equation that had been fitted to experimental data for applications and claimed that this term should not occur in a good experimental environment [5].By using the triple exponential equation (4), Wu et al. [6] discovered that increasing the environmental temperature decreases the value of τ 3 (the largest decay constant that dominates the duration of the long afterglow). Kubo et al. [7] adopted the double exponential equations (i.e., (3)) in their analysis and claimed that the PLUM intensity of the slower decay component (larger one) decreased with temperature in the range from 305 to 458 K. Wu et al. 
[8] suggested that a general-order kinetics model provided the best fit for the data in their test as well as accommodating retrapping in the thermoluminescence and decay processes. He et al. [9] studied the charging process efficiency and assumed for simplicity that the traps have a single depth and hence that it was reasonable to use a single exponential equation (2) in the curve fitting process. Moreover, they pointed out that the PLUM was more intense as one pumped more irradiation energy after the fluorescence was first saturated.Other studies considered the calculated decay constants as indicative of trap depths to interpret the physical afterglow behaviors. Zhu et al. [10] calculated the constants and indicated that the decay times for phosphors were prolonged by encapsulation at room temperature. Xie et al. [11] showed that the decay characteristics reflected the fact that phosphors with different structures possessed different afterglow times. Chang et al. [12] concluded that the larger the value of the decay constant was, the slower the decay speed and the better the afterglow properties were. Pedroza-Montero et al. [13] found that the PLUM of SrA l 2 O 4 : E u 2 +, D y 3 + phosphors was characterized by at least three temporal processes around 0–25 s, 25–200 s, and 200–650 s. The most intense PLUM came from the fast decay parts (0–25 s and 25–200 s). The slower part (200–650 s) was the most enhanced for higher doses. Arellano-Tánori et al. [14] reviewed the real PLUM decay process and reported that it is complicated and that an exponential-type decay model was an oversimplification. The PLUM exhibited an intensity time decay behavior composed of three simple exponential terms with decay constants of 56 s, 180 s, and 1230 s.Han et al. [15] stressed that mere fitting of data was not physical evidence for the existence of only two trapping levels of different energies. There may be more than two trapping levels as suggested by many researchers. Nevertheless, they could always be averaged into two types of trapping levels, that is, shallow and deep. Meléndrez et al. [16] suggested that it is reasonable to assume that the PLUM intensity profiles depend on irradiation temperature and irradiation dose exposure.Based on the aforementioned arguments, four points are raised as follows.(1) By using single, double, or triple exponential equations in curve fitting for a case, different decay constants may be derived for the calculated parameters. (2) Is the lower-order component of the decay constants the most enhanced for higher doses? Does the larger value of this lower component correspond to better afterglow properties? (3) If the PLUM intensity profiles depend on the irradiation dose exposure, how do the related decay constants respond? (4) What are the major influencing factors on the three types of parameters in the above equations, that is,I 0, α i, and τ i?This study intends to answer these four points. Several forms of the decay curves have been collected, including those from existing articles, luminescence standards, and experimental tests conducted in the lab. To model and compute the decay constants of the long afterglow decay curve of the material studied, the exponential equations (2)–(4) were fitted to experimental data and the data were generated from referenced articles using a modified least squares method. We focus on the study of physical glow behaviors, or the so-called photoluminescence, not luminescence arising from chemical or other forms of reactions. ## 2. 
Experimental Tests This study uses PET (polyethylene terephthalate) resin as a matrix combined with strontium aluminate phosphors to form afterglow luminous thick films, with the ultimate aim of increasing its luminous intensity. Besides the phosphor material, the resin material, forming process, and physical properties are vitally important in the afterglow end product performance. The various specimens investigated in this study are manufactured with the same phosphor density but different thicknesses. The phosphor particle size is 0.1 mm in diameter, and the phosphor density in the mixture is 1 : 1 (PET versus phosphors). We use a mesh print technique to create the patches, printing layer by layer on top of a substrate with each layer being about 0.1 mm thick. In other words, to obtain a 0.6 mm thick patch, it was necessary to make six runs of the print process.The patch thickness and exposure dose are two major factors considered in this study. DIN 67510 and JIS 9107-2008 [20] luminescence standards were adopted. Table 1 lists four test conditions used in this study for comparison. The room temperature in the experimental tests was 25°C.Table 1 The luminous intensity and exposure duration used in this study. Regulation code Luminous intensity (lx) Duration (min) DIN 67510 1000 50 515 JIS Z 9100 200 20 This study 3000 5 ### 2.1. Materials In this study, the investigated specimens were made by using SrAl2O4 : Eu2+, Dy3+ phosphor powder mixed with 60% PET resin in weight, which was printed on top of the PET plastic white substrate to form a plastic patch with thicknesses between 0.4 mm and 0.6 mm. ### 2.2. Photoluminescence Experimental Test An oven was designed for this study, inside which there are four 6500 K, 100 W adjustable intensity Xenon lamps, attached to the top and sidewalls to excite afterglow specimens. The lamps could be switched on independently to meet the required luminescence intensity. A transparent glass door was installed behind the metal door in order that the metal door might be opened for monitoring the luminescent behavior of specimens without disturbing the interior temperature of the oven.A computer-controlled luminance meter (Konica Minolta LS-100) was used to measure the luminous intensity of the afterglow specimens of this study. In order to keep the same irradiation conditions among different tests before irradiating samples, samples were heated to 70°C for 6 hours to remove any acquired and residual thermoluminescence prior to cooling to the desired experimental temperature. ## 2.1. Materials In this study, the investigated specimens were made by using SrAl2O4 : Eu2+, Dy3+ phosphor powder mixed with 60% PET resin in weight, which was printed on top of the PET plastic white substrate to form a plastic patch with thicknesses between 0.4 mm and 0.6 mm. ## 2.2. Photoluminescence Experimental Test An oven was designed for this study, inside which there are four 6500 K, 100 W adjustable intensity Xenon lamps, attached to the top and sidewalls to excite afterglow specimens. The lamps could be switched on independently to meet the required luminescence intensity. A transparent glass door was installed behind the metal door in order that the metal door might be opened for monitoring the luminescent behavior of specimens without disturbing the interior temperature of the oven.A computer-controlled luminance meter (Konica Minolta LS-100) was used to measure the luminous intensity of the afterglow specimens of this study. 
In order to keep the same irradiation conditions among different tests before irradiating samples, samples were heated to 70°C for 6 hours to remove any acquired and residual thermoluminescence prior to cooling to the desired experimental temperature. ## 3. Slow Decay Component As in (4), the terms on the right hand side of the equations with subscripts 1, 2, and 3 are assigned as fast, medium, and slow components. In other words, they are the terms representing the major influences for different periods of afterglow luminescence duration. At large t, the dominant term is the one corresponding to the slowest component. This may be extracted as (5) I L = α L exp ⁡ ( - t τ L ) , in which I L is the long term intensity at time t, α L is the last time-invariant constant corresponding to the slowest decay component (i.e., α 2 in (3) or α 3 in (4)), and τ L is the corresponding decay constant.In the afterglow luminescence, the slow component is responsible for the long persistent behavior [19]; it shows that the initial intensity for this term is α L, and the decay rate of the luminous intensity is proportional to τ L; that is, larger values for α L and τ L would result in a higher intensity for (5). Figure 1 displays two figures cited from two articles [12, 17]. The right hand side figure shows that the three components (fast, medium, and slow) of a curve intersect at about 1.9 min, and the slow decay component, as shown in (5), is the major contributor to the luminous light. The left hand side figure shows that the slow component becomes dominant after 1 min compared to the other two components. This is the major reason researchers focus their studies on the slow component term in an equation when dealing with afterglow luminescent related topics.The figures of fast, medium, and slow component in an equation. Data cited from [12, 17]. (a) (b) ## 4. Numerical Simulations Many researchers [2, 3, 6–16] have assessed the phosphorescent characteristics according to the decay constants calculated from the use of curve fitting techniques. Decay curves were fitted to the sum of a few exponential components, each with its own decay constant. Several computer programs were written for this study to calculate the related constants from the experimental data and also other data from referenced articles.The following equation is a matrix expression for a linear relationship between a set of data and a set of parameters:(6) [ A ] n × m { X } m × 1 = { B } n × 1 , in which A is a coefficient matrix, n is the number of components in an equation, m is the number of data points, X is the parameters of the components, and B is the known data.Equations (2) to (4) are nonlinear equations. The three types of parameters to be calculated include α i, τ i, and I 0. 
The process and procedures used in this study are described as follows:(1) some values were assigned toτ i with the restriction τ i < τ i + 1 to use any one of (2)–(4); (2) the processed exponential equation could be transformed to a linear equation as in (6); α i and I 0 were then calculated using the least squares method to simulate a set of experimental data; (3) any predefinedτ i could generate an associated deviation (D) by using (7) to represent the difference between the computed values and the used data set: (7) D = ∑ i = 1 q [ y i - f ( x i ) ] 2 q , where the deviation, D, is defined as the root mean square offset between the experimental data y i and the computed value f ( x i ) through the use of one of (2)–(4); (4) as theτ i sampled a wide range of values, the minimum deviation case was revealed. However, in addition to the minimum deviation concerned, a minimum ∑ τ i total was employed to filter out the unfit equations.There are two different schemes proposed in this study to execute the least squares method. They are described in the following two subsections. ### 4.1. Exhaustive Search This search process was to set fixed ranges forτ i values with a constant increment and the restriction that τ i < τ i + 1. For example, the search range for τ 1 is from 1 to 1.0 with 0.001 increment, that is, 1000 elements in the range. For τ 2, it is 1.5 to 3.0 with 0.001 as increment, and for τ 3, it is 4.0 to 10.0 with 0.002 increment. The relative ordering τ 1 < τ 2 < τ 3 was retained in this process.This straightforward method searches through a predefined range forτ i and records the calculated D (deviation), I 0, and α i. Then, it is possible to choose one set of α i and τ i that fit with the minimum D and some predefined criteria. ### 4.2. Branch Search The exhaustive search method required searching with a wide fixed range and inevitably was CPU-intensive. To be more specific in setting appropriate search ranges, a branch search method was proposed. This study used (2) to process the single exponential equation first and the obtained value τ 1 * was used to estimate the search ranges for the double exponential. The obtained values τ 1 * * and τ 2 * * were then used to set the ranges for the triple exponential. The three process steps may be described as follows.( 1) Single Exponential Equation. Consider (8) I = I 0 + α 1 exp ⁡ ( - t τ 1 ) . Two values are assigned as boundaries, τ min ⁡ and τ max ⁡, and an increment is assigned to process the problem: (9) τ min ⁡ < τ 1 < τ max ⁡ . The least squares method is first used to find τ 1 * with the minimum deviation among all. In order to achieve better results, the searching ranges were narrowed down as τ 1 * ± increment and the search was repeated until the increment was small enough.( 2) Double Exponential Equation. Consider (10) I = I 0 + α 1 exp ⁡ ( - t τ 1 ) + α 2 exp ⁡ ( - t τ 2 ) .In this step, the calculatedτ 1 *, τ min ⁡, and τ max ⁡ are used to set the ranges as (11) τ min ⁡ < τ 1 < τ 1 * , τ 1 * < τ 2 < τ max ⁡ . The least squares method is used to find a pair of values τ 1 * * and τ 2 * * with the minimum deviation. The parameters can be fine-tuned by using τ 1 * * ± increment and τ 2 * * ± increment as two separate ranges for further convergence processes.( 3) Triple Exponential Equation. 
Consider(12) I = I 0 + α 1 exp ⁡ ( - t τ 1 ) + α 2 exp ⁡ ( - t τ 2 ) + α 3 exp ⁡ ( - t τ 3 ) .Again, the calculatedτ 1 * *, τ 2 * *, τ min ⁡, and τ max ⁡ are provided to set the following ranges as: (13) τ min ⁡ < τ 1 < τ 1 * * , τ 1 * * < τ 2 < τ 2 * * , τ 2 * * < τ 3 < τ max ⁡ .Once again, we use the least squares method to find a set ofτ 1 * * *, τ 2 * * *, and τ 3 * * * with the minimum deviation. Further processes continued in the same way as in the previous double exponential case.In general, the branch search method may reduce search CPU time, as it spends about 1 min of CPU time to search for an optimal single, double, and triple exponential equations simultaneously for all the cases used in this study. However, it may fail for a few cases, since if the single exponential solutionτ 1 * is not appropriate, then it may also fail to converge for the double and triple cases. ## 4.1. Exhaustive Search This search process was to set fixed ranges forτ i values with a constant increment and the restriction that τ i < τ i + 1. For example, the search range for τ 1 is from 1 to 1.0 with 0.001 increment, that is, 1000 elements in the range. For τ 2, it is 1.5 to 3.0 with 0.001 as increment, and for τ 3, it is 4.0 to 10.0 with 0.002 increment. The relative ordering τ 1 < τ 2 < τ 3 was retained in this process.This straightforward method searches through a predefined range forτ i and records the calculated D (deviation), I 0, and α i. Then, it is possible to choose one set of α i and τ i that fit with the minimum D and some predefined criteria. ## 4.2. Branch Search The exhaustive search method required searching with a wide fixed range and inevitably was CPU-intensive. To be more specific in setting appropriate search ranges, a branch search method was proposed. This study used (2) to process the single exponential equation first and the obtained value τ 1 * was used to estimate the search ranges for the double exponential. The obtained values τ 1 * * and τ 2 * * were then used to set the ranges for the triple exponential. The three process steps may be described as follows.( 1) Single Exponential Equation. Consider (8) I = I 0 + α 1 exp ⁡ ( - t τ 1 ) . Two values are assigned as boundaries, τ min ⁡ and τ max ⁡, and an increment is assigned to process the problem: (9) τ min ⁡ < τ 1 < τ max ⁡ . The least squares method is first used to find τ 1 * with the minimum deviation among all. In order to achieve better results, the searching ranges were narrowed down as τ 1 * ± increment and the search was repeated until the increment was small enough.( 2) Double Exponential Equation. Consider (10) I = I 0 + α 1 exp ⁡ ( - t τ 1 ) + α 2 exp ⁡ ( - t τ 2 ) .In this step, the calculatedτ 1 *, τ min ⁡, and τ max ⁡ are used to set the ranges as (11) τ min ⁡ < τ 1 < τ 1 * , τ 1 * < τ 2 < τ max ⁡ . The least squares method is used to find a pair of values τ 1 * * and τ 2 * * with the minimum deviation. The parameters can be fine-tuned by using τ 1 * * ± increment and τ 2 * * ± increment as two separate ranges for further convergence processes.( 3) Triple Exponential Equation. Consider(12) I = I 0 + α 1 exp ⁡ ( - t τ 1 ) + α 2 exp ⁡ ( - t τ 2 ) + α 3 exp ⁡ ( - t τ 3 ) .Again, the calculatedτ 1 * *, τ 2 * *, τ min ⁡, and τ max ⁡ are provided to set the following ranges as: (13) τ min ⁡ < τ 1 < τ 1 * * , τ 1 * * < τ 2 < τ 2 * * , τ 2 * * < τ 3 < τ max ⁡ .Once again, we use the least squares method to find a set ofτ 1 * * *, τ 2 * * *, and τ 3 * * * with the minimum deviation. 
Further processes continued in the same way as in the previous double exponential case.In general, the branch search method may reduce search CPU time, as it spends about 1 min of CPU time to search for an optimal single, double, and triple exponential equations simultaneously for all the cases used in this study. However, it may fail for a few cases, since if the single exponential solutionτ 1 * is not appropriate, then it may also fail to converge for the double and triple cases. ## 5. Interpretation of Data Data from various resources were derived via different means described as follows.(1) Data from existing articles: previous studies have provided a set of well-documented constants in a specified multiple exponential form. This study uses these constants to regenerate point data for the decay curve, and then the decay constants of the single, double, and triple multiple exponential equations were calculated separately. (2) Data referred to the DIN standard: those data represent some points on the associated decay curves. This study used the data to compute the associated decay constants for the double exponential equation forms as in (3). (3) Data generated from the experimental tests in this study: in order to study the effect of the irradiation duration as well as the dose exposure on the PLUM intensity profile of the phosphorescent material, we provided data from several experimental tests for use. ### 5.1. Data from Existing Articles Table2 contains two different data types in the original sources: one is the result from triple exponential equations and the other is the result from the double exponential equations. Table 2 shows the deviations resulting from all three types of exponential equations and indicates that the single exponential equations always resulted in larger deviations than the double or triple ones. The simulation deviations induced in the triple exponential equations were the smallest among them, which means that the use of triple exponential equations in interpretation may be the best model among the three. Table 2 indicates that the τ 2 in the double exponential equation were closely associated with τ 3 in the triple exponential equation. Similar relationships were found in α 2 and α 3 constants. The averages of the three decay constants of the triple exponential equations were 18.45, 91.34, and 360 s, which fall within the range suggested by Pedroza-Montero et al. [13].Table 2 Fitting parameters of the decay curves by using the single, double, and triple exponential equations. -1, -2, and -3 indicate the use of single, double, and triple exponential equations in interpretation, respectively. 
Sources Deviation % I 0 α 1 α 2 α 3 τ 1 (min) τ 2 (min) τ 3 (min) Sun-1 [18] 0.1008 0.3034 3.6530 0.2333 Sun-2 0.0020 0.2205 3.5206 0.2985 0.3176 4.1248 Sun-3 (given) 0.2200 2.8790 0.6890 0.2520 0.1280 0.7227 4.6237 Chang-1 [12] 0.0392 0.0687 4.2653 0.3657 Chang-2 0.0019 0.0316 3.8760 0.4269 0.2371 1.8048 Chang-3 (given) 0.0315 2.0270 1.8547 0.4212 0.0567 0.2900 1.8200 Sharma-1 [17] 0.0548 0.0 2.8928 0.6900 Sharma-2 0.0036 0.0 2.7457 0.2431 0.5500 4.3567 Sharma-3 (given) 0.0 2.6416 0.2344 0.1153 0.5328 1.8554 7.1918 HAN-1 [15] 0.0664 0.0684 8.8483 0.88002 HAN-2 (given) 0.0082 7.9885 0.6772 0.7789 3.2925 HAN-3 0.00085 0.0071 7.9132 0.3798 0.3733 0.7250 2.2800 3.9900 Xei-1 [11] 0.1350 0.0 6.7849 0.898 Xei-2 (given) 0.0 4.4678 2.4138 0.2523 2.6139 Xei-3 0.0107 0.0 4.3009 1.3180 1.2698 0.2400 1.700 3.4200 Lei-1 [19] 0.0 0.0 18.352 3.1500 Lei-2 (given) 0.4787 3.5735 0.5318 0.5735 4.74783 Lei-3 0.00043 0.4786 2.1442 1.4295 0.5332 0.4639 0.6943 4.7584Figure2 illustrates the relationship among the luminous intensity, τ 3, and α 3 (Table 2). This figure indicates that the value α 3, rather than τ 3, is most strongly correlated with the associated luminous intensity. It is noted that Chang et al. [12] used Sr 3 A l 2 O 6 : E u 3 +, D y 3 + that could reflect different decay characteristics. That is to say they did not use the same material compared to others. Hence, it is reasonable to ignore this point of Figure 2. The remaining data show consistent behavior and a high correlation between the luminous intensity and α 3 but it is hard to find any relationships between the luminous intensity and τ 3.Figure 2 Relationships between the luminous intensity (measured at 2 min),α 3, and τ 3 for the data listed in Table 2. ### 5.2. Data from the DIN Standard Figure3 shows the luminous profiles as provided by the DIN standard that grades from A to G for industry usage. Table 3 shows the calculated constants corresponding to the profiles in the figure. Again, the use of single exponential equation in interpretation results in the largest deviations and the smallest τ 1 value. Also, it indicates that the grade with the largest luminous profile (G) does not necessarily have the largest decay constant (τ 3).Table 3 Calculated constants for the DIN luminescence standard for different grades from A to G, where the luminous intensity increases as the grade number goes from A to G. A double exponential equation was used for interpretation. Terms α 1 α 2 τ 1 (min) τ 2 (min) A 0.1587 0.0162 3.7138 34.8572 B 0.2934 0.0314 4.0907 39.0857 C 1.0415 0.1004 3.5680 36.8189 D 1.5453 0.2046 3.7503 33.6278 E 2.5939 0.2570 3.9084 38.4057 F 3.2869 0.3366 3.9448 37.8302 G 4.3611 0.4439 3.8110 34.8920Figure 3 Afterglow profiles of the DIN luminous standard where the luminous intensity is measured at 2 min.This finding seems inconsistent with the argument of Chang et al. [12], who stated that the larger the value of decay time was, the better the afterglow properties were. On the contrary, this study indicates that larger α 3 values are associated with the better afterglow properties. This statement is also true for those cases in Table 2. The upper right hand side of Figure 3 displays the curves for luminous intensity, α 3, and τ 3, which provides evidence that α 3 is more directly correlated with the luminous intensity than τ 3. ### 5.3. Data from Experimental Tests Table4 lists the results of the experimental tests conducted in this study. There are two different thicknesses for the patches, that is, 0.4 mm and 0.6 mm. 
Table 1 shows the four different luminous intensities used to irradiate the specimens. The irradiation conditions follow the DIN and JIS luminance standards.Table 4 Fitting parameters of the experimental decay curves conducted in this study, where the value 4 denotes 0.4 mm and the value 6 denotes 0.6 mm in thickness; the values 0050, 0200, 1000, and 3000 represent the luminous intensity of excitation. A double exponential equation was applied in interpretation. Name α 1 α 2 τ 1 (min) τ 2 (min) 4-0050 0.5802 0.2180 1.044 12.8277 4-0200 6.4165 0.5041 0.7377 13.8277 4-1000 7.0333 0.7574 1.0377 12.734 4-3000 5.489 1.0582 1.6433 12.414 6-0050 0.2803 0.1627 2.6577 16.948 6-0200 1.2105 0.5252 2.5777 15.6677 6-1000 4.8986 1.0896 1.351 10.961 6-3000 10.3447 1.3694 1.2643 13.068The corresponding luminescence decay curves are shown in Figures4 and 5 for 0.4-mm and 0.6 mm thickness patches, respectively. It is obvious that the higher irradiation intensity and thicker specimens achieved higher afterglow luminous intensity after input illumination has ceased. As expected, the thicker the patch was, the better afterglow luminous behavior was. For the same patch, better luminous behavior was accompanied by higher α 3 constants as depicted at the upper right hand side of Figures 4 and 5. On the other hand, the τ 2 curve behaves differently in these two figures. Figure 4 shows that τ 2 remain almost constant under different irradiation intensity. However, in contradiction to this, Figure 5 indicates that τ 2 decreases with increasing irradiation intensity in a test.Figure 4 Afterglow profiles of the 0.4 mm thickness patch where the luminous intensity is measured at 2 min.Figure 5 Afterglow profiles of the 0.6 mm thickness patch where the luminous intensity is measured at 2 min. ## 5.1. Data from Existing Articles Table2 contains two different data types in the original sources: one is the result from triple exponential equations and the other is the result from the double exponential equations. Table 2 shows the deviations resulting from all three types of exponential equations and indicates that the single exponential equations always resulted in larger deviations than the double or triple ones. The simulation deviations induced in the triple exponential equations were the smallest among them, which means that the use of triple exponential equations in interpretation may be the best model among the three. Table 2 indicates that the τ 2 in the double exponential equation were closely associated with τ 3 in the triple exponential equation. Similar relationships were found in α 2 and α 3 constants. The averages of the three decay constants of the triple exponential equations were 18.45, 91.34, and 360 s, which fall within the range suggested by Pedroza-Montero et al. [13].Table 2 Fitting parameters of the decay curves by using the single, double, and triple exponential equations. -1, -2, and -3 indicate the use of single, double, and triple exponential equations in interpretation, respectively. 
## 6. Conclusions

For an afterglow decay profile, it is important to adopt an appropriate multiterm exponential equation in the associated numerical simulation. Single, double, and triple exponential equations were applied in turn to approximate each data set, and the one with the minimum deviation and total $\sum \tau_i$ was chosen. To solve the nonlinear simultaneous equations involved, exhaustive search and branch search methods were used for all the cases mentioned. Different decay constants were obtained depending on whether single, double, or triple exponentials were used to fit the curves. When more exponential terms (say $n$ terms) are used to simulate a data set, smaller $\alpha_n$ and larger $\tau_n$ values are found in the results.

For the commonly used strontium aluminate phosphors, the data from previous and present experimental work indicate that the last $\alpha_i$, rather than the last $\tau_i$, in an exponential equation is most strongly correlated with the afterglow characteristics of an object. The last $\alpha_i$ value is a good index of the amount of irradiation exposure and the quality of the afterglow properties (Figures 3, 4, and 5). However, the consistent behavior between the luminous intensity and the $\alpha_3$ and $\tau_3$ values within the same afterglow phosphor does not hold across different phosphors. These findings contradict other studies [12, 14, 15], which correlated the luminous intensity with the value of $\tau_3$ only. Thus, further studies of this nature are needed to verify this finding.
As shown in Figure 2, our approach may not be appropriate for cross-linking parameters between different material systems; this needs further detailed study. Moreover, other afterglow characteristics, such as the temperature effect, as well as different phosphor materials, also need to be studied.

---

*Source: 102524-2014-09-14.xml*
# Electrochemical and Microstructural Analysis of FeS Films from Acidic Chemical Bath at Varying Temperatures, pH, and Immersion Time

**Authors:** Ladan Khaksar; Gary Whelan; John Shirokoff
**Journal:** International Journal of Corrosion (2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1025261

---

## Abstract

The corrosion resistance and corrosion products of 4130 alloy steel have been investigated by depositing thin films of iron sulfide synthesized from an acidic chemical bath. Tests were conducted at varying temperatures (25°C–75°C), pH levels (2–4), and immersion times (24–72 hours). The corrosion behavior was monitored by the linear polarization resistance (LPR) method. X-ray Diffraction (XRD), Energy Dispersive X-ray (EDX) spectroscopy, and Scanning Electron Microscopy (SEM) were applied to characterize the corrosion products. The results show that, along with the formation of an iron sulfide protective film on the alloy surface, increasing temperature, increasing immersion time, and decreasing pH all directly increase the corrosion rate of the steel under the tested experimental conditions. It was also concluded that increasing temperature causes an initial increase of the corrosion rate followed by a large decrease due to transformation of the iron sulfide crystalline structure.

---

## Body

## 1. Introduction

The corrosion of steel in aqueous environments containing hydrogen sulfide (H2S) is of great interest to the oil and gas industry [1–5]. Unlike carbon dioxide corrosion, H2S corrosion always involves the formation of corrosion products that are predominantly iron sulfide (FeS) compounds with various phases. These corrosion product films should be characterized to elucidate the corrosion mechanism. It has been reported that the formation of FeS generally controls H2S corrosion [6]. However, there is still debate about how the initial corrosion product layers form.

It is well known that surface scale formation is one of the most important factors influencing the corrosion rate [7]. The scale slows down the corrosion process by presenting a diffusion barrier for the species involved and by covering the underlying steel and preventing its further dissolution. Scale growth depends primarily on the kinetics of scale formation [8].

H2S corrosion of the metal surface also depends strongly on the type of corrosion product films formed during the corrosion process. The precipitation rate, or the formation of these films, depends on various environmental factors and on the concentrations of the species involved. The stability, protectiveness, and adherence of these films determine the nature and the rate of corrosion [9, 10].
It is important to note that, in contrast to the single type of iron carbonate formed in CO2 corrosion, many types of FeS may form during H2S corrosion, such as amorphous ferrous sulfide, mackinawite, cubic ferrous sulfide, smythite, greigite, pyrrhotite, troilite, pyrite, and marcasite [11–18].

In aqueous solutions of H2S, two mechanisms have been proposed for the formation of FeS films, namely, dissolution of iron followed by precipitation of FeS, and sulfide ion adsorption followed by direct film formation [19].

The first theory holds that the FeS layer is formed by precipitation only when the FeS concentration reaches the solubility limit, analogous to how precipitation equilibrium governs iron carbonate formation. However, if this is true, the kinetics of FeS formation must be much faster than those of iron carbonate. Even in cases where FeS is highly undersaturated in the bulk, it can still form on the steel surface. This is suspected to be due to the high surface pH caused by the consumption of hydronium ions by corrosion, together with the locally high ferrous ion concentration, resulting in supersaturation of FeS at the steel surface. FeS therefore forms relatively fast on the steel surface, irrespective of the bulk conditions [20–22].

The other theory, proposed by Shoesmith et al., holds that the first layer of mackinawite is generated by a direct, solid-state reaction between the steel surface and H2S [2, 19]. Mackinawite then grows with time. The growth rate of the corrosion product layer depends on the corrosion rate as well as on the water chemistry (pH, temperature, and so forth). It has been found that when the FeS thickness reaches a critical value, the corrosion product layer cracks owing to the development of internal stresses [6, 23]. Corrosive species such as H2S or hydrogen ions then diffuse through the now porous FeS layer and attack the steel surface. More FeS is subsequently formed, either by solid-state reaction between the steel and H2S, as happened initially, or by precipitation of FeS due to local supersaturation. This direct, solid-state reaction theory is supported by other studies [24, 25].

How FeS initially forms is pertinent because it can help to better predict H2S corrosion. However, research efforts have not yet reached agreement on this subject. The situation is complicated by the variety of FeS types that can form. Depending on the corrosion environment, mackinawite, pyrrhotite, greigite, smythite, marcasite, and pyrite are the six naturally occurring FeS minerals [5, 19].

Most previous studies in this area were conducted at high temperatures, usually in a gaseous H2S environment. In the present study, all experiments were performed at lower temperatures in an aqueous solution, because the actual operating temperature of some oil and gas production facilities and pipelines is below 100°C. Here, FeS films were synthesized on the metal alloy surface without H2S in the solution; instead, FeS was formed by chemical bath deposition of iron and sulfur ions at acidic pH under varying environmental conditions.

## 2. Experimental Procedure

### 2.1. Material and Sample Preparation

According to NACE MR0175/ISO 15156, the most common steel alloy for tubulars and tubular components in sour service is UNS G41XX0, formerly AISI 41XX [26]. 4130 steel is among the most common alloys used in industry.
This steel typically consists of 0.80–1.1 Cr, 0.15–0.25 Mo, 0.28–0.33 C, 0.40–0.60 Mn, 0.035 P, 0.040 S, 0.15–0.35 Si, and balance Fe. The working electrode was machined from the parent material into cylinders approximately 9 mm in both length and diameter. Prior to the experiments, all specimens were polished with silicon carbide paper of Coated Abrasive Manufacturers Institute (CAMI) grit designations 320, 600, and 1000, corresponding to average particle diameters of 36.0, 16.0, and 10.3 microns, and finally with 6-micron grit, and then cleansed with deionized water until a homogeneous surface was observed. The specimens were then quickly dried with cold air to avoid oxidation.

### 2.2. Electrolyte Solution Preparation and Synthesis of FeS Films

Owing to the inherent safety concerns associated with H2S gas, an alternative method of FeS film deposition was employed [27]. This method provides an acidic electrolyte solution with the potential to form a thin FeS layer on the steel surface, similar to what happens in a sour oil pipeline.

The acidic chemical bath contains 6.25 g iron(II) chloride (0.15 M), 12.60 g urea (1 M), and 31.55 g thioacetamide (2 M). Deionized water was used as the solvent in every experiment. Each reagent was mixed with 210 mL of deionized water and stirred with a magnetic stir rod for 30 minutes, and the solutions were then combined under stirring for an additional two hours to achieve a clear solution.

The mechanism of FeS formation in this acidic bath is the slow release of iron and sulfur ions within the solution, followed by the deposition of these ions on the alloy surface. The iron and sulfur ions are provided by the iron(II) chloride and the thioacetamide, respectively. The formation of FeS films from this bath depends on whether the deposition rate of the ionic product of iron and sulfur exceeds the solubility of FeS. Adding urea to the solution adjusts the balance between hydrolysis and deposition. The proposed reactions for this mechanism are as follows [27]:

$$\mathrm{FeCl_2 \longrightarrow Fe^{2+} + 2Cl^-} \tag{1}$$

$$\mathrm{CH_3CSNH_2 + H_2O \longleftrightarrow S^{2-} + CH_3CONH_2 + 2H^+} \tag{2}$$

$$\mathrm{CO(NH_2)_2 + H_2O \longleftrightarrow 2NH_3 + CO_2} \tag{3}$$

$$\mathrm{NH_3 + H_2O \longleftrightarrow NH_4^+ + OH^-} \tag{4}$$

$$\mathrm{Fe^{2+} + S^{2-} \longleftrightarrow FeS} \tag{5}$$

Finally, the overall reaction can be written as

$$\mathrm{Fe^{2+} + CH_3CSNH_2 + CO(NH_2)_2 + 2H_2O \longrightarrow FeS + CH_3CONH_2 + 2NH_4^+ + CO_2} \tag{6}$$
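As a quick consistency check on the recipe above, the snippet below recomputes the molarities from the quoted masses, assuming each reagent is dissolved in its own 210 mL of deionized water. The urea and thioacetamide masses reproduce the quoted 1 M and 2 M for the anhydrous compounds; the 6.25 g of iron(II) chloride matches 0.15 M only if the tetrahydrate FeCl2·4H2O is assumed, which the paper does not state explicitly.

```python
# Sanity check (not from the paper) of the chemical bath concentrations.
# Assumption: each reagent dissolved in 210 mL; the iron(II) chloride is
# taken as the tetrahydrate, since 6.25 g of anhydrous FeCl2 would give ~0.23 M.
MOLAR_MASS_G_PER_MOL = {
    "FeCl2·4H2O": 198.81,            # assumed hydrate form
    "urea CO(NH2)2": 60.06,
    "thioacetamide CH3CSNH2": 75.13,
}
MASS_G = {
    "FeCl2·4H2O": 6.25,
    "urea CO(NH2)2": 12.60,
    "thioacetamide CH3CSNH2": 31.55,
}
VOLUME_L = 0.210

for reagent, grams in MASS_G.items():
    molarity = grams / MOLAR_MASS_G_PER_MOL[reagent] / VOLUME_L
    print(f"{reagent}: {molarity:.2f} M")
# Prints ~0.15 M, ~1.00 M, and ~2.00 M, matching the stated concentrations.
```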
### 2.3. Corrosion Tests

Experiments were conducted in a multiport glass cell with a three-electrode setup at atmospheric pressure, based on the ASTM G5-94 standard for potentiostatic anodic polarization measurements [28]. A graphite rod was used as the counter electrode (CE), and saturated silver/silver chloride (Ag/AgCl) was used as the reference electrode (RE). In order to investigate the electrochemical characteristics of the corrosion films formed on the steel alloy, the specimens subjected to corrosion were used as working electrodes (WE).

An Ivium CompactStat potentiostat was used to perform the electrochemical corrosion measurements. The linear polarization resistance (LPR) technique was used to determine the corrosion rate. The applied potential range for the LPR measurements was from −0.02 V to 0.02 V at a scan rate of 0.125 mV/s. The potentiostat was set to take measurements at 0.5, 1, 2, 4, … hours, up to 24, 48, or 72 hours depending on the test. Prior to the start of each test, the sample was immersed in the solution for 55 minutes in accordance with ASTM G5-82 [28]. The pH was adjusted by adding deoxygenated hydrochloric acid.

Table 1 describes the experimental conditions. Three series of experiments were conducted to investigate the effects of temperature, immersion time, and pH on the corrosion behavior of FeS films.

Table 1 Experimental conditions.

| Condition number | Temperature (°C) | pH | Immersion time (hours) |
| --- | --- | --- | --- |
| 1 | 50 | 4 | 24 |
| 2 | 50 | 4 | 48 |
| 3 | 50 | 4 | 72 |
| 4 | 25 | 4 | 24 |
| 5 | 50 | 4 | 24 |
| 6 | 75 | 4 | 24 |
| 7 | 50 | 2 | 24 |
| 8 | 50 | 3 | 24 |
| 9 | 50 | 4 | 24 |

### 2.4. Surface Morphology Observation and Corrosion Product Analysis

Upon completion of corrosion testing, morphological characterization of the surface was conducted using an FEI Quanta 400 Scanning Electron Microscope (SEM) with Bruker Energy Dispersive X-ray (EDX) spectroscopy. The SEM was operated at 15 kV, with a working distance of 15 mm and a beam current of 13 nA. The crystal structure and chemical composition of the corrosion products were characterized by X-ray Diffraction (XRD), using a Rigaku Ultima IV X-ray diffractometer operating at 40 kV and 44 mA, with SEM-EDX used to confirm the chemical elements.
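Before turning to the results, the sketch below illustrates how an LPR scan like the ±20 mV measurement of Section 2.3 is conventionally converted into a corrosion rate in mm/year, via the Stern-Geary relation and the ASTM G102 conversion. This is a generic illustration rather than the authors' analysis; the Tafel slopes, polarization resistance, and exposed electrode area are assumed example values, since the paper reports only the final rates.

```python
# Generic LPR-to-corrosion-rate conversion (illustrative values, not the
# paper's data): Stern-Geary gives i_corr = B / Rp, and ASTM G102 converts
# the corrosion current density into a penetration rate.

def corrosion_rate_mm_per_year(rp_ohm, area_cm2, beta_a=0.12, beta_c=0.12,
                               eq_weight_g=27.92, density_g_cm3=7.87):
    """rp_ohm: polarization resistance from the slope of the LPR scan (ohm);
    beta_a, beta_c: assumed anodic/cathodic Tafel slopes (V/decade);
    eq_weight_g: equivalent weight for Fe -> Fe2+ (55.845 / 2 g/equivalent);
    density_g_cm3: density of steel. Returns the rate in mm/year."""
    b = (beta_a * beta_c) / (2.303 * (beta_a + beta_c))  # Stern-Geary constant, V
    i_corr_ua_cm2 = b / (rp_ohm * area_cm2) * 1e6        # corrosion current density
    return 3.27e-3 * i_corr_ua_cm2 * eq_weight_g / density_g_cm3

# Example with an assumed Rp of 5 kOhm and an assumed ~2.8 cm2 exposed area
# (roughly the lateral surface of the 9 mm cylinder of Section 2.1):
print(f"{corrosion_rate_mm_per_year(5000, 2.8):.3f} mm/year")
```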
## 3. Results and Discussion

### 3.1. Effect of Immersion Time on the Corrosion Mechanism and Products

Figure 1 shows the effect of 24, 48, and 72 hours of immersion on the corrosion rate of the specimens at 50°C and pH 4. During a corrosion process, the rate of the reaction is determined by the corrosion mechanism. Growth of a corrosion film limits further corrosion by acting as a diffusion barrier for the species involved in the process; gradually the corrosion rate decreases and the underlying steel is protected from further dissolution [8, 19]. Figure 1 indicates that in this experiment the LPR measurements did not agree with this expected decrease of the corrosion rate with increasing exposure time.
Instead, the corrosion rate increases gradually with immersion time, which could be explained as follows: (1) the corrosion rate is significantly greater than the rate of film formation on the surface; (2) the corrosion product adheres weakly to the alloy surface, causing it to detach, exposing unprotected alloy to the corrosive solution and increasing the possibility of localized corrosion.

The diffraction spectra in Figure 2 were search-matched against the XRD computer database (powder diffraction files (PDF) from the Joint Committee on Powder Diffraction Standards (JCPDS) and the International Centre for Diffraction Data (ICDD)). Figures 2(a)–2(d) identified 006-0696 iron Fe (alpha-Fe body-centered cubic (bcc) crystal type), and Figure 2(e) identified both 006-0696 iron Fe (alpha-Fe bcc) and 015-0037 mackinawite FeS (tetragonal FeS crystal type). These PDF numbers and names appear in the top right corner of each diffraction spectrum, and the corresponding line positions are superimposed onto the spectral peaks in each figure.

Figure 1 Corrosion rate with time at pH 4 and 50°C.

Figure 2 P-XRD analysis of the 4130 alloy surface at (a) 50°C, pH 4, and 48 hours; (b) 25°C, pH 4, and 24 hours; (c) 50°C, pH 2, and 24 hours; (d) the initial condition (uncorroded sample); and (e) 75°C, pH 4, and 24 hours.

Figure 2 shows the results of the crystal structure characterization of the steel alloy surfaces by powder (P-) XRD. Figures 2(a)–2(c) primarily indicate elemental Fe, consistent with the uncorroded sample in Figure 2(d); this is likely a result of a film thickness inadequate for detection by a P-XRD spectrometer. The thin nature of the corrosion film on the steel alloy surface is consistent with the literature on FeS deposition using the chemical bath alternative to H2S exposure [24].

As shown in Figure 2(e), a small amount of mackinawite was detected by the P-XRD spectrometer on the surface of the sample exposed to 75°C for 24 hours at pH 4. This result suggests that the film thickness increases at high temperature. In lieu of thin-film XRD analysis, P-XRD may be able to detect the thicker corrosion layers formed at relatively high temperatures.

Figure 3 shows the SEM images of corrosion product films formed under varying immersion times. After 24 hours of immersion, a uniform layer of corrosion product, consisting of small tetragonal mackinawite, covered the surface [29]. As shown in Figure 3(a), this thick corrosion layer is loose and full of blisters and cracks, causing the corrosion rate to accelerate by increasing the diffusion of electrochemical reaction species such as $\mathrm{Fe^{2+}}$ through the alloy surface. As mentioned in other studies, this initial mackinawite layer is easily cracked and peeled off by the stress resulting from the volume effect [30]. This failure of the initial corrosion layer gradually increases the corrosion rate and exposes more unprotected area to the solution.

Figure 3 SEM analysis of corrosion products of the 4130 alloy after (a) 24-, (c) 48-, and (e) 72-hour immersion at pH 4 and 50°C, and EDX analysis of corrosion products of the 4130 alloy after (b) 24-, (d) 48-, and (f) 72-hour immersion at pH 4 and 50°C.

Figure 3(b) shows the EDX analysis results of the corrosion product films after 24-hour immersion.
These results indicate that most of the corrosion products are iron-rich compounds such as mackinawite, which generally has lower corrosion resistance than sulfur-rich compounds such as troilite. The corrosion resistance of the FeS phases follows the sequence mackinawite < troilite < pyrrhotite < pyrite [24].

After 48 hours of immersion, the cracks in the corrosion scale become more severe and hexagonal crystals form beside the cracks, as shown in Figure 3(c). The EDX results for these hexagonal crystals indicate a high sulfur content, as shown in Figure 3(d).

Figure 3(e) shows that, after 72 hours of immersion, the initial corrosion product film has cracked and peeled off the surface of the specimen and a newly formed corrosion scale has become integrated. Larger, hexagonal corrosion products, mainly troilite crystals, formed on top of the new scale near the end of the 72 hours.

Generally, with increasing immersion time, more corrosion-resistant products such as troilite replaced the initially formed mackinawite on the alloy surface. This is supported by the EDX results in Figures 3(b), 3(d), and 3(f), which indicate that the major corrosion product changed from iron-rich mackinawite to sulfur-rich troilite. Despite the nucleation of stable troilite crystals on the metal surface, the LPR measurements showed that between 48 and 72 hours the corrosion rate increased dramatically from 0.0662 to 0.779 mm/year. This increase could be explained by localized fracture of the corrosion film due to weak adhesion of the scale to the surface, which provides a path for sulfide to penetrate and attack the metal substrate.

### 3.2. Effect of Temperature on the Corrosion Mechanism and Products

Figure 4 shows the effect of increasing temperature on the corrosion rate of specimens over the course of 24 hours at pH 4.

Figure 4 Corrosion rate with temperature at pH 4 and 24 hours.

During the first 12 hours, the corrosion rate increased dramatically as the temperature increased from 25°C to 75°C, which can be explained as follows: (1) increasing the temperature accelerates the diffusion of the species involved in the electrochemical reactions; (2) temperature can affect the concentration of corrosive species by preferentially evaporating one or more species out of the solution, which could affect the corrosion reaction.

Previous research has confirmed that temperature generally accelerates most of the chemical, electrochemical, and transport processes occurring during corrosion, and that both the measured cathodic and anodic currents increased with increasing temperature [31].

During the final 12 hours of testing at 75°C, the corrosion rate decreased significantly, from 2.2 to 0.25 mm/year, which could be related to the transformation of the mackinawite crystalline structure into the more resistant troilite structure. The SEM images in Figure 5 show significant fracturing of the surface film at 75°C, which explains the initially higher corrosion rate, due to the diffusion of species through the nonprotective mackinawite, followed by the decreased corrosion rate, due to the formation of the protective troilite crystalline structure on the alloy surface.

Figure 5 SEM image of corrosion products on the surface of the 4130 alloy after 24-hour immersion at pH 4 and 75°C.

### 3.3. Effect of pH on the Corrosion Mechanism and Products
Figure 6 shows the effect of pH on the corrosion rate of specimens immersed for 24 hours at 50°C.

Figure 6 Corrosion rate with pH at 50°C and 24 hours.

The results show that decreasing the pH from 4 to 2 slightly increases the corrosion rate. The protective nature and composition of the corrosion product depend greatly on the pH of the solution. At lower pH values (<3), iron is dissolved and FeS is mostly inhibited from precipitating on the metal surface owing to the very high solubility of the FeS phases [32]. Figure 7(a) shows the SEM image of the corrosion products on the surface of a specimen after 24 hours at pH 2 and 50°C. The corrosion products are loose and detached from the surface, so they could easily be removed by shear stress. The EDX results in Figure 7(b) indicate a high presence of sulfur compounds and a low presence of iron compounds on the surface.

Figure 7 ((a) and (c)) SEM images and ((b) and (d)) EDX analysis of corrosion products on the surface of the 4130 alloy after 24-hour immersion at 50°C and pH 2.

The SEM results in Figure 7(c) for the specimen immersed at pH 2 also show the presence of a pit on the surface, another reason for the higher corrosion rates seen at low pH. The corrosion pit in Figure 7(c) has a brittle cap covering the substrate; EDX analysis indicates that this cap is primarily sulfide, as shown in Figure 7(d).

At pH 3, the top surface layer displayed a flaky structure, as seen in Figure 8. Parts of the layer had spalled off, revealing much smaller crystallites under the outer layer. This layer is likely the result of the immediate precipitation of $\mathrm{Fe^{2+}}$ released by corrosion [32]. At pH values from 3 to 4, an inhibitive effect on the corrosion mechanism is seen, due to the formation of a protective marcasite FeS film on the electrode surface. At pH 3, small crystals were observed in areas where the outer layer had spalled off, as shown in Figure 8. At pH 4, the surface was mostly covered with a much denser layer, as shown in Figure 9.

Figure 8 SEM image of corrosion products on the surface of the 4130 alloy after 24-hour immersion at pH 3 and 50°C.

Figure 9 SEM image of corrosion products on the surface of the 4130 alloy after 24-hour immersion at pH 4 and 50°C.
## 4. Conclusions

The results of this research indicate that acidic chemical bath deposition can be successfully applied to investigate the formation and growth of FeS thin films under varying experimental conditions. Given the inherent safety concerns associated with sour corrosion experiments in laboratories, this acidic chemical bath deposition method could serve as a substitute for H2S in certain experiments to characterize the formation and transformation of FeS corrosion products.

The other primary findings of this research are as follows:

(i) Increasing the immersion time gradually increases the corrosion rate of 4130 chromium alloy steel in this experiment, owing to localized fracture of the corrosion layer, despite the transformation of the FeS crystalline structure from iron-rich mackinawite to sulfur-rich troilite compounds during the corrosion process.

(ii) Increasing the pH directly decreases the corrosion rate of 4130 alloy steel in this experiment, owing to the formation of a more resistant FeS film at higher pH values.

(iii) Increasing the temperature from 25°C to 75°C causes an initial increase of the corrosion rate of 4130 alloy steel followed by a decrease, likewise owing to the transformation of the FeS crystalline structure during the corrosion process.

---

*Source: 1025261-2016-08-23.xml*
--- ## Abstract The corrosion resistance and corrosion products of 4130 alloy steel have been investigated by depositing thin films of iron sulfide synthesized from an acidic chemical bath. Tests were conducted at varying temperatures (25°C–75°C), pH levels (2–4), and immersion time (24–72 hours). The corrosion behavior was monitored by linear polarization resistance (LPR) method. X-ray Diffraction (XRD), Energy Dispersive X-ray (EDX) spectroscopy, and Scanning Electron Microscopy (SEM) have been applied to characterize the corrosion products. The results show that, along with the formation of an iron sulfide protective film on the alloy surface, increasing temperature, increasing immersion time, and decreasing pH all directly increase the corrosion rate of steel in the tested experimental conditions. It was also concluded that increasing temperature causes an initial increase of the corrosion rate followed by a large decrease due to transformation of the iron sulfide crystalline structure. --- ## Body ## 1. Introduction The corrosion of steel in aqueous environments containing hydrogen sulfide (H2S) is of great interest to the oil and gas industry [1–5]. Unlike carbon dioxide corrosion, H2S corrosion always involves the formation of corrosion products that are predominantly iron sulfide (FeS) compounds with various phases. These corrosion product films should be characterized to illustrate the corrosion mechanism. It has been reported that the formation of the FeS generally controls the H2S corrosion [6]. However, there is still debate on how the initial corrosion product layers form.It is well known that surface scale formation is one of the most important factors that influences the corrosion rate [7]. The scale slows down the corrosion process by presenting a diffusion barrier for the species involved in the corrosion process and by covering and preventing the underlying steel from further dissolution. The scale growth depends primarily on the kinetics of scale formation [8].H2S corrosion on the metal surface is also strongly dependent on the type of corrosion product films formed on the surface of the metal during the corrosion process. The precipitation rate or the formation of these films depends on various environmental factors and the concentration of species. The stability, protectiveness, and adherence of these films determine the nature and the rate of corrosion [9, 10]. It is important to note that, in contrast to one single type of iron carbonate formed in CO2 corrosion, many types of FeS may form during H2S corrosion such as amorphous ferrous sulfide, mackinawite, cubic ferrous sulfide, smythite, greigite, pyrrhotite, troilite, pyrite, and marcasite [11–18].In aqueous solutions of H2S, two mechanisms were proposed for the formation of FeS films, namely, dissolution of iron followed by precipitation of FeS and sulfide ion adsorption followed by direct film formation [19].The first proposed theory is a possible mechanism for FeS formation in that the FeS layer is formed by precipitation only when its concentration reaches the solubility limit, analogous to how precipitation equilibrium governs the mechanism of iron carbonate formation. However, if this is to be true, the kinetics of FeS formation must be much faster than that of iron carbonate. In cases where FeS is highly undersaturated in the bulk, it can still be formed on the steel surface. 
This is suspected to be due to the high surface pH caused by consumption of hydronium ions by corrosion as well as locally high ferrous ion concentration, resulting in supersaturation of FeS on the steel surface. Therefore, FeS forms relatively fast on the steel surface, irrespective of the bulk conditions [20–22]. Another possible theory has been proposed by Shoesmith et al., which describes the idea that the first layer of mackinawite is generated by a direct, solid-state reaction between the steel surface and H2S [2, 19]. Mackinawite then grows with time. The corrosion product layer growth rate depends upon the corrosion rate as well as the water chemistry with regard to pH, temperature, and so forth. It has been found that when the thickness of FeS reaches a critical value, this corrosion product layer cracks due to the development of internal stresses [6, 23]. More corrosive species such as H2S or hydrogen ions diffuse through the now porous FeS layer and attack the steel surface. More FeS is then formed by either solid-state reaction between steel and H2S akin to what happened initially or precipitation of FeS due to local FeS supersaturation. This direct, solid-state reaction theory is supported by other researches [24, 25].How FeS initially forms is pertinent, because it can help to better predict the H2S corrosion. However, until now research efforts have not achieved agreement on this subject. The situation is complicated by the variety of types of FeS that can be formed. Depending on the conditions relating to the corrosion environments, mackinawite, pyrrhotite, greigite, smythite, marcasite, and pyrite are the six naturally occurring FeS minerals [5, 19].Most of the previous studies in this area are conducted at high temperatures and usually in gaseous H2S environment. In the present study, all the experiments are performed at lower temperature in an aqueous solution because the real temperature of some oil and gas production and pipelines is below 100°C. In this study, FeS films have been synthesized on the metal alloy surface without the presence of H2S in the solution. Rather, FeS was formed by chemical bath deposition of iron and sulfur ions at acidic pH levels under varying environmental conditions. ## 2. Experimental Procedure ### 2.1. Material and Sample Preparation According to NACE MR0175/ISO 15156, the most common steel alloy for tubulars and tubular components in sour service is UNS G41XX0, formerly AISI 41XX [26]. 4130 steel is among the most common alloys used in industry. This steel typically consists of 0.80–1.1 Cr, 0.15–0.25 Mo, 0.28–0.33 C, 0.40–0.60 Mn, 0.035 P, 0.040 S, 0.15–0.35 Si, and balanced Fe. The working electrode was machined from the parent material into cylinders having dimensions of approximately 9 mm length and diameter. Prior to the experiments, all specimens were polished with Coated Abrasive Manufacturers Institute (CAMI) grit designations 320, 600, and 1000 corresponding to average particle diameters 36.0, 16.0, and 10.3 microns and finally 6-micron grit silicon carbide paper and then cleansed with deionized water until a homogenous surface was observed. Following this, the specimens were quickly dried using cold air to avoid oxidation. ### 2.2. Electrolyte Solution Preparation and Synthesis of FeS Films Due to the inherent safety concerns associated with H2S gas, an alternative method of FeS film deposition was employed [27]. 
The alternative method provided an acidic electrolyte solution which has the potential to form thin FeS layer on the steel surface like what happens in the sour oil pipeline.This acidic chemical bath contains 6.25 g iron (II) chloride (0.15 M), 12.60 g urea (1 M), and 31.55 g thioacetamide (2 M). Deionized water was used as the solvent in every experiment. Each reagent was mixed with 210 mL of deionized water, stirred with a magnetic stir rod for 30 minutes, and mixed together under stirring for additional two hours to achieve a clear solution.The mechanism of FeS formation in this acidic bath is the slow release of iron and sulfur ions within solution followed by the deposition of these ions on the alloy surface. The iron and sulfur ions are provided from iron (II) chloride and thioacetamide, respectively. The formation of FeS films from this acidic bath is dependent on whether the deposition rate of the ionic product of iron and sulfur is higher than solubility of FeS. Adding urea to the solution adjusted the balance between hydrolysis and deposition. The proposed reactions for this mechanism are described as follow [27]: (1) F e C l 2 ⟶ F e 2 + + 2 C l - (2) C H 3 C S N H 2 + H 2 O ⟷ S 2 - + C H 3 C O N H 2 + 2 H + (3) C O N H 2 2 + H 2 O ⟷ 2 N H 3 + C O 2 (4) N H 3 + H 2 O ⟷ N H 4 + + O H - (5) F e 2 + + S 2 - ⟷ F e SFinally, the overall reaction would be written as(6) F e 2 + + C H 3 C S N H 2 + C O N H 2 2 + 2 H 2 O ⟶ F e S + C H 3 C O N H 2 + 2 N H 4 + + C O 2 ### 2.3. Corrosion Tests Experiments were conducted in a multiport glass cell with a three-electrode setup at atmospheric pressure based on the ASTM G5-94 standard for potentiostatic anodic polarization measurements [28]. A graphite rod was used as the counter electrode (CE) and saturated silver/silver chloride (Ag/AgCl) was used as the reference electrode (RE). In order to investigate the electrochemical characteristic of the corrosion films formed on the steel alloy, the specimens subjected to corrosion were used as working electrodes (WE).An Ivium Compactstat Potentiostat monitoring system was used to perform electrochemical corrosion measurements. Linear Polarization Resistance (LPR) technique was used to investigate the corrosion rate. The applied potential range for the LPR measurements was from −0.02 V to 0.02 V with a scanning rate of 0.125 mV/s. All the measurements were conducted by setting the potentiostat to take measurements at 0.5, 1, 2, 4, … to 24, 48, or 72 hours depending on the test. Prior to start of each test, the sample was immersed in the solution for 55 minutes in accordance with ASTM G5-82 [28]. The pH was adjusted by adding deoxygenated hydrochloric acid.Table1 describes the experimental conditions. Three series of experiments were conducted to investigate the effect of temperature, immersion time, and pH on the corrosion behavior of FeS films.Table 1 Experimental conditions. Condition number Temperature (C °) pH Immersion time (hour) 1 50 4 24 2 50 4 48 3 50 4 72 4 25 4 24 5 50 4 24 6 75 4 24 7 50 2 24 8 50 3 24 9 50 4 24 ### 2.4. Surface Morphology Observation and Corrosion Product Analysis Upon completion of corrosion testing, morphological characterization of the surface was conducted using FEI Quanta 400 Scanning Electronic Microscope (SEM) with Bruker Energy Dispersive X-ray (EDX) spectroscopy. The SEM was operating at 15 kV, with a working distance of 15 mm and beam current of 13 nA. 
The crystal structure and chemical composition of the corrosion products were characterized by X-ray Diffraction (XRD) using a Rigaku Ultima IV X-ray diffractometer operating at 40 kV and 44 mA and SEM-EDX to confirm the chemical elements. ## 2.1. Material and Sample Preparation According to NACE MR0175/ISO 15156, the most common steel alloy for tubulars and tubular components in sour service is UNS G41XX0, formerly AISI 41XX [26]. 4130 steel is among the most common alloys used in industry. This steel typically consists of 0.80–1.1 Cr, 0.15–0.25 Mo, 0.28–0.33 C, 0.40–0.60 Mn, 0.035 P, 0.040 S, 0.15–0.35 Si, and balanced Fe. The working electrode was machined from the parent material into cylinders having dimensions of approximately 9 mm length and diameter. Prior to the experiments, all specimens were polished with Coated Abrasive Manufacturers Institute (CAMI) grit designations 320, 600, and 1000 corresponding to average particle diameters 36.0, 16.0, and 10.3 microns and finally 6-micron grit silicon carbide paper and then cleansed with deionized water until a homogenous surface was observed. Following this, the specimens were quickly dried using cold air to avoid oxidation. ## 2.2. Electrolyte Solution Preparation and Synthesis of FeS Films Due to the inherent safety concerns associated with H2S gas, an alternative method of FeS film deposition was employed [27]. The alternative method provided an acidic electrolyte solution which has the potential to form thin FeS layer on the steel surface like what happens in the sour oil pipeline.This acidic chemical bath contains 6.25 g iron (II) chloride (0.15 M), 12.60 g urea (1 M), and 31.55 g thioacetamide (2 M). Deionized water was used as the solvent in every experiment. Each reagent was mixed with 210 mL of deionized water, stirred with a magnetic stir rod for 30 minutes, and mixed together under stirring for additional two hours to achieve a clear solution.The mechanism of FeS formation in this acidic bath is the slow release of iron and sulfur ions within solution followed by the deposition of these ions on the alloy surface. The iron and sulfur ions are provided from iron (II) chloride and thioacetamide, respectively. The formation of FeS films from this acidic bath is dependent on whether the deposition rate of the ionic product of iron and sulfur is higher than solubility of FeS. Adding urea to the solution adjusted the balance between hydrolysis and deposition. The proposed reactions for this mechanism are described as follow [27]: (1) F e C l 2 ⟶ F e 2 + + 2 C l - (2) C H 3 C S N H 2 + H 2 O ⟷ S 2 - + C H 3 C O N H 2 + 2 H + (3) C O N H 2 2 + H 2 O ⟷ 2 N H 3 + C O 2 (4) N H 3 + H 2 O ⟷ N H 4 + + O H - (5) F e 2 + + S 2 - ⟷ F e SFinally, the overall reaction would be written as(6) F e 2 + + C H 3 C S N H 2 + C O N H 2 2 + 2 H 2 O ⟶ F e S + C H 3 C O N H 2 + 2 N H 4 + + C O 2 ## 2.3. Corrosion Tests Experiments were conducted in a multiport glass cell with a three-electrode setup at atmospheric pressure based on the ASTM G5-94 standard for potentiostatic anodic polarization measurements [28]. A graphite rod was used as the counter electrode (CE) and saturated silver/silver chloride (Ag/AgCl) was used as the reference electrode (RE). In order to investigate the electrochemical characteristic of the corrosion films formed on the steel alloy, the specimens subjected to corrosion were used as working electrodes (WE).An Ivium Compactstat Potentiostat monitoring system was used to perform electrochemical corrosion measurements. 
### 2.4. Surface Morphology Observation and Corrosion Product Analysis

Upon completion of corrosion testing, morphological characterization of the surface was conducted using an FEI Quanta 400 scanning electron microscope (SEM) with Bruker energy-dispersive X-ray (EDX) spectroscopy. The SEM was operated at 15 kV, with a working distance of 15 mm and a beam current of 13 nA. The crystal structure of the corrosion products was characterized by X-ray diffraction (XRD) using a Rigaku Ultima IV diffractometer operating at 40 kV and 44 mA, and SEM-EDX was used to confirm the chemical elements.

## 3. Results and Discussion

### 3.1. Effect of Immersion Time on the Corrosion Mechanism and Products

Figure 1 shows the effect of 24, 48, and 72 hours of immersion on the corrosion rate of the specimens at 50°C and pH 4. During a corrosion process, the rate of the reaction is determined by the corrosion mechanism. Growth of a corrosion film limits the rate of further corrosion by acting as a diffusion barrier for the species involved in the process; the corrosion rate gradually decreases and the underlying steel is protected from further dissolution [8, 19]. Figure 1 indicates that in this experiment the LPR measurements did not agree with the expected decrease of corrosion rate with increasing exposure time. Instead, the corrosion rate increased gradually with immersion time, which could be explained as follows: (1) the corrosion rate is significantly greater than the rate of film formation on the surface; (2) the corrosion product adheres weakly to the alloy surface, causing it to detach, which exposes unprotected alloy to the corrosive solution and increases the possibility of localized corrosion on the surface.

The diffraction spectra in Figure 2 were search-matched against the XRD computer database (containing powder diffraction files (PDF) from the Joint Committee on Powder Diffraction Standards (JCPDS) and the International Centre for Diffraction Data (ICDD)). Figures 2(a)–2(d) identified 006-0696 iron Fe (alpha-Fe, body-centered cubic (bcc) crystal type), and Figure 2(e) identified both 006-0696 iron Fe (alpha-Fe, bcc) and 015-0037 mackinawite FeS (tetragonal FeS crystal type).
These PDF numbers and names appear in the top right corner of each diffraction spectrum, and the corresponding line positions are superimposed onto the spectral peaks in each figure.

Figure 1: Corrosion rate with time at pH 4 and 50°C.

Figure 2: P-XRD analysis of the 4130 alloy surface at (a) 50°C, pH 4, and 48 hours; (b) 25°C, pH 4, and 24 hours; (c) 50°C, pH 2, and 24 hours; (d) initial condition (uncorroded sample); and (e) 75°C, pH 4, and 24 hours.

Figure 2 shows the results of crystal structure characterization of the steel alloy surfaces with powder (P-) XRD. From Figures 2(a)–2(c) it is apparent that the XRD results primarily indicated elemental Fe, consistent with the uncorroded sample in Figure 2(d); this is likely a result of inadequate film thickness for detection by a P-XRD spectrometer. The thin nature of the corrosion film on the steel alloy surface is consistent with the literature discussing the deposition of FeS using the indicated chemical bath alternative to H2S exposure [24].

As shown in Figure 2(e), a small amount of mackinawite was detected by the P-XRD spectrometer on the surface of the sample exposed to 75°C for 24 hours at pH 4. This result suggests that the film thickness increases at high temperatures. In lieu of thin-film XRD analysis, P-XRD may be able to detect the thicker corrosion layers formed at relatively high temperatures.

Figure 3 shows the SEM images of corrosion product films formed under varying immersion times. After 24 hours of immersion, a uniform layer of corrosion product, consisting of small tetragonal mackinawite, covered the surface [29]. As shown in Figure 3(a), this thick corrosion layer is loose and full of blisters and cracks, causing the corrosion rate to accelerate by increasing the diffusion of electrochemical reaction species such as $\mathrm{Fe^{2+}}$ through the alloy surface. As has been noted in other studies, this initial mackinawite layer is easily cracked and peeled off by stress resulting from the volume effect [30]. This failure of the initial corrosion layer gradually increases the corrosion rate and exposes more unprotected area to the solution.

Figure 3: SEM analysis of corrosion products of 4130 alloy after (a) 24-, (c) 48-, and (e) 72-hour immersion at pH 4 and 50°C, and EDX analysis of corrosion products after (b) 24-, (d) 48-, and (f) 72-hour immersion at pH 4 and 50°C.

Figure 3(b) shows the EDX results for the corrosion product films after 24 hours of immersion. These results indicated that most of the corrosion products are iron-rich compounds such as mackinawite, which generally has lower corrosion resistance than sulfur-rich compounds such as troilite; the corrosion resistance of FeS phases follows the sequence mackinawite < troilite < pyrrhotite < pyrite [24].

After 48 hours of immersion, the cracks in the corrosion scale become more severe and hexagonal crystals form beside the cracks, as shown in Figure 3(c). The EDX results for these hexagonal crystals indicate a high sulfur content, as shown in Figure 3(d).

Figure 3(e) shows that, after 72 hours of immersion, the initial corrosion product film has cracked and peeled off the surface of the specimen and a newly formed corrosion scale has become integrated.
Larger, hexagonal corrosion products, mainly troilite crystals, formed on top of the new scale near the end of the 72 hours. Generally, with increasing immersion time, more corrosion-resistant products such as troilite replaced the initially formed mackinawite on the alloy surface. This is supported by the EDX results in Figures 3(b), 3(d), and 3(f), which indicate that the major corrosion product changed from iron-rich mackinawite to sulfur-rich troilite. Despite the nucleation of stable troilite crystals on the metal surface, the LPR measurements showed that between 48 and 72 hours the corrosion rate increased dramatically from 0.0662 to 0.779 mm/year. This increase could be explained by localized fracture of the corrosion film due to weak adhesion of the scale on the surface, which provides a path for sulfide to penetrate and attack the metal substrate.

### 3.2. Effect of Temperature on the Corrosion Mechanism and Products

Figure 4 shows the effect of increasing temperature on the corrosion rate of specimens over the course of 24 hours at pH 4.

Figure 4: Corrosion rate with temperature at pH 4 and 24 hours.

It can be observed that, during the first 12 hours, increasing the temperature from 25°C to 75°C dramatically increased the corrosion rate, which can be explained as follows: (1) increasing the temperature accelerates the diffusion of species involved in the electrochemical reactions; (2) temperature can change the concentration of corrosive species by preferentially evaporating one or more species out of the solution, which could affect the corrosion reaction. Previous research has confirmed that temperature generally accelerates most of the chemical, electrochemical, and transport processes occurring during corrosion, and that both the measured cathodic reactions and anodic currents increased with increasing temperature [31].

During the final 12 hours of testing at 75°C, the corrosion rate decreased significantly from 2.2 to 0.25 mm/year, which could be related to the transformation of the mackinawite crystalline structure into the more resistant troilite structure. The SEM images in Figure 5 show significant fracturing of the surface film at 75°C, which explains the initially higher corrosion rate due to the diffusion of species through the nonprotective mackinawite, followed by the decreased corrosion rate due to the formation of the protective troilite crystalline structure on the alloy surface.

Figure 5: SEM image of corrosion products on the surface of 4130 alloy after 24-hour immersion at pH 4 and 75°C.

### 3.3. Effect of pH on the Corrosion Mechanism and Products

Figure 6 shows the effect of pH on the corrosion rate of specimens immersed for 24 hours at 50°C.

Figure 6: Corrosion rate with pH at 50°C and 24 hours.

The results show that decreasing the pH from 4 to 2 slightly increases the corrosion rate. The protective nature and composition of the corrosion product depend greatly on the pH of the solution. At lower pH values (<3), iron dissolves and FeS is mostly inhibited from precipitating on the metal surface owing to the very high solubility of FeS phases [32]. Figure 7(a) shows the SEM image of corrosion products on the surface of a specimen after 24 hours at pH 2 and 50°C. As can be observed, the corrosion products are loose and detached from the surface; such products could easily be removed by shear stress.
The EDX results in Figure 7(b) indicate a high presence of sulfur compounds and a low presence of iron compounds on the surface.

Figure 7: ((a) and (c)) SEM images and ((b) and (d)) EDX analysis of corrosion products on the surface of 4130 alloy after 24-hour immersion at 50°C and pH 2.

The SEM results in Figure 7(c) for the specimen immersed at pH 2 also show the presence of a pit on the surface, which is another reason for the higher corrosion rates seen at low pH. The corrosion pit shown in Figure 7(c) has a brittle cap covering the substrate; EDX analysis indicates that this cap is primarily sulfide, as shown in Figure 7(d).

At pH 3, the top surface layer displayed a flaky structure, as seen in Figure 8. Parts of the layer had spalled off, revealing much smaller crystallites under the outer layer. This layer is likely the result of the immediate precipitation of $\mathrm{Fe^{2+}}$ released by corrosion [32]. At pH values from 3 to 4, an inhibitive effect on the corrosion mechanism is seen owing to the formation of a protective marcasite FeS film on the electrode surface. At pH 3, small crystals were observed in areas where the outer layer had spalled off, as shown in Figure 8. At pH 4, the surface was mostly covered with a much denser layer, as shown in Figure 9.

Figure 8: SEM image of corrosion products on the surface of 4130 alloy after 24-hour immersion at pH 3 and 50°C.

Figure 9: SEM image of corrosion products on the surface of 4130 alloy after 24-hour immersion at pH 4 and 50°C.
## 4. Conclusions

The results of this research indicate that acidic chemical bath deposition can be successfully applied to investigate the formation and growth of FeS thin films under varying experimental conditions. Given the inherent safety concerns associated with sour corrosion experiments in laboratories, this acidic chemical bath deposition method could serve as a substitute for H2S in certain experiments to characterize the formation and transformation of FeS corrosion products.

The other primary findings of this research are as follows:

(i) Increasing the immersion time gradually increases the corrosion rate of 4130 chromium alloy steel in this experiment, a result of localized fracture of the corrosion layer, despite the transformation of FeS crystalline structures from iron-rich mackinawite to sulfur-rich troilite during the corrosion process.

(ii) Increasing the pH decreases the corrosion rate of 4130 alloy steel in this experiment, a result of the formation of a more resistant FeS film at higher pH values.

(iii) Increasing the temperature from 25°C to 75°C increases the corrosion rate of 4130 alloy steel, likewise resulting from the transformation of the FeS crystalline structure during the corrosion process.

---

*Source: 1025261-2016-08-23.xml*
# Experimental Research and Numerical Simulation on Grouting Quality of Shield Tunnel Based on Impact Echo Method

**Authors:** Fei Yao; Guangyu Chen; Jianhong Su
**Journal:** Shock and Vibration (2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1025276

---

## Abstract

To identify shield grouting quality based on the impact echo method, impact echo tests of segment-grouting (SG) test pieces were carried out to explore the effect of the acoustic impedance of grouting layers and of grouting defects on the impact echo response. A finite element numerical simulation of the impact echo process was also implemented, and test results were compared with simulation results. The results demonstrate that, under some working conditions, both the finite element simulation results and the test results agree with theoretical values. The acoustic impedance ratio of the SG materials influenced the echo characteristics significantly, but the thickness frequency could not be detected under some working conditions because the reflected energy is weak. The frequency features in the presence of grouting defects were more complicated than those without defects.

---

## Body

## 1. Introduction

The shield method has advantages such as safety, speed, a wide application range, and small disturbance to the surrounding strata, and it is widely applied in urban subway construction. Since the outer diameter of the shield is larger than the circumferential outer diameter of the segment, a circumferential overexcavation gap forms between the segment and the surrounding rock. Together with construction disturbance and blasting excavation, the supporting structure separates from the surrounding rock, resulting in relaxation of the surrounding rock. Consequently, the supporting structure suffers excessive bending stress and its carrying capacity is reduced, which threatens the safe use of the tunnel. In practice, backfill grouting is applied, which not only fills the overexcavation gap shown in Figure 1 but also prevents relaxation of the surrounding rock and segment leakage, as well as significantly reducing surface subsidence.

Figure 1: Overexcavation gap caused in shield construction.

However, when the grouting density is deficient or holes develop, the cross section of the tunnel changes, and when such changes become serious, traffic safety is directly affected. Surface subsidence or stress changes in the surrounding soil mass arise from large surrounding rock strains, causing distortion of the local soil mass and indirectly increasing the segment load in the tunnel. This makes engineering practice disagree with the design and site investigation, which increases risk significantly. At present, shield grouting quality is mainly detected by the radar method [1, 2]. However, the radar method is expensive and easily disturbed by the metallic shield. Exploring a new approach for detecting the grouting quality behind shield segments is therefore an urgent and active new field.

The impact echo method [3–6] is a structural nondestructive testing technology based on transient stress waves. The concrete surface is impacted by a steel ball or hammer acting as an exciter to produce a longitudinal wave (P wave) and a transverse wave (S wave) in the concrete structure, as well as a Rayleigh wave (R wave) on the concrete surface [7, 8]. The stress waves form echoes through propagation and reflection in the concrete [9, 10].
Surface displacements caused by the reflection of these waves are recorded by a sensor close to the impact position. After the waves are received, the system transforms the time-domain signals into the frequency domain through the Fast Fourier Transform (FFT) and relates the received signal to the condition of the concrete mass, thus realizing the goal of nondestructive testing.

The impact echo method has been used successfully in many engineering fields, such as bridge girder testing, pipeline inner-wall integrity testing, bridge surface damage testing, and highway pavement quality testing [4, 11, 12]. However, there is little research on shield grouting quality testing. In a shield tunnel, grouting defects are concealed defects, and the segment and grouting layer form a layered structure. For a test structure composed of layered materials, the acoustic impedances of the different material layers affect the impact echo results significantly. On this basis, this paper performs impact echo tests on SG samples with different acoustic impedances to discuss the effect of the acoustic impedance ratio and grouting defects on the impact echo response. A finite element numerical simulation of the impact echo process was accomplished using the finite element software MSC.MARC [13], and test results were compared with simulation results.

## 2. Experimental Study

### 2.1. Experiment Design

To study the effect of the acoustic impedance of the grouting layer, three segment-grouting (SG) samples with different grouting-layer mix proportions were designed: SG-A, SG-B, and SG-C. The specific material parameters are listed in Table 1. Three shield capping segments made by Nanjing Ligao Segment Co., Ltd., were used; the compressive strength was 50 MPa and the size was 1200 mm × 1200 mm × 350 mm (Figure 2). The thickness of the grouting layer was 100 mm (Figure 3).

Table 1: Material parameters of SG samples.

| Group | Grouting elasticity modulus $E_G$ (MPa) | Grouting density $\rho_G$ (kg/m³) | Segment elasticity modulus $E_S$ (MPa) | Segment density $\rho_S$ (kg/m³) |
|---|---|---|---|---|
| SG-A | 2350 | 1539 | 34500 | 2500 |
| SG-B | 6900 | 1913 | 34500 | 2500 |
| SG-C | 11330 | 1947 | 34500 | 2500 |

Figure 2: Concrete segments. (a) Segment-A; (b) Segment-B; (c) Segment-C.

Figure 3: Structure of SG samples.

Poisson's ratios of the grouting layer and segment layer were taken approximately as 0.25 and 0.3, respectively [14]. The wave velocity $C_P$, acoustic impedance $Z$, and reflectance $R$ of the different material layers were calculated by the following formulas and are presented in Table 2:

(1) $C_P = \sqrt{\dfrac{E(1-\mu)}{\rho(1+\mu)(1-2\mu)}}$,

where $E$ is the elasticity modulus, $\rho$ is the density of the medium, and $\mu$ is Poisson's ratio.

(2) $Z = \rho \, C_P$,

where $Z$ is the acoustic impedance of the medium.

(3) $R = \dfrac{A_{\mathrm{Reflected}}}{A_{\mathrm{Incident}}} = \dfrac{Z_2 - Z_1}{Z_2 + Z_1}$,

where $Z_1$ and $Z_2$ are the acoustic impedances of medium 1 and medium 2, $A_{\mathrm{Incident}}$ is the amplitude of the incident wave, and $A_{\mathrm{Reflected}}$ is the amplitude of the reflected wave.
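As a cross-check, the short sketch below (our illustration, not the authors' code) evaluates formulas (1)–(3) with the inputs of Table 1. It reproduces the grouting wave velocities and reflectances of Table 2 below; note that the segment velocity computed this way comes out near 4310 m/s, slightly below the tabulated 4390 m/s, which presumably reflects rounding in the paper's inputs.

```python
import math

def cp(E_pa: float, rho: float, mu: float) -> float:
    """Longitudinal wave velocity, formula (1)."""
    return math.sqrt(E_pa * (1 - mu) / (rho * (1 + mu) * (1 - 2 * mu)))

# Inputs from Table 1 (moduli converted from MPa to Pa); Poisson's ratios 0.3 / 0.25.
segment = {"E": 34500e6, "rho": 2500.0, "mu": 0.30}
grouting = {"SG-A": (2350e6, 1539.0), "SG-B": (6900e6, 1913.0), "SG-C": (11330e6, 1947.0)}

cp1 = cp(segment["E"], segment["rho"], segment["mu"])
z1 = segment["rho"] * cp1                      # formula (2)
for name, (E, rho) in grouting.items():
    cp2 = cp(E, rho, 0.25)
    z2 = rho * cp2
    R = (z2 - z1) / (z2 + z1)                  # formula (3)
    print(f"{name}: Cp2 = {cp2:.0f} m/s, Z2 = {z2:.2e} kg/m^2s, R = {R:.2f}")
# Matches Table 2's grouting values (1354 / 2081 / 2643 m/s) to within 1 m/s,
# with reflectances close to the tabulated -0.68 / -0.47 / -0.36.
```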
Table 2: Acoustic parameters of SG samples.

| Group | Segment wave velocity $C_{p1}$ (m/s) | Segment acoustic impedance $Z_1$ (kg/m²s) | Grouting wave velocity $C_{p2}$ (m/s) | Grouting acoustic impedance $Z_2$ (kg/m²s) | Reflectance $R$ |
|---|---|---|---|---|---|
| SG-A | 4390 | 10.98 × 10⁶ | 1354 | 2.08 × 10⁶ | −0.68 |
| SG-B | 4390 | 10.98 × 10⁶ | 2081 | 3.98 × 10⁶ | −0.47 |
| SG-C | 4390 | 10.98 × 10⁶ | 2643 | 5.22 × 10⁶ | −0.36 |

To study the effect of defects, a piece of foam board was embedded in the left half of each of the three SG samples, which was compared with the defect-free right half. To ensure that the embedded foam board could be detected [7], the width of the foam board was set to 150 mm. The thickness of the foam board was 20 mm, and it was embedded 30 mm away from the segment-grouting interface (Figure 3). The segment was simply supported at its four corners. Accordingly, the SG-A samples were subdivided into SG-A-DE (with defect) and SG-A-ND (without defect); similarly, the SG-B samples were divided into SG-B-DE and SG-B-ND, and the SG-C samples into SG-C-DE and SG-C-ND.

### 2.2. Experimental Setup

The Impact Echo Scanning (IES) instrument manufactured by Olson Instruments was used in this test (Figure 4). The instrument comprises a host computer, a cable, and a rolling sensor unit that contains the signal-receiving sensor and allows the exciter to be adjusted to different excitation intensities.

Figure 4: IES (Impact Echo Scanning) instrument.

The sampling was set to 2048 points per record, with data points collected every 10 microseconds, and the order was set to 4.

### 2.3. Theoretical Calculation

The test principle of the impact echo method is based on the relationship between the measured frequency, the stress wave velocity, and the sample thickness [15–17]:

(4) $f_T = \dfrac{\beta C_P}{2T}$,

where $f_T$ is the peak frequency obtained after FFT of the time-history curve collected by the sensor, $\beta$ is the shape factor (taken as 1 in this paper [18]), $C_P$ is the stress wave velocity in the medium, and $T$ is the thickness of the sample.

When a component is composed of two different materials [19], the thickness frequency from the bottom surface to the top surface is

(5) $f_h = \dfrac{1}{2h_1/(\beta C_{p1}) + 2h_2/(\beta C_{p2})}$,

where $h_1$ and $h_2$ are the thicknesses of material 1 and material 2, and $C_{p1}$ and $C_{p2}$ are the stress wave velocities in material 1 and material 2.

According to the above theory, the thickness frequency $f_1$ of the segment-grouting interface can be calculated from

(6) $f_1 = \dfrac{\beta C_{p1}}{2T_1}$.

The thickness frequency $f_2$ of the grouting-air interface is

(7) $f_2 = \dfrac{1}{2T_1/(\beta C_{p1}) + 2T_2/(\beta C_{p2})}$,

where $T_1$ and $T_2$ are the thicknesses of the concrete segment and the grouting, and $C_{p1}$ and $C_{p2}$ are the stress wave velocities in the concrete segment and the grouting.

Similarly, the thickness frequency $f_2'$ at the defect can be calculated from

(8) $f_2' = \dfrac{1}{2T_1'/(\beta C_{p1}) + 2T_2'/(\beta C_{p2})}$,

where $T_1'$ is the thickness of the concrete segment and $T_2'$ is the distance from the segment-grouting interface to the defect surface.

The theoretical peak frequencies at the different interfaces of all samples are listed in Table 3.

Table 3: Theoretical peak frequencies.

| Specimen number | $f_1$ (Hz) | $f_2'$ (Hz) | $f_2$ (Hz) |
|---|---|---|---|
| SG-A-DE | 6271 | 4908 | 3256 |
| SG-A-ND | 6271 | — | 3256 |
| SG-B-DE | 6271 | 5311 | 3413 |
| SG-B-ND | 6271 | — | 3413 |
| SG-C-DE | 6271 | 5489 | 4253 |
| SG-C-ND | 6271 | — | 4253 |
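The sketch below (our illustration) evaluates (6)–(8) with Table 2's wave velocities and the geometry $T_1$ = 350 mm, $T_2$ = 100 mm, $T_2'$ = 30 mm, and $\beta$ = 1. It reproduces Table 3 to within rounding, except for $f_2$ of SG-B, where the formula gives about 3913 Hz against the tabulated 3413 Hz, which suggests a transcription slip in the source table.

```python
BETA = 1.0                        # shape factor, taken as 1 in the paper
T1, T2, T2P = 0.35, 0.10, 0.03    # m: segment, grouting, interface-to-defect
CP1 = 4390.0                      # m/s, segment (Table 2)
CP2 = {"SG-A": 1354.0, "SG-B": 2081.0, "SG-C": 2643.0}  # m/s, grouting (Table 2)

f1 = BETA * CP1 / (2 * T1)                                      # formula (6)
print(f"f1 = {f1:.0f} Hz")                                      # 6271 Hz for all samples
for name, cp2 in CP2.items():
    f2 = 1 / (2 * T1 / (BETA * CP1) + 2 * T2 / (BETA * cp2))    # formula (7)
    f2p = 1 / (2 * T1 / (BETA * CP1) + 2 * T2P / (BETA * cp2))  # formula (8)
    print(f"{name}: f2' = {f2p:.0f} Hz, f2 = {f2:.0f} Hz")
```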
### 2.4. Test Results

There is an evident peak within the effective range of the thickness frequency (2000–8000 Hz) in the data collected from the experiment. Peaks may also occur in low-frequency regions outside this effective range; these are caused by surface waves or other noise. Therefore, high-pass filtering with a cut-off frequency of about 2 kHz was applied to eliminate these strong low-frequency signals [9]. The filtered time-domain record of SG-A-ND is shown in Figure 5.

Figure 5: Filtered time-domain record of SG-A-ND.

The frequency-domain graphs obtained after FFT of the time-history curves are presented in Figure 6. The filtering of frequencies below 2 kHz is evident in these figures.

Figure 6: Frequency-domain graphs. (a) SG-A-ND; (b) SG-A-DE; (c) SG-B-ND; (d) SG-B-DE; (e) SG-C-ND; (f) SG-C-DE.

The peak frequencies picked from Figure 6 are compared with the theoretical values in Table 4.

Table 4: Comparison between tested peak frequencies and theoretical results under different working conditions (theoretical values in parentheses).

| Model number | $f_1$ (Hz) | $f_2'$ (Hz) | $f_2$ (Hz) |
|---|---|---|---|
| SG-A-DE | 6738 (6271) | 4590 (4908) | Not obvious (3256) |
| SG-A-ND | 6348 (6271) | — | 3027 (3256) |
| SG-B-DE | 6543 (6271) | 4883 (5311) | Not obvious (3413) |
| SG-B-ND | 6640 (6271) | — | 3516 (3413) |
| SG-C-DE | 6641 (6271) | 5566 (5489) | Not obvious (4253) |
| SG-C-ND | 6641 (6271) | — | 4499 (4253) |

From Figure 6 and Table 4 it can be seen that:

(1) In all samples, $f_1$ is evident. In samples without a defect, $f_2$ can be detected, although its peak is smaller than that of $f_1$. In samples with defects, $f_2$ could not be detected, but $f_2'$ could, and its peak is also smaller than that of $f_1$. All tested peak frequencies are close to the theoretical values.

(2) With increasing grouting acoustic impedance, the absolute value of the reflectance of the grouting-segment interface decreases gradually, and the amplitude of $f_1$ decreases accordingly, as is evident from Figure 6. This indicates that, as the absolute value of the reflectance decreases, the stress wave energy reflected by this interface declines and its reflectivity weakens, so more energy is transmitted into the next material layer (the grouting layer).
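To make the signal-processing chain concrete, here is a minimal sketch of the FFT peak-picking step described above, applied to a synthetic decaying sinusoid standing in for a real impact echo record. The 10 μs sample spacing, 2048-point record, 2 kHz high-pass cut-off, and 2–8 kHz search band follow the setup described in this section; the synthetic signal and its 6271 Hz thickness frequency are our stand-ins, not measured data.

```python
import numpy as np

DT = 10e-6                       # 10 microsecond sample spacing
N = 2048                         # points per record
t = np.arange(N) * DT

# Synthetic stand-in for a measured record: decaying thickness-mode oscillation
# at the theoretical f1 = 6271 Hz plus low-frequency surface-wave content.
signal = (np.exp(-t / 4e-3) * np.cos(2 * np.pi * 6271 * t)
          + 0.8 * np.cos(2 * np.pi * 600 * t))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(N, DT)

spectrum[freqs < 2000] = 0.0     # high-pass: suppress surface-wave band (< 2 kHz)
band = (freqs >= 2000) & (freqs <= 8000)   # effective thickness-frequency range
peak = freqs[band][np.argmax(spectrum[band])]
print(f"Dominant thickness frequency: {peak:.0f} Hz")  # close to 6271 Hz
```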
## 3. Numerical Studies

### 3.1. Establishment of the SG Finite Element Model

The finite element model used a plane analysis method with the two-dimensional plane element No. 11 in MSC.MARC [20]. The impact point was taken as the origin and the line of the excitation direction as the symmetry axis. A symmetric model was established, composed of the segment and the grouting from top to bottom. The thickness and half-width of the segment were 350 mm and 600 mm, while the thickness of the grouting was 100 mm, as shown in Figures 7(a) and 7(b). The models were divided into SG-A-ND (no defect) and SG-A-DE (defect). The defect in the SG-A-DE model was a hole with a half-length of 75 mm and a thickness of 20 mm, whose upper edge was 30 mm below the segment-grouting interface. The material parameters of the model were the same as those of the samples (Figure 7).

Figure 7: SG finite element models. (a) Without defect, SG-A-ND; (b) with defect, SG-A-DE.

The maximum exciting force and the excitation time were set to 8 N and 40 μs [21], respectively. The excitation was simulated by applying a concentrated load on the axis of symmetry using the TABLE function in MSC.MARC, as depicted in Figure 8.

Figure 8: Simulation parameters of the exciting force. (a) Time-history curve; (b) frequency.
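The paper does not state the shape of the 8 N, 40 μs force pulse; a half-sine is the conventional idealization of an impact echo excitation, so the sketch below generates such a pulse as a load table of the kind one might feed to the solver. Treat the half-sine shape and the table resolution as our assumptions.

```python
import numpy as np

F_MAX = 8.0        # N, maximum exciting force (from the paper)
T_C = 40e-6        # s, excitation (contact) time (from the paper)
N_STEPS = 40       # table resolution: our choice

t = np.linspace(0.0, T_C, N_STEPS + 1)
force = F_MAX * np.sin(np.pi * t / T_C)   # assumed half-sine pulse shape

# Print (time, force) pairs in the spirit of an MSC.MARC load TABLE.
for ti, fi in zip(t, force):
    print(f"{ti:.2e} s  {fi:6.3f} N")
# A 40 microsecond half-sine pulse carries most of its energy below roughly
# 1/T_C = 25 kHz, comfortably covering the 2-8 kHz thickness-frequency band.
```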
### 3.2. Analysis of Numerical Simulation Results

Velocity-time and acceleration-time curves at the node 40 mm away from the impact point were chosen for analysis. Figure 9 shows the normalized acceleration-time curves of SG-A with surface waves included, while Figure 10 shows the normalized acceleration-time curves of SG-A with the surface waves removed.

Figure 9: Acceleration-time curves of SG-A with surface waves.

Figure 10: Acceleration-time curves of SG-A without surface waves.

Through FFT of the acceleration-time curves of SG-A without the Rayleigh wave, frequency-domain graphs under the different working conditions were obtained; they are shown in Figure 11.

Figure 11: Frequency-domain graphs after FFT of the acceleration-time curves. (a) SG-A; (b) SG-B; (c) SG-C.

The finite element simulation results and theoretical results are compared in Table 5. The following can be seen from Figure 11 and Table 5:

(1) $f_1$ is evident for all samples. The peaks of $f_2$ and $f_2'$ are smaller than that of $f_1$. The simulated peak frequencies are close to the theoretical values.

(2) With increasing grouting acoustic impedance, the absolute value of the reflectance of the grouting-segment interface declines gradually and the amplitude of $f_1$ decreases, while the amplitude of $f_2$ increases. This reflects that the energy reflected by the grouting-segment interface is negatively correlated with the acoustic impedance ratio: with a larger acoustic impedance ratio, more energy is transmitted into the grouting and reflected at the grouting-air interface.

(3) The same SG model group showed different frequency features under the same stress excitation depending on whether a defect was present. The SG models with defects presented more complicated frequency features than those without. Except for SG-A-DE, for which $f_2$ is not obvious, SG-B-DE and SG-C-DE yielded smaller $f_2$ values and produced "low-frequency drifts."

Table 5: Comparison of peak frequencies between finite element simulation and theoretical calculation (theoretical values in parentheses; the percentage is the difference between the simulated and theoretical results).

| Model number | $f_1$ (Hz) | $f_2'$ (Hz) | $f_2$ (Hz) |
|---|---|---|---|
| SG-A-DE | 6104 (6271), −2.7% | 5127 (4908), 4.5% | Not obvious (3256) |
| SG-A-ND | 6104 (6271), −2.7% | — | 2930 (3256), −10.0% |
| SG-B-DE | 6104 (6271), −2.7% | 4639 (5311), −12.7% | 3418 (3413), 0.1% |
| SG-B-ND | 6348 (6271), 2.1% | — | 3662 (3413), 7.3% |
| SG-C-DE | 6348 (6271), 2.1% | 4883 (5489), −11.0% | 3662 (4253), −13.9% |
| SG-C-ND | 6104 (6271), −2.7% | — | 4639 (4253), 9.1% |

A comparison between the test results and the finite element simulation results is shown in Table 6.

Table 6: Comparison of peak frequencies between test results and finite element simulation under different working conditions (simulation values in brackets; the percentage is the difference between the tested and simulated results).

| Model number | $f_1$ (Hz) | $f_2'$ (Hz) | $f_2$ (Hz) |
|---|---|---|---|
| SG-A-DE | 6738 [6104], 10.4% | 4590 [5127], −10.5% | Not obvious [not obvious] |
| SG-A-ND | 6348 [6104], 4.0% | — | 3042 [2930], 3.8% |
| SG-B-DE | 6543 [6104], 7.2% | 4883 [4639], 5.3% | Not obvious [3418] |
| SG-B-ND | 6640 [6348], 4.6% | — | 3571 [3662], −2.5% |
| SG-C-DE | 6641 [6348], 4.6% | 5566 [4883], 13.9% | Not obvious [3662] |
| SG-C-ND | 6641 [6104], 8.8% | — | 4199 [4639], −9.5% |
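The percentage differences in Tables 5 and 6 are simple relative errors; as a check, the short sketch below recomputes the $f_2$ column of Table 5 from the stated frequencies.

```python
def pct_diff(value: float, reference: float) -> float:
    """Relative difference, in percent, of value with respect to reference."""
    return (value - reference) / reference * 100.0

# (simulated, theoretical) f2 pairs from Table 5
table5_f2 = {"SG-A-ND": (2930, 3256), "SG-B-DE": (3418, 3413),
             "SG-B-ND": (3662, 3413), "SG-C-DE": (3662, 4253),
             "SG-C-ND": (4639, 4253)}
for name, (sim, theory) in table5_f2.items():
    print(f"{name}: {pct_diff(sim, theory):+.1f}%")
# Reproduces the tabulated values: -10.0%, +0.1%, +7.3%, -13.9%, +9.1%.
```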
Additionally, judging from the amplitude and variation of $f_2$, the experiment detected less wave energy entering the grouting layer than the finite element simulation, and less energy reflected at the defect and at the grouting-air interface. This may be because the grouting and segment were not bonded completely during grouting construction, causing the stress wave transmission to differ from the ideal state.

The theoretical values, test values, and finite element simulation values of the thickness frequency of the segment-grouting interface ($f_1$), the thickness frequency at the defect ($f_2'$), and the thickness frequency of the grouting-air interface ($f_2$) of the different samples are plotted as broken-line graphs in Figures 12 and 13.

Figure 12: Comparison of theoretical, test, and finite element simulation values of the peak frequencies of the samples (without defect).

Figure 13: Comparison of theoretical, test, and finite element simulation values of the peak frequencies of the samples (with defect).

It can be seen that the tested $f_2$ is not obvious in the defective specimens. Therefore, if the second peak frequency differs widely from the theoretical value of $f_2$, defects may be present.
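That screening rule can be phrased as a one-line check: flag a possible defect when the measured second peak deviates from the theoretical $f_2$ by more than some tolerance. The 10% threshold below is our illustrative choice; the paper states the rule only qualitatively.

```python
def possible_defect(f_second_peak_hz: float, f2_theory_hz: float,
                    tolerance: float = 0.10) -> bool:
    """Flag a possible grouting defect when the measured second peak deviates
    from the theoretical grouting-air thickness frequency f2 by more than
    `tolerance` (fractional). The 10% default is an illustrative assumption."""
    return abs(f_second_peak_hz - f2_theory_hz) / f2_theory_hz > tolerance

# SG-A-ND: measured 3027 Hz vs theoretical 3256 Hz (Table 4) -> within tolerance.
print(possible_defect(3027, 3256))   # False
# SG-A-DE: second peak is f2' = 4590 Hz, far from f2 = 3256 Hz -> flagged.
print(possible_defect(4590, 3256))   # True
```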
## 4. Conclusions

(1) With respect to the peak frequencies $f_1$, $f_2$, and $f_2'$, the theoretical values, finite element simulation values, and test values are in close agreement under the different working conditions, which shows that the impact echo method is feasible for testing grouting defects in shield tunnels.

(2) The acoustic impedance ratio between the grouting and the segment determines the reflectance of the grouting-segment interface and influences the echo characteristics significantly.
In this study, a higher acoustic impedance ratio led to a smaller absolute value of the reflectance and smaller echo energy at the segment-grouting interface. Consequently, the corresponding peak in the frequency spectrum after FFT becomes less obvious, and the thickness frequency could not be detected under some working conditions.

(3) The same SG model group shows different frequency features under the same stress excitation depending on whether a defect is present. An SG model with a defect has more complicated frequency features than one without, owing to the existence of three reflective interfaces.

(4) Because of the poor bonding strength between the segment and the grouting during sample preparation, the test results disagree with the finite element simulation results (ideal conditions); wave energy transmission in the experiment was more complicated.

---

*Source: 1025276-2016-11-03.xml*
1025276-2016-11-03_1025276-2016-11-03.md
31,893
Experimental Research and Numerical Simulation on Grouting Quality of Shield Tunnel Based on Impact Echo Method
Fei Yao; Guangyu Chen; Jianhong Su
Shock and Vibration (2016)
Engineering & Technology
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2016/1025276
1025276-2016-11-03.xml
--- ## Abstract To identify shield grouting quality based on impact echo method, an impact echo test of segment-grouting (SG) test piece was carried out to explore effect of acoustic impedance of grouting layers and grouting defects on impact echo law. A finite element numerical simulation on the impact echo process was implemented. Test results and simulation results were compared. Results demonstrated that, under some working conditions, finite element simulation results and test results both agree with theoretical values. The acoustic impedance ratio of SG material influenced the echo characteristics significantly. But thickness frequency could not be detected under some working conditions because the reflected energy is weak. Frequency feature under grouting defects was more complicated than that under no grouting defects. --- ## Body ## 1. Introduction Shield method has advantages such as characteristic security, speediness, wide application range, and small disturbance to surrounding strata. It is widely applied in subway construction engineering in cities. Since the outer diameter of shield is larger than the circumferential outer diameter of segment, a circumferential overexcavation gap will be formed between the segment and surrounding rocks. Together with construction disturbance and blasting excavation, the supporting structure separates from surrounding rocks, thus resulting in relaxation of surrounding rocks. Consequently, the supporting structure will suffer excessive bending stress and its carrying capacity will be reduced, which threatens safe use of the tunnel. In the engineering, the backfill grouting is filled, which can fill the above-mentioned overexcavation gap in Figure1 but also can prevent relaxation of surrounding rocks and segment leakage as well as reducing surface subsidence significantly.Figure 1 Overexcavation gap caused in shield construction.However, when there is problem in grouting density and even development of holes, cross section of the tunnel will change and will affect traffic safety directly when such change becomes serious. Surface subsidence or stress changes of surrounding soil mass will be produced upon great surrounding rock strain, thus causing distortion of local soil mass and increasing segment load in the tunnel indirectly. This makes engineering practices disagree with design and investigation, which increases risks significantly. At present, shield grouting quality is mainly detected by radar method [1, 2]. However, radar method is expensive and easy to be disturbed by metallic shield. Therefore, exploring a new detection approach of grouting quality of shield segment becomes an urgent and hot new field.Impact echo method [3–6] is a structural nondestructive testing technology based on transient stress wave. It impacts the concrete surface by a steel ball or hammer as an exciter to produce longitudinal wave (P wave) and transverse wave (S wave) in the concrete structure as well as Rayleigh wave (R wave) on the concrete surface [7, 8]. Stress waves will form echoes through propagation and reflections in concrete [9, 10]. Surface displacement caused by reflection of these waves will be recorded by a sensor close to the impact position. 
After receiving these waves, the signals are transformed from the time domain into the frequency domain by the Fast Fourier Transform (FFT), and the relationship between the received signal and the condition of the concrete is recognized, thus achieving the goal of nondestructive testing.

The impact echo method has been used successfully in many engineering fields, such as bridge girder testing, pipeline inner-wall integrity testing, bridge deck damage testing, and highway pavement quality testing [4, 11, 12]. However, there has been little research on testing shield grouting quality. In a shield tunnel, grouting defects are concealed, and the segment and grouting layer form a layered structure. For a test structure composed of layered materials, the acoustic impedances of the individual layers strongly affect the impact echo results. On this basis, this paper performs impact echo tests on SG samples with different acoustic impedances to discuss the effects of the acoustic impedance ratio and of grouting defects on the impact echo response. A finite element numerical simulation of the impact echo process is carried out with the finite element software MSC.MARC [13], and the test and simulation results are compared.

## 2. Experimental Study

### 2.1. Experiment Design

To study the effect of the acoustic impedance of the grouting layer, three segment-grouting (SG) samples with grouting layers of different mix proportions were designed: SG-A, SG-B, and SG-C. The material parameters are listed in Table 1. Three shield capping segments manufactured by Nanjing Ligao Segment Co., Ltd., were used; their compressive strength was 50 MPa and their size was 1200 mm × 1200 mm × 350 mm (Figure 2). The thickness of the grouting layer was 100 mm (Figure 3).

Table 1. Material parameters of SG samples.

| Group | Grouting elasticity modulus, E_G (MPa) | Grouting density, ρ_G (kg/m³) | Segment elasticity modulus, E_S (MPa) | Segment density, ρ_S (kg/m³) |
|------|------|------|------|------|
| SG-A | 2350 | 1539 | 34500 | 2500 |
| SG-B | 6900 | 1913 | 34500 | 2500 |
| SG-C | 11330 | 1947 | 34500 | 2500 |

Figure 2. Concrete segments. (a) Segment-A; (b) Segment-B; (c) Segment-C.

Figure 3. Structure of SG samples.

Poisson's ratios of the grouting layer and the segment were taken as approximately 0.25 and 0.3, respectively [14]. The wave velocity $C_P$, acoustic impedance $Z$, and reflectance $R$ of the different material layers were calculated by the following formulas and are given in Table 2:

$$C_P=\sqrt{\frac{E(1-\mu)}{\rho(1+\mu)(1-2\mu)}},\tag{1}$$

where $E$ is the elasticity modulus, $\rho$ is the density of the medium, and $\mu$ is Poisson's ratio;

$$Z=\rho\,C_P,\tag{2}$$

where $Z$ is the acoustic impedance of the medium;

$$R=\frac{A_{\mathrm{Reflected}}}{A_{\mathrm{Incident}}}=\frac{Z_2-Z_1}{Z_2+Z_1},\tag{3}$$

where $Z_1$ and $Z_2$ are the acoustic impedances of medium 1 and medium 2, $A_{\mathrm{Incident}}$ is the amplitude of the incident wave, and $A_{\mathrm{Reflected}}$ is the amplitude of the reflected wave.

Table 2. Acoustic parameters of SG samples.

| Group | Segment wave velocity, C_p1 (m/s) | Segment acoustic impedance, Z_1 (kg/m²·s) | Grouting wave velocity, C_p2 (m/s) | Grouting acoustic impedance, Z_2 (kg/m²·s) | Reflectance, R |
|------|------|------|------|------|------|
| SG-A | 4390 | 10.98 × 10⁶ | 1354 | 2.08 × 10⁶ | −0.68 |
| SG-B | 4390 | 10.98 × 10⁶ | 2081 | 3.98 × 10⁶ | −0.47 |
| SG-C | 4390 | 10.98 × 10⁶ | 2643 | 5.22 × 10⁶ | −0.36 |
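As a quick cross-check of (1)–(3), the sketch below recomputes Table 2 from the material parameters in Table 1. This is an illustrative calculation only, not part of the original study, and the function names are ours; the grouting velocities match Table 2, while the segment velocity and the derived impedances come out within a few percent of the tabulated values (about 4310 m/s versus 4390 m/s for the segment), presumably because of rounded inputs.

```python
import math

def wave_velocity(E_mpa, rho, mu):
    """P-wave velocity from Eq. (1); E is converted from MPa to Pa."""
    E = E_mpa * 1e6
    return math.sqrt(E * (1 - mu) / (rho * (1 + mu) * (1 - 2 * mu)))

def reflectance(Z1, Z2):
    """Reflection coefficient from Eq. (3) for a wave passing from medium 1 into medium 2."""
    return (Z2 - Z1) / (Z2 + Z1)

# Material parameters from Table 1; Poisson's ratios 0.3 (segment) and 0.25 (grouting)
segment = {"E": 34500, "rho": 2500, "mu": 0.30}
groutings = {"SG-A": (2350, 1539), "SG-B": (6900, 1913), "SG-C": (11330, 1947)}

Cp1 = wave_velocity(segment["E"], segment["rho"], segment["mu"])
Z1 = segment["rho"] * Cp1  # Eq. (2)
for name, (E_g, rho_g) in groutings.items():
    Cp2 = wave_velocity(E_g, rho_g, 0.25)
    Z2 = rho_g * Cp2  # Eq. (2)
    print(f"{name}: Cp2 = {Cp2:.0f} m/s, Z2 = {Z2:.2e} kg/m^2 s, "
          f"R = {reflectance(Z1, Z2):.2f}")
```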
To study the effect of defects, a piece of foam board was embedded into the left half of each of the three SG samples, and this half was compared with the defect-free right half. To ensure that the embedded foam board could be detected [7], its width was set to 150 mm. The board was 20 mm thick and was embedded 30 mm below the segment-grouting interface (Figure 3). The segment was simply supported at its four corners. Accordingly, the SG-A sample is divided into SG-A-DE (with defect) and SG-A-ND (without defect); similarly, SG-B is divided into SG-B-DE and SG-B-ND, and SG-C into SG-C-DE and SG-C-ND.

### 2.2. Experimental Setup

The IES (Impact Echo Scanning) instrument manufactured by Olson Instrument Company was used in this test (Figure 4). The instrument consists of a host computer, a cable, and a scrolling head that contains the signal-receiving sensor and allows the exciter to be set to different excitation intensities.

Figure 4. IES (Impact Echo Scanning) instrument.

Each record consisted of 2048 data points sampled every 10 μs (a sampling rate of 100 kHz), and the filter order was set to 4.

### 2.3. Theoretical Calculation

The impact echo method rests on the relationship between the measured thickness frequency, the stress wave velocity, and the sample thickness [15–17]:

$$f_T=\frac{\beta C_P}{2T},\tag{4}$$

where $f_T$ is the peak frequency obtained after FFT of the time-history curve collected by the sensor, $\beta$ is a shape factor taken as 1 in this paper [18], $C_P$ is the stress wave velocity in the medium, and $T$ is the thickness of the sample.

When a component is composed of two different materials [19], the thickness frequency from the bottom surface to the top surface is

$$f_h=\frac{1}{2h_1/(\beta C_{p1})+2h_2/(\beta C_{p2})},\tag{5}$$

where $h_1$ and $h_2$ are the thicknesses of materials 1 and 2 and $C_{p1}$ and $C_{p2}$ are the stress wave velocities in materials 1 and 2.

According to these relations, the thickness frequency $f_1$ of the segment-grouting interface is

$$f_1=\frac{\beta C_{p1}}{2T_1}.\tag{6}$$

The thickness frequency $f_2$ of the grouting-air interface is

$$f_2=\frac{1}{2T_1/(\beta C_{p1})+2T_2/(\beta C_{p2})},\tag{7}$$

where $T_1$ and $T_2$ are the thicknesses of the concrete segment and the grouting, and $C_{p1}$ and $C_{p2}$ are the stress wave velocities in the concrete segment and the grouting.

Similarly, the thickness frequency $f_2'$ at the defect is

$$f_2'=\frac{1}{2T_1'/(\beta C_{p1})+2T_2'/(\beta C_{p2})},\tag{8}$$

where $T_1'$ is the thickness of the concrete segment and $T_2'$ is the distance from the segment-grouting interface to the defect surface.

The theoretical peak frequencies at the different interfaces of all samples are listed in Table 3.

Table 3. Theoretical peak frequencies.

| Specimen number | f₁ (Hz) | f₂′ (Hz) | f₂ (Hz) |
|------|------|------|------|
| SG-A-DE | 6271 | 4908 | 3256 |
| SG-A-ND | 6271 | — | 3256 |
| SG-B-DE | 6271 | 5311 | 3413 |
| SG-B-ND | 6271 | — | 3413 |
| SG-C-DE | 6271 | 5489 | 4253 |
| SG-C-ND | 6271 | — | 4253 |
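The theoretical frequencies in Table 3 follow directly from (6)–(8). The short sketch below, again illustrative rather than part of the original study, recomputes them from the geometry (T₁ = 0.35 m, T₂ = 0.10 m, defect 0.03 m below the interface) and the velocities in Table 2. It reproduces Table 3 for SG-A and SG-C and for all f₂′ values; for SG-B the grouting-air frequency computes to roughly 3910 Hz rather than the tabulated 3413 Hz, so that entry may rest on a different input than the one printed in Table 2.

```python
# Thickness frequencies from Eqs. (6)-(8); beta = 1 as in the paper.
BETA, T1, T2, T2_DEFECT = 1.0, 0.35, 0.10, 0.03
CP1 = 4390.0  # segment P-wave velocity (m/s), Table 2
grouting_cp2 = {"SG-A": 1354.0, "SG-B": 2081.0, "SG-C": 2643.0}

def f_two_layer(t1, cp1, t2, cp2, beta=BETA):
    """Eq. (7)/(8): thickness frequency of a two-layer reflector."""
    return 1.0 / (2 * t1 / (beta * cp1) + 2 * t2 / (beta * cp2))

f1 = BETA * CP1 / (2 * T1)  # Eq. (6): segment-grouting interface
print(f"f1 = {f1:.0f} Hz")  # ~6271 Hz, identical for every sample
for name, cp2 in grouting_cp2.items():
    f2 = f_two_layer(T1, CP1, T2, cp2)             # grouting-air interface
    f2_def = f_two_layer(T1, CP1, T2_DEFECT, cp2)  # defect reflector
    print(f"{name}: f2 = {f2:.0f} Hz, f2' = {f2_def:.0f} Hz")
```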
### 2.4. Test Results

There is an evident peak within the effective range of the thickness frequency (2000–8000 Hz) in the spectra obtained from the experiment. Peaks may also occur in the low-frequency region below this range; these are caused by the surface wave and other interfering waves. Therefore, high-pass filtering with a cut-off frequency of about 2 kHz was applied to eliminate these strong low-frequency signals [9]. The filtered time-domain record of SG-A-ND is shown in Figure 5.

Figure 5. Filtered time-domain record of SG-A-ND.

The frequency-domain graphs obtained by FFT of the time-history curves are presented in Figure 6; the removal of components below 2 kHz is evident in these figures.

Figure 6. Frequency-domain graphs. (a) SG-A-ND; (b) SG-A-DE; (c) SG-B-ND; (d) SG-B-DE; (e) SG-C-ND; (f) SG-C-DE.

The peak frequencies picked from Figure 6 are compared with the theoretical values in Table 4.

Table 4. Comparison between tested peak frequencies and theoretical results under different working conditions.

| Model number | f₁ (Hz) | f₂′ (Hz) | f₂ (Hz) |
|------|------|------|------|
| SG-A-DE | 6738 (6271) | 4590 (4908) | Not obvious (3256) |
| SG-A-ND | 6348 (6271) | — | 3027 (3256) |
| SG-B-DE | 6543 (6271) | 4883 (5311) | Not obvious (3413) |
| SG-B-ND | 6640 (6271) | — | 3516 (3413) |
| SG-C-DE | 6641 (6271) | 5566 (5489) | Not obvious (4253) |
| SG-C-ND | 6641 (6271) | — | 4499 (4253) |

Note: values in parentheses are theoretical results.

Two observations follow from Figure 6 and Table 4:

(1) In all samples, f₁ is evident. In the samples without defects, f₂ can be detected, but its peak is smaller than that of f₁. In the samples with defects, f₂ could not be detected, but f₂′ could, and its peak is likewise smaller than that of f₁. All tested peak frequencies are close to the theoretical values.

(2) As the grouting acoustic impedance increases, the absolute value of the reflectance of the grouting-segment interface decreases, and the amplitude of f₁ decreases accordingly, as is evident in Figure 6. In other words, as the absolute reflectance falls, the energy reflected at this interface declines, and more energy is transmitted into the next material layer (the grouting layer).
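The post-processing described in Section 2.4 (high-pass filtering near 2 kHz, FFT, peak picking in the 2–8 kHz band) can be sketched as below. This is a generic reconstruction under stated assumptions, not the IES instrument's actual software: the 100 kHz sampling rate and 2048-point record come from Section 2.2, the 4th-order filter order is stated there as well, but the Butterworth filter type is our assumption.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100_000  # assumed sampling rate: 2048 points at 10 us intervals

def pick_thickness_frequency(record, fs=FS, f_cut=2_000.0, band=(2_000.0, 8_000.0)):
    """High-pass filter an impact echo record, FFT it, and return the
    dominant peak frequency within the effective thickness-frequency band."""
    # 4th-order Butterworth high-pass, applied forward-backward (zero phase)
    b, a = butter(4, f_cut / (fs / 2), btype="highpass")
    filtered = filtfilt(b, a, record)
    # Amplitude spectrum via FFT
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fs)
    # Restrict to the effective coverage of the thickness frequency
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(spectrum[mask])]

# Demo on a synthetic record: a 6271 Hz "thickness" tone plus low-frequency noise
t = np.arange(2048) / FS
record = np.sin(2 * np.pi * 6271 * t) + 3 * np.sin(2 * np.pi * 500 * t)
print(f"picked peak: {pick_thickness_frequency(record):.0f} Hz")
# recovers 6271 Hz to within one frequency bin (~49 Hz at this record length)
```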
## 3. Numerical Studies

### 3.1. Establishment of the SG Finite Element Model

The finite element model used a plane analysis with the two-dimensional plane element of type 11 [20]. The impact point was taken as the origin and the line of excitation as the symmetry axis, so that a symmetric model composed of the segment (top) and the grouting (bottom) was established. The thickness and half-width of the segment were 350 mm and 600 mm, and the thickness of the grouting was 100 mm, as shown in Figure 7. The models were divided into SG-A-ND (no defect) and SG-A-DE (with defect). The defect in the SG-A-DE model was a hole with a half-length of 75 mm and a thickness of 20 mm, whose upper edge lay 30 mm below the segment-grouting interface. The material parameters of the models were the same as those of the samples (Figure 7).

Figure 7. SG finite element models. (a) Without defect, SG-A-ND; (b) with defect, SG-A-DE.

The maximum exciting force and the excitation duration were set to 8 N and 40 μs [21], respectively. The excitation was simulated by applying a concentrated load on the axis of symmetry using the TABLE function in MSC.MARC, as depicted in Figure 8.

Figure 8. Simulation parameters of the exciting force. (a) Time-history curve; (b) frequency spectrum.
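The paper does not spell out the functional form of the 8 N, 40 μs impulse beyond Figure 8. A half-sine pulse is a common idealization of an elastic ball impact in impact echo simulations, and the sketch below generates a load table under that assumption; the time step and the half-sine shape are our assumptions, and the resulting rows are merely the kind of amplitude-versus-time data one could enter in a load table.

```python
import numpy as np

F_MAX, T_CONTACT, DT = 8.0, 40e-6, 1e-6  # 8 N peak, 40 us contact, 1 us step (assumed)

def half_sine_pulse(f_max, t_contact, dt):
    """Force-time table for an idealized half-sine impact pulse.
    Returns (time, force) arrays; the force is zero once contact ends."""
    t = np.arange(0.0, t_contact + dt, dt)
    return t, f_max * np.sin(np.pi * t / t_contact)

t, f = half_sine_pulse(F_MAX, T_CONTACT, DT)
# Print every tenth (time [s], force [N]) pair of the load table.
for ti, fi in list(zip(t, f))[::10]:
    print(f"{ti:.6f}  {fi:.3f}")
```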
### 3.2. Analysis of Numerical Simulation Results

The velocity-time and acceleration-time curves at the node 40 mm away from the impact point were extracted. Figure 9 shows the normalized acceleration-time curves of SG-A including the surface waves, while Figure 10 shows the normalized acceleration-time curves of SG-A with the surface waves eliminated.

Figure 9. Acceleration-time curves of SG-A with surface waves.

Figure 10. Acceleration-time curves of SG-A without surface waves.

By FFT of the acceleration-time curves without the Rayleigh wave, the frequency-domain graphs under the different working conditions were obtained; they are shown in Figure 11.

Figure 11. Frequency-domain graphs after FFT of the acceleration-time curves. (a) SG-A; (b) SG-B; (c) SG-C.

The finite element simulation results and the theoretical results are compared in Table 5. The following can be seen from Figure 11 and Table 5:

(1) f₁ is evident for all samples. The peaks of f₂ and f₂′ are smaller than that of f₁, and the simulated peak frequencies are close to the theoretical values.

(2) As the grouting acoustic impedance increases, the absolute value of the reflectance of the grouting-segment interface declines, the amplitude of f₁ decreases, and the amplitude of f₂ increases. This reflects that the energy reflected by the grouting-segment interface is negatively correlated with the acoustic impedance ratio: as the ratio increases, more energy is transmitted into the grouting and reflected at the grouting-air interface.

(3) The same SG model shows different frequency features under the same stress excitation depending on whether a defect is present; the models with defects present more complicated frequency features than those without. Except for SG-A-DE, whose f₂ is not obvious, SG-B-DE and SG-C-DE yielded smaller f₂ values and produced "low-frequency drifts."

Table 5. Comparison of peak frequencies between finite element simulation and theoretical calculation.

| Model number | f₁ (Hz) | f₂′ (Hz) | f₂ (Hz) |
|------|------|------|------|
| SG-A-DE | 6104 (6271), **−2.7%** | 5127 (4908), **4.5%** | Not obvious (3256) |
| SG-A-ND | 6104 (6271), **−2.7%** | — | 2930 (3256), **−10.0%** |
| SG-B-DE | 6104 (6271), **−2.7%** | 4639 (5311), **−12.7%** | 3418 (3413), **0.1%** |
| SG-B-ND | 6348 (6271), **2.1%** | — | 3662 (3413), **7.3%** |
| SG-C-DE | 6348 (6271), **2.1%** | 4883 (5489), **−11.0%** | 3662 (4253), **−13.9%** |
| SG-C-ND | 6104 (6271), **−2.7%** | — | 4639 (4253), **−13.9%** |

Note: values in parentheses are theoretical results; values in bold are the percentage differences between the finite element results and the theoretical results.

The comparison between the test results and the finite element simulation results is shown in Table 6.

Table 6. Comparison of peak frequencies between test results and finite element simulation under different working conditions.

| Model number | f₁ (Hz) | f₂′ (Hz) | f₂ (Hz) |
|------|------|------|------|
| SG-A-DE | 6738 [6104], **10.4%** | 4590 [5127], **−10.5%** | Not obvious [not obvious] |
| SG-A-ND | 6348 [6104], **4.0%** | — | 3042 [2930], **3.8%** |
| SG-B-DE | 6543 [6104], **7.2%** | 4883 [4639], **5.3%** | Not obvious [3418] |
| SG-B-ND | 6640 [6348], **4.6%** | — | 3571 [3662], **−2.5%** |
| SG-C-DE | 6641 [6348], **4.6%** | 5566 [4883], **13.9%** | Not obvious [3662] |
| SG-C-ND | 6641 [6104], **8.8%** | — | 4199 [4639], **−9.5%** |

Note: values in brackets are finite element simulation results; values in bold are the percentage differences between the tested results and the finite element results.

Additionally, judging from the amplitude and variation of f₂, the experiment detected less wave energy entering the grouting layer than the finite element simulation did, and less energy reflected at the defect and at the grouting-air interface.
This may be because the grouting and the segment were not bonded completely during grouting construction, so the stress wave transmission differed from the ideal state.

The theoretical, tested, and finite element values of the thickness frequency of the segment-grouting interface (f₁), the thickness frequency at the defect (f₂′), and the thickness frequency of the grouting-air interface (f₂) for the different samples are plotted as broken-line graphs in Figures 12 and 13.

Figure 12. Comparison of theoretical, tested, and finite element peak frequencies of the samples without defects.

Figure 13. Comparison of theoretical, tested, and finite element peak frequencies of the samples with defects.

It can be seen that the tested f₂ is not obvious in the defective specimens. Hence, if the measured second peak frequency deviates widely from the theoretical value of f₂, a grouting defect is likely present.
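That closing observation amounts to a simple screening rule: compare the measured second peak against the theoretical grouting-air frequency and flag large deviations. A minimal sketch of such a rule follows; the 15% tolerance is an illustrative choice, not a threshold given in the paper.

```python
def defect_suspected(second_peak_hz, f2_theory_hz, tolerance=0.15):
    """Flag a possible grouting defect when the measured second peak
    deviates from the theoretical grouting-air thickness frequency f2
    by more than the given relative tolerance (15% here, an assumed value)."""
    if second_peak_hz is None:  # no second peak detected at all
        return True
    return abs(second_peak_hz - f2_theory_hz) / f2_theory_hz > tolerance

# SG-A examples with Table 3/4 values: the intact half shows f2 near theory,
# while the defective half shows f2' (4590 Hz) far from the theoretical f2 (3256 Hz).
print(defect_suspected(3027, 3256))  # False: SG-A-ND, within ~7% of theory
print(defect_suspected(4590, 3256))  # True:  SG-A-DE, ~41% above theory
```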
## 4. Conclusions

(1) With respect to the peak frequencies f₁, f₂, and f₂′, the theoretical values, finite element simulation values, and test values are in close agreement under the different working conditions, which demonstrates that the impact echo method is feasible for detecting grouting defects in shield tunnels.

(2) The acoustic impedance ratio between the grouting and the segment determines the reflectance of the grouting-segment interface and thus influences the echo characteristics significantly.
In this study, a higher acoustic impedance ratio led to a smaller absolute reflectance and less echo energy at the segment-grouting interface. Consequently, the corresponding peak in the FFT spectrum becomes less obvious, and under some working conditions the thickness frequency could not be detected.

(3) The same SG model shows different frequency features under the same stress excitation depending on whether a defect is present. The model with a defect has more complicated frequency features than the one without, owing to the existence of three reflective interfaces.

(4) Because of the poor bonding between the segment and the grouting during sample preparation, the test results deviate from the finite element results, which represent ideal conditions; wave energy transmission in the experiment was more complicated.

---

*Source: 1025276-2016-11-03.xml*
# Assessment of Elementary School Teachers' Level of Knowledge and Attitude regarding Traumatic Dental Injuries in the United Arab Emirates

**Authors:** Manal A. Awad; Eman AlHammadi; Mariam Malalla; Zainab Maklai; Aisha Tariq; Badria Al-Ali; Alaa Al Jameel; Hisham El Batawi

**Journal:** International Journal of Dentistry (2017)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2017/1025324

---

## Abstract

Introduction. In this cross-sectional study, the level of knowledge and attitude of elementary school teachers regarding traumatic dental injuries (TDI) were assessed. Materials and Methods. A questionnaire was distributed to 330 elementary school teachers in 30 randomly selected schools in the Emirates of Sharjah and Dubai. The questionnaire collected information on participants' demographic characteristics, first aid training, and attitude about emergency management of TDI. Results. 292 teachers (88%) completed the questionnaires; of these, 95% were females, and 50% of the participants had first aid training. Knowledge about tooth avulsion was inadequate, and first aid training was not associated with correct responses to the management of avulsed teeth (p>0.05). A significantly higher percentage of younger teachers (p<0.05) expressed the need for future education on TDI management. A significantly higher percentage of participants who held an educational position (95%) indicated that they did not have enough knowledge regarding TDI compared to physical education teachers (79%) and administrators (87%) (p<0.05). Conclusions. Elementary school teachers in the UAE have a low level of knowledge regarding the management of dental trauma. Educational programs that address TDI are needed and could improve elementary school teachers' level of knowledge in the emergency management of TDI.

---

## Body

## 1. Introduction

Traumatic dental injuries (TDI) among children are considered a public health concern [1]. Dental injuries can lead to tooth loss, which in turn can have a negative impact on children's psychological well-being [2–4]. A study from Brazil showed that adolescents with severe untreated TDI were 2.4 times more likely to report worse oral-health-related quality of life than adolescents without untreated TDI [4]. Accordingly, the prognosis of injured teeth depends on immediate and appropriate management by those present where the trauma took place, including schoolteachers and staff [4, 5].

Studies have demonstrated that TDI are sometimes not properly treated, one reason being the time between the trauma episode and seeking dental treatment, which can sometimes be years [6].

The main cause of TDI among school-age children is falls in schools, which are difficult to prevent. It has been reported that approximately 50% of schoolchildren experience a TDI before graduation [7]. Therefore, it is highly likely that first aid would be provided by teachers and other school staff, and teachers' knowledge of the management of TDI is essential to improving prognosis.
Although many studies have been conducted in different countries to assess schoolteachers' knowledge of TDI [1, 2, 6–11], only one has previously been conducted in the UAE [12], and it included only Arabic-speaking teachers in the Emirate of Ajman.

The main objective of this study is to evaluate the level of knowledge and the attitudes of elementary schoolteachers in two Emirates of the UAE, Dubai and Sharjah, regarding TDI among school-age children.

## 2. Material and Methods

In this cross-sectional study, a self-administered questionnaire was distributed to 330 elementary school teachers in the Emirates of Sharjah and Dubai. Permission to conduct the study was obtained from the educational authorities of the participating schools. School principals approved the participation of the teachers, and all participating teachers signed consent forms. The questionnaire used in this study is based on those previously used in Jordan [6], Saudi Arabia [8], Australia [13], and Iran [9]. It consisted of four parts. Part 1 included questions on the demographic characteristics of the study participants, including gender, age, position, first aid training, and experience with dental trauma. Part 2 included questions on participants' attitude towards the emergency management of dental trauma; these comprised 10 items, each with five possible answers (strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree). Part 3 presented two case scenarios of dental injuries: the first concerned a fractured tooth in a nine-year-old, and the second a 13-year-old child with an avulsed tooth. These two scenarios had previously been used to assess schoolteachers' knowledge about dental injuries in other studies [5, 12]. For each scenario, participants were given several options to choose from. Part 4 of the questionnaire concerned self-assessment; participants were asked the following three questions: (a) "Is your knowledge on dental emergency management enough?" (b) "Do you need future education in this regard?" (c) "Are you able to provide proper action when needed?" Each question took a yes or no answer.

The questionnaire was translated into Arabic, and backward and forward translation was used to establish equivalency. Because some teachers were not Arabic-speaking, participants were given the option of responding to either the English or the Arabic version of the questionnaire.

### 2.1. Statistical Analysis

Data were analyzed using the statistical package SPSS (Version 20, Chicago, IL). Results were analyzed by frequency distribution. Chi-square tests were used to assess the effect of first aid training on knowledge of the management of the presented cases, and to evaluate the relationship between self-assessed knowledge, need for additional education, and proper action on the one hand and gender, position, school, age, and first aid training on the other. For all assessments, the level of significance was set at alpha 0.05, two-tailed.
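For the contingency analyses described above, a chi-square test on a 2 × 2 table can be recomputed as sketched below. This is an illustrative recomputation, not the authors' SPSS workflow, using the first-aid-training versus emergency-management counts later reported in Table 4 (correct: 89 with training, 77 without; other responses: 55 and 64).

```python
from scipy.stats import chi2_contingency

# Rows: correct / other responses; columns: first aid training yes / no (Table 4, item 1)
table = [[89, 77],
         [55, 64]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2f}")
# p comes out around 0.22, consistent with the p = 0.2 reported in Table 4
```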
## 3. Results

Of the 330 teachers approached in 30 randomly selected schools in Sharjah and Dubai, 88% (N = 292) responded to the questionnaire. The majority of participants were female (N = 278; 95%), and approximately half of the teachers had first aid training (n = 146; 51%) (Table 1). In this study, 36% of the participants indicated that teachers are not responsible for traumatic dental injuries, and the majority (89%) believed that time is an important factor in the prognosis of dental trauma. In addition, 81% indicated that educational experience can assist in the management of TDI (Table 2).

Table 1. Characteristics of study participants.

| Variable | Category | N (%) |
|------|------|------|
| Gender | Males | 14 (4.8) |
| | Females | 278 (95.2) |
| Age group | <35 | 148 (51) |
| | 36–45 | 102 (35) |
| | >45 | 42 (14) |
| Education | High school/diploma | 58 (20) |
| | Bachelor, masters | 227 (80) |
| Position | Educational | 246 (87) |
| | Health/physical education | 14 (5) |
| | Administration | 24 (9) |
| First aid | Yes | 146 (51) |
| | No | 49 (49) |
| Dental emergency training | Yes | 13 (5) |
| | No | 249 (95) |
| Witness | Yes | 84 (29) |
| | No | 201 (71) |

Table 2. Participants' attitudes towards traumatic dental injuries.

| Question | Strongly agree/agree, N (%) |
|------|------|
| A teacher is not responsible for posttraumatic dental injuries | 104 (36) |
| Time consciousness for emergency management of dental trauma can play a vital role in improving tooth prognosis | 282 (89) |
| A tooth after avulsion will definitely be lost, so there is no need for treatment | 35 (12) |
| Dental trauma emergency management must become one of the educational priorities for teachers | 177 (61) |
| Dental trauma emergency is not an emergency situation | 41 (14) |
| Teacher intervention in school dental injuries may play a key role for the traumatized tooth | 212 (73) |
| Emergency management of dental trauma requires special education and therefore there is no need for teacher intervention | 114 (40) |
| Wearing a mouth guard should be compulsory in all contact sports | 186 (65) |
| Due to legal considerations, it is advisable that a teacher refrains from intervening in such scenarios | 102 (35) |
| Having some short pertinent educational experiences, educators can provide better assistance in traumatic dental scenarios | 232 (81) |

Table 3 presents the participants' responses to the two cases. Only 58% knew that the damaged tooth was permanent. More than half of the participants (n = 165; 57%) responded that the immediate action would be to contact the parents and advise them to send the child to the dentist; only 33% knew that the correct immediate action is to look for the broken tooth and send the child to the dentist with it.

Table 3. Teachers' responses to two scenarios on traumatic dental injuries (N (%)).

**Case 1.** During school hours, a 9-year-old child is hit in the face with a softball. Her upper front tooth is broken.
Otherwise, she is healthy, unhurt, and conscious.

The broken tooth is likely to be: (i) temporary, 73 (25); (ii) permanent, 166 (58); (iii) do not know, 49 (17).

Your immediate emergency management of the case is: (i) calm down the child and send her back to class, 6 (17); (ii) contact parents and advise them to send the child to the dentist immediately, 165 (57); (iii) look for the broken tooth piece and send the child to the dentist with it, 96 (33); (iv) do not know what to do, 10 (4).

**Case 2.** A 13-year-old boy is hit in the face; his upper front tooth is missing and there is blood in his mouth. Otherwise, he is unhurt and healthy, and he did not lose consciousness.

The immediate emergency action you would take is: (i) stop the bleeding by compressing a cloth over the injury, 174 (61); (ii) look for the tooth, wash it, and put it back in its place, 17 (6); (iii) save the tooth in the child's mouth and look for professional help, 53 (18); (iv) place the tooth in a paper and send the child to the dentist after school time, 26 (9); (v) do not know what to do, 14 (5).

What type of health services would you seek first? (i) General physician, 34 (11); (ii) pediatric physician, 13 (5); (iii) hospital, 27 (10); (iv) dental school/university, 11 (4); (v) general dentist, 97 (33); (vi) pediatric dentist, 96 (33); (vii) endodontist, 7 (2).

Would you investigate if the child had tetanus? (i) Yes, 163 (59); (ii) no, 113 (41).

If the tooth has fallen on dirty ground, what would you do? (i) Rinse the tooth under tap water and put it back into its socket, 90 (32); (ii) rub away the dirt with a sponge and soap and put it back, 17 (6); (iii) put it back into the socket immediately without cleaning, 5 (2); (iv) discard the tooth, 81 (29); (v) do not know what to do, 87 (31).

How would you transport the tooth on the way to the dentist if you cannot put it back into its socket? (i) Put the tooth in ice, 58 (20); (ii) put the tooth in liquid, 76 (27); (iii) place the tooth in the child's mouth, 19 (7); (iv) place the tooth in the child's hand, 2 (1); (v) wrap the tooth in a handkerchief or paper tissue, 130 (46).

Mark desirable liquids for storing a tooth that has been knocked out while on the way to the dentist: (i) tap water, 100 (37); (ii) fresh milk, 23 (9); (iii) child's saliva, 29 (11); (iv) alcohol, 9 (3); (v) saline solution, 65 (24); (vi) disinfecting solution, 44 (16); (vii) chicken egg white, 0 (0).

Which is the best time for putting a tooth back in if it has been knocked out of the mouth? (i) Immediately after the accident, 66 (23); (ii) within 30 min after the bleeding has stopped, 55 (19); (iii) within the same day, 18 (6); (iv) this is not a crucial factor, 23 (8); (v) do not know what to do, 122 (43).

Table 3 also summarizes the responses to the questions on tooth avulsion. Only 6% selected the option to wash the tooth and put it back in its place in the mouth, while the majority (n = 174; 61%) thought that stopping the bleeding by compressing a cloth over the injury was the correct immediate action. Of all the participants, 12% indicated that they would go to a general dentist. When asked about tetanus vaccination, more than half (n = 163; 59%) responded correctly; 31% did not know what to do if the tooth fell on dirty ground, and 32% gave the correct answer that the tooth should be rinsed under tap water and put back into the socket. Forty-six percent of the participants thought that the tooth should be wrapped in a handkerchief or paper tissue to be transported to the dentist, and the highest percentage (41%) thought that the tooth should be stored in tap water.
Only 23% indicated that the best time for the tooth to be replaced in its socket was immediately after the accident.

Table 4 shows that first aid training did not make a significant difference to the correctness of participants' responses.

Table 4. Relationship between first aid training and teachers' responses to the case scenarios.ᵃ

| Question | Response | First aid: Yes | First aid: No | p value |
|------|------|------|------|------|
| (1) Emergency management | Correct | 89 (62) | 77 (55) | p = 0.2 |
| | Other responses | 55 (38) | 64 (45) | |
| (2) The best health services | Correct | 51 (35) | 45 (32) | p = 0.50 |
| | Other responses | 92 (65) | 97 (68) | |
| (3) Tetanus vaccine | Correct | 9 (6) | 8 (6) | p = 0.83 |
| | Other responses | 133 (94) | 131 (94) | |
| (4) Cleaning before replantation of a dirty tooth | Correct | 53 (38) | 42 (30) | p = 0.37 |
| | Other responses | 87 (62) | 98 (70) | |
| (5) Transportation vehicle | Correct | 48 (35) | 42 (30) | p = 0.37 |
| | Other responses | 89 (65) | 98 (70) | |
| (6) Storage media | Correct | 116 (81) | 106 (76) | p = 0.31 |
| | Incorrect | 27 (19) | 33 (24) | |
| (7) Time to replant avulsed tooth | Correct | 39 (51) | 103 (48) | p = 0.11 |
| | Incorrect | 27 (41) | 113 (52) | |

ᵃ Based on Chi-square test.

A significantly higher percentage of younger participants (<35) acknowledged that they needed future education on dental emergency management compared to the other groups (p<0.05) (Table 5). In addition, 94% of those in an educational position reported not having enough knowledge of the management of dental emergencies, significantly more than the other groups (p<0.05), and a significantly higher percentage of teachers from government schools (93%) believed that they could benefit from further education on TDI compared to those in private schools (78%). First aid training was not related to self-assessment regarding dental emergency management.

Table 5. Relationship between self-assessment and sociodemographic factors and first aid training.ᵃ

| Variable | Knowledge: enough, N (%) | Knowledge: not enough, N (%) | Need future education: yes, N (%) | Need future education: no, N (%) | Proper action: yes, N (%) | Proper action: no, N (%) |
|------|------|------|------|------|------|------|
| Gender: male | 3 (21) | 11 (79)ᵇ | 11 (79) | 3 (21) | 5 (36) | 9 (64) |
| Gender: female | 19 (7) | 254 (93) | 218 (80) | 53 (20) | 100 (37) | 166 (63) |
| Age: <35 | 10 (7) | 136 (93) | 111 (77) | 34 (23) | 46 (32) | 97 (68)ᵇ |
| Age: 36–45 | 6 (6) | 95 (94) | 86 (85) | 15 (15) | 39 (39) | 60 (61) |
| Age: >45 | 69 (15) | 34 (85) | 32 (82) | 7 (18) | 20 (19) | 18 (11) |
| Position: educator | 14 (6) | 228 (94)ᵇ | 193 (80) | 47 (20) | 84 (36) | 152 (64) |
| Position: health teacher | 3 (21) | 11 (79) | 14 (100) | 0 (0) | 7 (54) | 6 (46) |
| Position: administration | 3 (13) | 21 (87) | 20 (83) | 4 (17) | 9 (38) | 15 (63) |
| First aid: yes | 11 (8) | 131 (92) | 116 (82) | 26 (18) | 51 (37) | 88 (63) |
| First aid: no | 11 (8) | 131 (92) | 111 (79) | 29 (21) | 53 (38) | 85 (62) |

ᵃ Based on Chi-square test; ᵇ p<0.05.

## 4. Discussion

This study included teachers from randomly selected schools in the Emirates of Sharjah and Dubai. Similar to previous reports from Iran [12], Jordan [6], and Brazil [2], approximately half of the teachers (51%) in this study had first aid training. This percentage is nevertheless higher than previously reported in the UAE [12], where only 32% of school teachers in the Emirate of Ajman had first aid training, and in Saudi Arabia, where only 18% of primary school teachers had such training [8]. As previously reported [9], this training did not contribute to correct answers to the case scenarios provided. These findings highlight the importance of continuing education programs specifically directed towards the management of TDI. Moreover, 81% of the school teachers in this study indicated that short educational programs could assist them in the management of TDI, and more than half agreed that they are responsible for the management of dental trauma when it occurs in school.
These findings emphasize the need for tailored educational activities.

Consistent with a previous study in the UAE [12], a substantial share of the participants in this study (42%) did not know that the broken tooth in the nine-year-old child was a permanent tooth. Lack of knowledge about the eruption timing of teeth suggests that teachers may take actions with negative long-term consequences. For example, it is important that teachers and school staff be aware that an avulsed primary tooth should not be reimplanted, to avoid or reduce damage to the succeeding permanent tooth, whereas an avulsed permanent tooth should be reimplanted immediately to keep the periodontal cells viable [14, 15].

In case 2, few participants (23%) responded that an avulsed tooth must be replanted immediately, and the largest group (43%) did not know what to do. Similar results were observed in studies in Iran [7] and the UAE [12], in which 39% and 19% of participating teachers, respectively, indicated that they did not know the appropriate time to replant an avulsed tooth. Although immediate replantation of an avulsed tooth is necessary for a good long-term prognosis and for the reduction of possible complications, one obstacle to immediate action is the lack of parental consent [12, 16].

The lack of knowledge among elementary school teachers about TDI calls for action to improve education about the types of teeth that are likely to suffer TDI and about the proper manipulation and handling of teeth to maintain the vitality of cells. Moreover, our findings show that about 30% of teachers had witnessed TDI; this indicates that TDI are not uncommon and that schools, in collaboration with health authorities, should take action to educate teachers about the proper management of TDI. Furthermore, previous studies [17–20] showed that providing teachers with information specifically addressing tooth avulsion enhanced their knowledge about TDI. For example, Pujita et al. [17] examined primary school teachers' knowledge of TDI before and after an intervention in which a promotion program on TDI was implemented, and reported that teachers' knowledge improved significantly as a result of the promotion sessions. In another study, Lieger et al. [19] tested the effect of posters displayed in schools on teachers' management of TDI: five years after the poster campaign, teachers who worked in the areas where the posters had been displayed had significantly better knowledge of the management of TDI than teachers who worked in other areas. These findings are encouraging; such measures could easily be adopted, and their effects examined, in elementary schools in the UAE as well.

Approximately 37% of the teachers in this study reported that they could take proper action in case of TDI, and, compared to the other groups, a significantly higher percentage of younger teachers (<35 years old) indicated that they were not able to do so. These findings suggest that teachers in general may have overestimated their knowledge, which may lead to mismanagement of broken or avulsed teeth in children, although younger teachers were more likely to admit their inability to handle TDI. Recently, Al-Musawi et al.
[21] demonstrated that providing information on the management of TDI through a smartphone application to a group of 28 schoolteachers was significantly more effective for the management of avulsed teeth than providing lectures alone (N = 32). The use of such a tool is worth exploring in a more pragmatic setting, in which stress is an important factor to be taken into consideration.

## 5. Conclusion

The results of this study show that there is a need for education about TDI among school teachers in the UAE. Our findings also highlight the fact that more than one-third of the teachers and staff did not consider handling TDI their responsibility. These disturbing findings should be addressed: teachers should be informed about the important role they can play in handling TDI, as well as about the negative consequences of poor management or neglect of TDI.

Educational programs and training are needed to improve the proper management of traumatic injuries by schoolteachers and to avoid unnecessary negative consequences. The inclusion of guidelines on the management of TDI in schoolteachers' training curricula should also be given serious consideration.

---

*Source: 1025324-2017-09-14.xml*
1025324-2017-09-14_1025324-2017-09-14.md
21,286
Assessment of Elementary School Teachers’ Level of Knowledge and Attitude regarding Traumatic Dental Injuries in the United Arab Emirates
Manal A. Awad; Eman AlHammadi; Mariam Malalla; Zainab Maklai; Aisha Tariq; Badria Al-Ali; Alaa Al Jameel; Hisham El Batawi
International Journal of Dentistry (2017)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2017/1025324
1025324-2017-09-14.xml
--- ## Abstract Introduction. In this cross-sectional study, the level of knowledge and attitude of elementary school teachers regarding traumatic dental injuries (TDI) were assessed.Materials and Methods. A questionnaire was distributed to 330 elementary school teachers in 30 randomly selected schools in the Emirates of Sharjah and Dubai. The questionnaire collected information on participants’ demographic characteristics, first aid training, and attitude about emergency management of TDI.Results. 292 teachers (88%) completed the questionnaires; of these, 95% were females, and 50% of the participants had first aid training. Knowledge about tooth avulsion was inadequate, and first aid training was not associated with correct responses to management of avulsed teeth (p>0.05). A significantly higher percentage of younger teachers (p<0.05) expressed the need for future education on TDI management. A significantly higher percentage of participants who had an educational position (95%) indicated that they did not have enough knowledge regarding TDI compared to physical education teachers (79%) and administrators (87%) (p<0.05).Conclusions. Elementary school teachers in the UAE have a low level of knowledge regarding the management of dental trauma. Educational programs that address TDI are needed and could improve the elementary school teachers’ level of knowledge in emergency management of TDI. --- ## Body ## 1. Introduction Traumatic dental injuries (TDI) among children are considered a public health concern [1] Dental injuries can lead to tooth loss; subsequently it can have a negative impact on children’s psychological well-being [2–4]. A study from Brazil showed that adolescents with severe untreated TDI were 2.4 times more likely to report worse oral-health-related quality of life than adolescents without untreated TDI [4]. Accordingly, the prognosis of injured teeth depends on immediate and appropriate management by those present at the site where the trauma took place; this includes schoolteachers and staff [4, 5].Studies have demonstrated that TDI are sometimes not properly treated, one of the reasons for this problem being the time between the trauma episode and seeking dental treatment, which can sometimes be years [6].The main cause of TDI among school-age children are unpreventable falls in schools. It has been previously reported that approximately 50% of schoolchildren have experienced a TDI prior to graduation [7]. Therefore, it is highly likely that first aid would be provided by teachers and other school staff. Accordingly, teachers’ knowledge of the management of TDI is essential to improve prognosis. Although many studies have been conducted in different countries to assess schoolteachers’ knowledge of TDI [1, 2, 6–11], only one has previously been conducted in the UAE [12], and this included only Arabic-speaking teachers in the Emirate of Ajman.The main objective of this study is to evaluate the level of knowledge and attitudes of elementary schoolteachers in two Emirates in the UAE, Dubai and Sharjah, regarding TDI among school-age children. ## 2. Material and Methods In this cross-sectional study, a self-administered questionnaire was distributed to 330 elementary school teachers in the Emirates of Sharjah and Dubai. Permission to conduct the study was obtained from the educational authorities of the participating schools. Schools principals approved participation of the school teachers in the study and all participating teachers signed consent forms. 
The questionnaire used in this study is based on those previously used in Jordan [6] Saudi Arabia [8], Australia [13], and Iran [9]. This questionnaire consisted of four parts. Part 1 included questions related to the demographic characteristics of the study participants, including gender, age, position, first aid training, and experience with dental trauma. Part 2 included questions related to participants’ attitude towards emergency management of dental trauma. These attitude questions included 10 items with five possible answers for each (strongly agree, agree, disagree, neither agree nor disagree, agree, and strongly disagree). Part 3 provided participants with two case scenarios of dental injuries. The first concerned a fractured tooth in a nine year old, and the second was a case of a 13-year-old child with an avulsed tooth. These two scenarios had previously been used to assess schoolteachers’ knowledge about dental injuries in other studies [5, 12]. For each scenario, participants were given several options to choose from. Part 4 of the questionnaire was related to self-assessment, in which participants were asked the following three questions: (a) “Is your knowledge on dental emergency management enough?” (b) “Do you need future education in this regard?” (c) “Are you able to provide proper action when needed?” For each question, participants gave a yes or no answer.The questionnaire was translated to Arabic and backward and forward translation was used to establish equivalency. Because some teachers were not Arabic-speaking, participants were given the option of responding either to the English or the Arabic version of the questionnaire. ### 2.1. Statistical Analysis Data were analyzed using the statistical analysis package SPSS (Version 20, Chicago, IL). Results were analyzed by frequency distribution. Chi-square tests were also utilized to assess the effect of first aid training on knowledge of management of presented cases. In addition, Chi-square tests were also used to evaluate the relationship between self-assessments of knowledge, need for additional education, and proper action according to gender, position, school, age, and first aid training. For all assessments, level of significance was set at alpha 0.05, two-tailed. ## 2.1. Statistical Analysis Data were analyzed using the statistical analysis package SPSS (Version 20, Chicago, IL). Results were analyzed by frequency distribution. Chi-square tests were also utilized to assess the effect of first aid training on knowledge of management of presented cases. In addition, Chi-square tests were also used to evaluate the relationship between self-assessments of knowledge, need for additional education, and proper action according to gender, position, school, age, and first aid training. For all assessments, level of significance was set at alpha 0.05, two-tailed. ## 3. Results Of the 330 teachers who were approached from 30 randomly selected schools in Sharjah and Dubai, 88%(N=292) responded to the questionnaire. The majority of participants were females (N=278; 95%) and approximately half of school teachers had first aid training (n=146; 51%) (Table 1). In this study, 36% of the participants indicated that teachers are not responsible for traumatic dental injuries, and the majority (89%) believed that time is an important factor in the prognosis of dental trauma. In addition, 81% indicated that educational experience can assist in the management of TDI (Table 2).Table 1 Characteristics of study participants. 
Variable N (%) Gender Males 14 (4.8) Females 278 (95.2) Age group <35 148 (51) 36–45 102 (35) >45 42 (14) Education High school/diploma 58 (20) Bachelor, masters 227 (80) Position Educational 246 (87) Health/physical education 14 (5) Administration 24 (9) First aid Yes 146 (51) No 49 (49) Dental emergency training Yes 13 (5) No 249 (95) Witness Yes 84 (29) no 201 (71)Table 2 Participants attitude towards traumatic dental injuries. Questions Strongly agree/agree N (%) A teacher is not responsible for posttraumatic dental injuries 104 (36) Time consciousness for emergency management of dental trauma can play a vital role in improving tooth prognosis 282 (89) A tooth after avulsion will be lost definitely, so there is no need for treatment 35 (12) Dental trauma emergency management must become one of the educational priorities for teachers 177 (61) Dental trauma emergency is not an emergency situation 41 (14) Teacher intervention in school dental injuries may play a key role in traumatized tooth 212 (73) Emergency management of dental trauma requires special education and therefore there is no need for teacher intervention 114 (40) Wearing a mouth guard should be compulsory in all contact sports 186 (65) Due to legal considerations, it is advisable that a teacher refrains from intervening in such scenarios 102 (35) Having some short pertinent educational experiences, educators can provide better assistance in traumatic dental scenarios 232 (81)Table3 depicts the participants’ responses to the two cases presented to them. Only 58% knew that the damaged tooth was permanent. More than half of the participants (n=165; 57%) responded that the immediate action would be to contact the parents and advise them to send the child to the dentist. Only 33% knew that the correct immediate action is to look for the broken tooth and send the child to the dentist.Table 3 Teachers responses to two scenarios on traumatic dental injuries. Scenario N (%) Case  1 During school hours, a 9-year-old child is hit in the face with softball. Her upper front tooth is broken. Otherwise, she is healthy, unhurt, and consciousThe broken tooth is likely to be (i) Temporary 73 (25) (ii) Permanent 166 (58) (iii) Do not know 49 (17) Your immediate emergency management of the case is (i) Calm down the child and send her back to class 6 (17) (ii) Contact parents and advise them to send child to the dentist immediately 165 (57) (iii) Look for broken tooth piece and send child to the dentists with it 96 (33) (iv) Do not know what to do 10 (4) Case  2 A 13-year-old boy is hit in the face and his upper front tooth is missing and there is blood in his mouth. Otherwise, he is unhurt, healthy, and he did not lose consciousness The immediate emergency action you would take is (i) Stop the bleeding by compressing a cloth over the injury 174 (61) (ii) Look for the tooth, wash it, and put it back in its place 17 (6) (iii) Save the tooth in child’s mouth and look for professional help 53 (18) (iv) Place the tooth in a paper and send the child to dentist after the school time 26 (9) (v) Do not know what to do 14 (5) What type of health services would you seek first? (i) General physician 34 (11) (ii) Pediatric physician 13 (5) (iii) Hospital 27 (10) (iv) Dental School University 11 (4) (v) General dentists 97 (33) (vi) Pediatric dentist 96 (33) (vii) Endodontist 7 (2) Would you investigate if the child had tetanus? (i) Yes 163 (59) (ii) No 113 (41) If the tooth has fallen on the dirty ground what would you do? 
- (i) Rinse the tooth under tap water and put it back into its socket: 90 (32)
- (ii) Rub away the dirt with a sponge and soap and put it back: 17 (6)
- (iii) Put it back into the socket immediately without cleaning: 5 (2)
- (iv) Discard the tooth: 81 (29)
- (v) Do not know what to do: 87 (31)

How would you transport the tooth on the way to the dentist if you cannot put the tooth back into its socket?
- (i) Put the tooth in ice: 58 (20)
- (ii) Put the tooth in liquid: 76 (27)
- (iii) Place the tooth in the child's mouth: 19 (7)
- (iv) Place the tooth in the child's hand: 2 (1)
- (v) Wrap the tooth in a handkerchief or paper tissue: 130 (46)

Mark desirable liquids for storing a tooth that has been knocked out while you are on your way to the dentist:
- (i) Tap water: 100 (37)
- (ii) Fresh milk: 23 (9)
- (iii) Child's saliva: 29 (11)
- (iv) Alcohol: 9 (3)
- (v) Saline solution: 65 (24)
- (vi) Disinfecting solution: 44 (16)
- (vii) Chicken egg white: 0 (0)

Which is the best time for putting a tooth back in if it is knocked out of the mouth?
- (i) Immediately after the accident: 66 (23)
- (ii) Within 30 min after the bleeding has stopped: 55 (19)
- (iii) Within the same day: 18 (6)
- (iv) This is not a crucial factor: 23 (8)
- (v) Do not know what to do: 122 (43)

Table 3 also summarizes the responses to the questions regarding tooth avulsion. Only 6% selected the option to wash the tooth and put it back in its place in the mouth, while the majority (n=174; 61%) thought that stopping the bleeding by compressing a cloth over the injury was the correct immediate action. One-third of the participants (33%) indicated that they would first seek a general dentist. When asked about tetanus vaccination, more than half (n=163; 59%) responded correctly; 31% did not know what to do if the tooth fell on dirty ground, and 32% gave the correct answer that the tooth should be rinsed under tap water and put back into the socket. Forty-six percent of the participants thought that the tooth should be wrapped in a handkerchief or paper tissue to be transported to the dentist. The highest percentage (37%) thought that the tooth should be stored in tap water. Only 23% indicated that the best time for the tooth to be replaced in its socket was immediately after the accident.

Table 4 shows that first aid training did not make a significant difference to the correctness of participants' responses.

Table 4 Relationship between first aid training and teachers' responses to the case scenarios^a.

| Question | Response | First aid: Yes N (%) | First aid: No N (%) | p value |
|---|---|---|---|---|
| (1) Emergency management | Correct | 89 (62) | 77 (55) | 0.2 |
| | Other responses | 55 (38) | 64 (45) | |
| (2) The best health services | Correct | 51 (35) | 45 (32) | 0.50 |
| | Other responses | 92 (65) | 97 (68) | |
| (3) Tetanus vaccine | Correct | 9 (6) | 8 (6) | 0.83 |
| | Other responses | 133 (94) | 131 (94) | |
| (4) Cleaning before replantation of a dirty tooth | Correct | 53 (38) | 42 (30) | 0.37 |
| | Other responses | 87 (62) | 98 (70) | |
| (5) Transportation vehicle | Correct | 48 (35) | 42 (30) | 0.37 |
| | Other responses | 89 (65) | 98 (70) | |
| (6) Storage media | Correct | 116 (81) | 106 (76) | 0.31 |
| | Incorrect | 27 (19) | 33 (24) | |
| (7) Time to replant avulsed tooth | Correct | 39 (51) | 103 (48) | 0.11 |
| | Incorrect | 27 (41) | 113 (52) | |

^aBased on Chi-square test.

A significantly higher percentage of younger participants (<35) acknowledged that they needed future education on dental emergency management compared to other groups (p<0.05) (Table 5). In addition, 94% of those in an educational position reported not having enough knowledge of the management of dental emergencies compared to other groups (p<0.05).
In addition, a significantly higher percentage of teachers from government schools (93%) believed that they could benefit from further education on TDI compared to those in private schools (78%). First aid training was not related to self-assessment regarding dental emergency management.

Table 5 Relationship between self-assessment and sociodemographic factors and first aid training^a.

| Variable | Knowledge: Enough N (%) | Knowledge: Not enough N (%) | Need future education: Yes N (%) | Need future education: No N (%) | Proper action: Yes N (%) | Proper action: No N (%) |
|---|---|---|---|---|---|---|
| Gender: Male | 3 (21) | 11 (79)^b | 11 (79) | 3 (21) | 5 (36) | 9 (64) |
| Gender: Female | 19 (7) | 254 (93) | 218 (80) | 53 (20) | 100 (37) | 166 (63) |
| Age: <35 | 10 (7) | 136 (93) | 111 (77) | 34 (23) | 46 (32) | 97 (68)^b |
| Age: 36–45 | 6 (6) | 95 (94) | 86 (85) | 15 (15) | 39 (39) | 60 (61) |
| Age: >45 | 6 (15) | 34 (85) | 32 (82) | 7 (18) | 20 (19) | 18 (11) |
| Position: Educator | 14 (6) | 228 (94)^b | 193 (80) | 47 (20) | 84 (36) | 152 (64) |
| Position: Health teacher | 3 (21) | 11 (79) | 14 (100) | 0 (0) | 7 (54) | 6 (46) |
| Position: Administration | 3 (13) | 21 (87) | 20 (83) | 4 (17) | 9 (38) | 15 (63) |
| First aid: Yes | 11 (8) | 131 (92) | 116 (82) | 26 (18) | 51 (37) | 88 (63) |
| First aid: No | 11 (8) | 131 (92) | 111 (79) | 29 (21) | 53 (38) | 85 (62) |

^aBased on Chi-square test; ^b p<0.05.

## 4. Discussion

This study included teachers from randomly selected schools in the Emirates of Sharjah and Dubai. Similar to previous reports from Iran [12], Jordan [6], and Brazil [2], approximately half of the teachers (51%) in this study had first aid training. However, this percentage is higher than previously reported in the UAE [12], where only 32% of school teachers in the Emirate of Ajman had first aid training, and in Saudi Arabia, where only 18% of primary school teachers had such training [8]. As previously reported [9], this training did not contribute to providing correct answers to the case scenarios. These findings highlight the importance of continuing education programs that are specifically directed towards the management of TDI. Moreover, in this study, 81% of the school teachers indicated that short educational programs could assist them in the management of TDI, and more than half of the teachers agreed that they are responsible for the management of dental trauma when it occurs in schools. These findings emphasize the need for tailored educational activities.

Consistent with a previous study in the UAE [12], a substantial proportion of the participants (42%) in this study did not know that the broken tooth in the nine-year-old child was a permanent tooth. Lack of knowledge about the eruption timing of teeth suggests that teachers may take actions that could have future negative consequences. For example, it is important that teachers and school staff are aware that an avulsed primary tooth should not be reimplanted, to avoid or reduce damage to the succeeding permanent tooth, whereas an avulsed permanent tooth should be reimplanted immediately to keep the periodontal cells viable [14, 15].

In case 2, few (23%) participants responded that an avulsed tooth must be replanted immediately, and the highest percentage of teachers (43%) did not know what to do. Similar results were observed in studies in Iran [7] and the UAE [12], in which 39% and 19%, respectively, of participating teachers indicated that they did not know the appropriate time to replant an avulsed tooth.
Although immediate replantation of an avulsed tooth is necessary for a good long-term prognosis and the reduction of possible complications, an obstacle to immediate action is the lack of parental consent [12, 16]. The lack of knowledge among elementary school teachers about TDI calls for action to improve education about the types of teeth that are likely to suffer TDI and the proper manipulation and handling of teeth to maintain the vitality of periodontal cells. Moreover, our findings show that 29% of teachers had witnessed TDI; this indicates that TDI are not uncommon and that schools, in collaboration with health authorities, should take action to educate teachers about the proper management of TDI. Furthermore, previous studies [17–20] showed that providing teachers with information that specifically addressed tooth avulsion enhanced their knowledge about TDI. For example, Pujita et al. [17] examined primary school teachers' knowledge of TDI before and after an intervention in which a promotion program on TDI was implemented. The authors reported that teachers' knowledge significantly improved as a result of the promotion sessions. In another study, Lieger et al. [19] tested the effect of posters displayed in schools on teachers' management of TDI. Five years after the poster campaign, teachers who worked in the areas where the posters were displayed had significantly better knowledge of the management of TDI than teachers who worked in other areas. These findings are encouraging; such measures could easily be adopted, and their effects could be examined in elementary schools in the UAE as well.

Approximately 37% of the teachers in this study reported that they could provide proper action in case of TDI, and compared to other groups, a significantly higher percentage of younger teachers (<35 years old) indicated that they were not able to provide proper action in this regard. These findings suggest that teachers in general may have overestimated their knowledge, which may lead to mismanagement of broken or avulsed teeth in children. However, younger teachers are more likely to admit their lack of ability to handle TDI.

Recently, Al-Musawi et al. [21] demonstrated that providing information on the management of TDI through a smartphone application to a group of 28 schoolteachers was significantly more effective than providing lectures only (N=32) on the management of avulsed teeth. The use of such a tool is worth exploring in a more pragmatic setting, in which stress is an important factor that should also be taken into consideration.

## 5. Conclusion

The results of this study show that there is a need for education about TDI among school teachers in the UAE. Our findings also highlight the fact that more than one-third of the teachers and staff did not think that handling TDI was their responsibility. These disturbing findings should be addressed, and teachers should be informed about the important role that they could play in handling TDI, as well as the negative consequences of poor management or neglect of TDI. Educational programs and training are needed to improve the proper management of traumatic injuries by schoolteachers and to avoid unnecessary negative consequences. The inclusion of guidelines on the management of TDI in schoolteachers' training curricula should also be given serious consideration.

---
*Source: 1025324-2017-09-14.xml*
2017
# Evolution of Lignocellulosic Macrocomponents in the Wastewater Streams of a Sulfite Pulp Mill: A Preliminary Biorefining Approach

**Authors:** Tamara Llano; Noelia García-Quevedo; Natalia Quijorna; Javier R. Viguri; Alberto Coz
**Journal:** Journal of Chemistry (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/102534

---

## Abstract

The evolution of lignin, five- and six-carbon sugars, and other decomposition products derived from hemicelluloses and cellulose was monitored in a sulfite pulp mill. The wastewater streams were characterized, and the mass balances throughout the digestion and total chlorine free bleaching stages were determined. Summative analysis in conjunction with pulp parameters highlights some process guidelines and valorization alternatives for the transformation of the traditional factory into a lignocellulosic biorefinery. The results showed a good separation of cellulose (99.64%) during wood digestion, with 87.23% of the hemicellulose and 98.47% of the lignin dissolved into the waste streams. The following steps should be carried out to increase the sugar content of the waste streams: (i) optimization of the digestion conditions to increase hemicellulose depolymerization; (ii) improvement of the ozonation and peroxide bleaching stages, avoiding deconstruction of the cellulose chains while maintaining impurity removal; (iii) fractionation of the wastewater streams, separating sugars from the toxic inhibitors for second-generation biofuel production. A total of 0.173 L of second-generation ethanol can be obtained from the spent liquor per kilogram of dry wood. The proposed methodology can be usefully incorporated into other related industrial sectors.

---

## Body

## 1. Introduction

Mixed C5 and C6 hemicellulose sugar platforms serve as feedstock for fermentation producing biofuels such as ethanol or butanol, biopolymers such as polyhydroxybutyrate (PHB) or polybutylene succinate (PBS), and chemicals such as lactic acid, succinic acid, or itaconic or glutamic acids [1]. In Europe, primary energy consumption is dominated by petroleum products mostly imported from abroad, and there are many concerns about this high degree of energy dependence [2]. Among these valorization alternatives, this work focuses on bioethanol production from the pulp and paper industry for use in the transportation sector.

The pulp and paper industry is being reconsidered as an important source of hemicellulose carbohydrates. Traditionally, pulp manufacturing focused on cellulose extraction by chemical, mechanical, or semichemical processes. Among the chemical processes, kraft is the most commonly used. However, other processes such as soda-anthraquinone (soda-AQ); ethanol-water pulping (organosolv); acid sulfite and bisulfite pulping; sulfite pretreatment to overcome recalcitrance of lignocellulose (SPORL); or the SO2-ethanol-water process (SEW) can also be used [3–10]. Sulfite pulping is becoming popular because of the growing demand for high-purity dissolving pulp for textile fiber production. An advantage of sulfite pulping is its high cellulose separation efficiency. While dissolving cellulose is manufactured, hemicellulose and lignin are also generated and partially reused.
Conversion of these waste streams is becoming a clear priority within the biorefinery concept. Many efforts in the pulp and paper industry are now focused on the valorization of sugar-rich resources into energy and a wide variety of products. Several approaches tracing pathways and guidelines towards the conversion of pulping factories into lignocellulosic biorefineries (LCBR) can be found in the literature for kraft pulping [11–14], soda-AQ [15, 16], organosolv [17–19], or the SEW process [9, 10]. Nevertheless, only a few contributions address sulfite pulping [20, 21]. The investment activity in this field reveals the importance of transforming traditional pulping factories into integrated LCBR, increasing the profit margin of existing pulp mills. Owing to the complexity of lignocellulosic biomass (LCB), many efforts are devoted to fractionation processes. In this sense, the first step is a thorough accounting of the main resources through a total mass balance of the LCB. The mass balance provides a complete description of the materials based on their lignin and carbohydrate (hemicellulose and cellulose) content [9, 22]. The use of total mass balances over a whole process can point out the main resources for valorization options. In addition, the study of digestion and bleaching allows future actions towards process improvements to be established. One methodology for obtaining the total mass balance in these kinds of samples is summative analysis, which resolves the carbohydrates into their derivatives together with the lignin, extractive, and ash contributions [22–24]. One advantage of this methodology is that the delignification and breakdown of the polysaccharides into individual sugars and other decomposition products provide information about possible inhibitors in future fermentation processes. This methodology was previously reported for the characterization of bagasse and bamboo [23], different wood species [22–27], and pulps [28–32]. More recently, spent liquors from the SEW process were also characterized by summative analysis, tracing mass balances for residue bioconversion towards butanol, ethanol, and acetone/isopropanol [9, 10].

This work applies summative analysis to a sulfite pulp mill, monitoring the three main wood macrocomponents (cellulose, hemicellulose, and lignin) by measuring not only the raw material but also the residues and products throughout the process. In a second step, the bioethanol potential of the sugar-rich residue was also determined. This research constitutes a novelty in an industrial sulfite process towards the conversion of this factory into a modern LCBR. In addition, the compositional analysis, together with the physicochemical pulp properties (viscosity and micro kappa were also measured), allowed the effects of the digestion and bleaching steps to be investigated. The proposed methodology can be usefully extrapolated not only to sulfite or kraft mills but also to other factories working with LCB.

## 2. Materials and Methods

### 2.1. Materials and Industrial Process Description

Figure 1 shows the industrial process and all of the materials collected in the pulp mill. The acid sulfite process is based on the extraction of cellulose by attack of an acidic aqueous solution (pH 1.35 ± 0.15) in the presence of excess free SO2 [2].
Delignification occurs inside the digester (see unit "D" in Figure 1), where the lignin is sulfonated by bisulfite ions (HSO3−) to form lignosulfonates. The lignosulfonates, together with large amounts of depolymerized hemicelluloses, are dissolved into the so-called spent sulfite liquor (SSL) obtained at the end of the digestion stage. After digestion, the next step is bleaching (see the "Z/EOP/PO" bleaching sequence in Figure 1). To purify the cellulose, a total chlorine free (TCF) bleaching process is used in the factory. Ozone (Z); sodium hydroxide as the extracting agent together with oxygen and hydrogen peroxide (EOP); and hydrogen peroxide as the bleaching agent in the presence of oxygen (PO) are the stages followed in the pulp mill and studied in this work.

Figure 1 Raw materials, products, and by-products analyzed in this work in the acid sulfite process.

Wood, SSL, and pulp industrial samples were collected throughout the process, as can be seen in Figure 1. Eucalyptus globulus timber is used as feedstock in this factory. In addition, SSL and industrial dissolving pulp samples were collected and analyzed. A total of four delignification-grade pulps were analyzed, starting with the crude pulp (P1) after the digestion stage and continuing with the bleaching stages after ozonation (P2), alkaline extraction (P3), and peroxide oxygen bleaching (P4).

### 2.2. Analysis of Wood, Pulp, and SSL

A summary of the characterization methods applied in this work is shown in Table 1. Wood and pulp samples were conditioned and prepared according to TAPPI T257 cm-02 [33]. Samples were air-dried to a constant moisture content of approximately 10% w/w, milled, passed through a 40-mesh sieve, and extracted with acetone in a Soxhlet apparatus to remove the extractives, according to the T204 cm-97 standard [33]. Extractive-free wood and pulp samples were used for the remaining analyses. Ash at 525°C was determined by the TAPPI T211 om-02 standard [33]. Acid-insoluble and acid-soluble lignin were determined in wood samples using the TAPPI T222 om-02 method [33]. Cellulose content was determined using the Seifert procedure, boiling a mixture of acetylacetone, dioxane, and hydrochloric acid [34]. Holocellulose, which represents the total carbohydrate content (the sum of cellulose and hemicelluloses), was measured by means of the Wise chlorite technique, reported recently by Haykiri-Acma et al. [35]. The lignin content (% by weight) in pulp was calculated by multiplying the kappa number by 0.17 [32]. Considering that the standard kappa number determination cannot be applied to pulp with a kappa number below five [41], the micro kappa number described in the TAPPI UM 246 standard [36] was determined. To study the degradation of carbohydrate chains during the bleaching steps, the intrinsic viscosity of crude, partially bleached, and bleached pulps was determined by means of the standard ISO 5351:2010 [37]. Alfa-cellulose was also measured in pulp samples using TAPPI standard T203 cm-99 [33].

Table 1 Analytical procedures used in this work.
| Parameter | Samples | Equipment | Standard/source |
|---|---|---|---|
| Sample conditioning | Wood and pulp | Mill, sieve, and climate chamber | TAPPI T257 cm-02 [33] |
| Acetone extractives | Wood and pulp | Soxhlet apparatus | TAPPI T204 cm-97 [33] |
| Ash at 525°C | Wood and pulp | Muffle furnace | TAPPI T211 om-02 [33] |
| Cellulose | Wood | Analytical balance | Seifert [34] |
| Holocellulose | Wood | Analytical balance | Wise [35] |
| Lignin | Wood | UV-Vis | TAPPI T222 om-02 [33] |
| Micro kappa | Pulp | Titration | TAPPI UM 246 [36] |
| Intrinsic viscosity | Pulp | Titration | ISO 5351:2010 [37] |
| Alfa-cellulose | Pulp | Titration | TAPPI T203 cm-99 [33] |
| Glucan, xylan, arabinan, galactan, mannan, acetyl | Wood, pulp, and SSL | (1) Hydrolysis; (2) HPLC-RI | (1) TAPPI T249 cm-00 [33] (wood and pulp samples); (2) Llano et al. [38] (wood, pulp, and SSL samples) |
| Lignosulfonates | SSL | UV-Vis | UNE EN 16109 [39] |

SSL samples were studied in terms of lignosulfonates, sugars, and other decomposition products. The carbohydrate composition of wood, pulp, and SSL was determined by HPLC-RID following the methodology published by Llano et al. [38]. Lignosulfonates were analyzed by UV-Vis spectroscopy according to the UNE EN 16109 standard [39].

### 2.3. Hydrolysis Procedure, Summative Analysis Calculations, and Biofuel Potentials

The polymeric sugars contained in the wood cell wall must first be broken down. The β(1→4) glycosidic linkages of the wood polymers are cleaved with the acid hydrolysis method described in TAPPI T249 cm-00 [33]. This method involves a two-step acid hydrolysis: (i) a primary hydrolysis using a strong acid at low temperature to convert the polysaccharides to oligomers, (ii) followed by dilution to a weak acid at high temperature to complete the conversion to monomeric sugars. First, extractive-free, moisture-controlled samples were weighed at 0.35 ± 0.01 g into glass test tubes. Then, 3 mL of 72% w/w H2SO4 was added, and the tubes were vortexed occasionally and maintained for 1 h at 30°C in a thermostatic bath. The secondary hydrolysis was carried out at 120°C for 1 h after dilution to 4% by transferring the hydrolyzates to Duran bottles and adding 84 mL of deionized water (the Duran bottles must be hermetically closed). Afterwards, the samples were cooled, and a representative 10 mL aliquot was transferred to a beaker. Several drops of bromophenol blue indicator were added, and the aliquot was gradually neutralized with 0.04 N Ba(OH)2 solution until the color changed from yellow to blue-violet. The samples were then centrifuged, filtered through 0.22 μm, and injected into the HPLC.
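As a quick sanity check on the dilution step, the short sketch below verifies that adding 84 mL of water to 3 mL of 72% w/w H2SO4 brings the acid concentration to roughly 4% w/w. The density assumed for 72% sulfuric acid (≈1.63 g/mL) is ours; the paper does not state it.

```python
# Primary hydrolysis charge: 3 mL of 72% w/w H2SO4.
acid_volume_ml = 3.0
acid_density = 1.63                      # g/mL for ~72% w/w H2SO4 (assumed value)
acid_solution_g = acid_volume_ml * acid_density
h2so4_g = acid_solution_g * 0.72         # mass of pure H2SO4

# Secondary hydrolysis: dilute with 84 mL (~84 g) of deionized water.
total_g = acid_solution_g + 84.0
print(f"{100 * h2so4_g / total_g:.1f} % w/w H2SO4")  # ≈ 4.0, the target concentration
```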
Each monomer can be reported in the summative analysis as its pure theoretical homopolymer [24]. The weight of each constituent, determined quantitatively after the hydrolysis, has to be multiplied by a factor to calculate its contribution to the original wood component (as a theoretical homopolymer). Calculations were made using the theoretical stoichiometric factors obtained from the literature [22–24]. Each factor is the molecular mass of the anhydrous unit divided by the molecular mass of the isolated substance. Table 2 shows all of the conversion factors used in this work. Each homopolymer was calculated considering not only the monosaccharides but also the degradation compounds derived from the carbohydrates; for example, cellulose is the sum of cellobiose, glucose, HMF (5-hydroxymethyl-2-furfuraldehyde), and levulinic acid, each multiplied by its stoichiometric factor. The individual contribution of carbohydrate-derived compounds to the final cellulose or hemicellulose content depends on the chemical structure of the macromolecules forming the cell wall. In this work, all the glucose is assumed to originate from the cellulose [24]. It was also assumed that formic acid is an inhibitor mostly produced from pentose sugars, the formation of formic acid from hexoses being negligible compared with levulinic acid formation [42]. Acetic acid was considered a coproduct formed at the same time as the monosaccharides by degradation of the acetyl groups located on the hemicellulose [43].

Table 2 Stoichiometric factors used to calculate the percentage of theoretical homopolymer.

| Hydrolyzed constituent | Homopolymer | Carbohydrate contribution | Conversion factor^a | Hydrolysis factor^b (g sugar/g polymer) | Ethanol factor^c (g EtOH/g monomer) |
|---|---|---|---|---|---|
| Glucose | Glucan | Cellulose | 162/180 | 1.11 | 0.511 |
| HMF | Glucan | Cellulose | 162/126 | — | — |
| Levulinic acid | Glucan | Cellulose | 162/116 | — | — |
| Cellobiose | Glucan | Cellulose | 324/342 | — | — |
| Xylose | Xylan | Hemicellulose | 132/150 | 1.136 | 0.511 |
| Furfural | Xylan | Hemicellulose | 132/96 | — | — |
| Formic acid | Xylan | Hemicellulose | 132/46 | — | — |
| Arabinose | Arabinan | Hemicellulose | 132/150 | 1.136 | 0.511 |
| Galactose | Galactan | Hemicellulose | 162/180 | 1.11 | 0.511 |
| Mannose | Mannan | Hemicellulose | 162/180 | 1.11 | 0.511 |
| Acetic acid | Acetyl | Hemicellulose | 43/60 | — | — |

^aStoichiometric factor used to calculate the percentage of theoretical homopolymer [22]. ^bHydrolysis stoichiometric factor of carbohydrate polymers into free sugars [40]. ^cEthanol stoichiometric factor describing the mass fraction of sugar monomer converted to ethanol [40].

The macrocomponent contents are calculated from their homopolymers, and the total mass closure is then obtained, according to (1). In addition, ethanol potentials were calculated by multiplying the grams of each monomer by the corresponding stoichiometric factor (see Table 2), which describes the mass fraction of sugar monomer converted to ethanol [40]:

(1)
Total Carbohydrate Content = Cellulose + Hemicellulose
Hemicellulose = Xylan + Arabinan + Galactan + Mannan + Acetyl
Cellulose = Glucan
Total Mass Closure = Lignin + Total Carbohydrates + Extractives + Ash
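As an illustration of the summative calculation, the sketch below converts measured hydrolyzate constituents (% w/w of dry sample) into their theoretical homopolymers using the factors of Table 2 and then aggregates them according to (1). The constituent values are the E. globulus glucan- and xylan-related entries later reported in Table 4; the function and variable names are ours.

```python
# Stoichiometric conversion factors from Table 2
# (molecular mass of anhydro unit / molecular mass of isolated substance).
FACTORS = {
    "glucose": ("glucan", 162 / 180), "hmf": ("glucan", 162 / 126),
    "levulinic_acid": ("glucan", 162 / 116), "cellobiose": ("glucan", 324 / 342),
    "xylose": ("xylan", 132 / 150), "furfural": ("xylan", 132 / 96),
    "formic_acid": ("xylan", 132 / 46), "arabinose": ("arabinan", 132 / 150),
    "galactose": ("galactan", 162 / 180), "mannose": ("mannan", 162 / 180),
    "acetic_acid": ("acetyl", 43 / 60),
}
HEMI = {"xylan", "arabinan", "galactan", "mannan", "acetyl"}

def summative(constituents):
    """Convert hydrolyzate constituents (% w/w) into homopolymers and eq. (1) totals."""
    homo = {}
    for name, pct in constituents.items():
        polymer, factor = FACTORS[name]
        homo[polymer] = homo.get(polymer, 0.0) + pct * factor
    homo["cellulose"] = homo.get("glucan", 0.0)              # Cellulose = Glucan
    homo["hemicellulose"] = sum(homo.get(p, 0.0) for p in HEMI)
    homo["TCC"] = homo["cellulose"] + homo["hemicellulose"]  # eq. (1)
    return homo

# E. globulus hydrolyzate values (% w/w) taken from Table 4.
wood = {"glucose": 44.99, "hmf": 0.1, "levulinic_acid": 0.14, "cellobiose": 1.51,
        "xylose": 14.27, "furfural": 0.2, "formic_acid": 0.15}
result = summative(wood)
print(f"glucan = {result['glucan']:.2f} %")  # 42.25, matching the GLUCAN entry of Table 4
print(f"xylan  = {result['xylan']:.2f} %")   # 13.26, matching the 13.27 of Table 4
```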
## 3. Results

### 3.1. Total Composition of the Lignocellulosic Samples

The total content per sample is shown in Table 3, including the major components, ash, and extractives. The results represent the total weight percentage content of the industrial samples collected in the pulp mill. The total mass closure was near 100% even though some minor compounds were not analyzed, such as low molecular weight phenolic compounds derived from lignin or the aldonic and uronic acids derived from cellulose and hemicellulose.

Table 3 Total weight content of industrial samples.

| Total mass closure | E. globulus (% w/w) | SSL (% w/w) | P1 (% w/w) | P2 (% w/w) | P3 (% w/w) | P4 (% w/w) |
|---|---|---|---|---|---|---|
| Cellulose-HPLC | 42.25 | 5.67 | 89 | 87.3 | 89.9 | 91.3 |
| Cellulose | 46.00 | — | 91.34 | 91.16 | 92.36 | 92.28 |
| Hemicellulose-HPLC | 24.92 | 30.42 | 6.2 | 5.1 | 2.2 | 2.1 |
| Hemicellulose | 31.55 | — | — | — | — | — |
| Lignin | 26.98 | 42.99* | 0.80 | 0.40 | 0.40 | 0.10 |
| Ash | 0.35 | 12.1 | 0.28 | 0.26 | 0.24 | 0.18 |
| Extractives | 1.5 | — | 0.30 | 0.20 | 0.20 | 0.20 |
| TOTAL | 96 | 91.18 | 96.58 | 93.26 | 92.94 | 93.88 |

*Lignin in SSL is represented by the lignosulfonate content, formed by lignin sulfonation.

Table 3 also compares the traditional characterization by gravimetric and titration methods with the carbohydrate analysis derived from the summative analysis calculations. Traditional cellulose methods include alfa-cellulose for pulp [33] and the Seifert method for wood [34]. Traditional hemicellulose in wood is calculated as the difference between holocellulose [35] and Seifert cellulose [34]. Cellulose-HPLC and hemicellulose-HPLC of wood and pulp samples were obtained stoichiometrically, after acid hydrolysis of the carbohydrates and HPLC quantification of the sugars. In contrast, SSL sugars were measured directly by HPLC, without the hydrolysis step.

Cellulose obtained by the traditional method and cellulose-HPLC in Eucalyptus globulus samples gave values of 46% and 42.25%, respectively. The Seifert method entails higher experimental error because the wood is digested at high temperature, where spattering losses can occur if the analysis is not carried out carefully. In pulp samples, cellulose also showed higher values by the traditional method. Alfa-cellulose corresponds to the insoluble fraction produced after digestion of the pulp at 25°C in 17.5% NaOH. Theoretically, beta and gamma cellulose, with a lower degree of polymerization, are excluded, but the results in Table 3 indicate that chains of similar molecular weight are also being quantified. Regarding hemicellulose, the hemicellulose-HPLC value in wood is lower than that calculated by the traditional methods (24.92% versus 31.55%). This behavior can be explained by the assumption that all glucose forms part of the cellulose fraction. In addition, the gravimetric errors of the Seifert and holocellulose methods compound, giving larger errors in comparison with the chromatographic method.
An alternative for studying hemicelluloses in pulp samples is the pentosan determination of the T223 cm-01 procedure [33]; however, pentosan analysis was not performed in this work because it covers only the C5 sugars.

The disclosure of the total carbohydrate content (TCC) is given in Table 4. A TCC of 67.18% and a lignin content of 26.98% were obtained for the Eucalyptus globulus hardwood samples. The replicates showed average values of 42.25% cellulose and 77.55% holocellulose. Lignin contents varying from 23% to 27% and cellulose contents from 45% to 54% have been reported for Eucalyptus globulus timber in the literature [26, 27]; these ranges are in accordance with the results obtained in this work. The total xylan content was 13.27% in the wood samples, representing more than 50% of the total hemicellulose content. This is because hardwood, in contrast to coniferous softwood, which has a higher portion of hexosans than pentosans, is composed mainly of pentoses, with xylose as the major monosaccharide [11, 44–46]. TCC is much higher in the pulp samples than in the Eucalyptus globulus samples. The difference is explained by the small amounts of lignin and hemicellulose found in the pulp samples: hemicellulose decreases from 6.2% to 2.1% and lignin from 0.8% to 0.1% (see Table 3). TCC in the pulp samples decreases over the bleaching stages, as can be seen in Table 4, from 95.2% to 91.8–93.4%. This is explained by the fact that xylan drops from 5.3% to 1.5% even though cellulose increases from 89.0% to 91.3%.

Table 4 Total carbohydrate content of the woody hydrolyzates.

| | Wood (% w/w) | SSL (% w/w) | P1 (% w/w) | P2 (% w/w) | P3 (% w/w) | P4 (% w/w) |
|---|---|---|---|---|---|---|
| GLUCAN | 42.25 | 5.67 | 89.0 | 87.3 | 89.9 | 91.3 |
| Glucose | 44.99 ± 2.13 | 4.12 ± 1.48 | 95.0 ± 3.81 | 93.7 ± 3.27 | 96.7 ± 2.42 | 98.1 ± 2.52 |
| HMF | 0.1 ± 0.03 | 0.02 ± 0.009 | 0.3 ± 0.001 | 0.3 ± 0.05 | 0.3 ± 0.002 | 0.3 ± 0.00 |
| Levulinic acid | 0.14 ± 0.03 | 0.01 ± 0.009 | 0.2 ± 0.09 | 0.2 ± 0.01 | 0.2 ± 0.02 | 0.4 ± 0.00 |
| Cellobiose | 1.51 ± 1.29 | 2.04 ± 0.16 | 3.1 ± 1.78 | 2.5 ± 1.95 | 2.4 ± 1.26 | 2.2 ± 1.84 |
| XYLAN | 13.27 | 19.15 | 5.3 | 4.5 | 1.7 | 1.5 |
| Xylose | 14.27 ± 0.47 | 21.43 ± 8.80 | 2.9 ± 0.67 | 2.7 ± 0.66 | 2.0 ± 0.42 | 1.7 ± 0.75 |
| Furfural | 0.2 ± 0.19 | 0.15 ± 0.055 | 0.4 ± 0.004 | 0.8 ± 0.02 | 0.1 ± 0.003 | 0.2 ± 0.001 |
| Formic acid | 0.15 ± 0.05 | 0.03 ± 0.028 | 0.3 ± 0.18 | ND | ND | ND |
| ARABINAN | 0.52 | 2.46 | 0.3 | 0.3 | 0.2 | 0.2 |
| Arabinose | 0.59 ± 0.36 | 2.79 ± 1.71 | 0.3 ± 0.13 | 0.4 ± 0.16 | 0.2 ± 0.09 | 0.3 ± 0.16 |
| GALACTAN | 7.36 | 3.03 | 0.4 | 0.3 | 0.3 | 0.4 |
| Galactose | 8.18 ± 1.53 | 3.36 ± 1.52 | 0.4 ± 0.09 | 0.4 ± 0.06 | 0.3 ± 0.09 | 0.4 ± 0.26 |
| MANNAN | 1.00 | 1.28 | ND | ND | ND | ND |
| Mannose | 0.011 | 1.42 ± 0.50 | ND | ND | ND | ND |
| ACETYL | 2.78 | 4.51 | 0.2 | ND | ND | ND |
| Acetic acid | 3.87 ± 0.32 | 6.30 ± 1.70 | 0.3 ± 0.24 | ND | ND | ND |
| TCC (%) | 67.18 | 36.09 | 95.2 | 92.4 | 91.8 | 93.4 |
| EtOH^a (L/kg dw) | 0.467 | 0.231 | 0.684 | 0.665 | 0.662 | 0.672 |
| EtOH^b (L/kg dw) | 0.441 | 0.215 | 0.639 | 0.630 | 0.642 | 0.651 |

^aEtOH (L/kg dry sample) calculated from the homopolymers using hydrolysis and fermentation factors. ^bEtOH (L/kg dry sample) calculated from the monomers using fermentation factors.

Once the total carbohydrates were obtained, a theoretical quantity of bioethanol was calculated according to the stoichiometric factors explained in Section 2.3. Results from 0.215 to 0.684 L of ethanol per kg of dry sample were obtained.
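To make the ethanol-potential calculation explicit, the sketch below reproduces the homopolymer-based ('a'-type) estimate for the wood column of Table 4, applying the hydrolysis and fermentation factors of Table 2. The ethanol density used to convert mass to volume (0.789 g/mL) is our assumption; the paper does not state the value it used.

```python
# Homopolymer contents of E. globulus (% w/w, Table 4); acetyl is excluded
# because it is not fermented to ethanol.
polymers = {"glucan": 42.25, "xylan": 13.27, "arabinan": 0.52,
            "galactan": 7.36, "mannan": 1.00}
# Hydrolysis factors (g sugar/g polymer) from Table 2: 1.11 for C6, 1.136 for C5.
hydrolysis = {"glucan": 1.11, "xylan": 1.136, "arabinan": 1.136,
              "galactan": 1.11, "mannan": 1.11}
FERMENTATION = 0.511   # g EtOH / g monomer (Table 2)
ETOH_DENSITY = 0.789   # g/mL at 20°C; assumed, not given in the paper

grams_etoh_per_kg = sum(10.0 * pct * hydrolysis[p] * FERMENTATION  # 10 converts % to g/kg
                        for p, pct in polymers.items())
litres_per_kg = grams_etoh_per_kg / ETOH_DENSITY / 1000.0
print(f"{litres_per_kg:.3f} L EtOH / kg dry wood")  # ≈ 0.465, close to the reported 0.467
```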
### 3.2. Dissolving Pulp Properties: Results and Discussion

Pulp properties and their evolution within the sulfite process are represented in Figures 2(a)–2(d). The transformation from crude pulp after the digestion stage (P1) to final bleached pulp (P4) is graphed with error bars.

Figure 2 Evolution of wood macrocomponents in pulp along the sulfite mill: (a) glucan and alfa-cellulose; (b) hemicellulose and xylan; (c) kappa index and lignin; (d) viscosity in pulp.

Pulp quality parameters are represented in Figures 2(a) and 2(d). Glucan and alfa-cellulose follow similar trends, especially in the alkaline extraction stage; however, some differences appear in the case of ozonation, with a more noticeable decrease of glucan. Pulp impurities are plotted in Figures 2(b) and 2(c), where similar results were obtained: lignin and hemicellulose contents decrease as the process advances. These results showed that the most oxidative stage is ozonation, where the main losses of lignin are registered, from 0.8% to 0.4%. Although delignification is the main function of this stage, there is also a depolymerization of hemicelluloses from 6.2% to 5.1% because of the strong oxidation produced by ozone. In spite of the recalcitrant nature of cellulose, with no losses of alfa-cellulose, there is also a small decrease of glucan from 88.8% to 87.3%, probably due to the degradation of beta and gamma cellulose. This behavior is also reflected in the viscosity, which falls from 706.4 mL/g to 568.2 mL/g. Figure 2(d) shows the degree of polymerization of the cellulose chains, which plays an important role in the quality of the final pulp. As expected, the viscosity diminished stage by stage, from 706.4 mL/g (P1) after digestion to 492.5 mL/g (P4) after PO bleaching.

The obtained results were compared with other pulps as a function of the process (chemical or thermomechanical), the feedstock (softwood or hardwood), and the final application (paper grade or dissolving grade) [11, 28, 29, 47]. Pulps with greater impurity removal are the more suitable ones for waste stream valorization towards biofuels and other value-added products. The lowest pulp quality is the thermomechanical pulp (TMP), with a total carbohydrate content of 64.4%, in comparison with chemical pulping processes with a total carbohydrate content of 96.5% [30]. TMP is a low-purity (regarding lignin content), high-yield pulp. The difference between paper-grade and dissolving-grade pulp resides in the total glucan content, which is lower for paper grade: values of 74.7% and 84.9% have been obtained for hardwood and softwood bleached paper-grade pulps [47], against 92.6% for dissolving-grade pulps [29]. Consequently, the hemicellulose content is higher in paper-grade pulps than in high-purity dissolving-grade pulps. In this work, the total carbohydrate content of the bleached pulp (P4) is 93.4%, of which 91.3% is glucan and only 1.5% xylan.

Based on the experimental results shown in Figure 2, the following can be concluded: (i) the ozonation stage (Z) destroys mainly lignin but also carbohydrates; Z focuses on delignification, and therefore the kappa number is notably reduced; nevertheless, glucan is considerably diminished during Z, whereas alfa-cellulose is not affected.
This is because ozone is a very aggressive bleaching agent (less selective than chlorine derivatives) and attacks the beta and gamma cellulose chains; (ii) the hot alkaline extraction stage (EOP) focuses on hemicellulose solubilization, with hemicelluloses falling from 5.1% to 2.2% and, specifically, xylan from 4.5% to 1.7%; (iii) the peroxide bleaching stage (PO) attacks the chromophore groups, and the pulp is definitively purified by removing lignin traces, from 0.4% to 0.1%, and other groups responsible for the color of the pulp; (iv) the selectivity of the PO and Z stages should be improved in order to avoid the breakdown of the cellulose chains; (v) the results evidence the importance of wastewater stream valorization, considering the high load of organic compounds removed from the high-purity dissolving pulp along the sulfite process.

### 3.3. Mass Balance of the Industrial Process

The mass balance of the entire industrial process was carried out on the basis of the summative analysis. It required the complete characterization of the feedstock (Eucalyptus globulus timber), the inlet and outlet pulps, and the main residual stream (SSL). Data on the three macrocomponents throughout the process, flow rates, digestions per day, wood moisture, and yields were considered. Some of these data are confidential to the factory and cannot be displayed explicitly. The results in Figure 3 are referred to the initial dry wood, in terms of grams of cellulose, hemicellulose, and lignin per gram of dry wood. The main findings are as follows:

(i) A total of 99.6% of the cellulose provided by the feedstock goes to the main product, dissolving pulp, indicating the good performance of the digestion process; only traces of wood cellulose are dissolved into the spent liquor. In addition, 0.032 g hemicellulose/g dry wood and 4.1 × 10⁻³ g lignin/g dry wood were detected in the crude pulp (P1), to be removed in the subsequent stages.

(ii) Based on the global mass balance and the conclusions of Section 3.2, some lines of action can be proposed regarding the Z and PO stages. Better use of the bleaching reagents and process conditions should be made in order to decrease the degree of depolymerization, but not to the detriment of delignification.

(iii) The SSL generated after wood digestion contains 87.2% of the total hemicellulose in the wood (0.218 g H/g dw) and 98.5% of the total lignin (0.266 g L/g dw). The hemicelluloses are hydrolyzed and dissolved as monosaccharides and other derivatives. Likewise, the lignin reacts with sulfite and bisulfite ions and sulfurous acid, forming lignosulfonates. Based on these results, the SSL is a perfect candidate for second-generation biofuel production.

Figure 3 Mass balance results in the sulfite process. *Results expressed as g C/g dw, g H/g dw, and g L/g dw correspond to grams of cellulose, hemicellulose, and lignin per gram of dry wood, respectively.

The SSL is evaporated in the factory in order to reduce its water content. However, the samples used in this work were collected before the evaporation plant, at the tank outlet (see WSSL in Figure 3). Tap water is used at the end of the digestion stage to stop the hydrolysis and depolymerization reactions. In addition, wastewater streams from pulp washing, containing cellulose, hemicellulose, and lignin, are stored in the tank together with the SSL and sent to the evaporation plant as WSSL.
The theoretical bioethanol potential of the WSSL was calculated from its carbohydrate content (0.031 g C/g dw and 0.205 g H/g dw). The hydrolysis stoichiometric factors for hexoses and pentoses were 1.11 g C6 sugars/g cellulose and 1.136 g C5 sugars/g hemicellulose, respectively; the fermentation stoichiometric factor for ethanol production is 0.511 g EtOH/g monosaccharide. Assuming the complete conversion of the C5 and C6 sugars, the second-generation bioethanol potential of the WSSL is 0.173 L EtOH/kg dry wood.
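The recoveries quoted above can be checked directly from the stream compositions. A minimal sketch, assuming the HPLC wood composition of Table 3 and the stream loadings of Figure 3, recomputes the hemicellulose and lignin fractions dissolved into the SSL and the WSSL bioethanol potential; the 0.789 g/mL ethanol density is again our assumption.

```python
# Wood composition (g per g dry wood; HPLC values from Table 3).
wood_hemicellulose, wood_lignin = 0.2492, 0.2698
# SSL loadings from the mass balance (Section 3.3 / Figure 3).
ssl_hemicellulose, ssl_lignin = 0.218, 0.266
print(f"hemicellulose to SSL: {100 * ssl_hemicellulose / wood_hemicellulose:.1f} %")  # ≈ 87.5 (87.2 reported)
print(f"lignin to SSL:        {100 * ssl_lignin / wood_lignin:.1f} %")                # ≈ 98.6 (98.5 reported)

# WSSL bioethanol potential from 0.031 g C and 0.205 g H per g dry wood.
c6_sugars = 0.031 * 1.11        # g C6 sugars per g dry wood
c5_sugars = 0.205 * 1.136       # g C5 sugars per g dry wood
etoh_g = (c6_sugars + c5_sugars) * 0.511   # g EtOH per g dry wood
etoh_l_per_kg = etoh_g / 0.789             # mL per g dw, numerically equal to L per kg dw
print(f"WSSL potential: {etoh_l_per_kg:.3f} L EtOH / kg dry wood")  # ≈ 0.173
```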
An alternative to the study of hemicelluloses in pulp samples can be the pentosan determination with the T223 cm-01 procedure [33]; however, pentosan analysis was not performed in this work because it only contemplates the C5 sugars.The results of the total carbohydrate content (TCC) disclosure appear in Table4. TCC of 67.18% and 26.98% of lignin was obtained inEucalyptus globulus hardwood samples. Besides, the replicates checked showed average values of 42.25% cellulose and 77.55% holocellulose. Results of lignin varying from 23% to 27% and cellulose from 45 to 54% ofEucalyptus globulus timber were found in the literature [26, 27]. Such ranges are in accordance with the results obtained in this work. The total content of xylan was 13.27% in wood samples, representing more than 50% of the total hemicellulose content. This is because hardwood, in contrast to coniferous softwood with a higher portion of hexosans than pentosans, is composed mainly of pentoses where xylose is the major monosaccharide [11, 44–46]. TCC is much higher in pulp samples in comparison with theEucalyptus globulus samples. The difference is explained because little amounts of lignin and hemicellulose were found in pulp samples. Hemicellulose decreases from 6.2% to 2.1% and lignin from 0.8% to 0.1% (see Table 3). TCC in pulp samples decreases in the bleaching processes, as can be seen in Table 4, from values of 95.2% to values of 91.8–93.4%. This phenomenon can be explained by the fact that xylan drops from 5.3% to 1.5% despite the fact that cellulose increases from 89.0% to 91.3%.Table 4 Total carbohydrate content of the woody hydrolyzates. Wood SSL Sulfite dissolving pulps (% w/w) (% w/w) (% w/w) P1 P2 P3 P4 GLUCAN 42.25 5.67 89.0 87.3 89.9 91.3 Glucose 44.99 ± 2.13 4.12 ± 1.48 95.0 ± 3.81 93.7 ± 3.27 96.7 ± 2.42 98.1 ± 2.52 HMF 0.1 ± 0.03 0.02 ± 0.009 0.3 ± 0.001 0.3 ± 0.05 0.3 ± 0.002 0.3 ± 0.00 Levulinic acid 0.14 ± 0.03 0.01 ± 0.009 0.2 ± 0.09 0.2 ± 0.01 0.2 ± 0.02 0.4 ± 0.00 Cellobiose 1.51 ± 1.29 2.04 ± 0.16 3.1 ± 1.78 2.5 ± 1.95 2.4 ± 1.26 2.2 ± 1.84 XYLAN 13.27 19.15 5.3 4.5 1.7 1.5 Xylose 14.27 ± 0.47 21.43 ± 8.80 2.9 ± 0.67 2.7 ± 0.66 2.0 ± 0.42 1.7 ± 0.75 Furfural 0.2 ± 0.19 0.15 ± 0.055 0.4 ± 0.004 0.8 ± 0.02 0.1 ± 0.003 0.2 ± 0.001 Formic 0.15 ± 0.05 0.03 ± 0.028 0.3 ± 0.18 ND ND ND ARABINAN 0.52 2.46 0.3 0.3 0.2 0.2 Arabinose 0.59 ± 0.36 2.79 ± 1.71 0.3 ± 0.13 0.4 ± 0.16 0.2 ± 0.09 0.3 ± 0.16 GALACTAN 7.36 3.03 0.4 0.3 0.3 0.4 Galactose 8.18 ± 1.53 3.36 ± 1.52 0.4 ± 0.09 0.4 ± 0.06 0.3 ± 0.09 0.4 ± 0.26 MANNAN 1.00 1.28 ND ND ND ND Mannose 0.011 1.42 ± 0.50 ND ND ND ND ACETYL 2.78 4.51 0.2 ND ND ND Acetic 3.87 ± 0.32 6.30 ± 1.70 0.3 ± 0.24 ND ND ND TCC (%) 67.18 36.09 95.2 92.4 91.8 93.4 a EtOH(L/Kg.dw) 0.467 0.231 0.684 0.665 0.662 0.672 b EtOH(L/Kg.dw) 0.441 0.215 0.639 0.630 0.642 0.651 aEtOH (L/Kg.dry sample) calculated from the homopolymers using hydrolysis and fermentation factors. bEtOH (L/Kg.dry sample) calculated from the monomers using fermentation factors.Once the total carbohydrates were obtained, a theoretical quantity of bioethanol was calculated according to the stoichiometric factors explained in Section2.3. Results from 0.215 to 0.684 L ethanol per kg of dry sample were obtained. ## 3.2. Dissolving Pulp Properties: Results and Discussion Pulp properties and their evolution within the sulfite process are represented in Figures2(a), 2(b), 2(c), and 2(d). 
Pulp transformation from crude pulp after digestion stage (P1) to final bleached pulp (P4) is graphed with error bars.Figure 2 Evolution of wood macrocomponents in pulp along the sulfite mill: (a) glucan and alfa-cellulose; (b) hemicellulose and xylan; (c) kappa index and lignin; (d) viscosity in pulp. (a) (b) (c) (d)Pulp quality parameters are represented in Figures2(a) and 2(d). Glucan and alfa-cellulose have similar trends, especially in the alkaline extraction process; however some differences can be found in the case of ozonation with a more noticeable decrease of glucan. Pulp impurities were plotted, respectively, in Figures 2(b) and 2(c). In this case, similar results were obtained. Lignin and hemicellulose content decreases as the process advances. These results showed that the most oxidative stage is the ozonation where the main losses of lignin are registered from 0.8% to 0.4%. Although delignification is the main function of this stage, there is also a depolymerization of hemicelluloses from 6.2% to 5.1% because of the high oxidation produced by ozone. In spite of the recalcitrant nature of cellulose with no losses of alfa-cellulose, there is also a little decrease of glucan from 88.8% up to 87.3% probably due to the degradation of beta and gamma cellulose. Such behavior is also reflected in the viscosity falling from 706.4 mL/g to 568.2 mL/g. Figure 2(d) shows the polymerization degree of cellulose chains playing an important role in the quality of the final pulp. As was expected, the viscosity diminished stage by stage from 706.4 mL/g (P1) after digestion up to 492.5 mL/g (P4) after PO bleaching.The obtained results were compared with other quality pulps as a function of the process (chemical or thermomechanical), the feedstock (softwood or hardwood), and the final application (paper-grade or dissolving grade) [11, 28, 29, 47]. Results presenting major impurity removal are the more suitable for waste streams valorization towards biofuels and other value-added products. The worst pulp quality is the thermomechanical (TMP) pulp with a total carbohydrate content of 64.4% in comparison with chemical pulping processes with a total carbohydrate of 96.5% [30]. The TMP constituted low-purity (regarding the lignin content) and high-yield pulp. The difference between paper-grade and dissolving-grade pulp resides in the total glucan content that is lower in case of paper grade, obtaining values of 74.7% and 84.9% for hardwood and softwood bleached pulps [47] and 92.6% in the case of dissolving-grade pulps [29]. Consequently, the hemicellulose content is higher in paper-grade pulps than in high purity dissolving-grade pulps. In this work, total carbohydrate content in bleached pulp (P4) is 93.4% where 91.3% belongs to glucan with only 1.5% of xylan.Based on the experimental results shown in Figure2 it can be concluded that (i) ozonation stage (Z) produces the destruction of mainly lignin and also carbohydrates. Z focuses on delignification and therefore kappa is notably reduced. Nevertheless, glucan is considerably diminished during Z whereas this does not affect the alfa-cellulose. 
Based on the experimental results shown in Figure 2, the following can be concluded:

(i) The ozonation stage (Z) destroys mainly lignin but also carbohydrates. Z focuses on delignification, and the kappa number is therefore notably reduced. Nevertheless, glucan is considerably diminished during Z, whereas alfa-cellulose is not affected; this is because ozone is a very aggressive bleaching agent (less selective than chlorine derivatives) and attacks the beta and gamma cellulose chains.

(ii) The hot alkaline extraction stage (EOP) focuses on hemicellulose solubilization, with hemicelluloses falling from 5.1% to 2.2% and, specifically, xylan from 4.5% to 1.7%.

(iii) The peroxide bleaching stage (PO) attacks the chromophore groups; the pulp is finally purified by removing the remaining lignin traces (from 0.4% to 0.1%) and other groups responsible for the color of the pulp.

(iv) The selectivity of the PO and Z stages should be improved in order to avoid the breakdown of the cellulose chains.

(v) The results evidence the importance of valorizing the wastewater streams, considering the high load of organic compounds removed from the high-purity dissolving pulp along the sulfite process.

## 3.3. Mass Balance of the Industrial Process

The mass balance of the entire industrial process was carried out on the basis of the summative analysis. It required the complete characterization of the feedstock (Eucalyptus globulus timber), the inlet and outlet pulps, and the main residual stream (SSL). Data on the three macrocomponents throughout the process, as well as flow rates, digestions per day, wood moisture, and yields, were considered; some of these data are confidential to the factory and cannot be displayed explicitly. The results in Figure 3 are referred to the initial dry wood, in grams of cellulose, hemicellulose, and lignin per gram of dry wood. The main findings are as follows:

(i) A total of 99.6% of the cellulose provided by the feedstock ends up in the main product, dissolving pulp, indicating the good performance of the digestion process; only traces of wood cellulose are dissolved into the spent liquor. In addition, 0.032 g hemicellulose/g dry wood and 4.1 × 10⁻³ g lignin/g dry wood were detected in the crude pulp (P1), which are removed throughout the subsequent stages.

(ii) Based on the global mass balance and the conclusions of Section 3.2, some action lines can be proposed for the Z and PO stages: the bleaching reagents and process conditions should be better tuned in order to decrease the degree of depolymerization, but not to the detriment of delignification.

(iii) The SSL generated after wood digestion contains 87.2% of the total hemicellulose in the wood (0.218 g H/g dw) and 98.5% of the total lignin (0.266 g L/g dw). Hemicelluloses are hydrolyzed and dissolved as monosaccharides and other derivatives, while lignin reacts with sulfite and bisulfite ions and sulfurous acid, forming lignosulfonates. Based on these results, the SSL is an excellent candidate for second-generation biofuel production.

Figure 3: Mass balance results in the sulfite process. *Results expressed as g C/g dw, g H/g dw, and g L/g dw correspond to grams of cellulose, hemicellulose, and lignin per gram of dry wood, respectively.

The SSL is evaporated in the factory in order to reduce its water content; however, the samples used in this work were collected before the evaporation plant, at the tank outlet (see WSSL in Figure 3). Tap water is used at the end of the digestion stage to stop the hydrolysis and depolymerization reactions. In addition, the wastewater streams from pulp washing, which contain cellulose, hemicellulose, and lignin, are stored in the tank together with the SSL and sent to the evaporation plant as WSSL.
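The WSSL bioethanol potential reported in the next paragraph can be reproduced directly from the mass-balance figures. A minimal sketch follows, assuming the complete conversion of the C5 and C6 sugars stated in the text; the function name and the ethanol density of 0.789 g/mL are our own assumptions.

```python
# Reproducing the WSSL second-generation bioethanol potential from the
# mass-balance results (g of polymer per g of dry wood) quoted in the text.

CELLULOSE = 0.031      # g cellulose / g dry wood in the WSSL
HEMICELLULOSE = 0.205  # g hemicellulose / g dry wood in the WSSL

HYDROLYSIS_C6 = 1.11   # g C6 sugars / g cellulose
HYDROLYSIS_C5 = 1.136  # g C5 sugars / g hemicellulose
FERMENTATION = 0.511   # g EtOH / g monosaccharide
ETOH_DENSITY = 0.789   # g/mL, our assumption

def wssl_ethanol_potential() -> float:
    """Theoretical EtOH yield, assuming complete conversion of both the
    C6 and the C5 sugar fractions of the WSSL."""
    sugars = CELLULOSE * HYDROLYSIS_C6 + HEMICELLULOSE * HYDROLYSIS_C5
    etoh_g_per_g_dw = sugars * FERMENTATION
    return etoh_g_per_g_dw / ETOH_DENSITY  # (g/g) / (g/mL) = mL/g = L/kg

print(f"{wssl_ethanol_potential():.3f} L EtOH / kg dry wood")  # ~0.173
```

Note that the arithmetic yields 0.173 liters per kilogram of dry wood, consistent with the 0.215–0.231 L/kg range reported for the undiluted SSL in Table 4.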
The theoretical bioethanol potential of the WSSL was calculated from its carbohydrate content (0.031 g C/g dw and 0.205 g H/g dw). The hydrolysis stoichiometric factors for hexoses and pentoses are 1.11 g C6 sugars/g cellulose and 1.136 g C5 sugars/g hemicellulose, respectively, and the fermentation stoichiometric factor for ethanol production is 0.511 g EtOH/g monosaccharide. Assuming the complete conversion of the C5 and C6 sugars, the second-generation bioethanol potential of the WSSL is 0.173 L EtOH/kg dw.

## 4. Conclusions

A full study of the total mass balance throughout the entire sulfite pulping process in a pulp mill has been carried out, showing that the spent sulfite liquor is the most useful stream to be valorized, owing to its content of lignosulfonates, sugars, and other minor compounds, with a theoretical potential of 0.173 L of bioethanol per kilogram of dry wood. Fractionation processes might be applied to separate the value-added compounds, transforming this traditional pulp mill into a modern lignocellulosic biorefinery.

The characterization of the woody materials was carried out by comparing traditional methods with more novel methods based on hydrolysis and the individual quantification of the monomers. Acid hydrolysis is a useful method for analyzing the carbohydrate composition of wood and pulp samples; the TAPPI T249 cm-00 standard combined with the HPLC-RID technique gives complete information on the main components and the valorization options in pulping processes.

The summative analysis results, together with the other measured parameters, make it possible to study every stage of the sulfite process. A favorable extraction of cellulose in the digester was achieved, with 99.6% of the wood cellulose present in the crude pulp; on the other hand, 87.23% of the hemicellulose and 98.47% of the lignin are dissolved into the spent liquor.

Finally, some action lines for the existing process were indicated: (i) the digestion conditions should be optimized in order to increase the depolymerization of hemicelluloses into the spent liquor; (ii) the ozonation and peroxide bleaching stages should be improved, avoiding the degradation and destruction of the cellulose chains while maintaining similar impurity levels; (iii) the spent liquor should be conveniently fractionated and detoxified, separating the sugars from the microbial inhibitors for second-generation biofuel production by microbial fermentation.

---

*Source: 102534-2015-09-03.xml*
--- ## Abstract The evolution of lignin, five- and six-carbon sugars, and other decomposition products derived from hemicelluloses and cellulose was monitored in a sulfite pulp mill. The wastewater streams were characterized and the mass balances throughout digestion and total chlorine free bleaching stages were determined. Summative analysis in conjunction with pulp parameters highlights some process guidelines and valorization alternatives towards the transformation of the traditional factory into a lignocellulosic biorefinery. The results showed a good separation of cellulose (99.64%) during wood digestion, with 87.23% of hemicellulose and 98.47% lignin dissolved into the waste streams. The following steps should be carried out to increase the sugar content into the waste streams: (i) optimization of the digestion conditions increasing hemicellulose depolymerization; (ii) improvement of the ozonation and peroxide bleaching stages, avoiding deconstruction of the cellulose chains but maintaining impurity removal; (iii) fractionation of the waste water streams, separating sugars from the rest of toxic inhibitors for 2nd generation biofuel production. A total of 0.173 L of second-generation ethanol can be obtained in the spent liquor per gram of dry wood. The proposed methodology can be usefully incorporated into other related industrial sectors. --- ## Body ## 1. Introduction Mixed C5 and C6 hemicellulose sugar platforms serve as feedstock for fermentation producing biofuels such as ethanol or butanol, biopolymers such as polyhydroxybutyrate (PHB), or polybutylene succinate (PBS), and chemicals such as lactic acid, succinic acid, or itaconic or glutamic acids [1]. In Europe, primary energy consumption is dominated by the presence of petroleum products mostly imported from abroad. There are many concerns around the high degree of energy dependence [2]. Among the valorization alternatives described above, this work is based on bioethanol production from the pulp and paper industry, in order to be included in the transportation sector.The pulp and paper industry is being reconsidered as an important source of hemicellulose carbohydrates. Traditionally, pulping manufacturing was focused on cellulose extraction by chemical, mechanical or semichemical processes. Among the chemical processes, kraft is the most commonly used. However, other processes such as soda-anthraquinone (soda-AQ); ethanol-water pulping (organosolv); acid sulfite and bisulfite pulping; sulfite pretreatment to overcome recalcitrance of lignocellulose (SPORL); or SO2-ethanol-water process (SEW) can also be used [3–10]. Sulfite pulping is becoming popular because of the growing demand of high purity dissolving pulp for textile fiber production. An advantage of sulfite pulping is regarding high separation efficiency of cellulose. While dissolving cellulose is manufactured, hemicellulose and lignin are also generated and partially reused. Conversion of these waste streams is becoming a clear priority within the biorefinery concept.Nowadays, there are many efforts from the pulp and paper industries focused on sugar-rich resource valorization into energy and a wide variety of products. Several approaches tracing pathways and guidelines towards the conversion of pulping factories into lignocellulosic biorefineries (LCBR) can be found in the literature for kraft pulping [11–14], soda-AQ [15, 16], organosolv [17–19], or SEW process [9, 10]. 
Nevertheless, only a few contributions have been studied in the case of sulfite pulping [20, 21]. The investment activity in this field reveals the importance of transforming traditional pulping factories into integrated LCBR, increasing the profit margin in the existing pulp mills. Due to the complexity of the lignocellulosic biomass (LCB), many efforts are being carried out in fractionation processes. In this sense, the first step was a deep control of the main resources throughout a total mass balance of the LCB. The mass balance provides a complete description based on lignin and carbohydrate content (hemicellulose and cellulose) of the materials [9, 22]. The use of total mass balances in a whole process can point out the main resources for valorization options. In addition, the study of digestion and bleaching allows the establishment of future actions towards process improvements. One of the methodologies to get the total mass balance in these kinds of samples is the summative analysis of the carbohydrates disclosure into their derivatives together with lignin, extractives, and ash contribution [22–24]. One advantage of this methodology is the possibility of the delignification and breakdown of the polysaccharides into individual sugars and other decomposition products, giving some information about possible inhibitors in future fermentation processes. This methodology was previously reported with purposes related to the characterization of bagasse and bamboo [23], different wood species [22–27], or pulps [28–32]. More recently, spent liquors provided from SEW were also characterized by summative analysis methodologies, tracing mass balances for the residue bioconversion towards butanol, ethanol, and acetone/isopropanol [9, 10].This work contemplates the study of the summative analysis in a sulfite pulp mill, monitoring the three main wood macrocomponents—cellulose, hemicellulose, and lignin—by measuring not only the raw material but also the residues and products throughout the process. In a second step, bioethanol potentials of the sugar-rich residue were also determined. This research constitutes a novelty in an industrial sulfite process towards the conversion of this factory into a modern LCBR. In addition, thanks to the compositional analysis together with the physic-chemical pulp properties (viscosity and micro kappa were also measured) the effects of digestion and bleaching steps were also investigated. The proposed methodology can be usefully extrapolated in not only sulfite or kraft mills but also other factories working with LCB. ## 2. Materials and Methods ### 2.1. Materials and Industrial Process Description Figure1 shows the industrial process and all of the materials collected in the pulp mill. The acid sulfite process is based on the extraction of cellulose by the attack of an acidic aqueous solution under acidic conditions (pH of 1.35 ± 0.15) in the presence of excess free SO2 [2]. Delignification occurs inside the digester (see “D” unit in Figure 1) where the lignin is sulfonated by H S O 3 - forming lignosulfonates. Lignosulfonates together with high amounts of depolymerized hemicelluloses are dissolved into the so-called spent sulfite liquor (SSL) obtained at the end of the digestion stage. After digestion, the next step is bleaching (see “Z/EOP/PO” bleaching sequence in Figure 1). With the purpose of purifying the cellulose, a total chlorine free (TCF) bleaching process is used in the factory. 
Ozone (Z), sodium hydroxide as the extracting agent together with oxygen and hydrogen peroxide (EOP), and hydrogen peroxide as the bleaching agent in the presence of oxygen (PO) are the stages followed in the pulp mill and studied in this work.Figure 1 Raw materials, products, and by-products analyzed in this work in the acid sulfite process.Wood, SSL, and pulp industrial samples were collected throughout the process as can be seen in Figure1.Eucalyptus globulus timber is used as feedstock in this factory. On the other hand, SSL and industrial dissolving pulp samples were collected and analyzed in the process. A total of four delignification-grade pulps were analyzed starting with the crude pulp (P1) after the digestion stage and continuing with the bleaching stages after ozonation (P2); alkaline-extraction (P3); and peroxide oxygen bleaching (P4). ### 2.2. Analysis of Wood, Pulp and SSL A summary of the characterization methods applied in this work is shown in Table1. Wood and pulp samples were conditioned and prepared according to TAPPI T257 cm-02 [33]. Samples were air-dried to constant moisture to the nearest 10% w/w, milled, passed through 40-mesh sieve, and extracted with acetone in a Soxhlet apparatus in order to remove the extractives, according to T204 cm-97 standard [33]. Wood and pulp free-extractive samples were taken for the rest of the analysis. Ashes at 525°C were analyzed by TAPPI T211om-02 standard [33]. Acid-insoluble and soluble lignin were determined in wood samples by using TAPPI T222 om-02 method [33]. Cellulose content was determined using the Seifert procedure boiling a mixture of acetylacetone-dioxane-hydrochloric acid [34]. Holocellulose, which represents the total carbohydrate content (a sum of cellulose and hemicelluloses), was measured by means of the Wise chlorite technique reported recently by Haykiri-Acma et al. [35]. The lignin content (% on weight) in pulp was calculated by multiplying the kappa number by 0.17 [32]. Considering that the standard kappa number determination cannot be applied to pulp with kappa below five [41], micro kappa described in TAPPI UM 246 standard [36] was determined. To study the degradation of carbohydrate chains during the bleaching steps, intrinsic viscosity in crude, partially bleached, and bleached pulps was determined by means of the standard ISO 5351:2010 [37]. Alfa-cellulose was also measured in pulp samples using TAPPI standard T203 cm-99 [33].Table 1 Analytical procedures used in this work. Parameter Samples Equipment Standard/source Samples conditioning Wood and pulp Mill, sieve, and climate chamber TAPPI T257 cm-02 [33] Acetone extractives Soxhlet apparatus TAPPI T204 cm-97 [33] Ashes at 525°C Muffle furnace TAPPI T211 om-02 [33] Cellulose Wood Analytical balance Seifert [34] Holocellulose Analytical balance Wise [35] Lignin UV-Vis TAPPI T222 om-22 [33] Micro kappa Pulp Titration TAPPI UM 246 [36] Intrinsic viscosity Titration ISO 5351:2010 [37] Alfa-cellulose Titration TAPPI T203 cm-99 [33] Glucan Wood, pulp, and SSL (1) Hydrolysis (2) HPLC-RI (1) TAPPI T249 cm-00 [33] (wood and pulp samples)(2) Llano et al. [38] (wood, pulp, and SSL samples) Xylan Arabinnan Galactan Mannan Acetyl Lignosulfonates SSL UV-Vis UNE EN 16109 [39]SSL samples were studied in terms of lignosulfonates, sugars, and other decomposition products. The carbohydrate composition of wood, pulp, and SSL was conducted using HPLC/RID with the methodology published by Llano et al. [38]. 
Lignosulfonates were analyzed by UV-Vis spectroscopy, according to the UNE EN 16109 standard [39]. ### 2.3. Hydrolysis Procedure, Summative Analysis Calculations, and Biofuels Potentials Polymeric sugars contained in the cell wall of wood carbohydrates need to be broken down. Theβ ( 1 → 4 ) glycosidic linkages of the wood polymers are cleaved with the acid hydrolysis method described in TAPPI T249 cm-00 [33]. This method involves a two-step acid hydrolysis: (i) primary hydrolysis uses a strong acid at low temperature to convert the polysaccharides to oligomers, (ii) followed by the dilution to a weak acid at high temperatures to complete the conversion to monomeric sugars. First, free-extractive moisture-controlled samples were weighed at 0.35 ± 0.01 g into flask tubes. Then, 3 mL of 72% w/w H2SO4 was added into the glass test tubes, occasionally stirred in a vortex and maintained 1 h at 30°C into a thermostatic bath. The secondary hydrolysis was carried out at 120°C for 1 h after dilution to 4% by transferring hydrolyzates to Duran bottles and adding 84 mL of deionized water (Duran bottles must be hermetically closed). Afterwards, samples were cooled and a representative aliquot of 10 mL was transferred to a beaker. Several drops of bromophenol blue indicator were taken and gradually neutralized adding 0.04 N Ba(OH)2 alkaline solution to the aliquot until the solution changes from yellow to blue-violet. Then samples were centrifuged and 0.22 μm filtered and injected in the HPLC.Each monomer can be reported in the summative analysis as its pure theoretical homopolymer [24]. The weight of each constituent, determined quantitatively after the hydrolysis, has to be multiplied by a factor to calculate its contribution to the original wood component (as a theoretical homopolymer). Calculations were made by using the theoretical stoichiometric factors obtained in the literature [22–24]. These factors consist of molecular mass of anhydrous unit divided by molecular mass of the isolated substance. Table 2 shows all of the conversion factors used in this work. Each homopolymer was calculated considering not only the monosaccharides but also the degraded-compounds derived from carbohydrates; for example, cellulose is the sum of cellobiose, glucose, HMF (5-hydroxymethyl-2-furfuraldehyde), and levulinic acid multiplied by their stoichiometric factors. The individual contribution of carbohydrate-derived compounds to the final cellulose or hemicellulose content depends on the chemical structure of the macromolecules forming the cell wall. In this work, all the glucose is assumed to generate from the cellulose [24]. Simultaneously, it was also assumed that formic acid is an inhibitor mostly produced from pentose sugars, being the formation of formic from hexoses negligible compared to the levulinic acid formation [42]. Acetic acid was considered a coproduct formed at the same time as monosaccharides by degradation of the acetyl groups located on the hemicellulose [43].Table 2 Stoichiometric factors used to calculate the percentage of theoretical homopolymer. 
Hydrolyzed constituents Homopolymer Carbohydrate contribution Conversion factora Hydrolysis factorb g.sugar/g.polymer Ethanol factorc g.EtOH/g.monomer Glucose Glucan Cellulose 162/180 1.11b 0.511 HMF Glucan Cellulose 162/126 — — Levulinic acid Glucan Cellulose 162/116 — — Cellobiose Glucan Cellulose 324/342 — — Xylose Xylan Hemicellulose 132/150 1.136 0.511 Furfural Xylan Hemicellulose 132/96 — — Formic acid Xylan Hemicellulose 132/46 — — Arabinose Arabinan Hemicellulose 132/150 1.136 0.511 Galactose Galactan Hemicellulose 162/180 1.11 0.511 Mannose Mannan Hemicellulose 162/180 1.11 0.511 Acetic acid Acetyl Hemicellulose 43/60 — — aStoichiometric factor used to calculate the percentage of theoretical homopolymer [22]. bHydrolysis stoichiometric factor of carbohydrate polymers into free sugars [40]. cEthanol stoichiometric factor describing the mass fraction of sugar monomer converted to ethanol [40].The macrocomponent calculations from their homopolymers are given by (1). Finally the total mass closure is calculated according to (1). In addition, ethanol potentials were calculated multiplying grams of each monomer by their corresponding stoichiometric factors (see Table 2). Such factors describe the mass fraction of sugar monomers converted to ethanol [40]: (1) Total  Carbohydrate  Content = Cellulose + Hemicellulose , Hemicellulose = Xylan + Arabinan + Galactan + Mannan + Acetyl , Cellulose = Glucan , Total  Mass  Closure = Lignin + Total  Carboydrates + Extractives + Ash . ## 2.1. Materials and Industrial Process Description Figure1 shows the industrial process and all of the materials collected in the pulp mill. The acid sulfite process is based on the extraction of cellulose by the attack of an acidic aqueous solution under acidic conditions (pH of 1.35 ± 0.15) in the presence of excess free SO2 [2]. Delignification occurs inside the digester (see “D” unit in Figure 1) where the lignin is sulfonated by H S O 3 - forming lignosulfonates. Lignosulfonates together with high amounts of depolymerized hemicelluloses are dissolved into the so-called spent sulfite liquor (SSL) obtained at the end of the digestion stage. After digestion, the next step is bleaching (see “Z/EOP/PO” bleaching sequence in Figure 1). With the purpose of purifying the cellulose, a total chlorine free (TCF) bleaching process is used in the factory. Ozone (Z), sodium hydroxide as the extracting agent together with oxygen and hydrogen peroxide (EOP), and hydrogen peroxide as the bleaching agent in the presence of oxygen (PO) are the stages followed in the pulp mill and studied in this work.Figure 1 Raw materials, products, and by-products analyzed in this work in the acid sulfite process.Wood, SSL, and pulp industrial samples were collected throughout the process as can be seen in Figure1.Eucalyptus globulus timber is used as feedstock in this factory. On the other hand, SSL and industrial dissolving pulp samples were collected and analyzed in the process. A total of four delignification-grade pulps were analyzed starting with the crude pulp (P1) after the digestion stage and continuing with the bleaching stages after ozonation (P2); alkaline-extraction (P3); and peroxide oxygen bleaching (P4). ## 2.2. Analysis of Wood, Pulp and SSL A summary of the characterization methods applied in this work is shown in Table1. Wood and pulp samples were conditioned and prepared according to TAPPI T257 cm-02 [33]. 
Samples were air-dried to constant moisture to the nearest 10% w/w, milled, passed through 40-mesh sieve, and extracted with acetone in a Soxhlet apparatus in order to remove the extractives, according to T204 cm-97 standard [33]. Wood and pulp free-extractive samples were taken for the rest of the analysis. Ashes at 525°C were analyzed by TAPPI T211om-02 standard [33]. Acid-insoluble and soluble lignin were determined in wood samples by using TAPPI T222 om-02 method [33]. Cellulose content was determined using the Seifert procedure boiling a mixture of acetylacetone-dioxane-hydrochloric acid [34]. Holocellulose, which represents the total carbohydrate content (a sum of cellulose and hemicelluloses), was measured by means of the Wise chlorite technique reported recently by Haykiri-Acma et al. [35]. The lignin content (% on weight) in pulp was calculated by multiplying the kappa number by 0.17 [32]. Considering that the standard kappa number determination cannot be applied to pulp with kappa below five [41], micro kappa described in TAPPI UM 246 standard [36] was determined. To study the degradation of carbohydrate chains during the bleaching steps, intrinsic viscosity in crude, partially bleached, and bleached pulps was determined by means of the standard ISO 5351:2010 [37]. Alfa-cellulose was also measured in pulp samples using TAPPI standard T203 cm-99 [33].Table 1 Analytical procedures used in this work. Parameter Samples Equipment Standard/source Samples conditioning Wood and pulp Mill, sieve, and climate chamber TAPPI T257 cm-02 [33] Acetone extractives Soxhlet apparatus TAPPI T204 cm-97 [33] Ashes at 525°C Muffle furnace TAPPI T211 om-02 [33] Cellulose Wood Analytical balance Seifert [34] Holocellulose Analytical balance Wise [35] Lignin UV-Vis TAPPI T222 om-22 [33] Micro kappa Pulp Titration TAPPI UM 246 [36] Intrinsic viscosity Titration ISO 5351:2010 [37] Alfa-cellulose Titration TAPPI T203 cm-99 [33] Glucan Wood, pulp, and SSL (1) Hydrolysis (2) HPLC-RI (1) TAPPI T249 cm-00 [33] (wood and pulp samples)(2) Llano et al. [38] (wood, pulp, and SSL samples) Xylan Arabinnan Galactan Mannan Acetyl Lignosulfonates SSL UV-Vis UNE EN 16109 [39]SSL samples were studied in terms of lignosulfonates, sugars, and other decomposition products. The carbohydrate composition of wood, pulp, and SSL was conducted using HPLC/RID with the methodology published by Llano et al. [38]. Lignosulfonates were analyzed by UV-Vis spectroscopy, according to the UNE EN 16109 standard [39]. ## 2.3. Hydrolysis Procedure, Summative Analysis Calculations, and Biofuels Potentials Polymeric sugars contained in the cell wall of wood carbohydrates need to be broken down. Theβ ( 1 → 4 ) glycosidic linkages of the wood polymers are cleaved with the acid hydrolysis method described in TAPPI T249 cm-00 [33]. This method involves a two-step acid hydrolysis: (i) primary hydrolysis uses a strong acid at low temperature to convert the polysaccharides to oligomers, (ii) followed by the dilution to a weak acid at high temperatures to complete the conversion to monomeric sugars. First, free-extractive moisture-controlled samples were weighed at 0.35 ± 0.01 g into flask tubes. Then, 3 mL of 72% w/w H2SO4 was added into the glass test tubes, occasionally stirred in a vortex and maintained 1 h at 30°C into a thermostatic bath. The secondary hydrolysis was carried out at 120°C for 1 h after dilution to 4% by transferring hydrolyzates to Duran bottles and adding 84 mL of deionized water (Duran bottles must be hermetically closed). 
Afterwards, samples were cooled and a representative aliquot of 10 mL was transferred to a beaker. Several drops of bromophenol blue indicator were taken and gradually neutralized adding 0.04 N Ba(OH)2 alkaline solution to the aliquot until the solution changes from yellow to blue-violet. Then samples were centrifuged and 0.22 μm filtered and injected in the HPLC.Each monomer can be reported in the summative analysis as its pure theoretical homopolymer [24]. The weight of each constituent, determined quantitatively after the hydrolysis, has to be multiplied by a factor to calculate its contribution to the original wood component (as a theoretical homopolymer). Calculations were made by using the theoretical stoichiometric factors obtained in the literature [22–24]. These factors consist of molecular mass of anhydrous unit divided by molecular mass of the isolated substance. Table 2 shows all of the conversion factors used in this work. Each homopolymer was calculated considering not only the monosaccharides but also the degraded-compounds derived from carbohydrates; for example, cellulose is the sum of cellobiose, glucose, HMF (5-hydroxymethyl-2-furfuraldehyde), and levulinic acid multiplied by their stoichiometric factors. The individual contribution of carbohydrate-derived compounds to the final cellulose or hemicellulose content depends on the chemical structure of the macromolecules forming the cell wall. In this work, all the glucose is assumed to generate from the cellulose [24]. Simultaneously, it was also assumed that formic acid is an inhibitor mostly produced from pentose sugars, being the formation of formic from hexoses negligible compared to the levulinic acid formation [42]. Acetic acid was considered a coproduct formed at the same time as monosaccharides by degradation of the acetyl groups located on the hemicellulose [43].Table 2 Stoichiometric factors used to calculate the percentage of theoretical homopolymer. Hydrolyzed constituents Homopolymer Carbohydrate contribution Conversion factora Hydrolysis factorb g.sugar/g.polymer Ethanol factorc g.EtOH/g.monomer Glucose Glucan Cellulose 162/180 1.11b 0.511 HMF Glucan Cellulose 162/126 — — Levulinic acid Glucan Cellulose 162/116 — — Cellobiose Glucan Cellulose 324/342 — — Xylose Xylan Hemicellulose 132/150 1.136 0.511 Furfural Xylan Hemicellulose 132/96 — — Formic acid Xylan Hemicellulose 132/46 — — Arabinose Arabinan Hemicellulose 132/150 1.136 0.511 Galactose Galactan Hemicellulose 162/180 1.11 0.511 Mannose Mannan Hemicellulose 162/180 1.11 0.511 Acetic acid Acetyl Hemicellulose 43/60 — — aStoichiometric factor used to calculate the percentage of theoretical homopolymer [22]. bHydrolysis stoichiometric factor of carbohydrate polymers into free sugars [40]. cEthanol stoichiometric factor describing the mass fraction of sugar monomer converted to ethanol [40].The macrocomponent calculations from their homopolymers are given by (1). Finally the total mass closure is calculated according to (1). In addition, ethanol potentials were calculated multiplying grams of each monomer by their corresponding stoichiometric factors (see Table 2). Such factors describe the mass fraction of sugar monomers converted to ethanol [40]: (1) Total  Carbohydrate  Content = Cellulose + Hemicellulose , Hemicellulose = Xylan + Arabinan + Galactan + Mannan + Acetyl , Cellulose = Glucan , Total  Mass  Closure = Lignin + Total  Carboydrates + Extractives + Ash . ## 3. Results ### 3.1. 
Total Composition of the Lignocellulosic Samples The results of the total content per sample are shown in Table3, including the major components, ash, and extractives. The results represent the total weight percentage content of the industrial samples collected in the pulp mill. The total mass closure was near 100% in spite of the fact that some minority compounds were not analyzed such as low molecular phenolic compounds derived from lignin or aldonic and uronic acids derived from cellulose and hemicellulose.Table 3 Total weight content of industrial samples. Total mass closure E. globulus (% w/w) SSL (% w/w) P1 (% w/w) P2 (% w/w) P3 (% w/w) P4 (% w/w) Cellulose-HPLC 42.25 5.67 89 87.3 89.9 91.3 Cellulose 46.00 — 91.34 91.16 92.36 92.28 Hemicellulose-HPLC 24.92 30.42 6.2 5.1 2.2 2.1 Hemicellulose 31.55 — — — — — Lignin 26.98 42.99∗ 0.80 0.40 0.40 0.10 Ash 0.35 12.1 0.28 0.26 0.24 0.18 Extractives 1.5 — 0.30 0.20 0.20 0.20 TOTAL 96 91.18 96.58 93.26 92.94 93.88 ∗Lignin in SSL is represented by the lignosulfonate content, formed by lignin sulfonation.The comparison of traditional characterization using gravimetric and titration methods and the carbohydrate analysis derived from the summative analysis calculations is displayed in Table3. Traditional cellulose methods include alfa-cellulose in pulp [33] and Seifert for cellulose in wood [34]. Traditional hemicellulose in wood is calculated as the difference between holocellulose [35] and Seifert cellulose [34]. Cellulose-HPLC and hemicellulose-HPLC of wood and pulp samples were obtained stoichiometrically, after acid hydrolysis of carbohydrates and HPLC sugars quantification. Otherwise, SSL sugars were measured directly in the HPLC, avoiding the hydrolysis step.Cellulose obtained by traditional methods and cellulose-HPLC inEucalyptus globulus samples present values of 42.25% and 46%, respectively. The Seifert method entails higher experimental errors because of the wood digestion at high temperatures where some projections can be formed if the analysis is not carried out carefully. In pulp samples, the cellulose showed higher values by means of the traditional method. Alfa-cellulose corresponds to the insoluble fraction produced after the digestion of pulp at 25°C using 17.5% NaOH. Theoretically beta and gamma cellulose with a lower degree of polymerization are excluded, but considering the results of Table 3, there are chains with similar molecular weights that are also being quantified. Regarding the results of hemicellulose, the hemicellulose-HPLC is lower than hemicellulose calculated by traditional methods in wood 24.92% and 31.55%, respectively. This behavior could be explained by the assumption that the glucose content is only considered to form part of the cellulose fraction. In addition, gravimetric mistakes of Seifert and holocellulose methods are overlapping, giving more errors in comparison with the chromatographic method. An alternative to the study of hemicelluloses in pulp samples can be the pentosan determination with the T223 cm-01 procedure [33]; however, pentosan analysis was not performed in this work because it only contemplates the C5 sugars.The results of the total carbohydrate content (TCC) disclosure appear in Table4. TCC of 67.18% and 26.98% of lignin was obtained inEucalyptus globulus hardwood samples. Besides, the replicates checked showed average values of 42.25% cellulose and 77.55% holocellulose. 
Results of lignin varying from 23% to 27% and cellulose from 45 to 54% ofEucalyptus globulus timber were found in the literature [26, 27]. Such ranges are in accordance with the results obtained in this work. The total content of xylan was 13.27% in wood samples, representing more than 50% of the total hemicellulose content. This is because hardwood, in contrast to coniferous softwood with a higher portion of hexosans than pentosans, is composed mainly of pentoses where xylose is the major monosaccharide [11, 44–46]. TCC is much higher in pulp samples in comparison with theEucalyptus globulus samples. The difference is explained because little amounts of lignin and hemicellulose were found in pulp samples. Hemicellulose decreases from 6.2% to 2.1% and lignin from 0.8% to 0.1% (see Table 3). TCC in pulp samples decreases in the bleaching processes, as can be seen in Table 4, from values of 95.2% to values of 91.8–93.4%. This phenomenon can be explained by the fact that xylan drops from 5.3% to 1.5% despite the fact that cellulose increases from 89.0% to 91.3%.Table 4 Total carbohydrate content of the woody hydrolyzates. Wood SSL Sulfite dissolving pulps (% w/w) (% w/w) (% w/w) P1 P2 P3 P4 GLUCAN 42.25 5.67 89.0 87.3 89.9 91.3 Glucose 44.99 ± 2.13 4.12 ± 1.48 95.0 ± 3.81 93.7 ± 3.27 96.7 ± 2.42 98.1 ± 2.52 HMF 0.1 ± 0.03 0.02 ± 0.009 0.3 ± 0.001 0.3 ± 0.05 0.3 ± 0.002 0.3 ± 0.00 Levulinic acid 0.14 ± 0.03 0.01 ± 0.009 0.2 ± 0.09 0.2 ± 0.01 0.2 ± 0.02 0.4 ± 0.00 Cellobiose 1.51 ± 1.29 2.04 ± 0.16 3.1 ± 1.78 2.5 ± 1.95 2.4 ± 1.26 2.2 ± 1.84 XYLAN 13.27 19.15 5.3 4.5 1.7 1.5 Xylose 14.27 ± 0.47 21.43 ± 8.80 2.9 ± 0.67 2.7 ± 0.66 2.0 ± 0.42 1.7 ± 0.75 Furfural 0.2 ± 0.19 0.15 ± 0.055 0.4 ± 0.004 0.8 ± 0.02 0.1 ± 0.003 0.2 ± 0.001 Formic 0.15 ± 0.05 0.03 ± 0.028 0.3 ± 0.18 ND ND ND ARABINAN 0.52 2.46 0.3 0.3 0.2 0.2 Arabinose 0.59 ± 0.36 2.79 ± 1.71 0.3 ± 0.13 0.4 ± 0.16 0.2 ± 0.09 0.3 ± 0.16 GALACTAN 7.36 3.03 0.4 0.3 0.3 0.4 Galactose 8.18 ± 1.53 3.36 ± 1.52 0.4 ± 0.09 0.4 ± 0.06 0.3 ± 0.09 0.4 ± 0.26 MANNAN 1.00 1.28 ND ND ND ND Mannose 0.011 1.42 ± 0.50 ND ND ND ND ACETYL 2.78 4.51 0.2 ND ND ND Acetic 3.87 ± 0.32 6.30 ± 1.70 0.3 ± 0.24 ND ND ND TCC (%) 67.18 36.09 95.2 92.4 91.8 93.4 a EtOH(L/Kg.dw) 0.467 0.231 0.684 0.665 0.662 0.672 b EtOH(L/Kg.dw) 0.441 0.215 0.639 0.630 0.642 0.651 aEtOH (L/Kg.dry sample) calculated from the homopolymers using hydrolysis and fermentation factors. bEtOH (L/Kg.dry sample) calculated from the monomers using fermentation factors.Once the total carbohydrates were obtained, a theoretical quantity of bioethanol was calculated according to the stoichiometric factors explained in Section2.3. Results from 0.215 to 0.684 L ethanol per kg of dry sample were obtained. ### 3.2. Dissolving Pulp Properties: Results and Discussion Pulp properties and their evolution within the sulfite process are represented in Figures2(a), 2(b), 2(c), and 2(d). Pulp transformation from crude pulp after digestion stage (P1) to final bleached pulp (P4) is graphed with error bars.Figure 2 Evolution of wood macrocomponents in pulp along the sulfite mill: (a) glucan and alfa-cellulose; (b) hemicellulose and xylan; (c) kappa index and lignin; (d) viscosity in pulp. (a) (b) (c) (d)Pulp quality parameters are represented in Figures2(a) and 2(d). Glucan and alfa-cellulose have similar trends, especially in the alkaline extraction process; however some differences can be found in the case of ozonation with a more noticeable decrease of glucan. 
Pulp impurities were plotted, respectively, in Figures 2(b) and 2(c). In this case, similar results were obtained. Lignin and hemicellulose content decreases as the process advances. These results showed that the most oxidative stage is the ozonation where the main losses of lignin are registered from 0.8% to 0.4%. Although delignification is the main function of this stage, there is also a depolymerization of hemicelluloses from 6.2% to 5.1% because of the high oxidation produced by ozone. In spite of the recalcitrant nature of cellulose with no losses of alfa-cellulose, there is also a little decrease of glucan from 88.8% up to 87.3% probably due to the degradation of beta and gamma cellulose. Such behavior is also reflected in the viscosity falling from 706.4 mL/g to 568.2 mL/g. Figure 2(d) shows the polymerization degree of cellulose chains playing an important role in the quality of the final pulp. As was expected, the viscosity diminished stage by stage from 706.4 mL/g (P1) after digestion up to 492.5 mL/g (P4) after PO bleaching.The obtained results were compared with other quality pulps as a function of the process (chemical or thermomechanical), the feedstock (softwood or hardwood), and the final application (paper-grade or dissolving grade) [11, 28, 29, 47]. Results presenting major impurity removal are the more suitable for waste streams valorization towards biofuels and other value-added products. The worst pulp quality is the thermomechanical (TMP) pulp with a total carbohydrate content of 64.4% in comparison with chemical pulping processes with a total carbohydrate of 96.5% [30]. The TMP constituted low-purity (regarding the lignin content) and high-yield pulp. The difference between paper-grade and dissolving-grade pulp resides in the total glucan content that is lower in case of paper grade, obtaining values of 74.7% and 84.9% for hardwood and softwood bleached pulps [47] and 92.6% in the case of dissolving-grade pulps [29]. Consequently, the hemicellulose content is higher in paper-grade pulps than in high purity dissolving-grade pulps. In this work, total carbohydrate content in bleached pulp (P4) is 93.4% where 91.3% belongs to glucan with only 1.5% of xylan.Based on the experimental results shown in Figure2 it can be concluded that (i) ozonation stage (Z) produces the destruction of mainly lignin and also carbohydrates. Z focuses on delignification and therefore kappa is notably reduced. Nevertheless, glucan is considerably diminished during Z whereas this does not affect the alfa-cellulose. This is due to the fact that the ozone is very aggressive as a bleaching agent (being less selective than chlorine derivatives) and attacks beta and gamma cellulose chains; (ii) hot alkaline extraction stage (EOP) focuses on hemicellulose solubilization, falling hemicelluloses from 5.1% to 2.2% and specifically xylan from 4.5% to 1.7%; (iii) peroxide bleaching stage (PO) attacks the chromophore groups and the pulp is definitely purified by removing lignin traces from 0.4% to 0.1% and other groups responsible for the color of the pulp; (iv) selectivity in PO and Z stages should be improved in order to avoid the breakdown of the cellulose chains; (v) results evidenced the importance of the wastewater streams valorization considering the high charge of organic compounds removed from the high-purity dissolving pulp along the sulfite process. ### 3.3. 
Mass Balance of the Industrial Process The mass balance of the entire industrial process has been carried out taking into account the summative analysis. The complete characterization of the feedstock (Eucalyptus globulus timber), the inlet-outlet pulps, and the main residual stream (SSL) was required. Data of the three macrocomponents throughout the process, flow rates, digestions per day, wood moisture, or yields were considered. Some of the data are confidential to the factory and cannot be specifically displayed. Results appearing in Figure 3 have been correlated with the initial dry wood in terms of grams of cellulose, hemicellulose, and lignin per grams of dry wood. The main discussion is described as follows:(i) A total content of 99.6% of cellulose provided from the feedstock goes to the main product, dissolving pulp, indicating the good performance of the digestion process. Only traces of wood cellulose are dissolved into the spent liquor. Thus, 0.032 g.hemicellulose/g.dry wood and4.1 · 10 - 3 g.lignin/g.dry wood were detected in the crude pulp (P1) which will be removed throughout subsequent stages. (ii) Based on the global mass balance and the conclusions of Section3.2, some action lines can be made regarding Z and PO stages. A better use of the bleaching reagents and process conditions should be made in order to decrease the depolymerization degree but not to the detriment of delignification. (iii) The SSL generated after wood digestion is composed of 87.2% of the total hemicellulose in wood (0.218 g.H/g.dw) and 98.5% of the total lignin (0.266 g.L/g.dw). Hemicelluloses are hydrolyzed and dissolved as monosaccharides and other derivatives. Likewise, lignin reacts with sulfite, bisulfite ions, and sulfurous acid forming lignosulfonates. Based on these results SSL can be a perfect candidate for second-generation biofuel production.Figure 3 Mass balance results in the sulfite process.∗Results expressed as g.C/g.dw, gH/g.dw and gL/g.dw correspond to grams of cellulose, hemicellulose, and lignin per grams of dry wood, respectively.The SSL is evaporated in the factory in order to reduce the water content. However, samples collected in this work were collected before the evaporation plant, at the tank outlet (see WSSL in Figure3). Tap water is used at the end of the digestion stage to stop the hydrolysis and depolymerization reactions. In addition, wastewater streams provided from pulp washing containing cellulose, hemicellulose, and lignin are stored in the tank together with the SSL and sent to the evaporation plant as WSSL. Theoretical bioethanol potential of the WSSL was calculated based on the carbohydrates content (0.031 g.C/g.dw and 0.205 g.H/g.dw). The hydrolysis stoichiometric factors for hexoses and pentoses were, respectively, 1.11 g.C6-sugars/g.cellulose and 1.136 g.C5-sugars/g.hemicellulose; the fermentation stoichiometric factor for ethanol production is 0.511 g.EtOH/g.monosaccharide. Assuming the complete conversion of C5 and C6 sugars, the second-generation bioethanol potential of the WSSL is 0.173 L.EtOH/g.dw. ## 3.1. Total Composition of the Lignocellulosic Samples The results of the total content per sample are shown in Table3, including the major components, ash, and extractives. The results represent the total weight percentage content of the industrial samples collected in the pulp mill. 
The total mass closure was near 100% in spite of the fact that some minority compounds were not analyzed such as low molecular phenolic compounds derived from lignin or aldonic and uronic acids derived from cellulose and hemicellulose.Table 3 Total weight content of industrial samples. Total mass closure E. globulus (% w/w) SSL (% w/w) P1 (% w/w) P2 (% w/w) P3 (% w/w) P4 (% w/w) Cellulose-HPLC 42.25 5.67 89 87.3 89.9 91.3 Cellulose 46.00 — 91.34 91.16 92.36 92.28 Hemicellulose-HPLC 24.92 30.42 6.2 5.1 2.2 2.1 Hemicellulose 31.55 — — — — — Lignin 26.98 42.99∗ 0.80 0.40 0.40 0.10 Ash 0.35 12.1 0.28 0.26 0.24 0.18 Extractives 1.5 — 0.30 0.20 0.20 0.20 TOTAL 96 91.18 96.58 93.26 92.94 93.88 ∗Lignin in SSL is represented by the lignosulfonate content, formed by lignin sulfonation.The comparison of traditional characterization using gravimetric and titration methods and the carbohydrate analysis derived from the summative analysis calculations is displayed in Table3. Traditional cellulose methods include alfa-cellulose in pulp [33] and Seifert for cellulose in wood [34]. Traditional hemicellulose in wood is calculated as the difference between holocellulose [35] and Seifert cellulose [34]. Cellulose-HPLC and hemicellulose-HPLC of wood and pulp samples were obtained stoichiometrically, after acid hydrolysis of carbohydrates and HPLC sugars quantification. Otherwise, SSL sugars were measured directly in the HPLC, avoiding the hydrolysis step.Cellulose obtained by traditional methods and cellulose-HPLC inEucalyptus globulus samples present values of 42.25% and 46%, respectively. The Seifert method entails higher experimental errors because of the wood digestion at high temperatures where some projections can be formed if the analysis is not carried out carefully. In pulp samples, the cellulose showed higher values by means of the traditional method. Alfa-cellulose corresponds to the insoluble fraction produced after the digestion of pulp at 25°C using 17.5% NaOH. Theoretically beta and gamma cellulose with a lower degree of polymerization are excluded, but considering the results of Table 3, there are chains with similar molecular weights that are also being quantified. Regarding the results of hemicellulose, the hemicellulose-HPLC is lower than hemicellulose calculated by traditional methods in wood 24.92% and 31.55%, respectively. This behavior could be explained by the assumption that the glucose content is only considered to form part of the cellulose fraction. In addition, gravimetric mistakes of Seifert and holocellulose methods are overlapping, giving more errors in comparison with the chromatographic method. An alternative to the study of hemicelluloses in pulp samples can be the pentosan determination with the T223 cm-01 procedure [33]; however, pentosan analysis was not performed in this work because it only contemplates the C5 sugars.The results of the total carbohydrate content (TCC) disclosure appear in Table4. TCC of 67.18% and 26.98% of lignin was obtained inEucalyptus globulus hardwood samples. Besides, the replicates checked showed average values of 42.25% cellulose and 77.55% holocellulose. Results of lignin varying from 23% to 27% and cellulose from 45 to 54% ofEucalyptus globulus timber were found in the literature [26, 27]. Such ranges are in accordance with the results obtained in this work. The total content of xylan was 13.27% in wood samples, representing more than 50% of the total hemicellulose content. 
This is because hardwood, in contrast to coniferous softwood with a higher portion of hexosans than pentosans, is composed mainly of pentoses where xylose is the major monosaccharide [11, 44–46]. TCC is much higher in pulp samples in comparison with theEucalyptus globulus samples. The difference is explained because little amounts of lignin and hemicellulose were found in pulp samples. Hemicellulose decreases from 6.2% to 2.1% and lignin from 0.8% to 0.1% (see Table 3). TCC in pulp samples decreases in the bleaching processes, as can be seen in Table 4, from values of 95.2% to values of 91.8–93.4%. This phenomenon can be explained by the fact that xylan drops from 5.3% to 1.5% despite the fact that cellulose increases from 89.0% to 91.3%.Table 4 Total carbohydrate content of the woody hydrolyzates. Wood SSL Sulfite dissolving pulps (% w/w) (% w/w) (% w/w) P1 P2 P3 P4 GLUCAN 42.25 5.67 89.0 87.3 89.9 91.3 Glucose 44.99 ± 2.13 4.12 ± 1.48 95.0 ± 3.81 93.7 ± 3.27 96.7 ± 2.42 98.1 ± 2.52 HMF 0.1 ± 0.03 0.02 ± 0.009 0.3 ± 0.001 0.3 ± 0.05 0.3 ± 0.002 0.3 ± 0.00 Levulinic acid 0.14 ± 0.03 0.01 ± 0.009 0.2 ± 0.09 0.2 ± 0.01 0.2 ± 0.02 0.4 ± 0.00 Cellobiose 1.51 ± 1.29 2.04 ± 0.16 3.1 ± 1.78 2.5 ± 1.95 2.4 ± 1.26 2.2 ± 1.84 XYLAN 13.27 19.15 5.3 4.5 1.7 1.5 Xylose 14.27 ± 0.47 21.43 ± 8.80 2.9 ± 0.67 2.7 ± 0.66 2.0 ± 0.42 1.7 ± 0.75 Furfural 0.2 ± 0.19 0.15 ± 0.055 0.4 ± 0.004 0.8 ± 0.02 0.1 ± 0.003 0.2 ± 0.001 Formic 0.15 ± 0.05 0.03 ± 0.028 0.3 ± 0.18 ND ND ND ARABINAN 0.52 2.46 0.3 0.3 0.2 0.2 Arabinose 0.59 ± 0.36 2.79 ± 1.71 0.3 ± 0.13 0.4 ± 0.16 0.2 ± 0.09 0.3 ± 0.16 GALACTAN 7.36 3.03 0.4 0.3 0.3 0.4 Galactose 8.18 ± 1.53 3.36 ± 1.52 0.4 ± 0.09 0.4 ± 0.06 0.3 ± 0.09 0.4 ± 0.26 MANNAN 1.00 1.28 ND ND ND ND Mannose 0.011 1.42 ± 0.50 ND ND ND ND ACETYL 2.78 4.51 0.2 ND ND ND Acetic 3.87 ± 0.32 6.30 ± 1.70 0.3 ± 0.24 ND ND ND TCC (%) 67.18 36.09 95.2 92.4 91.8 93.4 a EtOH(L/Kg.dw) 0.467 0.231 0.684 0.665 0.662 0.672 b EtOH(L/Kg.dw) 0.441 0.215 0.639 0.630 0.642 0.651 aEtOH (L/Kg.dry sample) calculated from the homopolymers using hydrolysis and fermentation factors. bEtOH (L/Kg.dry sample) calculated from the monomers using fermentation factors.Once the total carbohydrates were obtained, a theoretical quantity of bioethanol was calculated according to the stoichiometric factors explained in Section2.3. Results from 0.215 to 0.684 L ethanol per kg of dry sample were obtained. ## 3.2. Dissolving Pulp Properties: Results and Discussion Pulp properties and their evolution within the sulfite process are represented in Figures2(a), 2(b), 2(c), and 2(d). Pulp transformation from crude pulp after digestion stage (P1) to final bleached pulp (P4) is graphed with error bars.Figure 2 Evolution of wood macrocomponents in pulp along the sulfite mill: (a) glucan and alfa-cellulose; (b) hemicellulose and xylan; (c) kappa index and lignin; (d) viscosity in pulp. (a) (b) (c) (d)Pulp quality parameters are represented in Figures2(a) and 2(d). Glucan and alfa-cellulose have similar trends, especially in the alkaline extraction process; however some differences can be found in the case of ozonation with a more noticeable decrease of glucan. Pulp impurities were plotted, respectively, in Figures 2(b) and 2(c). In this case, similar results were obtained. Lignin and hemicellulose content decreases as the process advances. These results showed that the most oxidative stage is the ozonation where the main losses of lignin are registered from 0.8% to 0.4%. 
Although delignification is the main function of this stage, there is also a depolymerization of hemicelluloses from 6.2% to 5.1% because of the high oxidation produced by ozone. In spite of the recalcitrant nature of cellulose with no losses of alfa-cellulose, there is also a little decrease of glucan from 88.8% up to 87.3% probably due to the degradation of beta and gamma cellulose. Such behavior is also reflected in the viscosity falling from 706.4 mL/g to 568.2 mL/g. Figure 2(d) shows the polymerization degree of cellulose chains playing an important role in the quality of the final pulp. As was expected, the viscosity diminished stage by stage from 706.4 mL/g (P1) after digestion up to 492.5 mL/g (P4) after PO bleaching.The obtained results were compared with other quality pulps as a function of the process (chemical or thermomechanical), the feedstock (softwood or hardwood), and the final application (paper-grade or dissolving grade) [11, 28, 29, 47]. Results presenting major impurity removal are the more suitable for waste streams valorization towards biofuels and other value-added products. The worst pulp quality is the thermomechanical (TMP) pulp with a total carbohydrate content of 64.4% in comparison with chemical pulping processes with a total carbohydrate of 96.5% [30]. The TMP constituted low-purity (regarding the lignin content) and high-yield pulp. The difference between paper-grade and dissolving-grade pulp resides in the total glucan content that is lower in case of paper grade, obtaining values of 74.7% and 84.9% for hardwood and softwood bleached pulps [47] and 92.6% in the case of dissolving-grade pulps [29]. Consequently, the hemicellulose content is higher in paper-grade pulps than in high purity dissolving-grade pulps. In this work, total carbohydrate content in bleached pulp (P4) is 93.4% where 91.3% belongs to glucan with only 1.5% of xylan.Based on the experimental results shown in Figure2 it can be concluded that (i) ozonation stage (Z) produces the destruction of mainly lignin and also carbohydrates. Z focuses on delignification and therefore kappa is notably reduced. Nevertheless, glucan is considerably diminished during Z whereas this does not affect the alfa-cellulose. This is due to the fact that the ozone is very aggressive as a bleaching agent (being less selective than chlorine derivatives) and attacks beta and gamma cellulose chains; (ii) hot alkaline extraction stage (EOP) focuses on hemicellulose solubilization, falling hemicelluloses from 5.1% to 2.2% and specifically xylan from 4.5% to 1.7%; (iii) peroxide bleaching stage (PO) attacks the chromophore groups and the pulp is definitely purified by removing lignin traces from 0.4% to 0.1% and other groups responsible for the color of the pulp; (iv) selectivity in PO and Z stages should be improved in order to avoid the breakdown of the cellulose chains; (v) results evidenced the importance of the wastewater streams valorization considering the high charge of organic compounds removed from the high-purity dissolving pulp along the sulfite process. ## 3.3. Mass Balance of the Industrial Process The mass balance of the entire industrial process has been carried out taking into account the summative analysis. The complete characterization of the feedstock (Eucalyptus globulus timber), the inlet-outlet pulps, and the main residual stream (SSL) was required. Data of the three macrocomponents throughout the process, flow rates, digestions per day, wood moisture, or yields were considered. 
Some of the data are confidential to the factory and cannot be specifically displayed. Results appearing in Figure 3 have been correlated with the initial dry wood in terms of grams of cellulose, hemicellulose, and lignin per grams of dry wood. The main discussion is described as follows:(i) A total content of 99.6% of cellulose provided from the feedstock goes to the main product, dissolving pulp, indicating the good performance of the digestion process. Only traces of wood cellulose are dissolved into the spent liquor. Thus, 0.032 g.hemicellulose/g.dry wood and4.1 · 10 - 3 g.lignin/g.dry wood were detected in the crude pulp (P1) which will be removed throughout subsequent stages. (ii) Based on the global mass balance and the conclusions of Section3.2, some action lines can be made regarding Z and PO stages. A better use of the bleaching reagents and process conditions should be made in order to decrease the depolymerization degree but not to the detriment of delignification. (iii) The SSL generated after wood digestion is composed of 87.2% of the total hemicellulose in wood (0.218 g.H/g.dw) and 98.5% of the total lignin (0.266 g.L/g.dw). Hemicelluloses are hydrolyzed and dissolved as monosaccharides and other derivatives. Likewise, lignin reacts with sulfite, bisulfite ions, and sulfurous acid forming lignosulfonates. Based on these results SSL can be a perfect candidate for second-generation biofuel production.Figure 3 Mass balance results in the sulfite process.∗Results expressed as g.C/g.dw, gH/g.dw and gL/g.dw correspond to grams of cellulose, hemicellulose, and lignin per grams of dry wood, respectively.The SSL is evaporated in the factory in order to reduce the water content. However, samples collected in this work were collected before the evaporation plant, at the tank outlet (see WSSL in Figure3). Tap water is used at the end of the digestion stage to stop the hydrolysis and depolymerization reactions. In addition, wastewater streams provided from pulp washing containing cellulose, hemicellulose, and lignin are stored in the tank together with the SSL and sent to the evaporation plant as WSSL. Theoretical bioethanol potential of the WSSL was calculated based on the carbohydrates content (0.031 g.C/g.dw and 0.205 g.H/g.dw). The hydrolysis stoichiometric factors for hexoses and pentoses were, respectively, 1.11 g.C6-sugars/g.cellulose and 1.136 g.C5-sugars/g.hemicellulose; the fermentation stoichiometric factor for ethanol production is 0.511 g.EtOH/g.monosaccharide. Assuming the complete conversion of C5 and C6 sugars, the second-generation bioethanol potential of the WSSL is 0.173 L.EtOH/g.dw. ## 4. Conclusions A full study of total mass balance throughout the entire sulfite pulping process in a pulp mill has been carried out, showing that the spent sulfite liquor is the most useful stream to be valorized due to the presence of lignosulfonates, sugars, and other minor compounds, giving a theoretical quantity of 0.173 L of bioethanol per gram of dry wood. Fractionation processes might be carried out to separate the value-added compounds, transforming this traditional pulp mill into a modern lignocellulosic biorefinery.The characterization of the woody materials has been developed, comparing traditional methods with more novel methods based on the hydrolysis and individual characterization of the monomers. Acid hydrolysis is a useful method for the analysis of carbohydrate composition of wood and pulp samples. 
## 4. Conclusions

A full study of the total mass balance throughout the entire sulfite pulping process in a pulp mill has been carried out, showing that the spent sulfite liquor is the most useful stream to be valorized, owing to the presence of lignosulfonates, sugars, and other minor compounds, with a theoretical potential of 0.173 mL of bioethanol per gram of dry wood (173 L per tonne). Fractionation processes might be applied to separate the value-added compounds, transforming this traditional pulp mill into a modern lignocellulosic biorefinery.

The characterization of the woody materials has been developed by comparing traditional methods with more novel methods based on hydrolysis and the individual characterization of the monomers. Acid hydrolysis is a useful method for analyzing the carbohydrate composition of wood and pulp samples. The TAPPI T249 cm-00 standard, combined with the HPLC-RID technique, can give complete information on the main components for valorization options in pulping processes.

The summative analysis results, together with other parameters, make it possible to study every stage of the sulfite process. A favorable extraction of cellulose in the digester was achieved, with 99.6% of the wood cellulose present in the crude pulp; on the other hand, 87.2% of the hemicellulose and 98.5% of the lignin are dissolved into the spent liquor.

Finally, some action lines for the existing process were indicated: (i) the digestion conditions should be optimized to increase the depolymerization of hemicelluloses into the spent liquor; (ii) the ozonation and peroxide bleaching extraction processes should be improved, avoiding degradation and destruction of the cellulose chains while achieving similar impurity levels; (iii) the spent liquor should be suitably fractionated and detoxified, separating the sugars from the microbial inhibitors for second-generation biofuel production by microbial fermentation. --- *Source: 102534-2015-09-03.xml*
# Intention-Aware Autonomous Driving Decision-Making in an Uncontrolled Intersection

**Authors:** Weilong Song; Guangming Xiong; Huiyan Chen
**Journal:** Mathematical Problems in Engineering (2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1025349

---

## Abstract

Autonomous vehicles need to perform socially accepted behaviors in complex urban scenarios that include human-driven vehicles with uncertain intentions. This leads to many difficult decision-making problems, such as deciding on a lane change maneuver and generating policies to pass through intersections. In this paper, we propose an intention-aware decision-making algorithm to solve this challenging problem in an uncontrolled intersection scenario. In order to consider uncertain intentions, we first develop a continuous hidden Markov model to predict both the high-level motion intention (e.g., turn right, turn left, and go straight) and the low-level interaction intention (e.g., the yield status of related vehicles). Then a partially observable Markov decision process (POMDP) is built to model the general decision-making framework. Because solving a POMDP is difficult, we use proper assumptions and approximations to simplify the problem. A human-like policy generation mechanism is used to generate the candidate policies. A future-motion model for human-driven vehicles is proposed and applied in the state transition process, and the intention is updated at each prediction time step. The reward function, which considers driving safety, traffic laws, time efficiency, and so forth, is designed to calculate the optimal policy. Finally, our method is evaluated in simulation with the PreScan software and a driving simulator. The experiments show that our method can lead the autonomous vehicle to pass through uncontrolled intersections safely and efficiently.

---

## Body

## 1. Introduction

Autonomous driving technology has developed rapidly in the last decade. In the DARPA Urban Challenge [1], autonomous vehicles showed their ability to interact in typical scenarios such as T-intersections and lane driving. In 2011, Google released its autonomous driving platforms; over 10,000 miles of autonomous driving were completed by each vehicle under various traffic conditions [2]. Many large automobile companies also plan to launch autonomous driving products in the next several years. With these significant advances, autonomous vehicles have shown their potential to reduce the number of traffic accidents and to relieve traffic congestion.

One key challenge for autonomous vehicles driving in the real world is how to deal with uncertainties, such as inaccurate perception and unclear motion intentions. With the development of intelligent transportation systems (ITS), perception uncertainty can be reduced through V2X technology, and interactions between autonomous vehicles can be handled by centralized or decentralized cooperative control algorithms. However, human-driven vehicles will remain predominant for some time, and the uncertainty of their driving intentions will persist owing to the lack of an "intention sensor." Human drivers anticipate potential conflicts, continuously make decisions, and adjust their driving behaviors, which are often not rational.
Therefore, autonomous vehicles need to understand human drivers' driving intentions and choose proper actions to behave cooperatively.

In this paper, we focus on solving this problem in an uncontrolled intersection scenario. The uncontrolled intersection is a complex scenario with a high accident rate. In the US, stop signs are used to regulate the passing sequence of vehicles. However, such signs are rarely used in China, and right-of-way rules are often broken by aggressive drivers. Human drivers are prone to perception failures, misunderstandings, and wrong decisions; in such cases, even with stop signs, the "first come, first served" rule is likely to be broken. Besides, human driving behaviors are likely to change over time. Given these uncertain situations, the specific layout, and the traffic rules, autonomous vehicles approaching an intersection should be able to recognize the behavior of other vehicles and choose a suitable corresponding behavior that considers the future evolution of the traffic scenario (see Figure 1).

Figure 1: A motivating example. Autonomous vehicle B is going straight, while human-driven vehicle A has three potential driving directions: going straight, turning right, or turning left. If vehicle A turns right, it will not affect the normal driving of autonomous vehicle B, but the other maneuvers, turning left and going straight, lead to a passing-sequence problem. If the two vehicles have a potential conflict, autonomous vehicle B simulates the trajectories of vehicle A over a prediction horizon and selects the best actions in the current scenario. The vehicles drawn with dashed lines are the predicted future positions. The red dashed lines are the virtual-lane assumption used in this paper, meaning that vehicles are assumed to drive inside their lanes. The dark blue area is the potential collision region for the two cars.

With these requirements, we propose an intention-aware decision-making algorithm for autonomous driving in an uncontrolled intersection. Specifically, we first use easily observed features (e.g., velocity and position) and a continuous hidden Markov model (HMM) [3] to build the intention prediction model, which outputs the lateral intention (e.g., turn right, turn left, or go straight) of human-driven vehicles and the longitudinal behavior (e.g., the yielding status) of related vehicles. Then a generative partially observable Markov decision process (POMDP) framework is built to model the autonomous driving decision-making process. This framework can deal with the uncertainties in the environment, including the human-driven vehicles' driving intentions. However, computing the optimal policy for a general POMDP is intractable due to its complexity, so we make reasonable approximations and assumptions to solve the problem at low computational cost. A human-like policy generation mechanism computes the candidate policy set, a scenario prediction mechanism simulates the future actions of human-driven vehicles based on their lateral and longitudinal intentions, and proper reward functions are designed to evaluate each strategy; traffic time, safety, and traffic laws are all considered in the final reward equations. The proposed method is thoroughly evaluated in simulation.
The main contributions of this paper are as follows:

(i) modeling a generative autonomous driving decision-making framework that considers uncertainties (e.g., the human driver's intention) in the environment;
(ii) building an intention prediction model that uses easily observed parameters (e.g., velocity and position) to recognize the realistic lateral and longitudinal behaviors of human-driven vehicles;
(iii) using reasonable approximations and assumptions to build an efficient solver based on the specific layout of an uncontrolled intersection area.

The structure of this paper is as follows. Section 2 reviews the related work, and the two-layer HMM-based intention prediction algorithm is discussed in Section 3. Section 4 models the general autonomous driving decision-making process as a POMDP, while the approximations and the simplified solver are described in Section 5. In Section 6, we evaluate our algorithm in a simulated uncontrolled intersection scenario with the PreScan software and a driving simulator. Finally, conclusions and future work are discussed in Section 7.

## 2. Related Work

The decision-making module is one of the most important components of an autonomous vehicle, connecting environment perception and vehicle control. Numerous research works have therefore addressed the autonomous driving decision-making problem in the last decade. The most common method is to manually define specific driving rules corresponding to situations. Both finite state machines (FSMs) and hierarchical state machines (HSMs) have been used to evaluate situations and decide within this framework [4–6]. In the DARPA Urban Challenge (DUC), the winner, Boss, used a rule-based behavior generation mechanism to obey predefined driving rules based on metrics of the obstacle vehicles [1, 6]; Boss checked the vehicle's acceleration capability and the available gaps to decide whether merging into a new lane or passing an intersection was safe. Similarly, the decision-making system of Junior [7], which ranked second in the DUC, was based on an HSM with 13 manually defined states. Owing to its advantages, including simple implementation and traceability, this framework is widely used on many autonomous driving platforms. However, these approaches usually rely on constant-velocity assumptions and do not consider the surrounding vehicles' future reactions to the host vehicle's actions; without this ability, the driving decisions can carry potential risks [8].

In order to consider the future evolution of the scenario, planning and utility-based approaches have been proposed for decision-making. Bahram et al. proposed a prediction-based reactive strategy to generate autonomous driving strategies [9]: a Bayesian classifier predicts the future motion of obstacle vehicles, and a tree-based search finds the optimal driving strategy using multilevel cost functions. However, the surrounding vehicles' reactions to the autonomous vehicle's actions are not considered in their framework. Wei et al. proposed a comprehensive autonomous driver model that emulates human driving behavior [10]: the human-driven vehicles are assumed to follow a proper social behavior model, and the best velocity profiles are generated for autonomous freeway driving. Nonetheless, their method does not consider the motion intention of human-driven vehicles and only targets in-lane driving. In subsequent work, Wei et al. modeled traffic interactions and realized autonomous vehicle social behavior at a highway entrance ramp [11].
The human-driven vehicles' motion intentions are modeled by a Bayesian model, and the human-driven vehicles' future reactions are introduced based on the yielding/not-yielding intentions at the first prediction step. Autonomous vehicles can perform socially cooperative behavior using their framework; however, they do not consider the intention uncertainty over the prediction time steps.

POMDPs provide a mathematical framework for solving decision-making problems under uncertainty. Bai et al. proposed an intention-aware approach for autonomous driving in scenarios with many pedestrians (e.g., on a campus) [12]: the hybrid A* algorithm generates the global path, while a POMDP planner controls the velocity of the autonomous vehicle, solved by the online POMDP solver DESPOT [13]. Brechtel et al. presented a probabilistic decision-making algorithm using a continuous POMDP [14]; they focus on the uncertainties of incomplete and inaccurate perception in the intersection area, while our goal is to deal with the uncertain intentions of human-driven vehicles. However, online POMDP solvers typically need large computational resources and much time [15, 16], which limits their use on real-world autonomous driving platforms. Ulbrich and Maurer designed a two-step decision-making algorithm to reduce the complexity of the POMDP in a lane change scenario [17], with eight manually defined POMDP states to simplify the problem. Cunningham et al. proposed a multipolicy decision-making method for lane changing and merging scenarios [18]: POMDPs model the decision-making problem, and a multivehicle simulation mechanism generates the optimal high-level policy for the autonomous vehicle to execute. However, motion intentions are not considered.

Overall, the autonomous driving decision-making problem with uncertain driving intentions is still challenging. It is necessary to build an effective behavior prediction model for human-driven vehicles, and it is essential to incorporate human-driven vehicles' intentions and behaviors into the autonomous vehicle's decision-making system so that it generates suitable actions and drives safely and efficiently. This work addresses the problem by first building an HMM-based intention prediction model, then modeling human-driven vehicles' intentions in a POMDP framework, and finally solving it with an approximate method.

## 3. HMM-Based Intention Prediction

In order to pass through an uncontrolled intersection, autonomous vehicles should be able to predict the driving intentions of human-driven vehicles. Estimating a driver's behavior is very difficult because the state of a vehicle driver lies in a high-dimensional feature space. Instead of using driver-related features (e.g., gas pedal, brake pedal, and the driver's vision), easily observed parameters are used to build the intention prediction model in this paper.

The vehicle motion intention $I$ considered in this paper has two aspects: the lateral intention $I_{lat} \in \{I_{TR}, I_{TL}, I_{GS}, I_{S}\}$ (i.e., turn right, turn left, go straight, and stop) and the longitudinal intention $I_{lon} \in \{I_{Yield}, I_{NYield}\}$. The lateral intention is a high-level driving maneuver determined by the human driver's long-term decision-making process; it typically does not change during the driving process and determines the future trajectory of the human-driven vehicle.
In particular, the intention of stop is treated as a lateral intention in our model because it can be predicted using data from the human-driven vehicle itself. The longitudinal intention, in contrast, is a cooperative behavior that only occurs when the vehicle interacts with other vehicles. We first describe the HMM and then formulate our intention prediction model in this section.

### 3.1. HMM

A HMM consists of a set of $N$ finite "hidden" states and a set of $M$ observable symbols per state. The state transition probabilities are defined as $A = \{a_{ij}\}$, where

(1) $a_{ij} = P(q_{t+1} = j \mid q_t = i), \quad 1 \le i, j \le N.$

The initial state distribution is denoted as $\pi = \{\pi_i\}$, where

(2) $\pi_i = P(q_1 = i), \quad 1 \le i \le N.$

Because the observation symbols are continuous parameters, we use a Gaussian Mixture Model (GMM) [19] to represent their probability density functions (pdf):

(3) $b_j(o) = \sum_{k=1}^{M} c_{jk}\, \mathcal{N}(o \mid \mu_{jk}, \Sigma_{jk}), \quad 1 \le j \le N,$

where $c_{jk}$ is the mixture coefficient of the $k$th mixture in the $j$th state, and $\mathcal{N}$ is the pdf of a Gaussian distribution with mean $\mu$ and covariance $\Sigma$ evaluated at observation $o$. The mixture coefficients satisfy

(4) $\sum_{k=1}^{M} c_{jk} = 1, \quad c_{jk} > 0, \ 1 \le j \le N, \ 1 \le k \le M,$

and

(5) $\int_{-\infty}^{+\infty} b_j(o)\, do = 1, \quad 1 \le j \le N.$

A HMM is then completely defined by the number of hidden states $N$ and the probability tuple $\lambda = (\pi, A, C, \mu, \Sigma)$.

In the training process, we use the Baum-Welch method [20] to estimate the model parameters for each driver intention $I$. Once the model parameters for each intention have been trained, the driver's intention can be estimated in the recognition process. The prediction process for lateral intentions is shown in Figure 2.

Figure 2: Prediction process for the HMM. The observed sequence is evaluated by four HMMs; the forward algorithm computes the conditional probability under each model, and the intention with the largest value is taken as the vehicle's intention.
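The following sketch illustrates this train-then-score scheme with the hmmlearn library. The model sizes (three hidden states, two mixtures) and the feature layout are illustrative assumptions, not values reported in the paper.

```python
# Sketch of per-intention GMM-HMM training and forward-likelihood classification,
# in the spirit of Section 3.1. Model sizes and features are assumed.
import numpy as np
from hmmlearn.hmm import GMMHMM

INTENTIONS = ["turn_right", "turn_left", "go_straight", "stop"]

def train_models(sequences_by_intention, n_states=3, n_mix=2):
    """Fit one GMM-HMM per intention via Baum-Welch (EM)."""
    models = {}
    for intent, seqs in sequences_by_intention.items():
        X = np.vstack(seqs)                  # stack all observation sequences
        lengths = [len(s) for s in seqs]     # per-sequence lengths for hmmlearn
        m = GMMHMM(n_components=n_states, n_mix=n_mix,
                   covariance_type="diag", n_iter=100)
        m.fit(X, lengths)
        models[intent] = m
    return models

def classify(models, obs_seq):
    """Score a sequence under every model (forward algorithm) and take the argmax."""
    scores = {intent: m.score(obs_seq) for intent, m in models.items()}
    return max(scores, key=scores.get), scores

# Usage: obs_seq is a (T, 4) array of [distance_to_intersection, v, a, yaw_rate].
```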
### 3.2. HMM-Based Intention Prediction Process

Given a continuous HMM, the intention prediction process is divided into two steps. The first step addresses the lateral intention. The training inputs of each vehicle's lateral intention model at time $t$ are defined as $B_{lateral} = \{L, v, a, yaw\}$, where $L$ is the distance to the intersection, $v$ is the longitudinal velocity, $a$ is the longitudinal acceleration, and $yaw$ is the yaw rate; the output of this model is the motion intention $I_{lat} \in \{I_{TR}, I_{TL}, I_{GS}, I_{S}\}$. The corresponding HMMs $\lambda_{TR}$, $\lambda_{TL}$, $\lambda_{GS}$, and $\lambda_{S}$ can then be trained.

The second step concerns the longitudinal intention. Its probability can be decomposed by the law of total probability:

(6) $P(I_{Yield} \mid B) = \sum_{I_{lat}} P(I_{Yield} \mid I_{lat}, B)\, P(I_{lat} \mid B) = P(I_{Yield} \mid I_{TR}, B)\, P(I_{TR} \mid B) + P(I_{Yield} \mid I_{TL}, B)\, P(I_{TL} \mid B) + P(I_{Yield} \mid I_{GS}, B)\, P(I_{GS} \mid B) + P(I_{Yield} \mid I_{S}, B)\, P(I_{S} \mid B),$

where $B$ is the behavior data, comprising $B_{lateral}$ and $B_{lon}$.

In this process we assume that the lateral intention $I_{lat}$ is predicted correctly by a deterministic HMM in the first step, so that $I_{lat}$ is fixed to the lateral prediction result $I_{latPredict}$, with $P(I_{lat} \mid B, I_{lat} = I_{latPredict}) = 1$ and $P(I_{lat} \mid B, I_{lat} \ne I_{latPredict}) = 0$. Equation (6) then reduces to

(7) $P(I_{Yield} \mid B) = P(I_{Yield} \mid B, I_{latPredict})\, P(I_{latPredict} \mid B) = P(I_{Yield} \mid B, I_{latPredict}).$

The problem thus becomes modeling $P(I_{Yield} \mid B, I_{latPredict})$. The features used in longitudinal intention prediction are $B_{lon} = \{\Delta v, \Delta a, \Delta DTC\}$, where $\Delta v = v_{social} - v_{host}$, $\Delta a = a_{social} - a_{host}$, and $\Delta DTC = DTC_{social} - DTC_{host}$; here $DTC$ denotes the distance to the potential collision area. The output of the longitudinal intention prediction model is the longitudinal motion intention $I_{lon} \in \{I_{Yield}, I_{NYield}\}$.

Instead of building a generative model, we use a deterministic approach that restricts $P(I_{Yield} \mid B, I_{latPredict})$ to 0 or 1. Thus, two types of HMMs, $\lambda_{Y, I_{lat}}$ and $\lambda_{N, I_{lat}}$, are trained for each $I_{lat} \in \{I_{TR}, I_{TL}, I_{GS}, I_{S}\}$. Two test examples of lateral and longitudinal intention prediction are shown in Figures 3 and 4; they show that our approach successfully recognizes the human-driven vehicle's lateral and longitudinal intentions.

Figure 3: Lateral intention prediction example. The true intention of the human-driven vehicle in this scenario is to turn left. In the first panel, the value 1 on the y-axis means turn left, 2 means turn right, 3 means go straight, and 4 means stop. (a) (b) (c) (d) (e)

Figure 4: An example of predicting longitudinal intentions. This example is based on the scenario of Figure 1, with both vehicles going straight. The value 1 on the y-axis of the first panel denotes the intention of yielding, while 2 denotes not yielding. In the first 2.8 s the intention is yielding; after that, owing to the acceleration action and the smaller relative DTC, the autonomous vehicle recognizes the human-driven vehicle's not-yielding intention. (a) (b) (c) (d)
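Under the deterministic restriction in (7), the yield decision reduces to comparing two model likelihoods. A minimal sketch, reusing the `classify` helper and hmmlearn models from the previous snippet (containers and feature layout assumed):

```python
# Sketch of the two-step longitudinal decision of Section 3.2: predict the
# lateral intention first, then compare the yield / not-yield HMMs trained
# for that intention. Model containers are assumed for illustration.
def predict_yield(lateral_models, yield_models, b_lateral, b_lon):
    """Return True if the yield model explains the data better (P in {0, 1})."""
    # Step 1: deterministic lateral prediction, as assumed in (7).
    i_lat, _ = classify(lateral_models, b_lateral)
    # Step 2: forward log-likelihood under the pair (lambda_{Y,I_lat},
    # lambda_{N,I_lat}) trained on [dv, da, dDTC] sequences.
    log_yield = yield_models[("Y", i_lat)].score(b_lon)
    log_not_yield = yield_models[("N", i_lat)].score(b_lon)
    return log_yield > log_not_yield
```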
## 4. Modeling Autonomous Driving Decision-Making in a POMDP Framework

For the decision-making process, the key problem is how to design a policy that performs the optimal actions under uncertainty. This requires not only encoding the traffic laws but also considering the driving uncertainties of human-driven vehicles: facing potential conflicts, human-driven vehicles yield to autonomous vehicles only with some probability, and aggressive drivers may violate the traffic laws. Such elements should be incorporated into a powerful decision-making framework.
As a result, we model the autonomous driving decision-making problem in a general POMDP framework in this section.

### 4.1. POMDP Preliminaries

A POMDP model can be formalized as a tuple $\{S, A, T, Z, O, R, \gamma\}$, where $S$ is a set of states, $A$ is the action space, and $Z$ denotes a set of observations. The conditional function $T(s', a, s) = Pr(s' \mid s, a)$ models the probability of transitioning to state $s' \in S$ when the system takes action $a \in A$ in state $s \in S$. The observation function $O(z, s', a) = Pr(z \mid s', a)$ models the probability of observing $z \in Z$ when action $a \in A$ is taken and the end state is $s' \in S$. The reward function $R(s, a)$ gives the immediate reward for taking action $a$ in state $s$, and $\gamma \in [0, 1]$ is the discount factor balancing immediate and future rewards.

Because the system contains partially observed state, such as intentions, a belief $b \in B$ is maintained. A belief update function $\tau$ is defined as $b' = \tau(b, a, z)$: if the agent takes action $a$ and receives observation $z$, the new belief $b'$ is obtained through Bayes' rule,

(8) $b'(s') = \eta\, O(s', a, z) \sum_{s \in S} T(s, a, s')\, b(s),$

where $\eta = 1 / \sum_{s' \in S} O(s', a, z) \sum_{s \in S} T(s, a, s')\, b(s)$ is a normalizing constant.

A key concept in POMDP planning is a policy, a mapping $\pi$ that specifies the action $a = \pi(b)$ at belief $b$. Solving the POMDP means finding an optimal policy $\pi^*$ that maximizes the expected total reward:

(9) $\pi^* = \arg\max_{\pi} E\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, \pi(b_t)) \mid b_0, \pi\right],$

where $b_0$ is the initial belief.
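For concreteness, here is a minimal discrete implementation of the belief update in (8); the dictionary-based state encoding and callable transition/observation models are illustrative assumptions.

```python
# Minimal discrete Bayes belief update, equation (8):
# b'(s') = eta * O(z | s', a) * sum_s T(s' | s, a) * b(s).
def belief_update(belief, action, obs, T, O):
    """belief: {state: prob}; T(s, a, s_next) and O(z, s_next, a) are callables."""
    new_belief = {}
    for s_next in belief:
        pred = sum(T(s, action, s_next) * p for s, p in belief.items())
        new_belief[s_next] = O(obs, s_next, action) * pred
    eta = sum(new_belief.values())   # normalizing constant from equation (8)
    if eta == 0.0:
        raise ValueError("Observation has zero probability under this belief.")
    return {s: p / eta for s, p in new_belief.items()}
```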
### 4.2. State Space

Because of the Markov property, the state space $S$ must contain sufficient information for the decision-making process [14]. The state space includes, for every vehicle, the pose $[x, y, \theta]$, the velocity $v$, and the average yaw rate $yaw_{ave}$ and acceleration $a_{ave}$ over the last planning period. For the human-driven vehicles, the lateral and longitudinal intentions $[I_{lat}, I_{lon}]$ must also be included for state transition modeling. The road context, being static reference information, is not added to the state space.

The joint state $s \in S$ is denoted $s = [s_{host}, s_1, s_2, \ldots, s_N]^T$, where $s_{host}$ is the state of the host (autonomous) vehicle, $s_i$, $i \in \{1, 2, \ldots, N\}$, is the state of a human-driven vehicle, and $N$ is the number of human-driven vehicles involved. Define the metric state $x = [x, y, \theta, v, a_{ave}, yaw_{ave}]^T$, comprising the vehicle position, heading, velocity, acceleration, and yaw rate. The state of the host vehicle is then $s_{host} = x_{host}$, while the human-driven vehicle state is $s_i = [x_i, I_{lat,i}, I_{lon,i}]^T$. With an advanced perception system and V2V communication technology, we assume that the metric state $x$ is observable. Because the sensor noise is small and hardly affects the decision-making process, we do not model observation noise for the metric state. The intention state, however, cannot be directly observed, so it constitutes the partially observable variables in our paper; it must be inferred over time from the observation data and the predictive model.

### 4.3. Action Space

In our autonomous vehicle system, the decision-making module selects suitable tactical maneuvers. Specifically, in the intersection area the autonomous vehicle follows a global reference path generated by the path planning module, so the decision-making module only needs to send acceleration/deceleration commands to the control layer; as the reference path may not be straight, the steering controller adjusts the front wheel angle to follow it. The action space $A$ is therefore the discrete set $A = [acc, dec, con]$, containing acceleration, deceleration, and maintaining the current velocity.

### 4.4. Observation Space

Similar to the joint state space, the observation $z$ is denoted $z = [z_{host}, z_1, z_2, \ldots, z_N]^T$, where $z_{host}$ and $z_i$ are the observations of the host vehicle and the human-driven vehicles, respectively. The acceleration and yaw rate can be approximately computed from the speed and heading in consecutive states.

### 4.5. State Transition Model

In the state transition process, we need to model the transition probability $Pr(s' \mid s, a)$. This probability is determined by each targeted element in the scenario, so the transition model factorizes as

(10) $Pr(s' \mid s, a) = Pr(s_{host}' \mid s_{host}, a_{host}) \prod_{i=1}^{N} Pr(s_i' \mid s_i).$

At the decision-making layer we do not need a complex vehicle dynamics model, so the host vehicle's motion $Pr(s_{host}' \mid s_{host}, a_{host})$, given action $a$, can be represented simply by

(11) $x' = x + \left(v + \tfrac{a \Delta t}{2}\right) \Delta t \cos(\theta + \Delta\theta), \quad y' = y + \left(v + \tfrac{a \Delta t}{2}\right) \Delta t \sin(\theta + \Delta\theta), \quad \theta' = \theta + \Delta\theta, \quad v' = v + a \Delta t, \quad yaw_{ave}' = \frac{\Delta\theta}{\Delta t}, \quad a_{ave}' = a.$

The key problem is then to compute $Pr(s_i' \mid s_i)$, the state transition probability of the human-driven vehicles. By the law of total probability, it can be factorized as a sum over the whole action space:

(12) $Pr(s_i' \mid s_i) = \sum_{a_i} Pr(s_i' \mid s_i, a_i)\, Pr(a_i \mid s_i).$

With this equation, we only need to calculate the transition probability $Pr(s_i' \mid s_i, a_i)$ for a specific action $a_i$ and the probability $Pr(a_i \mid s_i)$ of selecting that action in the current state $s_i$.

Because the human-driven vehicle's state is $s_i = [x_i, I_i]$, the probability $Pr(s_i' \mid s_i, a_i)$ can be written as

(13) $Pr(s_i' \mid s_i, a_i) = Pr(x_i', I_i' \mid x_i, I_i, a_i) = Pr(x_i' \mid x_i, I_i, a_i)\, Pr(I_i' \mid x_i', x_i, I_i, a_i).$

For a given action $a_i$, $Pr(x_i' \mid x_i, I_i, a_i)$ equals $Pr(x_i' \mid x_i, I_{lat,i}, a_i)$. The lateral intention $I_{lat,i}$ is treated as a goal-directed driving intention that does not change during the driving process, so $Pr(x_i' \mid x_i, I_{lat,i}, a_i)$ equals $Pr(x_i' \mid x_i, a_i)$ given a reference path corresponding to $I_{lat,i}$, and can be evaluated using (11).

The remaining factor in (13) is $Pr(I_i' \mid x_i', x_i, I_i, a_i)$. The lateral intention $I_{lat,i}'$ is assumed stable, as explained above, and the longitudinal intention $I_{lon,i}'$ is assumed not to be updated in this step; it is updated with new inputs in the observation space.

With $Pr(s_i' \mid s_i, a_i)$ modeled, the remaining problem is to compute the probabilities $Pr(a_i \mid s_i)$ of the human-driven vehicles' future actions:

(14) $Pr(a_i \mid s_i) = Pr(a_i \mid x_i, I_i) = \sum_{x_{host}'} Pr(a_i \mid x_{host}', x_i, I_i)\, Pr(x_{host}' \mid x_i, I_i).$

Because $x_{host}'$ is determined by the designed policy, $Pr(x_{host}' \mid x_i, I_i)$ can be calculated by (11) given an action $a_{host}$.
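A direct transcription of the motion update (11) is shown below; the dataclass state container is an illustrative assumption.

```python
# Kinematic state update of equation (11): constant acceleration and a heading
# change dtheta over one planning interval dt. The State container is assumed.
from dataclasses import dataclass
from math import cos, sin

@dataclass
class State:
    x: float
    y: float
    theta: float
    v: float
    a_ave: float
    yaw_ave: float

def step(s: State, a: float, dtheta: float, dt: float) -> State:
    """Advance a vehicle state by dt under acceleration a, per equation (11)."""
    ds = (s.v + a * dt / 2.0) * dt          # distance using midpoint velocity
    return State(x=s.x + ds * cos(s.theta + dtheta),
                 y=s.y + ds * sin(s.theta + dtheta),
                 theta=s.theta + dtheta,
                 v=s.v + a * dt,
                 a_ave=a,
                 yaw_ave=dtheta / dt)
```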
The probability $Pr(a_i \mid x_{host}', x_i, I_i)$ is the distribution of the human-driven vehicle's actions given the new state $x_{host}'$ of the host vehicle, its own current state, and its intentions. Instead of building a complex probability model, we design a deterministic mechanism to calculate the most likely action $a_i$ given $x_{host}'$, $x_i$, and $I_i$.

In this prediction process, the host vehicle is assumed to maintain its current action over the next time step, and the action $a_i$ is the one that leads the human-driven vehicle through the potential collision area either ahead of the host vehicle, under the intention $I_{NYield}$, or behind the host vehicle, under the intention $I_{Yield}$, keeping a safe distance $d_{safe}$. In the $I_{NYield}$ case, we can calculate the lower bound $a_{i,low}$ of $a_i$ through the above process and take the largest comfortable value $a_{i,comfort}$ as the upper bound. If $a_{i,comfort} < a_{i,low}$, then $a_{i,low}$ is used as the human-driven vehicle's action; otherwise, we consider the target $a_i$ to follow a normal distribution with mean $\mu_{a_i}$ between $a_{i,low}$ and $a_{i,comfort}$, and, to simplify the model, we use the mean of these two bounds as the human-driven vehicle's action $a_i$. The $I_{Yield}$ case is analyzed in the same way.

After these steps, the transition probability $Pr(s' \mid s, a)$ is fully formulated, and the autonomous vehicle can anticipate the future motion of the scenario through this model.

### 4.6. Observation Model

The observation model is built to simulate the measurement process; the motion intention is updated here. The measurements of the human-driven vehicles are modeled under a conditional independence assumption, so the observation model is

(15) $Pr(z \mid a, s') = Pr(z_{host} \mid s_{host}') \prod_{i=1}^{N} Pr(z_i \mid s_i').$

The host vehicle's observation function is

(16) $Pr(z_{host} \mid s_{host}') \sim \mathcal{N}(z_{host} \mid x_{host}', \Sigma_{z_{host}}).$

In this paper, however, owing to the use of a V2V communication sensor, the observation error hardly affects the decision-making result, and the covariance matrix is set to zero. The human-driven vehicle's observation follows the vehicle's motion intentions. Because we do not consider observation error, the metric state values equal the state transition results, but the longitudinal intention of the human-driven vehicles in the state space is updated using the new observations and the HMM of Section 3. The new observation space is confirmed with the above step.

### 4.7. Reward Function

The candidate policies have to satisfy several evaluation criteria: autonomous vehicles should drive safely and comfortably while following the traffic rules and reaching the destination as soon as possible. We therefore design the objective function (17) considering three aspects, safety, time efficiency, and traffic laws, with weight coefficients $\mu_1$, $\mu_2$, and $\mu_3$:

(17) $R(s, a) = \mu_1 R_{safety}(s, a) + \mu_2 R_{time}(s, a) + \mu_3 R_{law}(s, a).$

The details are discussed in the following subsections. In addition, comfort is considered and discussed in the policy generation part (Section 5.1).
#### 4.7.1. Safety Reward

The safety reward function $R_{safety}(s, a)$ is based on the potential conflict status and is defined as a penalty in our strategy. If there are no potential conflicts, the safety reward is 0; a large penalty is assigned when there is a risk of collision.

In an uncontrolled intersection, the four approaching directions are defined as $A_i \in \{1, 2, 3, 4\}$ (Figure 5), and the driver's lateral intentions as $I_{lat} \in \{I_{TR}, I_{TL}, I_{GS}, I_{S}\}$. The driving trajectory of each vehicle in the intersection can thus be represented by $A_i$ and $I_{lat,j}$ and is denoted $T_{A_i, I_{lat,j}}$, $1 \le i \le 4$, $1 \le j \le 4$. The function $F$ judges the potential collision status:

(18) $F(T_x, T_y) = \begin{cases} 1, & \text{if potential conflict}, \\ 0, & \text{otherwise}, \end{cases}$

where $T_x$ and $T_y$ are the vehicles' maneuvers $T_{A_i, I_{lat,j}}$.

Figure 5: One typical scenario for calculating the safety reward.

$F(T_x, T_y)$ can be determined from the relative direction between the two cars, as shown in Table 1.

Table 1: Safe condition judgments in the intersection. Rows give the host vehicle's maneuver; column groups give the human-driven vehicle's relative position and maneuver (TL = turn left, LK = lane keeping, TR = turn right, S = stop). ★ indicates potential collision; ○ indicates no potential collision.

| Host \ Human | Left TL | Left LK | Left TR | Left S | Right TL | Right LK | Right TR | Right S | Opp. TL | Opp. LK | Opp. TR | Opp. S | Same TL | Same LK | Same TR | Same S |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Turn left | ★ | ★ | ○ | ○ | ★ | ★ | ○ | ○ | ★ | ★ | ★ | ○ | ○ | ○ | ○ | ○ |
| Lane keeping | ★ | ★ | ○ | ○ | ★ | ★ | ★ | ○ | ★ | ○ | ○ | ○ | ○ | ○ | ○ | ○ |
| Turn right | ○ | ★ | ○ | ○ | ○ | ○ | ○ | ○ | ★ | ○ | ○ | ○ | ○ | ○ | ○ | ○ |
| Stop | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ |

The safety reward is based on the following items:
(i) if $F(T_x, T_y) = 0$, the safety reward is 0 (no collision status);
(ii) if a potential collision occurs, a large penalty is applied;
(iii) if $|TTC_i - TTC_{host}| < t_{threshold}$, a penalty is applied that depends on $|TTC_i - TTC_{host}|$ and $TTC_{host}$.

#### 4.7.2. Traffic Law Reward

Autonomous vehicles should follow traffic laws when interacting with human-driven vehicles. The traffic law is modeled as a function $Law(T_x, T_y)$ for each pair of vehicles $x$ and $y$:

(19) $Law(T_x, T_y) = \begin{cases} 1, & \text{if } x \text{ has priority}, \\ 0, & \text{otherwise}, \end{cases}$

where $T_x$ and $T_y$ are the vehicles' maneuvers $T_{A_i, I_{lat,j}}$. The function $Law(T_x, T_y)$ is formulated as shown in Algorithm 1.

Algorithm 1: Traffic law formulization.

Law(T_x, T_y) ← 0
t_x ← d_x2I / v_x
t_y ← d_y2I / v_y
if t_x < t_y − Δt then
    Law(T_x, T_y) ← 1
else if t_x − t_y < Δt then
    status ← F(T_x, T_y)
    if status = 1 then
        if I_lat,x = lane keeping and I_lat,y ≠ lane keeping then
            Law(T_x, T_y) ← 1
        else if I_lat,x = I_lat,y = lane keeping and A_x − A_y = 1 or −3 then
            Law(T_x, T_y) ← 1
        else if I_lat,x = turn left and I_lat,y = turn right then
            Law(T_x, T_y) ← 1
        end if
    end if
end if
return Law(T_x, T_y)

If a behavior breaks the law, a large penalty is applied; behavior obeying the traffic laws receives zero reward.

#### 4.7.3. Time Reward

The time cost is based on the time to destination for the targeted vehicles in the intersection area:

(20) $Cost_{time} = \frac{DTG}{v_{host}},$

where $DTG$ is the distance to the driving goal. In addition, the speed limit must be considered, as discussed in the policy generation part of Section 5.
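A compact sketch of how (17)-(20) could be combined is given below. The weights, penalty constants, and helper predicates are illustrative assumptions; the paper does not list their values.

```python
# Sketch of the combined reward (17): R = mu1*R_safety + mu2*R_time + mu3*R_law.
# All magnitudes below are assumed; the paper does not specify them.
MU = (1.0, 0.1, 1.0)            # (mu1, mu2, mu3), assumed weights
COLLISION_PENALTY = -1000.0     # assumed large penalty, item (ii)
LAW_PENALTY = -500.0            # assumed penalty for breaking the law
T_THRESHOLD = 2.0               # s, assumed TTC proximity threshold, item (iii)

def safety_reward(conflict, ttc_i, ttc_host):
    if not conflict:
        return 0.0                       # item (i): no conflict, zero reward
    gap = abs(ttc_i - ttc_host)
    if gap < T_THRESHOLD:
        # Penalty grows as the TTC gap and the host's own TTC shrink.
        return COLLISION_PENALTY / (1.0 + gap) / (1.0 + ttc_host)
    return 0.0

def law_reward(host_has_priority):
    return 0.0 if host_has_priority else LAW_PENALTY

def time_reward(dtg, v_host):
    return -dtg / max(v_host, 0.1)       # negative time-to-goal cost, eq. (20)

def total_reward(conflict, ttc_i, ttc_host, host_has_priority, dtg, v_host):
    mu1, mu2, mu3 = MU
    return (mu1 * safety_reward(conflict, ttc_i, ttc_host)
            + mu2 * time_reward(dtg, v_host)
            + mu3 * law_reward(host_has_priority))
```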
## 5. Approximations on Solving the POMDP Problem

Solving a POMDP is quite difficult: the complexity of searching the total belief space is $O(|A|^H |Z|^H)$ [12], where $H$ is the prediction horizon. In this paper, we model the intention recognition process deterministically and use communication sensors to ignore the perception error, so the size of $|Z|$ is reduced to 1 in the simplified problem. To solve this problem, we first generate suitable candidate policies according to the properties of the driving task and then select a reasonable prediction interval and total horizon. After that, the approximately optimal policy is obtained by searching all candidate policies for the maximum total reward.
The policy selection process is shown in Algorithm 2, and detailed explanations follow in the subsections.

Algorithm 2: Policy selection process.

Input: prediction horizon H, time step Δt, current states s_host = x_host, s_human = [x_human, I_human]
(1) P ← generate_policy_set()
(2) for each π_k ∈ P do
(3)   for i = 1 to H/Δt do
(4)     a_host ← π_k(i)
(5)     s_host' ← update_state(s_host, a_host)
(6)     a_human ← predict_actions(s_host', s_human, I_human)
(7)     x_human' ← update_state(x_human, a_human)
(8)     I_human' ← update_intention(s_host', x_human')
(9)     s_human' ← [x_human', I_human']
(10)    R(i) ← calculate_reward(s_host', s_human')
(11)    s_host ← s_host'
(12)    s_human ← s_human'
(13)  end
(14)  R_k^total ← Σ_i R(i)
(15) end
(16) k* ← argmax_k R_k^total
(17) π* ← π_{k*}
(18) return π*

### 5.1. Policy Generation

For autonomous driving near an intersection, the desired velocity curves need to satisfy several constraints. First, except for emergency braking, acceleration constraints are applied to ensure comfort. Second, speed limit constraints are enforced: we avoid acceleration commands when the autonomous vehicle is approaching the maximum speed limit. Third, for comfort, the acceleration command should not change constantly; in other words, we need to minimize the jerk.

Similar to [11], the candidate policies are divided into three time segments: the first two segments keep constant acceleration/deceleration actions, while the third keeps a constant velocity. We use $t_1$, $t_2$, and $t_3$ to denote the durations of these three segments. To guarantee comfort, the acceleration is limited to the range from −4 m/s² to 2 m/s², and the acceleration commands are discretized in steps of 0.5 m/s². The action space can then be represented by this discretized acceleration set, after which the values of $t_1$, $t_2$, and $t_3$ and the prediction period of a single step can be chosen. An example of policy generation is shown in Figure 6.

Figure 6: An example of the policy generation process. (a) shows the generated policies and (b) the corresponding speed profiles. The interval of each prediction step is 0.5 s, the current speed is 12 m/s, and the speed limit is 20 m/s. The bold black line is one policy: in the first 3 seconds the autonomous vehicle decelerates at −3.5 m/s², then accelerates at 2 m/s² for 4 seconds, and finally holds the resulting speed in the last second. In this case, 109 policies were generated, few enough for fast replanning. (a) (b)

### 5.2. Planning Horizon Selection

After building the policy generation model, the next problem is to select a suitable planning horizon. A longer horizon can lead to a better solution but consumes more computing resources. However, since our purpose is to handle the interaction problem in the uncontrolled intersection, we only need to consider the situation before the autonomous vehicle gets through; in our algorithm, we set the prediction horizon to 8 seconds. In addition, when updating the future state of each vehicle under each policy, a car-following mode is used after the autonomous vehicle passes through the intersection area.
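A sketch of the three-segment policy enumeration described above is shown next; the segment durations and the 0.5 m/s² grid follow the text, while the pruning of out-of-bounds speed profiles is an assumed implementation detail.

```python
# Enumerate three-segment acceleration policies (Section 5.1): constant
# acceleration in segments t1 and t2, constant velocity in t3.
import numpy as np

A_MIN, A_MAX, A_STEP = -4.0, 2.0, 0.5     # m/s^2, from the comfort constraints
ACCELS = np.arange(A_MIN, A_MAX + A_STEP, A_STEP)

def generate_policy_set(v0, t1, t2, t3, dt, v_max):
    """Return per-step acceleration command sequences over the horizon."""
    n1, n2, n3 = int(t1 / dt), int(t2 / dt), int(t3 / dt)
    policies = []
    for a1 in ACCELS:
        for a2 in ACCELS:
            profile = [a1] * n1 + [a2] * n2 + [0.0] * n3
            # Prune profiles whose speed leaves [0, v_max] at any step
            # (assumed handling of the speed limit constraint).
            v = v0 + np.cumsum(np.asarray(profile) * dt)
            if (v >= 0.0).all() and (v <= v_max).all():
                policies.append(profile)
    return policies

# Example matching Figure 6's setting: v0 = 12 m/s, segments of 3 s, 4 s, 1 s,
# dt = 0.5 s, speed limit 20 m/s.
# print(len(generate_policy_set(12.0, 3.0, 4.0, 1.0, 0.5, 20.0)))
```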
### 5.2. Planning Horizon Selection

After building the policy generation model, the next problem is to select a suitable planning horizon. A longer horizon can lead to a better solution but consumes more computing resources. However, since our purpose is to handle the interaction problem at the uncontrolled intersection, we only need to consider the situation before the autonomous vehicle gets through. In our algorithm, we set the prediction horizon to 8 seconds. In addition, when updating the future state of each vehicle under each policy, a car-following mode is used after the autonomous vehicle passes through the intersection area.

### 5.3. Time Step Selection

Another open parameter is the prediction time step. The intention prediction algorithm and the POMDP are computed at each step, so if the time step is $t_{step}$, the total number of computations is $H/t_{step}$: a smaller time step means more computation time. To balance this, we use a simple adaptive time step mechanism, selecting the step based on the TTC of the autonomous vehicle. If the host vehicle is far away from the intersection, a large time step can be used; but when the TTC is small, a low $t_{step}$ is applied to ensure safety.
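One possible reading of this adaptive scheme, as a short Python sketch; the TTC breakpoints and step sizes below are illustrative assumptions, not values given in the paper.

```python
def adaptive_time_step(ttc, t_min=0.25, t_max=1.0):
    """Pick the prediction time step from the host vehicle's time-to-collision:
    far from the intersection (large TTC) -> coarse step, close -> fine step.
    Breakpoints are illustrative assumptions."""
    if ttc > 8.0:
        return t_max
    if ttc > 4.0:
        return 0.5
    return t_min
```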
## 6. Experiment and Results

### 6.1. Settings

We evaluate our approach with PreScan 7.1.0 [21], a simulation tool for autonomous driving and connected vehicles. Using this software, we can build the testing scenarios (Figure 7) and add vehicles with dynamic models. To obtain a realistic social interaction, a driving simulator is added to our experiment (Figure 8): the human-driven vehicle is driven by several people during the experiment, and the autonomous vehicle makes decisions based on the human-driven vehicle's driving behavior. The reference trajectory for the autonomous vehicle is generated by the path planning module, and the human-driven vehicle's data (e.g., position, velocity, and heading) are transferred through the V2V communication sensor. The decision-making module sends the desired velocity command to a PID controller that follows the reference path. All policies in the experiments use a planning horizon of H = 8 s, discretized into time steps of 0.5 s.

Figure 7: Testing scenario. Autonomous vehicle B and human-driven vehicle A are both approaching the uncontrolled intersection. To cross the intersection successfully, the autonomous vehicle must interact with the human-driven vehicle.

Figure 8: Logitech G27 driving simulator.

### 6.2. Results

It is difficult to compare different approaches in exactly the same scenario because the environment is dynamic. We therefore select two typical situations and special settings to make a comparison possible: the same initial conditions (position, orientation, and velocity) are used for each vehicle across tests, and two typical situations, the human-driven vehicle getting through before or after the autonomous vehicle, are compared in this section. With the same initial state, different reactions occur under the different methods. We compare our approach with a reactive-based method [6]; the key difference between the two methods is that our approach considers the human-driven vehicle's driving intention.

In the first experiment, the human-driven vehicle yields to the autonomous vehicle during the interaction. The results are shown in Figures 9 and 10. Figure 9 gives a visual comparison of the two approaches: from almost the same initial state (position and velocity), our approach lets the autonomous vehicle pass through the intersection more quickly and reasonably.

Figure 9: The visualized passing sequence. (a) is the result of our approach and (b) the result of the reactive approach without considering intention. The black vehicle is the autonomous vehicle and the red car the human-driven vehicle. Each drawn vehicle represents the position at a specific time T, with an interval of 1 second.

Figure 10: Case test 1. In this case, the human-driven vehicle passes through the intersection after the autonomous vehicle. (a), (c), and (e) show the performance of our method, while (b), (d), and (f) are from the strategy that does not consider driving intention. (a) and (b) are the velocity profiles and the corresponding driving intentions; for the longitudinal intention, label 1 means yielding and label 2 not yielding, and for the lateral intention, 1 means turning left, 2 turning right, 3 going straight, and 4 stop. The intentions in (b) are not used by that method and are shown only for analysis. (c) and (d) are the distances to the collision area for the autonomous vehicle and the human-driven vehicle, respectively.
(e) and (f) are the predicted and true motions of the human-driven vehicle at time 1.5 s, with a prediction length of 8 s. The red curves in these subfigures belong to the autonomous vehicle and the blue lines to the human-driven vehicle; the green lines in (e) and (f) are the predicted velocity curves of the human-driven vehicle.

Let us look at Figure 10 for a detailed explanation. In the first 1.2 s in Figures 10(a) and 10(c), the autonomous vehicle maintains its speed, understanding that the human-driven vehicle will not yield. Then the autonomous vehicle detects the yielding intention of the human-driven vehicle and recognizes that its lateral intention is to go straight. Among the candidate policies, the autonomous vehicle selects the acceleration strategy with the maximum reward and finally crosses the intersection. In this process, the autonomous vehicle clearly understands the human-driven vehicle's yielding intention. Figure 10(e) is an example of understanding the human-driven vehicle's behavior based on the ego vehicle's future actions at a specific time: our strategy predicts the future actions of the human-driven vehicle. Although the velocity curves after 1 s do not match exactly, this does not degrade the performance of our method, because we use a deterministic model in the prediction process and the predicted value lies between two boundaries that ensure safety. Moreover, the actions of the autonomous vehicle throughout this process also help the human-driven vehicle understand the autonomous vehicle's not-yielding intention. In this case, cooperative driving behaviors are performed by both vehicles.

If the intention is not considered, we obtain the results in Figures 10(b), 10(d), and 10(f). After 2 s in Figure 10(b), although the human-driven vehicle signals a yielding intention, the autonomous vehicle cannot recognize it and infers a potential collision from the constant velocity assumption. It therefore decreases its speed, but the human-driven vehicle slows down as well. This mutual hesitation leads both vehicles to creep toward the intersection; finally, the human-driven vehicle stops at the stop line, and only then can the autonomous vehicle pass. In this strategy, the human-driven vehicle's future motion is assumed constant (Figure 10(f)). Without understanding the human-driven vehicle's intentions, such a strategy can aggravate congestion.

In the second experiment, the human-driven vehicle tries to get through the intersection first. The results are shown in Figures 11 and 12. This case is quite typical because many real-world traffic accidents happen in this situation: if one vehicle tries to cross an intersection while violating the law, another vehicle is in great danger if it does not understand that behavior. From the visualized performance in Figure 11, our method is safer than the reactive approach, since a near-collision situation occurs in Figure 11(b). In detail, Figure 12(a) shows that our strategy performs deceleration actions once the not-yielding intention is recognized at 0.8 s. Without understanding the human-driven vehicle's motion intention, the response is delayed by about 1 second, which may be quite dangerous. In addition, our method predicts the human-driven vehicle's future motion well (Figure 12(e)).

Figure 11: The visualized passing sequence for the case of the human-driven vehicle getting through first.
(a) is the result of our approach and (b) that of the reactive-based approach.

Figure 12: Case test 2. In this case, the human-driven vehicle passes through the intersection before the autonomous vehicle under the different strategies. The definition of each subfigure is the same as in Figure 10.

The results of these two cases demonstrate that our algorithm handles typical scenarios and performs better than a traditional reactive controller: the autonomous vehicle is driven more safely, quickly, and comfortably under our strategy.
## 7. Conclusion and Future Work

In this paper, we proposed an autonomous driving decision-making algorithm that accounts for the uncertain intentions of human-driven vehicles at an uncontrolled intersection. The lateral and longitudinal intentions are recognized by a continuous HMM. Based on the HMM and a POMDP, we model the general decision-making process and use an approximate approach to solve this complex problem. Finally, we use the PreScan software and a driving simulator to emulate the social interaction process. The experimental results show that autonomous vehicles using our approach can pass through uncontrolled intersections more safely and efficiently than with a strategy that ignores human-driven vehicles' driving intentions.

In the near future, we aim to implement our approach on a real autonomous vehicle and perform real-world experiments. In addition, more precise intention recognition algorithms will be investigated; methods such as probabilistic graphical models can be used to obtain a distribution over each intention. Finally, designing online POMDP planning algorithms is also a valuable direction.

---
# Intention-Aware Autonomous Driving Decision-Making in an Uncontrolled Intersection

**Authors:** Weilong Song; Guangming Xiong; Huiyan Chen

**Journal:** Mathematical Problems in Engineering (2016)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2016/1025349
---

## Abstract

Autonomous vehicles need to perform socially accepted behaviors in complex urban scenarios that include human-driven vehicles with uncertain intentions. This leads to many difficult decision-making problems, such as deciding on a lane change maneuver and generating policies to pass through intersections. In this paper, we propose an intention-aware decision-making algorithm to solve this challenging problem in an uncontrolled intersection scenario. In order to consider uncertain intentions, we first develop a continuous hidden Markov model to predict both the high-level motion intentions (e.g., turn right, turn left, and go straight) and the low-level interaction intentions (e.g., the yield status of related vehicles). Then a partially observable Markov decision process (POMDP) is built to model the general decision-making framework. Owing to the difficulty of solving POMDPs, we use proper assumptions and approximations to simplify this problem. A human-like policy generation mechanism is used to generate the possible candidates. A future motion model of the human-driven vehicles is applied in the state transition process, and the intention is updated at each prediction time step. The reward function, which considers driving safety, traffic laws, time efficiency, and so forth, is designed to calculate the optimal policy. Finally, our method is evaluated in simulation with the PreScan software and a driving simulator. The experiments show that our method enables autonomous vehicles to pass through uncontrolled intersections safely and efficiently.

---

## Body

## 1. Introduction

Autonomous driving technology has developed rapidly in the last decade. In the DARPA Urban Challenge [1], autonomous vehicles showed their ability to interact in typical scenarios such as T-intersections and lane driving. In 2011, Google released its autonomous driving platforms; over 10,000 miles of autonomous driving were completed by each vehicle under various traffic conditions [2]. Many large automobile companies also plan to launch autonomous driving products in the next several years. With this significant progress, autonomous vehicles have shown their potential to reduce the number of traffic accidents and to relieve traffic congestion.

One key challenge for autonomous vehicles driving in the real world is how to deal with uncertainties, such as inaccurate perception and unclear motion intentions. With the development of intelligent transportation systems (ITS), perception uncertainty can be reduced through vehicle-to-X (V2X) technology, and the interactions between autonomous vehicles can be handled by centralized or decentralized cooperative control algorithms. However, human-driven vehicles will remain predominant for some time, and the uncertainty of their driving intentions will persist owing to the lack of an "intention sensor." Human drivers anticipate potential conflicts, continuously make decisions, and adjust their driving behaviors, which are often not rational. Therefore, autonomous vehicles need to understand human drivers' driving intentions and choose proper actions to behave cooperatively.

In this paper, we focus on solving this problem in an uncontrolled intersection scenario. The uncontrolled intersection is a complex scenario with a high accident rate. In the US, stop signs can be used to regulate the vehicles' passing sequence.
However, such signs are rarely used in China, and right-of-way rules are often broken by aggressive drivers. Human drivers are prone to perception failures, misunderstandings, and wrong decisions; in such cases, even with stop signs, the "first come, first served" rule is likely to be broken. Moreover, human driving behaviors may change as time goes on. Given these uncertain situations, the specific layout, and the traffic rules, autonomous vehicles approaching an intersection should be able to recognize the behavior of other vehicles and respond suitably, considering the future evolution of the traffic scenario (see Figure 1).

Figure 1: A motivating example. Autonomous vehicle B is going straight, while human-driven vehicle A has three potential driving directions: going straight, turning right, or turning left. If vehicle A turns right, it will not affect the normal driving of autonomous vehicle B, but the other maneuvers, turning left and going straight, lead to a passing sequence problem. If the two vehicles have a potential conflict, autonomous vehicle B simulates the trajectories of vehicle A over a prediction horizon and selects the best actions in the current scenario. The vehicles drawn with dashed lines are the predicted future positions. The red dashed lines are the virtual lane assumption used in this paper, meaning that the vehicles are considered to drive inside the lane. The dark blue area is the potential collision region for the two cars.

With these requirements, we propose an intention-aware decision-making algorithm for autonomous driving at an uncontrolled intersection. Specifically, we first use easily observed features (e.g., velocity and position) and a continuous hidden Markov model (HMM) [3] to build the intention prediction model, which outputs the lateral intentions (e.g., turn right, turn left, and go straight) of human-driven vehicles and the longitudinal behavior (e.g., the yielding status) of related vehicles. Then, a generative partially observable Markov decision process (POMDP) framework is built to model the autonomous driving decision-making process. This framework is able to deal with the uncertainties in the environment, including human-driven vehicles' driving intentions. However, computing the optimal policy of a general POMDP is intractable, so we make reasonable approximations and assumptions to solve the problem in a computationally inexpensive way. A human-like policy generation mechanism computes the potential policy set; a scenario prediction mechanism simulates the future actions of human-driven vehicles based on their lateral and longitudinal intentions; and proper reward functions are designed to evaluate each strategy, considering travel time, safety, and traffic laws. The proposed method has been thoroughly evaluated in simulation. The main contributions of this paper are as follows:

(i) Modeling a generative autonomous driving decision-making framework that considers uncertainties (e.g., human drivers' intentions) in the environment.

(ii) Building an intention prediction model that uses easily observed parameters (e.g., velocity and position) to recognize the realistic lateral and longitudinal behaviors of human-driven vehicles.
(iii) Using reasonable approximations and assumptions to build an efficient solver based on the specific layout of an uncontrolled intersection area.

The structure of this paper is as follows. Section 2 reviews the related work, and the two-layer HMM-based intention prediction algorithm is discussed in Section 3. Section 4 models the general autonomous driving decision-making process as a POMDP, while the approximations and the simplified solver are described in Section 5. In Section 6, we evaluate our algorithm in a simulated uncontrolled intersection scenario with the PreScan software and a driving simulator. Finally, the conclusion and future work are discussed in Section 7.

## 2. Related Work

The decision-making module is one of the most important components of an autonomous vehicle, connecting environment perception and vehicle control. Numerous research works have therefore addressed the autonomous driving decision-making problem in the last decade. The most common method is to manually define specific driving rules corresponding to situations; both finite state machines (FSMs) and hierarchical state machines (HSMs) have been used to evaluate situations and decide within such frameworks [4–6]. In the DARPA Urban Challenge (DUC), the winner Boss used a rule-based behavior generation mechanism to obey predefined driving rules based on the obstacle vehicles' metrics [1, 6]; Boss checked the vehicle's acceleration abilities and the available gaps to decide whether merging into a new lane or passing an intersection was safe. Similarly, the decision-making system of "Junior" [7], which ranked second in the DUC, was based on an HSM with 13 manually defined states. Owing to advantages such as simple implementation and traceability, this kind of framework is widely used on many autonomous driving platforms. However, these approaches usually rely on constant velocity assumptions and do not consider the surrounding vehicles' future reactions to the host vehicle's actions; without this ability, the driving decisions carry potential risks [8].

To consider the future evolution of the scenario, planning and utility-based approaches have been proposed for decision-making. Bahram et al. proposed a prediction-based reactive strategy to generate autonomous driving strategies [9]: a Bayesian classifier predicts the future motion of obstacle vehicles, and a tree-based search mechanism finds the optimal driving strategy using multilevel cost functions. However, the surrounding vehicles' reactions to the autonomous vehicle's actions are not considered in their framework. Wei et al. proposed a comprehensive autonomous driver model that emulates human driving behavior [10]; the human-driven vehicles are assumed to follow a proper social behavior model, and the best velocity profiles are generated for autonomous freeway driving. Nonetheless, their method does not consider the motion intentions of human-driven vehicles and only targets in-lane driving. In subsequent work, Wei et al. modeled traffic interactions and realized autonomous vehicle social behavior at a highway entrance ramp [11]: the human-driven vehicles' motion intentions are modeled by a Bayesian model, and their future reactions are introduced based on the yielding/not-yielding intentions at the first prediction step. Autonomous vehicles can perform socially cooperative behavior within their framework.
However, they do not consider the intention uncertainty over the prediction horizon.

POMDPs provide a mathematical framework for solving decision-making problems under uncertainty. Bai et al. proposed an intention-aware approach for autonomous driving in scenarios with many pedestrians (e.g., on a campus) [12]: in their framework, the hybrid A* algorithm generates the global path, while a POMDP planner controls the velocity of the autonomous vehicle, solved by the online POMDP solver DESPOT [13]. Brechtel et al. presented a probabilistic decision-making algorithm using a continuous POMDP [14]; they focus on the uncertainties of incomplete and inaccurate perception in the intersection area, whereas our goal is to deal with the uncertain intentions of human-driven vehicles. However, online POMDP solvers generally require large computational resources and much time [15, 16], which limits their use on real-world autonomous driving platforms. Ulbrich and Maurer designed a two-step decision-making algorithm to reduce the complexity of the POMDP in a lane change scenario [17], with eight manually defined POMDP states to simplify the problem. Cunningham et al. proposed a multipolicy decision-making method for lane changing and merging scenarios [18]: POMDPs model the decision-making problem, and a multivehicle simulation mechanism generates the optimal high-level policy for the autonomous vehicle to execute. However, motion intentions are not considered.

Overall, the autonomous driving decision-making problem with uncertain driving intentions is still challenging. It is necessary to build an effective behavior prediction model for human-driven vehicles, to incorporate human-driven vehicles' intentions and behaviors into the autonomous vehicle's decision-making system, and to generate suitable actions that let autonomous vehicles drive safely and efficiently. This work addresses the problem by first building an HMM-based intention prediction model, then modeling human-driven vehicles' intentions in a POMDP framework, and finally solving it with an approximate method.

## 3. HMM-Based Intention Prediction

In order to pass through an uncontrolled intersection, autonomous vehicles should be able to predict the driving intentions of human-driven vehicles. Estimating a driver's behavior is very difficult because the state of a vehicle driver lies in a high-dimensional feature space. Instead of driver-related features (e.g., gas pedal, brake pedal, and the driver's vision), easily observed parameters are used to build the intention prediction model in this paper.

The vehicle motion intention $I$ considered in this paper has two aspects: the lateral intention $I_{lat} \in \{I_{TR}, I_{TL}, I_{GS}, I_S\}$ (turn right, turn left, go straight, and stop) and the longitudinal intention $I_{lon} \in \{I_{Yield}, I_{NYield}\}$. The lateral intention is a high-level driving maneuver determined by the human driver's long-term decision-making process; it typically does not change during the driving process and determines the future trajectory of the human-driven vehicle. In particular, the intention to stop is treated as a lateral intention in our model because it can be predicted using data from the human-driven vehicle alone. The longitudinal intention, in contrast, is a cooperative behavior that occurs only when the vehicle interacts with other vehicles.
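For concreteness, the two intention spaces can be written down as plain enumerations; this is simply a Python transcription of the sets defined above, with names of our own choosing.

```python
from enum import Enum

class LateralIntention(Enum):
    TURN_RIGHT = "TR"
    TURN_LEFT = "TL"
    GO_STRAIGHT = "GS"
    STOP = "S"        # treated as lateral: predictable from the vehicle alone

class LongitudinalIntention(Enum):
    YIELD = "Yield"       # cooperative behavior toward a conflicting vehicle
    NOT_YIELD = "NYield"
```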
We first describe the HMM and then formulate our intention prediction model.

### 3.1. HMM

A HMM consists of a set of $N$ finite "hidden" states and a set of $M$ observable symbols per state. The state transition probabilities are defined as $A = \{a_{ij}\}$, where

$$a_{ij} = P(q_{t+1} = j \mid q_t = i), \quad 1 \le i, j \le N. \tag{1}$$

The initial state distribution is denoted $\pi = \{\pi_i\}$, where

$$\pi_i = P(q_1 = i), \quad 1 \le i \le N. \tag{2}$$

Because the observation symbols are continuous parameters, we use a Gaussian mixture model (GMM) [19] to represent their probability density functions:

$$b_j(o) = \sum_{k=1}^{M} c_{jk}\, \mathcal{N}(o \mid \mu_{jk}, \Sigma_{jk}), \quad 1 \le j \le N, \tag{3}$$

where $c_{jk}$ is the mixture coefficient of the $k$th mixture in the $j$th state and $\mathcal{N}$ is the pdf of a Gaussian distribution with mean $\mu$ and covariance $\Sigma$ evaluated at observation $o$. The mixture coefficients satisfy the constraints

$$\sum_{k=1}^{M} c_{jk} = 1, \quad c_{jk} > 0, \ 1 \le j \le N, \ 1 \le k \le M, \tag{4}$$

and

$$\int_{-\infty}^{+\infty} b_j(o)\, do = 1, \quad 1 \le j \le N. \tag{5}$$

A HMM is then completely defined by the number of hidden states $N$ and the probability tuple $\lambda = (\pi, A, C, \mu, \Sigma)$.

In the training process, we use the Baum-Welch method [20] to estimate the model parameters for each driver intention $I$. Once the model parameters corresponding to the different driver intentions have been trained, we can estimate the driver's intention in the recognition process. The prediction process for the lateral intentions is shown in Figure 2.

Figure 2: Prediction process for the HMM. The observed sequence is evaluated by four HMMs; the forward algorithm calculates the conditional probabilities, and the intention corresponding to the largest value is taken as the vehicle's intention.
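As a sketch of the recognition step in Figure 2, the following NumPy/SciPy code evaluates an observation sequence under a Gaussian-mixture HMM with the forward algorithm and picks the most likely intention; the array shapes and function names are our own assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_forward(obs, pi, A, c, mu, Sigma):
    """Log-likelihood log P(obs | lambda) under a mixture-emission HMM via the
    forward algorithm in log space. Shapes: obs (T, d); pi (N,); A (N, N);
    c (N, M); mu (N, M, d); Sigma (N, M, d, d)."""
    T, N, M = obs.shape[0], pi.shape[0], c.shape[1]
    # Emission log-probabilities log b_j(o_t) from the mixture, eq. (3).
    log_b = np.empty((T, N))
    for j in range(N):
        comp = np.stack([c[j, k] * multivariate_normal.pdf(obs, mu[j, k], Sigma[j, k])
                         for k in range(M)])
        log_b[:, j] = np.log(comp.sum(axis=0) + 1e-300)
    # Forward recursion: alpha_t(j) = [sum_i alpha_{t-1}(i) a_ij] * b_j(o_t).
    log_alpha = np.log(pi + 1e-300) + log_b[0]
    for t in range(1, T):
        trans = log_alpha[:, None] + np.log(A + 1e-300)
        log_alpha = np.logaddexp.reduce(trans, axis=0) + log_b[t]
    return np.logaddexp.reduce(log_alpha)

def classify_intention(obs, models):
    """Pick the intention whose trained HMM gives the largest likelihood,
    models being a dict name -> (pi, A, c, mu, Sigma)."""
    return max(models, key=lambda name: log_forward(obs, *models[name]))
```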
### 3.2. HMM-Based Intention Prediction Process

Given a continuous HMM, the intention prediction process is divided into two steps. The first step focuses on the lateral intention. The training inputs of each vehicle's lateral intention model at time $t$ are defined as $B_{lateral} = \{L, v, a, yaw\}$, where $L$ is the distance to the intersection, $v$ is the longitudinal velocity, $a$ is the longitudinal acceleration, and $yaw$ is the yaw rate; the output of this model is the motion intention $I_{lat} \in \{I_{TR}, I_{TL}, I_{GS}, I_S\}$. The corresponding HMMs $\lambda_{TR}$, $\lambda_{TL}$, $\lambda_{GS}$, and $\lambda_S$ can be trained.

The next step concerns the longitudinal intention. Its probability can be decomposed with the total probability formula:

$$\begin{aligned} P(I_{Yield} \mid B) &= \sum_{I_{lat}} P(I_{Yield} \mid I_{lat}, B)\, P(I_{lat} \mid B) \\ &= P(I_{Yield} \mid I_{TR}, B) P(I_{TR} \mid B) + P(I_{Yield} \mid I_{TL}, B) P(I_{TL} \mid B) \\ &\quad + P(I_{Yield} \mid I_{GS}, B) P(I_{GS} \mid B) + P(I_{Yield} \mid I_S, B) P(I_S \mid B), \end{aligned} \tag{6}$$

where $B$ is the behavior data comprising $B_{lateral}$ and $B_{lon}$.

In this process, we assume that the lateral intention $I_{lat}$ is predicted correctly by a deterministic HMM in the first step, so that $I_{lat}$ is determined by the lateral prediction result $I_{latPredict}$: $P(I_{lat} \mid B) = 1$ if $I_{lat} = I_{latPredict}$ and $P(I_{lat} \mid B) = 0$ otherwise. Equation (6) then reduces to

$$P(I_{Yield} \mid B) = P(I_{Yield} \mid B, I_{latPredict})\, P(I_{latPredict} \mid B) = P(I_{Yield} \mid B, I_{latPredict}). \tag{7}$$

The problem thus becomes modeling $P(I_{Yield} \mid B, I_{latPredict})$. The features used in longitudinal intention prediction are $B_{lon} = \{\Delta v, \Delta a, \Delta DTC\}$, where $\Delta v = v_{social} - v_{host}$, $\Delta a = a_{social} - a_{host}$, and $\Delta DTC = DTC_{social} - DTC_{host}$; $DTC$ denotes the distance to the potential collision area. The output of the longitudinal intention prediction model is the longitudinal motion intention $I_{lon} \in \{I_{Yield}, I_{NYield}\}$.

Instead of building a generative model, we use a deterministic approach that restricts $P(I_{Yield} \mid B, I_{latPredict})$ to 0 or 1. Thus, two types of HMMs, $\lambda_{Y,I_{lat}}$ and $\lambda_{N,I_{lat}}$, are trained for each $I_{lat} \in \{I_{TR}, I_{TL}, I_{GS}, I_S\}$. Two test examples for lateral and longitudinal intention prediction are shown in Figures 3 and 4, which show that our approach recognizes the human-driven vehicle's lateral and longitudinal intentions successfully.

Figure 3: Lateral intention prediction example. The true intention of the human-driven vehicle is to turn left in this scenario. In the first subfigure, the value 1 on the y-axis means turn left, 2 means turn right, 3 means go straight, and 4 means stop.

Figure 4: An example of predicting longitudinal intentions, based on the scenario of Figure 1 with both vehicles going straight. The value 1 on the y-axis of the first subfigure denotes the intention of yielding, while 2 represents not yielding. In the first 2.8 s, the predicted intention is yielding; after that, owing to the acceleration action and the smaller relative DTC, the autonomous vehicle recognizes the human-driven vehicle's not-yielding intention.
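The two-stage scheme of (7) can be sketched on top of the `log_forward`/`classify_intention` helpers above; the dictionaries holding the trained $\lambda$ parameters are hypothetical containers of our own design.

```python
def predict_intentions(b_lateral, b_lon, lateral_models, yield_models):
    """Two-stage intention prediction following eq. (7).
    lateral_models: {'TR'|'TL'|'GS'|'S': (pi, A, c, mu, Sigma)}
    yield_models:   {('Y'|'N', lat): (pi, A, c, mu, Sigma)}"""
    # Stage 1: deterministic lateral intention from the best-scoring HMM.
    i_lat = classify_intention(b_lateral, lateral_models)
    # Stage 2: yield vs. not-yield, conditioned on the predicted lateral intention.
    score_y = log_forward(b_lon, *yield_models[("Y", i_lat)])
    score_n = log_forward(b_lon, *yield_models[("N", i_lat)])
    return i_lat, ("Yield" if score_y > score_n else "NYield")
```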
## 4. Modeling Autonomous Driving Decision-Making in a POMDP Framework

For the decision-making process, the key problem is how to design a policy that performs optimal actions under uncertainty. This requires not only encoding the traffic laws but also considering the driving uncertainties of human-driven vehicles: facing potential conflicts, human-driven vehicles yield to autonomous vehicles only with some probability, and aggressive drivers may violate the traffic laws. Such elements should be captured in a sufficiently powerful decision-making framework. We therefore model the autonomous driving decision-making problem in a general POMDP framework in this section.

### 4.1. POMDP Preliminaries

A POMDP can be formalized as a tuple $\{S, A, T, Z, O, R, \gamma\}$, where $S$ is a set of states, $A$ is the action space, and $Z$ denotes a set of observations. The conditional function $T(s', a, s) = Pr(s' \mid s, a)$ models the probability of transitioning to state $s' \in S$ when the system takes action $a \in A$ in state $s \in S$. The observation function $O(z, s', a) = Pr(z \mid s', a)$ models the probability of observing $z \in Z$ when action $a \in A$ is taken and the resulting state is $s' \in S$. The reward function $R(s, a)$ gives the immediate reward for taking action $a$ in state $s$, and $\gamma \in [0, 1]$ is the discount factor that balances immediate and future rewards.

Because the system contains partially observed states such as intentions, a belief $b \in B$ is maintained. A belief update function $\tau$ is defined as $b' = \tau(b, a, z)$: if the agent takes action $a$ and receives observation $z$, the new belief $b'$ is obtained through Bayes' rule,

$$b'(s') = \eta\, O(s', a, z) \sum_{s \in S} T(s, a, s')\, b(s), \tag{8}$$

where $\eta = 1 / \sum_{s' \in S} O(s', a, z) \sum_{s \in S} T(s, a, s')\, b(s)$ is a normalizing constant.

A key concept in POMDP planning is a policy, a mapping $\pi$ that specifies the action $a = \pi(b)$ at belief $b$. To solve the POMDP, an optimal policy $\pi^*$ is sought that maximizes the expected total reward:

$$\pi^* = \arg\max_{\pi} E\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, \pi(b_t)) \,\middle|\, b_0, \pi\right], \tag{9}$$

where $b_0$ is the initial belief.
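For a finite state set, the Bayes update (8) is only a few lines of NumPy. The sketch below assumes tabular transition and observation models indexed as `T[s, a, s']` and `O[s', a, z]`; this is an illustration of the generic update, not code from the paper.

```python
import numpy as np

def belief_update(b, a, z, T, O):
    """Discrete Bayes filter implementing eq. (8).
    b: (S,) prior belief; T: (S, A, S) transition probs Pr(s'|s,a);
    O: (S, A, Z) observation probs Pr(z|s',a)."""
    predicted = b @ T[:, a, :]          # sum_s T(s, a, s') b(s)
    unnorm = O[:, a, z] * predicted     # multiply by O(s', a, z)
    eta = unnorm.sum()
    if eta == 0.0:
        raise ValueError("observation has zero probability under this belief")
    return unnorm / eta                 # normalized posterior b'(s')
```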
### 4.2. State Space

Because of the Markov property, the state space $S$ must contain sufficient information for the decision-making process [14]. The state space includes, for every vehicle, the pose $[x, y, \theta]$, the velocity $v$, and the average yaw rate $yaw_{ave}$ and acceleration $a_{ave}$ over the last planning period. For the human-driven vehicles, the lateral and longitudinal intentions $[I_{lat}, I_{lon}]$ also need to be contained for state transition modeling. The road context knowledge is static reference information and is therefore not added to the state space.

The joint state $s \in S$ is denoted $s = [s_{host}, s_1, s_2, \ldots, s_N]^T$, where $s_{host}$ is the state of the host (autonomous) vehicle, $s_i$, $i \in \{1, 2, \ldots, N\}$, are the states of the human-driven vehicles, and $N$ is the number of human-driven vehicles involved. Let us define the metric state $x = [x, y, \theta, v, a_{ave}, yaw_{ave}]^T$, comprising the vehicle position, heading, velocity, acceleration, and yaw rate. The state of the host vehicle is then $s_{host} = x_{host}$, while a human-driven vehicle's state is $s_i = [x_i, I_{lat,i}, I_{lon,i}]^T$. With an advanced perception system and V2V communication technology, we assume that the metric state $x$ can be observed; because the sensor noise is small and hardly affects the decision-making process, we do not model observation noise for the metric state. The intention state, however, cannot be directly observed, so it constitutes the partially observable variables in our formulation; it must be inferred from observation data and the predictive model over time.
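Written as plain containers, the joint state might look as follows; the class and field names are our own, chosen to mirror the definitions above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MetricState:            # x = [x, y, theta, v, a_ave, yaw_ave]^T
    x: float
    y: float
    theta: float
    v: float
    a_ave: float
    yaw_ave: float

@dataclass
class HumanState:             # s_i = [x_i, I_lat_i, I_lon_i]^T
    metric: MetricState
    i_lat: str                # 'TR' | 'TL' | 'GS' | 'S'
    i_lon: str                # 'Yield' | 'NYield'

@dataclass
class JointState:             # s = [s_host, s_1, ..., s_N]^T
    host: MetricState
    humans: List[HumanState]
```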
### 4.3. Action Space

In our autonomous vehicle system, the decision-making module selects suitable tactical maneuvers. Specifically, in the intersection area the autonomous vehicle follows a global reference path generated by the path planning module, so the decision-making module only needs to send acceleration/deceleration commands to the control layer. As the reference path may not be straight, the steering control module adjusts the front wheel angle to follow the reference path. Therefore, the action space $A$ can be defined as the discrete set $A = [acc, dec, con]$, containing the commands accelerate, decelerate, and maintain the current velocity.

### 4.4. Observation Space

Similar to the joint state space, the observation $z$ is denoted $z = [z_{host}, z_1, z_2, \ldots, z_N]^T$, where $z_{host}$ and $z_i$ are the observations of the host vehicle and the human-driven vehicles, respectively. The acceleration and yaw rate can be approximately calculated from the speed and heading in consecutive states.

### 4.5. State Transition Model

In the state transition process, we need to model the transition probability $Pr(s' \mid s, a)$. This probability is determined by each targeted element in the scenario, so the transition model factorizes as

$$Pr(s' \mid s, a) = Pr(s'_{host} \mid s_{host}, a_{host}) \prod_{i=1}^{N} Pr(s'_i \mid s_i). \tag{10}$$

In the decision-making layer, we do not need a complex vehicle dynamic model; the host vehicle's motion $Pr(s'_{host} \mid s_{host}, a_{host})$ can be represented by the following kinematic update given action $a$:

$$\begin{aligned} x' &= x + \left(v + \tfrac{a \Delta t}{2}\right) \Delta t \cos(\theta + \Delta\theta), \\ y' &= y + \left(v + \tfrac{a \Delta t}{2}\right) \Delta t \sin(\theta + \Delta\theta), \\ \theta' &= \theta + \Delta\theta, \\ v' &= v + a \Delta t, \\ yaw'_{ave} &= \frac{\Delta\theta}{\Delta t}, \\ a'_{ave} &= a. \end{aligned} \tag{11}$$
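As a quick check of (11), here is the same update as a small Python function operating on the `MetricState` container sketched above; treating $\Delta\theta$ as an input from the reference path is our own assumption.

```python
import math

def propagate(s: MetricState, a: float, d_theta: float, dt: float) -> MetricState:
    """Kinematic state update of eq. (11) for one prediction step."""
    ds = (s.v + 0.5 * a * dt) * dt          # distance covered at midpoint velocity
    return MetricState(
        x=s.x + ds * math.cos(s.theta + d_theta),
        y=s.y + ds * math.sin(s.theta + d_theta),
        theta=s.theta + d_theta,
        v=s.v + a * dt,
        a_ave=a,
        yaw_ave=d_theta / dt,
    )
```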
Thus, the key problem reduces to computing $Pr(s'_i \mid s_i)$, the state transition probability of the human-driven vehicles. By the total probability formula, this probability can be factorized as a sum over the whole action space:

$$Pr(s'_i \mid s_i) = \sum_{a_i} Pr(s'_i \mid s_i, a_i)\, Pr(a_i \mid s_i). \tag{12}$$

With this equation, we only need the state transition probability $Pr(s'_i \mid s_i, a_i)$ for a specific action $a_i$ and the probability $Pr(a_i \mid s_i)$ of selecting this action in the current state $s_i$.

Because a human-driven vehicle's state is $s_i = [x_i, I_i]$, the probability $Pr(s'_i \mid s_i, a_i)$ can be written as

$$Pr(s'_i \mid s_i, a_i) = Pr(x'_i, I'_i \mid x_i, I_i, a_i) = Pr(x'_i \mid x_i, I_i, a_i)\, Pr(I'_i \mid x'_i, x_i, I_i, a_i). \tag{13}$$

With a given action $a_i$, $Pr(x'_i \mid x_i, I_i, a_i)$ equals $Pr(x'_i \mid x_i, I_{lat,i}, a_i)$. The lateral behavior $I_{lat,i}$ is a goal-directed driving intention that does not change during the driving process, so $Pr(x'_i \mid x_i, I_{lat,i}, a_i)$ equals $Pr(x'_i \mid x_i, a_i)$ given a reference path corresponding to the intention $I_{lat,i}$; using (11), $Pr(x'_i \mid x_i, a_i)$ is then fully determined.

The remaining part of $Pr(s'_i \mid s_i, a_i)$ is $Pr(I'_i \mid x'_i, x_i, I_i, a_i)$. The lateral intention $I'_{lat,i}$ is assumed stable, as explained above, and the longitudinal intention $I'_{lon,i}$ is assumed not to be updated within this transition; it is updated with new inputs in the observation step.

Now $Pr(s'_i \mid s_i, a_i)$ is fully modeled, and the remaining problem is to compute the probabilities $Pr(a_i \mid s_i)$ of the human-driven vehicles' future actions:

$$Pr(a_i \mid s_i) = Pr(a_i \mid x_i, I_i) = \sum_{x'_{host}} Pr(a_i \mid x'_{host}, x_i, I_i)\, Pr(x'_{host} \mid x_i, I_i). \tag{14}$$

Because $x'_{host}$ is determined by the designed policy, $Pr(x'_{host} \mid x_i, I_i)$ can be calculated by (11) given an action $a_{host}$. The probability $Pr(a_i \mid x'_{host}, x_i, I_i)$ is the distribution of the human-driven vehicle's actions given the new state $x'_{host}$ of the host vehicle, its own current state, and its intentions. Instead of building a complex probability model, we design a deterministic mechanism that computes the most likely action $a_i$ given $x'_{host}$, $x_i$, and $I_i$.

In this prediction process, the host vehicle is assumed to maintain its current action in the next time step, and the action $a_i$ will lead the human-driven vehicle through the potential collision area either ahead of the host vehicle under the intention $I_{NYield}$ or behind the host vehicle under the intention $I_{Yield}$, keeping a safe distance $d_{safe}$. In the case of intention $I_{NYield}$, we can calculate the lower bound $a_{i,low}$ of $a_i$ from this requirement and take the largest comfortable value $a_{i,comfort}$ as the upper bound. If $a_{i,comfort} < a_{i,low}$, $a_{i,low}$ is used as the human-driven vehicle's action; otherwise, we consider the targeted $a_i$ to follow a normal distribution with mean $\mu_{a_i}$ between $a_{i,low}$ and $a_{i,comfort}$, and to simplify the model we use the mean of the two bounds to represent the human-driven vehicle's action $a_i$. The case of intention $I_{Yield}$ is analyzed in the same way.

After these steps, the transition probability $Pr(s' \mid s, a)$ is fully formalized, and the autonomous vehicle can anticipate the future motion of the scenario through this model.
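A minimal sketch of this deterministic action choice for the not-yielding case, assuming simple constant-acceleration gap reasoning; the formula for the lower bound is our own illustration of the described requirement, not the authors' exact computation.

```python
def predict_human_accel(d_conflict, v, t_host_exit, d_safe, a_comfort=2.0):
    """Deterministic action for a not-yielding human driver: the smallest
    constant acceleration that clears the conflict area d_safe ahead of the
    host's exit time t_host_exit, capped by comfort (illustrative only)."""
    # Require v*t + 0.5*a*t^2 >= d_conflict + d_safe at t = t_host_exit.
    t = max(t_host_exit, 1e-3)
    a_low = 2.0 * (d_conflict + d_safe - v * t) / (t * t)
    if a_comfort < a_low:
        return a_low                      # must accelerate hard to pass first
    return 0.5 * (a_low + a_comfort)      # mean of the two bounds, as in the text
```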
### 4.6. Observation Model

The observation model simulates the measurement process, and the motion intention is updated in this process. The measurements of the human-driven vehicles are modeled under a conditional independence assumption, so the observation model factorizes as

$$Pr(z \mid a, s') = Pr(z_{host} \mid s'_{host}) \prod_{i=1}^{N} Pr(z_i \mid s'_i). \tag{15}$$

The host vehicle's observation function is denoted as

$$Pr(z_{host} \mid s'_{host}) \sim \mathcal{N}(z_{host} \mid x'_{host}, \Sigma_{z_{host}}). \tag{16}$$

In this paper, however, owing to the use of the V2V communication sensor, the observation error hardly affects the decision-making result, so the variance matrix is set to zero.

The human-driven vehicle's observation follows the vehicle's motion intentions. Because we do not consider observation error, the metric values equal the state transition results, but the longitudinal intention of the human-driven vehicles in the state space is updated from the new observations using the HMM of Section 3. The new observation space is confirmed with the above step.

### 4.7. Reward Function

The candidate policies have to satisfy several evaluation criteria: autonomous vehicles should be driven safely and comfortably while following the traffic rules and reaching the destination as soon as possible. We therefore design the objective function (17), which considers safety, time efficiency, and traffic laws, with weight coefficients $\mu_1$, $\mu_2$, and $\mu_3$:

$$R(s, a) = \mu_1 R_{safety}(s, a) + \mu_2 R_{time}(s, a) + \mu_3 R_{law}(s, a). \tag{17}$$

The details are discussed in the following subsections. In addition, comfort is considered in the policy generation part (Section 5.1).

#### 4.7.1. Safety Reward

The safety reward function $R_{safety}(s, a)$ is based on the potential conflict status. In our strategy, the safety reward is defined as a penalty: if there are no potential conflicts, the safety reward is set to 0, and a large penalty is assigned when there is a risk of collision.

In an uncontrolled intersection, the four approaching directions are defined as $A_i \in \{1, 2, 3, 4\}$ (Figure 5), and the driver's lateral intentions as $I_{lat} \in \{I_{TR}, I_{TL}, I_{GS}, I_S\}$. The driving trajectory of each vehicle in the intersection can thus be represented by $A_i$ and $I_{lat,j}$, denoted $T_{A_i,I_{lat,j}}$, $1 \le i \le 4$, $1 \le j \le 4$. The function $F$ judges the potential collision status:

$$F(T_x, T_y) = \begin{cases} 1, & \text{if potential conflict,} \\ 0, & \text{otherwise,} \end{cases} \tag{18}$$

where $T_x$ and $T_y$ are the vehicles' maneuvers $T_{A_i,I_{lat,j}}$.

Figure 5: One typical scenario for calculating the safety reward.

$F(T_x, T_y)$ can be evaluated from the relative direction between the two cars, as shown in Table 1.

Table 1: Safe condition judgments in the intersection. The rows give the host vehicle's driving direction; the columns give the human-driven vehicle's relative position (left side, right side, opposite side, same side) and maneuver (TL = turn left, LK = lane keeping, TR = turn right, S = stop). ★ indicates potential collision; ○ indicates no potential collision.

| Host maneuver | Left TL | Left LK | Left TR | Left S | Right TL | Right LK | Right TR | Right S | Opp. TL | Opp. LK | Opp. TR | Opp. S | Same TL | Same LK | Same TR | Same S |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Turn left | ★ | ★ | ○ | ○ | ★ | ★ | ○ | ○ | ★ | ★ | ★ | ○ | ○ | ○ | ○ | ○ |
| Lane keeping | ★ | ★ | ○ | ○ | ★ | ★ | ★ | ○ | ★ | ○ | ○ | ○ | ○ | ○ | ○ | ○ |
| Turn right | ○ | ★ | ○ | ○ | ○ | ○ | ○ | ○ | ★ | ○ | ○ | ○ | ○ | ○ | ○ | ○ |
| Stop | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○ |

The safety reward is based on the following items:

(i) If $F(T_x, T_y)$ equals 0, the safety reward equals 0, owing to the non-collision status.

(ii) If a potential collision occurs, a large penalty is applied.

(iii) If $|TTC_i - TTC_{host}| < t_{threshold}$, a penalty is applied that depends on $|TTC_i - TTC_{host}|$ and $TTC_{host}$.

#### 4.7.2. Traffic Law Reward

Autonomous vehicles should follow traffic laws when interacting with human-driven vehicles. The traffic law is modeled as a function $Law(T_x, T_y)$ for each pair of vehicles $x$ and $y$:

$$Law(T_x, T_y) = \begin{cases} 1, & \text{if } x \text{ has priority,} \\ 0, & \text{otherwise,} \end{cases} \tag{19}$$

where $T_x$ and $T_y$ are the vehicles' maneuvers $T_{A_i,I_{lat,j}}$. The function $Law(T_x, T_y)$ is formalized in Algorithm 1, where $t_x = d_{x2I}/v_x$ and $t_y = d_{y2I}/v_y$ are the times for $x$ and $y$ to reach the intersection.

Algorithm 1: Traffic law formalization.

    Law(T_x, T_y) ← 0
    t_x ← d_x2I / v_x
    t_y ← d_y2I / v_y
    if t_x < t_y − Δt then
        Law(T_x, T_y) ← 1
    else if |t_x − t_y| < Δt then
        status ← F(T_x, T_y)
        if status = 1 then
            if I_lat,x = lane keeping and I_lat,y ≠ lane keeping then
                Law(T_x, T_y) ← 1
            else if I_lat,x = I_lat,y = lane keeping and A_x − A_y = 1 or −3 then
                Law(T_x, T_y) ← 1
            else if I_lat,x = turn left and I_lat,y = turn right then
                Law(T_x, T_y) ← 1
            end if
        end if
    end if
    return Law(T_x, T_y)

If a behavior breaks the law, a large penalty is applied; behaviors that obey the traffic laws receive a zero reward.

#### 4.7.3. Time Reward

The time cost is based on the remaining travel time to the destination for the targeted vehicles in the intersection area:

$$\text{Cost}_{time} = \frac{DTG}{v_{host}}, \tag{20}$$

where $DTG$ is the distance to the driving goal. In addition, we also need to consider the speed limit, which is discussed in the policy generation part of Section 5.
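To tie (17)-(20) together, here is a minimal Python sketch of the composite reward; the weights, the penalty magnitudes, and the boolean inputs standing in for Table 1 and Algorithm 1 are illustrative assumptions rather than the authors' exact values.

```python
COLLISION_PENALTY = -1000.0   # illustrative magnitude
LAW_PENALTY = -500.0          # illustrative magnitude

def safety_reward(conflict, ttc_other, ttc_host, t_threshold):
    """Items (i)-(iii): zero without conflict, otherwise a penalty that grows
    as the TTC gap and the host's own TTC shrink."""
    if not conflict:
        return 0.0
    gap = abs(ttc_other - ttc_host)
    if gap >= t_threshold:
        return 0.0
    return COLLISION_PENALTY * (1.0 - gap / t_threshold) / max(ttc_host, 1.0)

def law_reward(host_has_priority):
    """Zero reward for lawful behavior, large penalty otherwise (Section 4.7.2)."""
    return 0.0 if host_has_priority else LAW_PENALTY

def time_reward(dtg, v_host):
    """Negated time cost of eq. (20): Cost_time = DTG / v_host."""
    return -dtg / max(v_host, 0.1)

def total_reward(conflict, host_has_priority, ttc_other, ttc_host,
                 t_threshold, dtg, v_host, mu=(1.0, 0.1, 1.0)):
    """Composite objective of eq. (17)."""
    return (mu[0] * safety_reward(conflict, ttc_other, ttc_host, t_threshold)
            + mu[1] * time_reward(dtg, v_host)
            + mu[2] * law_reward(host_has_priority))
```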
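Pulling the three terms of (17) together, one possible evaluation of the weighted reward for a single state-action pair is sketched below; the weights, penalty magnitudes, and TTC threshold are illustrative choices, not values given in the paper.

```python
# Illustrative composition of the total reward (17).
# Weights mu_1..mu_3, penalty sizes, and the TTC threshold are assumed.

MU = (1.0, 0.1, 0.5)        # (mu_1, mu_2, mu_3): safety, time, law weights
T_THRESHOLD = 2.0           # TTC-difference threshold t_threshold (s), assumed

def r_safety(conflict, ttc_i, ttc_host):
    if not conflict:
        return 0.0                        # item (i): no conflict, no penalty
    gap = abs(ttc_i - ttc_host)
    if gap < T_THRESHOLD:                 # items (ii)-(iii): graded large penalty
        return -100.0 * (T_THRESHOLD - gap) / max(ttc_host, 0.1)
    return -10.0                          # conflicting paths but well separated (assumed)

def r_time(dist_to_goal, v_host):
    return -dist_to_goal / max(v_host, 0.1)  # negative of Cost_time = DTG / v_host

def r_law(breaks_law):
    return -100.0 if breaks_law else 0.0     # large penalty for breaking the law

def reward(conflict, ttc_i, ttc_host, dist_to_goal, v_host, breaks_law):
    mu1, mu2, mu3 = MU
    return (mu1 * r_safety(conflict, ttc_i, ttc_host)
            + mu2 * r_time(dist_to_goal, v_host)
            + mu3 * r_law(breaks_law))

print(reward(True, 3.0, 2.5, 40.0, 10.0, False))  # -60.4
```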
## 5. Approximations for Solving the POMDP Problem

Solving a POMDP exactly is hard: searching the full belief space has complexity $O(|A|^H |Z|^H)$ [12], where $H$ is the prediction horizon. In this paper the intention recognition process is modeled deterministically and communication sensors let us neglect perception error, so the size of $|Z|$ reduces to 1 in the simplified problem. To solve it, we first generate suitable candidate policies according to the properties of the driving task and then choose a reasonable prediction time step and total horizon. The approximately optimal policy is then found by searching all candidate policies for the one with maximum total reward. The policy selection process is shown in Algorithm 2, with detailed explanations in the following subsections.
Algorithm 2: Policy selection process.

    Input: prediction horizon H, time step Δt,
           current states s_host = x_host, s_human = [x_human, I_human]
    (1)  P ← generate_policy_set()
    (2)  for each π_k ∈ P do
    (3)    for i = 1 to H/Δt do
    (4)      a_host ← π_k(i)
    (5)      s'_host ← update_state(s_host, a_host)
    (6)      a_human ← predict_actions(s'_host, s_human, I_human)
    (7)      x'_human ← update_state(x_human, a_human)
    (8)      I'_human ← update_intention(s'_host, x'_human)
    (9)      s'_human ← [x'_human, I'_human]
    (10)     R(i) ← calculate_reward(s'_host, s'_human)
    (11)     s_host ← s'_host
    (12)     s_human ← s'_human
    (13)   end
    (14)   R_k^total ← Σ_i R(i)
    (15) end
    (16) k* ← argmax_k R_k^total
    (17) π* ← π_{k*}
    (18) return π*

### 5.1. Policy Generation

For autonomous driving near an intersection, the desired velocity curves must satisfy several constraints. First, except for emergency braking, acceleration limits are applied to ensure comfort. Second, speed-limit constraints apply: acceleration commands are avoided when the autonomous vehicle approaches the maximum speed limit. Third, again for comfort, the acceleration command should not change constantly; in other words, jerk should be minimized.

Similar to [11], each candidate policy is divided into three time segments: constant acceleration or deceleration in the first two segments and constant velocity in the third. We use $t_1$, $t_2$, and $t_3$ for the durations of these segments. To guarantee comfort, the acceleration is limited to the range from −4 m/s² to 2 m/s² and is discretized in multiples of 0.5 m/s², so the action space becomes a discrete acceleration set. We then fix $t_1$, $t_2$, $t_3$ and the duration of a single prediction step. An example of policy generation is shown in Figure 6, and an illustrative implementation follows below.

Figure 6: An example of the policy generation process: (a) the generated policies and (b) the corresponding speed profiles. The interval of each prediction step is 0.5 s, the current speed is 12 m/s, and the speed limit is 20 m/s. The bold black line is one policy: in the first 3 seconds the vehicle decelerates at −3.5 m/s², then accelerates at 2 m/s² for 4 seconds, and holds constant speed in the final second. In this case 109 policies were generated, which is suitable for fast replanning.
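A minimal sketch of the three-segment policy enumeration described above, assuming the 0.5 m/s² discretization over [−4, 2] m/s²; the fixed segment durations and the speed-limit clipping are our own simplifications, so the policy count differs from the 109 reported for Figure 6.

```python
# Illustrative three-segment policy generation (Section 5.1).
# Segment durations and speed-limit clipping are assumptions.
import itertools

DT = 0.5                                    # prediction step (s)
ACCELS = [a * 0.5 for a in range(-8, 5)]    # -4.0 .. 2.0 m/s^2 in 0.5 steps

def make_policy(a1, a2, t1=3.0, t2=4.0, t3=1.0):
    """Piecewise-constant acceleration sequence: a1 for t1 s, a2 for t2 s,
    then constant velocity (zero acceleration) for t3 s."""
    steps = lambda t: int(round(t / DT))
    return [a1] * steps(t1) + [a2] * steps(t2) + [0.0] * steps(t3)

def speed_profile(policy, v0=12.0, v_max=20.0):
    """Integrate the accelerations, clipping speed into [0, v_max]."""
    v, profile = v0, [v0]
    for a in policy:
        v = min(max(v + a * DT, 0.0), v_max)
        profile.append(v)
    return profile

policies = [make_policy(a1, a2)
            for a1, a2 in itertools.product(ACCELS, ACCELS)]
print(len(policies), speed_profile(make_policy(-3.5, 2.0))[-1])  # 169 9.5
```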
### 5.2. Planning Horizon Selection

After building the policy generation model, the next problem is choosing a suitable planning horizon. A longer horizon can yield a better solution but consumes more computing resources. Since our purpose is to handle the interaction problem at the uncontrolled intersection, we only need to consider the situation before the autonomous vehicle gets through, so we set the prediction horizon to 8 seconds. In addition, when updating the future state of each vehicle under each policy, a car-following mode is used after the autonomous vehicle has passed through the intersection area.

### 5.3. Time Step Selection

The remaining choice is the prediction time step. The intention prediction algorithm and the POMDP are computed at every step, so with time step $t_{step}$ the total number of computations is $H / t_{step}$: a smaller time step means more computation. We therefore use a simple adaptive mechanism that selects the time step from the autonomous vehicle's TTC. If the host vehicle is far from the intersection, a large time step can be used; if the TTC is small, a low $t_{step}$ is applied to ensure safety. A sketch of this rule is given below.
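One possible realization of this TTC-based schedule; the breakpoints and step values are assumptions for illustration only.

```python
# Illustrative adaptive prediction-time-step selection (Section 5.3).
# TTC breakpoints and step values are assumed, not from the paper.

def adaptive_time_step(dist_to_intersection, v_host):
    """Pick the prediction time step from the host vehicle's time to reach
    the intersection: coarse when far away, fine when close."""
    ttc = dist_to_intersection / max(v_host, 0.1)
    if ttc > 8.0:
        return 1.0    # far away: coarse planning is enough
    if ttc > 4.0:
        return 0.5
    return 0.25       # imminent interaction: plan finely for safety

print(adaptive_time_step(100.0, 10.0), adaptive_time_step(20.0, 10.0))  # 1.0 0.25
```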
## 6. Experiment and Results

### 6.1. Settings

We evaluate our approach in PreScan 7.1.0 [21], a simulation tool for autonomous driving and connected vehicles. With this software we can build the test scenarios (Figure 7) and add vehicles with dynamic models. To create a scenario with realistic social interaction, a driving simulator is added to the experiment (Figure 8): the human-driven vehicle is driven by several people during the experiments, and the autonomous vehicle makes decisions based on the human driver's behavior. The reference trajectory for the autonomous vehicle is generated by the path planning module, and the human-driven vehicle's data (e.g., position, velocity, and heading) are transferred through the V2V communication sensor. The decision-making module sends the desired velocity command to a PID controller that follows the reference path. All policies in the experiments use a planning horizon $H = 8$ s discretized into 0.5 s time steps.

Figure 7: Test scenario. Autonomous vehicle B and human-driven vehicle A are both approaching the uncontrolled intersection; to cross successfully, the autonomous vehicle must interact with the human-driven vehicle.

Figure 8: Logitech G27 driving simulator.

### 6.2. Results

It is difficult to compare different approaches in the same scenario because the environment is dynamic and never exactly repeats. We therefore select two typical situations and special settings to make comparison possible: the same initial conditions (position, orientation, and velocity) are used for each vehicle across tests, and the two typical situations compared are the human-driven vehicle getting through before or after the autonomous vehicle. From the same initial state, the methods produce different reactions. We compare our approach with a reactive method [6]; the key difference is that our approach considers the human-driven vehicle's driving intention.

In the first experiment the human-driven vehicle yields to the autonomous vehicle during the interaction. The results are shown in Figures 9 and 10. Figure 9 gives a visual comparison of the two approaches: from almost the same initial state (position and velocity), our approach lets the autonomous vehicle pass through the intersection more quickly and reasonably.

Figure 9: The visualized passing sequence. (a) is the result of our approach and (b) the result of the reactive approach without intention estimation. The black vehicle is the autonomous vehicle and the red car the human-driven vehicle; each drawn vehicle marks the position at a specific time $T$ with an interval of 1 second.

Figure 10: Case test 1, in which the human-driven vehicle passes through the intersection after the autonomous vehicle. (a), (c), and (e) show the performance of our method; (b), (d), and (f) come from the strategy that ignores driving intention. (a) and (b) are the velocity profiles and the corresponding driving intentions: for the longitudinal intention, label 1 means yielding and label 2 not yielding; for the lateral intention, 1 means turning left, 2 turning right, 3 going straight, and 4 stopping (the intentions in (b) are not used by that method and are shown only for analysis). (c) and (d) are the distances to the collision area for the autonomous and human-driven vehicles, respectively. (e) and (f) are the predicted and true motions of the human-driven vehicle at time 1.5 s with a prediction length of 8 s.
The red curves in these subfigures belong to the autonomous vehicle and the blue lines to the human-driven vehicle; the green lines in (e) and (f) are the predicted velocity curves of the human-driven vehicle.

Figure 10 allows a detailed explanation. In the first 1.2 s (Figures 10(a) and 10(c)) the autonomous vehicle maintains its speed, understanding that the human-driven vehicle will not yield. It then recognizes the human driver's yielding intention and that the lateral intention is to go straight. Among the candidate policies the autonomous vehicle selects the acceleration strategy with maximum reward and finally crosses the intersection, clearly exploiting the human driver's yielding intention; Figure 10(c) is an example of understanding the human-driven vehicle's behavior from the ego vehicle's future actions at a specific time. Our strategy predicts the future actions of the human-driven vehicle; although the velocity curves after 1 s do not match exactly, this does not hurt performance, because the deterministic prediction stays between the two bounds chosen to ensure safety. Moreover, the autonomous vehicle's actions throughout this process help the human driver recognize the autonomous vehicle's not-yielding intention, so cooperative driving behavior emerges on both sides.

If intention is not considered, the results are those of Figures 10(b), 10(d), and 10(f). After 2 s in Figure 10(b), although the human-driven vehicle signals a yielding intention, the autonomous vehicle cannot recognize it and predicts a potential collision under the constant-velocity assumption. It slows down, but the human-driven vehicle slows down as well; this mutual hesitation makes both vehicles crawl near the intersection until the human-driven vehicle stops at the stop line and the autonomous vehicle finally passes. In this strategy the human-driven vehicle's future motion is assumed constant (Figure 10(f)); without understanding the human driver's intentions, such a strategy can worsen congestion.

In the second experiment the human-driven vehicle tries to get through the intersection first. The results are shown in Figures 11 and 12. This case is typical because many real-world traffic accidents happen in exactly this situation: if one vehicle crosses an intersection while violating the law, another vehicle that does not understand its behavior is in great danger. From the visualized performance in Figure 11, our method is safer than the alternative, which produces a near-collision situation in Figure 11(b). Specifically, Figure 12(a) shows that our strategy starts decelerating once it recognizes the not-yielding intention at 0.8 s, whereas without intention understanding the response is delayed by about 1 second, which can be quite dangerous. Our method also predicts the human-driven vehicle's future motion well (Figure 12(e)).

Figure 11: The visualized passing sequence for the case in which the human-driven vehicle gets through first. (a) is the result of our approach and (b) of the reactive approach.

Figure 12: Case test 2.
In this case the human-driven vehicle passes through the intersection before the autonomous vehicle under the different strategies; the definition of each subfigure is the same as in Figure 10.

The results of these two cases demonstrate that our algorithm handles typical scenarios and outperforms a traditional reactive controller: with our strategy, the autonomous vehicle drives more safely, quickly, and comfortably.
## 7. Conclusion and Future Work

In this paper we proposed a decision-making algorithm for autonomous driving that accounts for human-driven vehicles' uncertain intentions at an uncontrolled intersection. Lateral and longitudinal intentions are recognized by a continuous HMM; building on the HMM and a POMDP, we model the general decision-making process and use an approximate approach to solve this complex problem. Finally, PreScan and a driving simulator are used to emulate the social interaction process. The experimental results show that autonomous vehicles using our approach pass through uncontrolled intersections more safely and efficiently than with a strategy that ignores human drivers' intentions.

In the near future we aim to implement our approach on a real autonomous vehicle and perform real-world experiments. We also plan to develop more precise intention recognition; methods such as probabilistic graphical models could provide a distribution over each intention. Finally, designing online POMDP planning algorithms is also worthwhile.

---
*Source: 1025349-2016-04-28.xml*
2016
# Fault Diagnosis Method Based on Gap Metric Data Preprocessing and Principal Component Analysis

**Authors:** Zihan Wang; Chenglin Wen; Xiaoming Xu; Siyu Ji
**Journal:** Journal of Control Science and Engineering (2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1025353

---

## Abstract

Principal component analysis (PCA) is widely used in fault diagnosis. Because traditional data preprocessing ignores the correlation between different variables in the system, feature extraction is inaccurate. To solve this, this paper proposes a data preprocessing method based on the Gap metric to improve the performance of PCA in fault diagnosis. For different types of faults, transforming the original dataset through the Gap metric reflects the correlation of the system's variables in a high-dimensional space, allowing more accurate modeling. Finally, the feasibility and effectiveness of the proposed method are verified through simulation.

---

## Body

## 1. Introduction

As industrial manufacturing systems grow more complex, the correlations between system variables become more intricate, and these variables carry important information about the state of the system. Fault detection and diagnosis based on the information in these variables is therefore an important problem.

In industrial manufacturing systems, however, the variables have different dimensions (units), so the data usually must be preprocessed and standardized. Traditional preprocessing methods ignore the influence of dimension on the correlation between system variables, so the standardized variables lose correlation information, which makes it difficult to extract representative principal components. Preserving the correlation between system variables is therefore the key issue in data preprocessing.

Many studies have addressed this problem. Wen et al. proposed Relative Principal Component Analysis (RPCA) [1], which analyzes and determines the importance of each component using prior information about the system, assigns a corresponding weight to each component, and builds a relative principal component model. Literature [2] proposed a fault diagnosis method based on an information incremental matrix. Building on this, Yuan et al. proposed a fault diagnosis method using a relative transformation of the information incremental matrix [3], which can effectively detect variables that play an important role in the system: because of their smaller absolute values and smaller absolute changes, small changes in these important variables often play a crucial role. Xu and Wen proposed a fault diagnosis method based on information entropy and relative principal component analysis [4]: in high-dimensional systems, the high correlation among variables prevents the model from selecting representative principal components, so they use information entropy to measure the uncertainty of variables, compute each variable's information gain, and transform the data relative to the variables' importance to obtain a more accurate data model.
Jiao et al. proposed a simulation model validation method based on Theil's inequality coefficient (TIC) and principal component analysis [5]: building on the TIC model, the differences in position and trend between the simulated output and the reference output are correlated, and PCA is used to obtain the validation result. Kangling et al. proposed a fault diagnosis study based on adaptive-partition PCA [6]; to address inaccurate modeling, the diagnosis model is automatically updated and adjusted, improving model matching and the accuracy of the diagnosis results.

The Gap metric has been shown to be more suitable than norm-based metrics for measuring the distance between two linear systems [7, 8], and during data preprocessing the effect of dimension on each variable can be represented in Riemannian space. The Gap metric is widely used to study the uncertainty and robustness of feedback systems. Tryphon proposed a method that conveniently computes the Gap metric [9], improving its practicality. Literature [10] introduced the ν-gap metric in the frequency domain, later extended to nonlinear systems [11]. Ebadollahi and Saki applied the Gap metric in multimodel predictive control: to track the maximum power point without losing control performance, the Gap metric is used to partition the entire partial-load operating region into corresponding linear models, which guarantees the stability of the original closed-loop system [12]. Konghuayrob and Kaitwanidvilai used the ν-gap to measure the distance between two linear systems [13], replacing traditional high-order, complex controllers with low-order controllers of similar dynamic characteristics and robustness. For multilinear model control of nonlinear systems, Du proposed a weighting method based on the 1/δ Gap metric [14], in which the Gap metric is used to compute the weighting functions of the local controller combination; the method's validity was verified on a CSTR system.

In PCA based on traditional preprocessing, the original data are de-dimensionalized with Euclidean-metric preprocessing, which discards some important information, often precisely the variables that carry slowly changing fault information. In the method proposed in this article, the Gap metric projects the data onto the Riemann sphere in Riemannian space, highlighting information that is easily overlooked in Euclidean space. After Gap metric preprocessing, eigenvalue-eigenvector decomposition is applied to the processed data matrix and the principal component space is constructed according to the cumulative percent variance criterion. Because the Gap metric emphasizes variables with small absolute but relatively large changes, representative principal components and loading vectors can be extracted when constructing the principal component space.
By calculating the $T^2$ and SPE statistical limits on a normal-operation dataset, we detect whether the $T^2$ and SPE statistics of the test dataset exceed those limits to judge whether the system is faulty, and the faulty variables are then isolated from the contributions of the system variables to the faulty samples.

The rest of this article is organized as follows. Section 2 briefly reviews the PCA approach. Section 3 proposes an improved PCA data preprocessing method based on the Gap metric. Section 4 sets up a system model and tests the feasibility and effectiveness of the proposed method on different fault types. Section 5 gives a summary and future research directions.

## 2. PCA Based on Traditional Data Preprocessing

The basic idea of PCA is to decompose the multivariable sample space, using historical process data, into a lower-dimensional principal component subspace spanned by the principal component variables and a residual subspace, and to construct statistics in these two subspaces that reflect the changes in each. Sample vectors are then projected into the two subspaces, the distance from each sample to the subspace is computed, and process monitoring and fault detection are performed by comparing these distances with the corresponding statistical limits.

First, the variable space is modeled by PCA. Select a set of variables under normal conditions as the original data: $x \in \mathbb{R}^m$ is a sample that contains $m$ variables, and each variable has $n$ independent samples. Construct the original measurement data matrix

$$X_n = [x_1, x_2, \ldots, x_n], \quad X_n \in \mathbb{R}^{m \times n}. \tag{1}$$

Here each column of $X_n$ represents a variable and each row a sample. Because the measured variables have different dimensions, each column of the data matrix is standardized. The standardized measurement data matrix $X^*$ is

$$X^* = (X_n - e \cdot q)\,\mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_n)^{-1}, \tag{2}$$

where $e = [1, 1, \ldots, 1]^T \in \mathbb{R}^{m \times 1}$, $q = [q_1, q_2, \ldots, q_n] \in \mathbb{R}^{1 \times n}$ holds the column means of $X_n$, and $\mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_n)$ collects the column standard deviations of $X_n$.

The covariance matrix $S$ of $X^*$ is

$$S = \frac{1}{n-1} X^{*T} X^*. \tag{3}$$

$S$ is processed by eigenvalue decomposition, with the eigenvalues arranged in descending order. The PCA model decomposes $X^*$ as

$$X^* = \hat{X}^* + E = T P^T + E, \quad T = X^* P, \tag{4}$$

where $\hat{X}^*$ is the projection onto the principal component space, $E$ is the projection onto the residual space, and $P \in \mathbb{R}^{m \times A}$ is the loading matrix, consisting of the first $A$ eigenvectors of $S$. $T \in \mathbb{R}^{n \times A}$ is the score matrix, whose elements are called the principal variables, and $A$ is the number of principal components. The principal component space is the modeled part; the residual space is the unmodeled part, representing noise and fault information in the data.

The number of principal components $A$ is selected by the Cumulative Percent Variance (CPV) criterion, which chooses $A$ from the cumulative percentage of variance explained by the first $A$ principal components relative to the total variance:

$$\mathrm{CPV} = \frac{\sum_{i=1}^{A} \lambda_i}{\sum_{i=1}^{m} \lambda_i}, \tag{5}$$

where $\lambda_i$ is the $i$-th eigenvalue of the covariance matrix $S$. In general, when the cumulative contribution reaches 85% or more, the first $A$ components are considered to contain enough of the original data's information. A small sketch of this modeling step follows.
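The modeling step of Section 2 can be summarized in a few lines of NumPy; this is a generic sketch under the usual z-scoring convention (rows as samples, columns as variables), not the authors' code.

```python
# Minimal PCA modeling sketch: standardize, eigendecompose, select A by CPV.
import numpy as np

def pca_model(X, cpv_threshold=0.85):
    """X: (n_samples, n_variables). Returns loadings P, eigenvalues, and A."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # column-wise z-score
    S = np.cov(Xs, rowvar=False)                        # covariance matrix, eq. (3)
    eigvals, eigvecs = np.linalg.eigh(S)                # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]                   # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    cpv = np.cumsum(eigvals) / eigvals.sum()            # eq. (5)
    A = int(np.searchsorted(cpv, cpv_threshold) + 1)    # smallest A with CPV >= 85%
    return eigvecs[:, :A], eigvals, A

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
X[:, 3] = -1.3 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * rng.normal(size=1000)
P, lam, A = pca_model(X)
print(A, lam.round(2))
```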
## 3. Improved Data Preprocessing Method

In this section we propose a data preprocessing method based on the Gap metric.

### 3.1. Gap Metric

In Riemannian space, let $\varphi_{c_1}$ and $\varphi_{c_2}$ denote the spherical projections of complex numbers $c_1$ and $c_2$ onto a three-dimensional Riemann sphere of diameter 1, and let $\delta(c_1, c_2)$ denote the chord between $c_1$ and $c_2$. Then $\delta(c_1, c_2)$ is defined by

$$\delta(c_1, c_2) = \|\varphi_{c_1} - \varphi_{c_2}\| = \frac{|c_1 - c_2|}{\sqrt{1 + c_1^2}\sqrt{1 + c_2^2}}. \tag{6}$$

$\theta(c_1, c_2)$ denotes the spherical distance between $c_1$ and $c_2$, that is, the arc length connecting $\varphi_{c_1}$ and $\varphi_{c_2}$ on the Riemann sphere:

$$\theta(c_1, c_2) = \arcsin \delta(c_1, c_2) = \arcsin \frac{|c_1 - c_2|}{\sqrt{1 + c_1^2}\sqrt{1 + c_2^2}}. \tag{7}$$

As Figure 1 shows, the shortest arc on the sphere lies on the circle cut from the Riemann sphere by the plane through three points: the center of the sphere, $\varphi_{c_1}$, and $\varphi_{c_2}$.

Figure 1: The geometric meaning of the Gap metric: (a) the Riemann sphere; (b) the relationship between $\delta$ and $\theta$.

The Gap metric, familiar from control systems, has analogous properties in data space:

(1) The Gap metric can be regarded as a distance characterization of data in Riemannian space, extending traditional methods based on infinity-norm metrics.

(2) The value of the Gap metric lies between 0 and 1: the smaller the value, the closer the characteristics of the two datasets; the larger the value, the greater the difference. A Gap metric of 0 means the two datasets have exactly the same characteristics.

### 3.2. Gap Metric Data Preprocessing and Fault Diagnosis

Let the data observation matrix $X_n \in \mathbb{R}^{m \times n}$ of the multivariable system be

$$X_n = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{bmatrix}. \tag{8}$$

Here the column vector $x_{\cdot j} = [x_{1j}, x_{2j}, \ldots, x_{mj}]^T$, $j = 1, 2, \ldots, n$, represents a system variable, and each row holds the sampled data at one sampling instant. The matrix is then preprocessed: the mean vector of $X_n$ is

$$b_n = \frac{1}{m} l_m X_n, \tag{9}$$

where $l_m = [1, 1, \ldots, 1] \in \mathbb{R}^{1 \times m}$.

Step 1. Project the original data onto the Riemann sphere and compute the Gap metric of each variable, yielding the matrix $X^*$:

$$X^* = \begin{bmatrix} \delta(x_{11}, b_{n1}) & \delta(x_{12}, b_{n2}) & \cdots & \delta(x_{1n}, b_{nn}) \\ \delta(x_{21}, b_{n1}) & \delta(x_{22}, b_{n2}) & \cdots & \delta(x_{2n}, b_{nn}) \\ \vdots & \vdots & \ddots & \vdots \\ \delta(x_{m1}, b_{n1}) & \delta(x_{m2}, b_{n2}) & \cdots & \delta(x_{mn}, b_{nn}) \end{bmatrix}, \tag{10}$$

where

$$\delta(x_{ik}, b_{nk}) = \|\varphi_{x_{ik}} - \varphi_{b_{nk}}\| = \frac{|x_{ik} - b_{nk}|}{\sqrt{1 + x_{ik}^2} \cdot \sqrt{1 + b_{nk}^2}}. \tag{11}$$

Step 2. Perform eigenvalue-eigenvector decomposition of $X^*$, select the principal components by the Cumulative Percent Variance (CPV) criterion, and construct the principal component space and the residual space.

Step 3. Compute the $T^2$ statistical limit from the principal component space of the normal data and the SPE statistical limit from the residual space.

Step 4. Let the test data matrix $Y_n \in \mathbb{R}^{m \times n}$ be

$$Y_n = \begin{bmatrix} y_{11} & y_{12} & \cdots & y_{1n} \\ y_{21} & y_{22} & \cdots & y_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ y_{m1} & y_{m2} & \cdots & y_{mn} \end{bmatrix}. \tag{12}$$

Preprocess the test dataset $Y_n$ with the Gap metric to obtain $Y^*$:

$$Y^* = \begin{bmatrix} \delta(y_{11}, b_{n1}) & \delta(y_{12}, b_{n2}) & \cdots & \delta(y_{1n}, b_{nn}) \\ \delta(y_{21}, b_{n1}) & \delta(y_{22}, b_{n2}) & \cdots & \delta(y_{2n}, b_{nn}) \\ \vdots & \vdots & \ddots & \vdots \\ \delta(y_{m1}, b_{n1}) & \delta(y_{m2}, b_{n2}) & \cdots & \delta(y_{mn}, b_{nn}) \end{bmatrix}, \tag{13}$$

where $b_n$ is the mean vector of $X_n$.

Step 5. Compute the $T^2$ and SPE statistics and check whether they exceed the statistical limits of the normal data. If a statistic exceeds its limit, a fault is declared; otherwise the system is considered normal.

Step 6. Faulty variables can be isolated from the contributions of the system variables to the fault in the residual space.

The physical meaning of the Gap metric is the chord length between data points projected onto the Riemann sphere, which highlights the impact of a variable's relative change on itself. The transformed data therefore does not neglect variables with small absolute changes but relatively large variations; frequently, these variables also contain important information. A sketch of this preprocessing step follows.
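A minimal sketch of the Gap metric transform of Step 1, applying (11) entrywise against the mean vector (9) of the training data; the NumPy layout (rows as samples, columns as variables) mirrors the matrix convention above.

```python
# Gap metric preprocessing sketch: eq. (11) applied entrywise.
import numpy as np

def gap_transform(X, b=None):
    """X: (m, n) data matrix with one variable per column.
    b: (n,) reference vector (column means of the training data); computed
    from X if not given. Returns the chordal distances delta(x_ik, b_k)."""
    if b is None:
        b = X.mean(axis=0)                       # eq. (9)
    num = np.abs(X - b)                          # |x_ik - b_k|
    den = np.sqrt(1.0 + X**2) * np.sqrt(1.0 + b**2)
    return num / den                             # values lie in [0, 1)

rng = np.random.default_rng(1)
X_train = rng.normal(size=(1000, 6))
b = X_train.mean(axis=0)
X_star = gap_transform(X_train, b)                          # model-building data
Y_star = gap_transform(rng.normal(size=(200, 6)) + 3.0, b)  # shifted test data
print(X_star.mean().round(3), Y_star.mean().round(3))
```

The transformed values are bounded in [0, 1), consistent with property (2) of Section 3.1.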
## 4. Simulation
In order to verify the effectiveness of the proposed method, six system variables are constructed from random variables and their linear combinations:

$$\begin{aligned}x_1&=0.1\times\mathrm{randn}(1,n),\\x_2&=0.2\times\mathrm{randn}(1,n),\\x_3&=0.3\times\mathrm{randn}(1,n),\\x_4&=-1.3x_1+0.2x_2+0.8x_3+0.1\times\mathrm{randn}(1,n),\\x_5&=x_2-0.3x_3+0.1\times\mathrm{randn}(1,n),\\x_6&=x_1+x_4+0.1\times\mathrm{randn}(1,n),\end{aligned}\tag{14}$$

where $\mathrm{randn}(1,n)$ is a random sequence of 1 row and $n$ columns generated by MATLAB. First, 1000 normal samples are chosen to establish the PCA model; then 1000 samples are selected as test data, a constant deviation error of magnitude 3 is introduced into the last 200 samples of the variable $x_6$ in the test data, and the test data are checked against the model. The SPE and Hotelling's $T^{2}$ statistics are used as indicators to count the false alarms (misdiagnoses) and missed detections (omissive judgements) of each method.

As shown in Table 1, the average numbers of misdiagnoses and omissive judgements of the PCA, InEnPCA, and GAPPCA methods are obtained from 10 simulation runs.

Table 1: Detection results for the constant deviation fault.

| Constant deviation fault | PCA $T^2$ | PCA SPE | InEnPCA $T^2$ | InEnPCA SPE | GAPPCA $T^2$ | GAPPCA SPE |
|---|---|---|---|---|---|---|
| Misdiagnosis | 8 | 7 | 7 | 8 | 16 | 8 |
| Rate of misdiagnosis | 0.010 | 0.009 | 0.009 | 0.010 | 0.020 | 0.010 |
| Omissive judgement | 54 | 0 | 66 | 0 | 8 | 0 |
| Rate of omissive judgement | 0.270 | 0 | 0.330 | 0 | 0.040 | 0 |

In the test, samples 1-800 that exceed the control limit are counted as misdiagnoses, and samples 801-1000 that stay below the control limit are counted as omissive judgements. As can be seen from Figures 2-4, the misdiagnosis rate of PCA, InEnPCA, and GAPPCA is no higher than 2%. In the omission statistics, however, the $T^{2}$ statistics of traditional PCA and InEnPCA miss a large number of faults: the omissive judgement rate of $T^{2}$ is 27% for PCA and 33% for InEnPCA, whereas that of GAPPCA is only 4%. This shows that after the data are preprocessed with the gap metric, the principal component model contains most of the key information of the system, improving the accuracy of fault detection. In fault diagnosis, the contribution plots show that the contribution of the fault variable $x_6$ is more easily distinguished from that of the other variables in GAPPCA than in PCA and InEnPCA, because the gap-metric projection onto the Riemann sphere better reflects the correlation between the system variables.

Figure 2: PCA with constant deviation fault in samples 801-1000. (a) Fault detection results of PCA. (b) Contribution plot of PCA.

Figure 3: InEnPCA with constant deviation fault in samples 801-1000. (a) Fault detection results of InEnPCA. (b) Contribution plot of InEnPCA.

Figure 4: GAPPCA with constant deviation fault in samples 801-1000. (a) Fault detection results of GAPPCA. (b) Contribution plot of GAPPCA.

The full setup, including the faults studied in this section, can be reproduced with the short script that follows.
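The sketch below regenerates the dataset of eq. (14), injects both faults studied in this section, and scores them with the `fit_gappca` and `monitor` functions sketched in Section 3. NumPy's generator stands in for MATLAB's `randn`, so the exact draws differ, and the "0.1% slowly increasing" microfault examined below is interpreted here as a ramp growing by 0.001 per sample, which is our reading of the text rather than a stated detail.

```python
import numpy as np

def make_data(n, rng):
    """Six simulated variables per eq. (14); returns an (n x 6) sample matrix."""
    noise = lambda: 0.1 * rng.standard_normal(n)
    x1 = 0.1 * rng.standard_normal(n)
    x2 = 0.2 * rng.standard_normal(n)
    x3 = 0.3 * rng.standard_normal(n)
    x4 = -1.3 * x1 + 0.2 * x2 + 0.8 * x3 + noise()
    x5 = x2 - 0.3 * x3 + noise()
    x6 = x1 + x4 + noise()
    return np.column_stack([x1, x2, x3, x4, x5, x6])

rng = np.random.default_rng(0)
X_train = make_data(1000, rng)          # 1000 normal samples to build the model
Y = make_data(1000, rng)                # 1000 test samples

Y_bias = Y.copy()                       # constant deviation of magnitude 3 on x6
Y_bias[800:, 5] += 3.0                  # ...over the last 200 samples

Y_drift = Y.copy()                      # microfault: slow drift on x6
Y_drift[800:, 5] += 0.001 * np.arange(1, 201)  # assumed 0.1%-per-sample ramp

model = fit_gappca(X_train)             # from the Section 3 sketch
for name, Yt in [("constant bias", Y_bias), ("microfault", Y_drift)]:
    _, _, faulty, _ = monitor(model, Yt)
    false_alarms = faulty[:800].sum()   # samples 1-800 flagged as faulty
    misses = (~faulty[800:]).sum()      # samples 801-1000 not flagged
    print(f"{name}: {false_alarms} false alarms, {misses} missed detections")
```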
The application of the traditional PCA method in fault diagnosis uses the absolute distance between samples as the criterion for fault detection and diagnosis. In real systems, however, the changes caused by microfaults are often very small, so the detection of minor faults is essential. Literature [15] combines the PCA technique with univariate exponentially weighted moving averaging to exploit the correlation of variables in chemical processes. In view of the small deviation between normal operation and microfaults, [16] combines a probability distribution metric with the Kullback-Leibler measure to quantify the residuals between latent scores and reference scores and proposes PCA control limits for small faults. Literature [17] builds an analysis model on [16]: the Kullback-Leibler divergence (KLD) is applied to the principal component variables obtained after dimensionality reduction by PCA, and the method is used to diagnose microfaults.

In order to verify the detection accuracy in the case of a microfault, a slight fault that grows slowly at a rate of 0.1% is introduced into the last 200 sample points of the variable $x_6$ in the test data, and the fault diagnosis performance of the three methods is tested with the same model.

As shown in Table 2, the average numbers of misdiagnoses and omissive judgements of the PCA, InEnPCA, and GAPPCA methods are obtained from 10 simulation runs.

Table 2: Detection results for the microfault.

| Microfault | PCA $T^2$ | PCA SPE | InEnPCA $T^2$ | InEnPCA SPE | GAPPCA $T^2$ | GAPPCA SPE |
|---|---|---|---|---|---|---|
| Misdiagnosis | 12 | 9 | 8 | 8 | 12 | 13 |
| Rate of misdiagnosis | 0.015 | 0.011 | 0.010 | 0.010 | 0.015 | 0.016 |
| Omissive judgement | 199 | 81 | 134 | 62 | 75 | 0 |
| Rate of omissive judgement | 0.995 | 0.405 | 0.670 | 0.310 | 0.094 | 0 |

As can be seen from Figures 5-7, in microfault detection the traditional PCA method cannot extract representative principal components because the changes in the system variables are small; as Figure 5 shows, PCA cannot reflect the fault status of the system during detection and diagnosis. Because the relative change caused by a microfault is larger than its absolute change, GAPPCA can extract representative principal component variables even for small changes of the system variables, and its detection results are better than those of traditional PCA. In fault diagnosis, InEnPCA reflects the information gain of the system variables and therefore also performs well. Moreover, since the gap metric reflects the correlation of system variables better in the Riemannian space than in the Euclidean space, the contribution of the fault variable $x_6$ stands out clearly in the contribution plot of GAPPCA when the contribution plots are used to separate the fault variables.

Figure 5: PCA with microfault detection in samples 801-1000. (a) Fault detection results of PCA. (b) Contribution plot of PCA.

Figure 6: InEnPCA with microfault detection in samples 801-1000. (a) Fault detection results of InEnPCA. (b) Contribution plot of InEnPCA.

Figure 7: GAPPCA with microfault detection in samples 801-1000. (a) Fault detection results of GAPPCA. (b) Contribution plot of GAPPCA.

To sum up, compared with the traditional PCA preprocessing method in Euclidean space, the preprocessing method based on the gap metric better reflects the information shared between the variables, and faults occurring in small but important variables can be diagnosed accurately.

## 5. Conclusions

In this paper, we propose a fault diagnosis method based on Gap metric data preprocessing and PCA. When variables that play an important role in the system have small absolute values, traditional methods cannot detect the small faults affecting them. The proposed method detects the source of a fault more accurately and reduces both the misdiagnosis rate and the omissive judgement rate.

However, some problems remain to be studied. In the PCA fault diagnosis method based on Gap metric data preprocessing, projections may overlap: the fault information carried by faulty samples may, after the Gap metric transformation, be projected into the normal region. How to separate fault information from normal information in the high-dimensional space is the focus of further research.

---
*Source: 1025353-2018-05-17.xml*
# Theoretical Investigation of an Air-Slot Mode-Size Matcher between Dielectric and MDM Plasmonic Waveguides

**Authors:** Rami A. Wahsheh

**Journal:** International Journal of Optics (2021)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2021/1025374

---

## Abstract

Hybrid integration of dielectric and plasmonic waveguides is necessary to reduce the propagation losses caused by metallic interactions and to support the nanofabrication of plasmonic devices that handle large data transfers. In this paper, we propose a direct yet efficient, very short (36 nm) air-slot coupler (ASC) to increase the coupling efficiency between a silicon waveguide and a silver-air-silver plasmonic waveguide. Our numerical simulation results show that placing the ASC at the interface makes the fabrication process much easier and ensures that light couples from a dielectric waveguide into and out of a plasmonic waveguide. The proposed coupler works over a broad frequency range, achieving a coupling efficiency of 86% from a dielectric waveguide into a metal-dielectric-metal (MDM) plasmonic waveguide and 68% from a dielectric waveguide through an MDM plasmonic waveguide and back into another dielectric waveguide. In addition, we show that even without high-precision fabrication techniques, light couples from a conventional dielectric waveguide (CDW) into an MDM plasmonic waveguide as long as there is an overlap between the CDW and the ASC, which eases the fabrication process tremendously. Our proposed coupler has an impact on the miniaturization of ultracompact nanoplasmonic devices.

---

## Body

## 1. Introduction

Efficient coupling of light into a metal-dielectric-metal (MDM) waveguide from a conventional dielectric waveguide (CDW) is key to the on-chip application of plasmonic devices such as splitters [1, 2], Mach–Zehnder interferometers [3, 4], reflectors [5], wavelength demultiplexers [6], circulators [7], filters [8], and all-optical switching [9]. Other methods of controlling the flow of light in the subwavelength regime use phase-change materials (PCMs) such as germanium-antimony-telluride (Ge$_2$Sb$_2$Te$_5$), which achieve fast switching speeds through changes in their refractive index. In [10, 11], the authors experimentally demonstrated the integration of a PCM with hybrid metal-dielectric metasurfaces to control the amplitude, phase, and polarization of the incident optical wavefront. In this paper, low-loss CDWs are used to couple light into and out of plasmonic waveguides, which suffer large propagation losses due to their strong metallic interactions. Several structures have been proposed to achieve mode matching between a CDW and an MDM plasmonic waveguide [12–20], but these solutions are either impractical to fabricate or too long for ultracompact integrated circuits. One solution that has attracted a lot of attention uses a very compact air-gap coupler (AGC) at the interface between an MDM plasmonic waveguide and a CDW [21]. The proposed AGC provided a large fabrication tolerance in addition to a high transmission coupling efficiency (TCE) into the output dielectric waveguide. Another solution that has attracted attention, because of its simple and compact design, is the air-slot coupler (ASC) that we proposed in [22]. That ASC was fabricated between a 460 nm-wide silicon waveguide and an 80 nm-wide gold-air-gold plasmonic waveguide.
The TCE was about 40% when the length of the MDM waveguide was 500 nm. The proposed ASC couples light from the CDW into the ASC waveguide before it is coupled into the MDM plasmonic waveguide; the ASC is the air-slot waveguide located inside the CDW at its interface with the MDM plasmonic waveguide. Having the ASC at this interface ensures that light couples from the CDW into and out of the MDM plasmonic waveguide. In this paper, we present the design steps and numerical results of a very compact ASC that is smaller than the one reported in our previous work [22], has a better TCE into the output CDW, and requires less-precise fabrication techniques. Our proposed ASC sits between a 300 nm-wide silicon waveguide and a 40 nm-wide silver-air-silver plasmonic waveguide. In addition, we show the sensitivity of our designs to different fabrication challenges.

The proposed ASCs were designed and analyzed using a two-dimensional (2D) finite-difference time-domain method. We used a uniform mesh size of 1 nm to accurately capture the changes of the field at the interfaces of the CDW, ASC, and MDM plasmonic waveguides. The dielectric material of the CDW is silicon, and the metal is silver. The TCE in our analysis is calculated by normalizing the output power measured at the output CDW to the input power of the launched light, while the CE is calculated by normalizing the power measured inside the MDM plasmonic waveguide to the input power of the launched light. The propagation losses of the MDM plasmonic waveguides are included in our numerical results, while those of the CDWs are not, so as to isolate the effect of the ASC on the CE; a short worked example of these definitions is given at the end of this introduction.

The remainder of this paper is organized as follows: in Section 2, we explain the designs and results of the ASC and show how increasing the width of the CDW can increase the CE into the MDM plasmonic waveguide. In Section 3, we compare the spectra of the proposed ASCs. In Section 4, we show how our proposed designs increase the fabrication tolerance needed when aligning the dielectric waveguide to the plasmonic waveguide. Finally, in Section 5, we provide conclusions.
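As a worked example of the two figures of merit just defined, the sketch below computes the CE and TCE from monitor powers and restates them as insertion loss in dB. The power values are the best-case percentages quoted in the abstract and stand in only as placeholders for what the FDTD power monitors would record.

```python
import math

# Illustrative monitor readings (arbitrary units); 0.86 and 0.68 are the
# best-case CE and TCE quoted in the abstract, used here as placeholders.
P_in = 1.00    # launched power in the input CDW
P_mdm = 0.86   # power measured inside the MDM plasmonic waveguide
P_out = 0.68   # power measured at the output CDW

CE = P_mdm / P_in    # coupling efficiency into the MDM waveguide
TCE = P_out / P_in   # transmission coupling efficiency into the output CDW

print(f"CE  = {CE:.0%}, insertion loss = {-10 * math.log10(CE):.2f} dB")
print(f"TCE = {TCE:.0%}, insertion loss = {-10 * math.log10(TCE):.2f} dB")
```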
## 2. Air-Slot Coupler Design, Analysis, and Fabrication

Figure 1(a) shows the schematic of our proposed ASC #1. The ASC is located inside the CDW at the interface between a 300 nm-wide CDW and a 40 nm-wide MDM plasmonic waveguide. The dependence of the CE on the length of the ASC, $L_{\mathrm{ASC}}$, is shown in Figure 1(b). As $L_{\mathrm{ASC}}$ increases, the CE oscillates and light continues to couple from the CDW into the MDM plasmonic waveguide. The CE increased from 68% to 80% as $L_{\mathrm{ASC}}$ increased from zero (i.e., butt coupling) to 36 nm. As shown in Figure 1(b), almost zero back reflection resulted when $L_{\mathrm{ASC}}$ = 36 nm, which means that the ASC reduced the mismatch between the size of the mode in the CDW and that in the slot waveguide (i.e., the ASC and the MDM plasmonic waveguide). As long as there is an overlap between the slot waveguide and the CDW, light continues to couple into the MDM plasmonic waveguide. This reduces the fabrication complexity, especially when the focused ion beam (FIB) is used to define both the ASC and the MDM plasmonic waveguide.

Figure 1: Schematics of the proposed ASCs #1 and #2, showing how light couples into the MDM plasmonic waveguide. (a) ASC #1 without the dielectric width expansion. (b) Coupling efficiency as a function of the length of the ASC, $L_{\mathrm{ASC}}$, for ASC #1. (c) ASC #2 with the dielectric width expansion. (d) Coupling efficiency as a function of the width of the CDW, $W_{\mathrm{Si}}$, for ASC #2.

Moreover, we found that the CE of ASC #1 can be further improved by increasing the width of the CDW, $W_{\mathrm{Si}}$, over a length $L_{\mathrm{Si}}$ before it connects to the MDM plasmonic waveguide (the new design is called ASC #2; see Figure 1(c)). As shown in Figure 1(d), the CE of ASC #2 increased from 80% to about 86% when $W_{\mathrm{Si}}$ increased from 300 nm to 360 nm over a length $L_{\mathrm{Si}}$ = 360 nm. Even though the back reflection of ASC #2 increased slightly, to 0.025% compared with 0% for ASC #1, expanding the width of the CDW acts as a fine adjustment that matches the mode size inside the CDW to that in the slot waveguide.

To study the TCE from a 300 nm-wide CDW into a 40 nm-wide MDM plasmonic waveguide and back into a 300 nm-wide CDW, the MDM plasmonic waveguide with the ASC is embedded between two CDWs. ASC #1 connected back to back with another ASC #1 is called ASC #3 (see Figure 2(a)), and ASC #2 connected back to back with another ASC #2 is called ASC #4 (see Figure 2(b)). In both designs, an ASC of a length of 36 nm is placed at each end of the MDM plasmonic waveguide. In order to find the optimum design dimensions, we compared the dependence of the TCE of ASC #3 with that of ASC #4 as a function of the length of the MDM waveguide, $L_{\mathrm{MDM}}$; see Figure 2(c). The oscillations in the measured TCE occur because the slot waveguide behaves as a Fabry–Perot (FP) cavity-like structure. As $L_{\mathrm{MDM}}$ increases, the TCE decreases due to the metallic propagation losses in the MDM plasmonic waveguide. Fewer oscillations with higher TCE are achieved by ASC #4. Both designs, ASCs #3 and #4, achieved a higher TCE than that reported in [22]. The field distributions of the coupled light at 1550 nm for ASCs #3 and #4 when $L_{\mathrm{MDM}}$ = 740 nm are shown in Figures 2(d) and 2(e).

Figure 2: Schematics of the proposed ASCs #3 and #4, showing how light couples into and out of the MDM plasmonic waveguide. (a) ASC #3 without the dielectric width expansion, obtained by connecting two ASC #1 couplers back to back. (b) ASC #4 with the dielectric width expansion, obtained by connecting two ASC #2 couplers back to back. (c) Transmission coupling efficiency as a function of the length of the MDM plasmonic waveguide, $L_{\mathrm{MDM}}$, for both ASCs #3 and #4. (d, e) Field distributions of the coupled light at 1550 nm for ASCs #3 and #4 when $L_{\mathrm{MDM}}$ = 740 nm.

ASC #4 could be fabricated on a 250 nm silicon-on-insulator wafer. First, the alignment marks are defined, and the dielectric waveguides are placed at specific locations relative to the alignment marks, followed by etching of the silicon layer in the areas where the silver layer is defined. A platinum layer is then deposited on top of the silver layer before the FIB is used to define the air-slot waveguides.
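The FP picture above also admits a quick consistency check. Successive transmission peaks of a cavity of length $L_{\mathrm{MDM}}$ should be spaced by $\Delta L = \lambda/(2 n_{\mathrm{eff}})$; the paper does not state the effective index of the slot mode, so the sketch below back-infers it from the two reported peak lengths (220 nm and 740 nm). This is our illustrative estimate, not a value taken from the paper.

```python
# Fabry-Perot reading of the TCE oscillations versus L_MDM (Figure 2(c)).
# Successive transmission peaks of a cavity of length L are spaced by
# delta_L = wavelength / (2 * n_eff). The paper does not give n_eff, so it
# is inferred here from the two reported peak positions.
wavelength = 1550.0                      # operating wavelength (nm)
L_peak1, L_peak2 = 220.0, 740.0          # first and second TCE peaks (nm)
delta_L = L_peak2 - L_peak1              # 520 nm peak spacing
n_eff = wavelength / (2.0 * delta_L)     # ~1.49 for the 40 nm air slot
print(f"inferred effective index n_eff = {n_eff:.2f}")
print(f"next peak expected near L_MDM = {L_peak2 + delta_L:.0f} nm")
```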
## 3. Spectrum Analysis

The spectra of the proposed ASCs #3 and #4 are shown in Figure 3. The wavelength was varied from 400 nm to 2400 nm in steps of 25 nm, and at each step the TCE into the output CDW was measured. The length of the MDM plasmonic waveguide, $L_{\mathrm{MDM}}$, was chosen as 740 nm, which corresponds to the second peak in Figure 2(c). The spectra of butt coupling with and without the $W_{\mathrm{Si}}$ width expansion (i.e., $L_{\mathrm{ASC}}$ = 0 nm) are also included in Figure 3 to show that using the ASC increases the TCE around the communication wavelength of 1550 nm for both proposed couplers. Comparing the spectra of ASCs #3 and #4, we found that tapering the CDW not only resulted in a higher TCE into the output CDW but also shifted the spectral response by about 100 nm to the right.

Figure 3: The spectra of the proposed ASCs #3 and #4, in addition to those of butt coupling with and without the $W_{\mathrm{Si}}$ width expansion.

To illustrate the use of the proposed ASC #4 with the $W_{\mathrm{Si}}$ width expansion in sensing applications, the refractive index of the slot waveguide, $n_{\mathrm{Slot}}$ (i.e., of the dielectric in the MDM plasmonic waveguide and in the ASC), was varied from 1 (air) to 3.5 (silicon) in steps of 0.5 while keeping $L_{\mathrm{MDM}}$ at 740 nm (as shown in Figure 4). As $n_{\mathrm{Slot}}$ increases, the TCE decreases and the shape of the spectrum changes, as a result of the poor coupling from the CDW into the slot waveguide and of the oscillations in the slot waveguide, which behaves like an FP cavity. The TCE is zero from about 950 nm to about 1150 nm for all values of $n_{\mathrm{Slot}}$.

Figure 4: The spectrum of ASC #4 for different refractive indices of the slot waveguide, $n_{\mathrm{Slot}}$.

## 4. Analysis of the Sensitivity of the Design to Different Fabrication Challenges

We investigated the effect of changing various design parameters on the spectral response of our proposed ASC #4. The investigated parameters are shown in Figure 5(a): the length of the air-slot coupler inside the silicon waveguide, $L_{\mathrm{ASC}}$; the length of the MDM plasmonic waveguide, $L_{\mathrm{MDM}}$; the width of the silicon waveguide, $W_{\mathrm{Si}}$; the width of the slot waveguide, $W_{\mathrm{Slot}}$; and the misalignment, $S$, of the MDM plasmonic waveguide with respect to the center of the silicon waveguide. We changed one parameter at a time and studied its effect on the spectral response. The optimum values used for $L_{\mathrm{ASC}}$, $L_{\mathrm{MDM}}$, $W_{\mathrm{Si}}$, $W_{\mathrm{Slot}}$, and $S$ were 36 nm, 740 nm, 360 nm, 40 nm, and 0 nm, respectively. We found that changing $L_{\mathrm{ASC}}$ from 0 nm (i.e., butt coupling) to 400 nm narrowed the spectral response, mainly because of the FP cavity-like structure; see Figure 5(b). The highest TCE with the largest spectral width occurred when $L_{\mathrm{ASC}}$ = 36 nm. Three lengths of $L_{\mathrm{MDM}}$ were investigated, 220 nm, 560 nm, and 740 nm, which represent the first peak, the first valley, and the second peak in Figure 2(c), respectively. We found that the spectral shape hardly changes with $L_{\mathrm{MDM}}$, apart from the reduction in the TCE due to the metallic interactions; see Figure 5(c). We also found that the cutoff wavelength can be controlled by changing $W_{\mathrm{Si}}$ (see Figure 5(d)) and $W_{\mathrm{Slot}}$ (see Figure 5(e)). Changing $W_{\mathrm{Si}}$ from 360 nm to 440 nm shifted the cutoff wavelength from about 1100 nm to 1300 nm, whereas changing $W_{\mathrm{Slot}}$ from 40 nm to 120 nm shifted the cutoff wavelength from about 1100 nm to 1000 nm.
Finally, we found that changing $S$ from 0 nm to 80 nm narrowed the spectral response and reduced the TCE, owing to the reduced overlap between the mode supported by the slot waveguide and that supported by the silicon waveguide; see Figure 5(f). This analysis shows the impact of the key design parameters on the spectral response of the device and indicates that using the ASC greatly reduces the need for a high-precision fabrication process.

Figure 5: (a) A schematic of the proposed ASC #4 showing the design parameters. (b-f) Dependence of the spectral response of ASC #4 on the length of the air-slot coupler $L_{\mathrm{ASC}}$, the length of the MDM plasmonic waveguide $L_{\mathrm{MDM}}$, the width of the silicon waveguide $W_{\mathrm{Si}}$, the width of the slot waveguide $W_{\mathrm{Slot}}$, and the misalignment $S$ between the air-slot waveguide and the center of the silicon waveguide, respectively.

## 5. Conclusions

We numerically showed that a very short ASC, 36 nm long and 40 nm wide, can be used to couple light from a 300 nm-wide silicon waveguide into a 40 nm-wide MDM plasmonic waveguide. When the key design parameters were changed (i.e., the width of the silicon waveguide; the length of the MDM plasmonic waveguide; and the length, width, and position of the ASC), our results indicated that the proposed ASC greatly reduces the need for a high-precision fabrication process. Moreover, the spectrum of the proposed ASC covers a broad frequency range around the communication wavelength of 1550 nm.

---
*Source: 1025374-2021-12-14.xml*
1025374-2021-12-14_1025374-2021-12-14.md
15,296
Theoretical Investigation of an Air-Slot Mode-Size Matcher between Dielectric and MDM Plasmonic Waveguides
Rami A. Wahsheh
International Journal of Optics (2021)
Physical Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2021/1025374
1025374-2021-12-14.xml
--- ## Abstract Hybrid integration of dielectric and plasmonic waveguides is necessary to reduce the propagation losses due to the metallic interactions and support of nanofabrication of plasmonic devices that deal with large data transfer. In this paper, we propose a direct yet efficient, very short air-slot coupler (ASC) of a length of 36 nm to increase the coupling efficiency between a silicon waveguide and a silver-air-silver plasmonic waveguide. Our numerical simulation results show that having the ASC at the interface makes the fabrication process much easier and ensures that light couples from a dielectric waveguide into and out of a plasmonic waveguide. The proposed coupler works over a broad frequency range achieving a coupling efficiency of 86% from a dielectric waveguide into a metal-dielectric-metal (MDM) plasmonic waveguide and 68% from a dielectric waveguide to an MDM plasmonic waveguide and back into another dielectric waveguide. In addition, we show that even if there are no high-precision fabrication techniques, light couples from a conventional dielectric waveguide (CDW) into an MDM plasmonic waveguide as long as there is an overlap between the CDW and ASC, which reduces the fabrication process tremendously. Our proposed coupler has an impact on the miniaturization of ultracompact nanoplasmonic devices. --- ## Body ## 1. Introduction Efficient coupling of light into a metal-dielectric-metal (MDM) waveguide from a conventional dielectric waveguide (CDW) is the future of the on-chip applications of the plasmonic devices such as splitters [1, 2], Mach–Zehnder interferometers [3, 4], reflectors [5], wavelength demultiplexers [6], circulators [7], filters [8], and all-optical switching [9]. Other methods of controlling the flow of light in subwavelength regime involve using phase-change materials (PCMs) such as germanium-antimony-telluride (Ge2Sb2Te5) that achieve fast switching speeds by controlling their index of refraction. In [10, 11], the authors experimentally demonstrated the integration of a PCM with hybrid metal-dielectric metasurfaces to control the amplitude, phase, and polarization of the incident optical wavefront. In our research paper, the low-loss CDWs are used to couple light into and out of the plasmonic waveguides that have large propagation losses due to their large metallic interactions. Several different structures have been proposed to achieve mode matching between a CDW and an MDM plasmonic waveguide [12–20]. These solutions are not practical to fabricate or too long for ultracompact integrated circuits. One solution in the literature that has attracted a lot of attention is proposed using a very compact air-gap coupler (AGC) at the interface between a MDM plasmonic waveguide and CDW [21]. The proposed AGC provided a large fabrication tolerance in addition to high transmission coupling efficiency (TCE) into the output dielectric waveguide. Another solution that attracted a lot of attention, because of its simple and compact design, is the air-slot coupler (ASC) that we proposed in [22]. The proposed ASC was fabricated between a 460 nm-wide silicon waveguide and 80 nm-wide gold-air-gold plasmonic waveguide. The TCE was about 40% when the length of the MDM waveguide was 500 nm. The proposed ASC couples light from the CDW into the ASC waveguide before it is coupled into the MDM plasmonic waveguide. The air-slot waveguide that is located inside the CDW at the interface with the MDM plasmonic waveguide is the ASC. 
Having the ASC at the interface with the CDW ensures that light couples from the CDW into and out of the MDM plasmonic waveguide. In this paper, we show the design steps and numerical results of a very compact ASC which is smaller than that reported in our previous work in [22], has better TCE into the output CDW, and requires the use of less-precision fabrication techniques. Our proposed ASC is between a 300 nm-wide silicon waveguide and a 40 nm-wide silver-air-silver plasmonic waveguide. In addition, we show the sensitivity of our designs to different fabrication challenges.The proposed ASCs were designed and analyzed using a two-dimensional (2D) finite-difference time-domain method. We used a uniform mesh size of 1 nm to accurately capture the changes of the field at the interface of the CDW, ASC, and MDM plasmonic waveguides. The dielectric material of the CDW is silicon, and that of the metal is silver. The TCE in our analysis is calculated by normalizing the output power measured at the output CDW with respect to the input power of the launched light, while the CE in our analysis is calculated by normalizing the power measured inside the MDM plasmonic waveguide with respect to the input power of the launched light. The propagation losses of the MDM plasmonic waveguides are included in our numerical results, while those of the CDWs are not included to show the effect of using the ASC on the CE.The remainder of this paper is organized as follows: in Section2, we explain the designs and results of the ASC. We also show how increasing the width of the CDW can increase the CE into the MDM plasmonic waveguide. In Section 3, we compare the spectrum results of the proposed ASCs. In Section 4, we show how our proposed designs increase the fabrication tolerance which is needed when aligning the dielectric waveguide to the plasmonic waveguide. Finally, in Section 5, we provide conclusions. ## 2. Air-Slot Coupler Design, Analysis, and Fabrication Figure1(a) shows the schematics of our proposed ASC #1. The ASC is located inside the CDW at the interface between a 300 nm-wide CDW and a 40 nm-wide MDM plasmonic waveguide. The dependence of the CE on the length of the ASC, LASC, is shown in Figure 1(b). As LASC increases, the CE oscillates and light continues to couple from the CDW into the MDM plasmonic waveguide. The CE increased from 68% to 80% as LASC increased from zero (i.e., butt coupling) to 36 nm, respectively. As shown in Figure 1(b), almost zero back reflection resulted when LASC = 36 nm which means that the ASC reduced the mode-size mismatch between the size of the mode in the CDW and that in the slot waveguide (i.e., the ASC and the MDM plasmonic waveguide). As long as there is an overlap between the slot waveguide and the CDW, light continues to couple into the MDM plasmonic waveguide. This reduces the fabrication complexity especially when using the focused ion beam (FIB) to define both ASC and MDM plasmonic waveguide.Figure 1 The schematics of the proposed ASCs#1 and #2 that show how light couples into the MDM plasmonic waveguide: (a) ASC #1 without the dielectric width expansion, (b) coupling efficiency as a function of the length of the ASC, LASC, for ASC #1, (c) ASC #2 with the dielectric width expansion, and (d) coupling efficiency as a function of the width of the CDW, WSi, for ASC #2. 
(a)(b)(c)(d)Moreover, we found that the CE of ASC#1 can be further improved by increasing the width of the CDW, WSi, over a length of LSi before it is connected to the MDM plasmonic waveguide (the new design is called ASC #2; see Figure 1(c)). As shown in Figure 1(d), the CE of ASC #2 increased from 80% to about 86% when WSi increased from 300 nm to 360 nm over a length of LSi = 360 nm. Even though the back reflection of ASC #2 increased slightly to 0.025% compared to 0% in ASC #1, expanding the width of the CDW acted as a fine-tuning mode size to match the mode size inside the CDW to that in the slot waveguide.To study the TCE from a 300 nm-wide CDW into a 40 nm-wide MDM plasmonic waveguide and back into a 300 nm-wide CDW, the MDM plasmonic waveguide with the ASC is embedded between two CDWs. ASC#1 is connected back to back with another ASC #1 (the new design is called ASC #3; see Figure 2(a)), and ASC #2 is connected back to back with another ASC #2 (the new design is called ASC #4; see Figure 2(b)). In both designs, ASCs #3 and #4, the ASC of a length of 36 nm is placed at each end of the MDM plasmonic waveguide. In order to find the optimum design dimensions, we compared the dependence of the TCE of ASC #3 with that of ASC #4 as a function of the length of the MDM waveguide, LMDM; see Figure 2(c). As shown in Figure 2(c), the oscillations in the measured TCE occurred due to the slot waveguide behaving as a Fabry–Perot (FP) cavity-like structure. As LMDM increases, the TCE decreases due to the metallic propagation losses in the MDM plasmonic waveguide. Less oscillations with higher TCE are achieved by using ASC #4. Both designs, ASCs #3 and #4, achieved higher TCE than that reported in [22]. The field distributions of the coupled light at 1550 nm for ASCs #3 and #4 are shown in Figures 2(d) and 2(e) when LMDM = 740 nm.Figure 2 The schematics of the proposed ASCs#3 and #4 that show how light couples into and out of the MDM plasmonic waveguide: (a) ASC #3 without the dielectric width expansion that resulted from connecting two #1 ASCs back to back, (b) ASC #4 with the dielectric width expansion that resulted from connecting two #2 ASCs back to back, (c) transmission coupling efficiency as a function of the length of the MDM plasmonic waveguide, LMDM, for both ASCs #3 and #4, and (d, e) field distributions of the coupled light at 1550 nm for ASCs #3 and #4 when LMDM = 740 nm. (a)(b)(c)(d)(e)ASC#4 could be fabricated on a 250 nm silicon-on-insulator wafer. First, the alignment marks are defined, and then the dielectric waveguides are placed at specific locations from the alignment marks, followed by etching the silicon layer at the areas where the silver layer is defined. Then, a platinum layer is deposited on top of the silver layer before using the FIB to define the air-slot waveguides. ## 3. Spectrum Analysis The spectrum of the proposed ASCs#3 and #4 is shown in Figure 3. The wavelength varied from 400 nm to 2400 nm in steps of 25 nm. At each step, the TCE into the output CDW was measured. The length of the MDM plasmonic waveguide, LMDM, was chosen as 740 nm which represents the second peak in Figure 2(c). The spectrum of the butt coupling with and without applying the WSi width expansion (i.e., LASC = 0 nm) was also added to Figure 3 to show that using the ASC increases the TCE around the communication wavelength of 1550 nm for the two proposed couplers. 
In comparing the spectra of ASCs #3 and #4, we found that tapering the CDW not only resulted in a higher TCE into the output CDW but also shifted the spectral response by about 100 nm toward longer wavelengths.

Figure 3 The spectra of the proposed ASCs #3 and #4 in addition to those of butt coupling with and without the WSi width expansion.

To explore the use of our proposed ASC #4 with the WSi width expansion in sensing applications, the refractive index of the slot waveguide, nSlot (i.e., the dielectric in the MDM plasmonic waveguide in addition to that in the ASC), was changed from 1 (air) to 3.5 (silicon) in steps of 0.5 while keeping LMDM at 740 nm (as shown in Figure 4). As nSlot increases, the TCE decreases and the shape of the spectrum changes, as a result of the poor coupling from the CDW into the slot waveguide in addition to the oscillations in the slot waveguide, which behaves like an FP cavity-like structure. The TCE is zero from about 950 nm to about 1150 nm for all values of nSlot.

Figure 4 The spectrum of ASC #4 for different refractive indices of the slot waveguide, nSlot.

## 4. Analysis of the Sensitivity of the Design to Different Fabrication Challenges

We investigated the effect of changing various parameters of the design on the spectral response of our proposed ASC #4. The investigated parameters are shown in Figure 5(a): the length of the air-slot coupler inside the silicon waveguide, LASC; the length of the MDM plasmonic waveguide, LMDM; the width of the silicon waveguide, WSi; the width of the slot waveguide, WSlot; and the misalignment, S, of the MDM plasmonic waveguide with respect to the center of the silicon waveguide. We changed one parameter at a time and studied its effect on the spectral response. The optimum values used for LASC, LMDM, WSi, WSlot, and S were 36 nm, 740 nm, 360 nm, 40 nm, and 0 nm, respectively. We found that changing LASC from 0 nm (i.e., butt coupling) to 400 nm resulted in a reduction in the width of the spectral response, mainly due to the FP cavity-like structure; see Figure 5(b). The highest TCE with the largest spectral width occurred when LASC = 36 nm. Three lengths of LMDM were investigated, 220 nm, 560 nm, and 740 nm, which correspond to the first peak, the first valley, and the second peak in Figure 2(c), respectively. We found that almost no change occurred in the spectral shape when LMDM was changed, except for the reduction in the TCE due to the metallic propagation losses; see Figure 5(c). We also found that the cutoff wavelength can be controlled by changing WSi (see Figure 5(d)) and WSlot (see Figure 5(e)). Changing WSi from 360 nm to 440 nm shifted the cutoff wavelength from about 1100 nm to 1300 nm, whereas changing WSlot from 40 nm to 120 nm shifted the cutoff wavelength from about 1100 nm to 1000 nm. Finally, we found that changing S from 0 nm to 80 nm resulted in a reduction in the width of the spectral response in addition to a reduction in the TCE, owing to the reduced overlap between the mode supported by the slot waveguide and that supported by the silicon waveguide; see Figure 5(f).
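The one-parameter-at-a-time study above is easy to script around the quoted optimum. The following is a minimal sketch under stated assumptions: `simulate_spectrum` is a hypothetical hook for the FDTD solver, and the sweep values simply echo the ones discussed in the text.

```python
# Hedged sketch of a one-at-a-time fabrication-tolerance sweep around the
# optimum geometry quoted in the text (all dimensions in nm).
from copy import deepcopy

OPTIMUM = {"LASC": 36, "LMDM": 740, "WSi": 360, "WSlot": 40, "S": 0}
SWEEPS = {
    "LASC": [0, 36, 200, 400],
    "LMDM": [220, 560, 740],
    "WSi": [360, 400, 440],
    "WSlot": [40, 80, 120],
    "S": [0, 40, 80],
}

def simulate_spectrum(geometry: dict) -> list:
    """Hypothetical solver hook; returns a fake flat spectrum here."""
    return [0.0] * 81  # one value per 25 nm step from 400 to 2400 nm

results = {}
for name, values in SWEEPS.items():
    for v in values:
        geometry = deepcopy(OPTIMUM)
        geometry[name] = v            # vary exactly one parameter
        results[(name, v)] = simulate_spectrum(geometry)
print(f"{len(results)} spectra computed")  # 4+3+3+3+3 = 16
```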
This analysis shows the impact of the key design parameters on the spectral response of the device and indicates that using the ASC greatly reduces the need for high-precision fabrication processes.

Figure 5 (a) A schematic of our proposed ASC #4 that shows the design parameters and (b–f) dependence of the spectral response of ASC #4 on the length of the air-slot coupler LASC, the length of the MDM plasmonic waveguide LMDM, the width of the silicon waveguide WSi, the width of the slot waveguide WSlot, and the misalignment S between the position of the air-slot waveguide and the center of the silicon waveguide, respectively.

## 5. Conclusions

We numerically showed that a very short ASC, 36 nm long and 40 nm wide, can be used to couple light from a 300 nm-wide silicon waveguide into a 40 nm-wide MDM plasmonic waveguide. When the key design parameters were changed (i.e., the width of the silicon waveguide; the length of the MDM plasmonic waveguide; and the length, width, and position of the ASC), our results indicated that the proposed ASC greatly reduces the need for high-precision fabrication processes. Moreover, the proposed ASC operates over a broad wavelength range around the communication wavelength of 1550 nm.

---
# Free Radical-Scavenging, Anti-Inflammatory, and Antibacterial Activities of Water and Ethanol Extracts Prepared from Compressional-Puffing Pretreated Mango (Mangifera indica L.) Peels

**Authors:** Chun-Yung Huang; Chia-Hung Kuo; Chien-Hui Wu; Ai-Wei Kuan; Hui-Ru Guo; Yu-Hua Lin; Po-Kai Wang
**Journal:** Journal of Food Quality (2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1025387

---

## Abstract

During the processing of mango, a huge amount of peel is generated, which is environmentally problematic. In the present study, a compressional-puffing process was adopted to pretreat the peels of various mango cultivars, and then the bioactive compounds of mango peels were extracted by water or ethanol. The phenolic compound compositions as well as the free radical-scavenging, anti-inflammatory, and antibacterial activities of water extract (WE) and ethanol extract (EE) from nonpuffed (NP) and compressional-puffed (CP) mango peels were further evaluated. It was found that compressional-puffing could increase the yield of extracts obtained from most mango varieties and could augment the polyphenol content of extracts from the Jinhwang and Tainoung number 1 (TN1) cultivars. The WE and EE from TN1 exhibited the highest polyphenol content and the greatest free radical-scavenging activities among the mango cultivars tested. Seven phenolic compounds (gallic acid, pyrogallol, chlorogenic acid, p-hydroxybenzoic acid, p-coumaric acid, ECG, and CG) were detected in CPWE (compressional-puffed water extract) and CPEE (compressional-puffed ethanol extract) from TN1, and the antioxidant stability of both CPWE and CPEE was higher than that of vitamin C. Further biological experiments revealed that CPEE from TN1 possessed the strongest anti-inflammatory and antibacterial activities, and thus it is recommended as a multibioactive agent, which may have applications in the food, cosmetic, and nutraceutical industries.

---

## Body

## 1. Introduction

Mango (Mangifera indica L.) is recognized as one of the most economically productive fruits in tropical and subtropical areas throughout the globe. Mango has excellent nutritional value and health-promoting properties. A variety of studies have been performed showing high concentrations of antioxidants, including ascorbic acid, carotenoids, and phenolic compounds, in mango [1]. Mango fruit is the main edible part and is usually processed into various products such as puree, nectar, jam, leather, pickles, chutney, frozen mango, dehydrated products, and canned slices. During the processing of mango, a huge amount of peel is generated, which constitutes approximately 15–20% of the mango fruit [2]. Mango peel is a waste by-product, and its disposal may have a substantial impact on the environment. Previous studies reported that mango peel contains a variety of valuable compounds such as polyphenols, carotenoids, enzymes, and dietary fiber [2]. Extracts from mango peel also exhibit antioxidant activity [3], anti-inflammatory activity [4], protection against membrane protein degradation and morphological changes in rat erythrocytes caused by hydrogen peroxide (H2O2) [5], antibacterial activity [6], and anticancer activity [7].
Hence, the utilization of mango peels may be an economical means of ameliorating the problem of waste disposal from mango production factories, as well as converting a by-product into material for food, cosmetic, and pharmaceutical industrial usages.

Free radicals, including the superoxide anion radical (O2∙−), hydroperoxyl radical (HO2∙), hydroxyl radical (HO∙), peroxyl radical (ROO∙), and alkoxyl radical (RO∙), are defined as molecules or atoms with one or more unpaired electrons and are often involved in human diseases [8]. Many studies have shown that free radicals in living organisms cause oxidative damage to molecules such as lipids, proteins, and nucleic acids, and that this damage is implicated in many diseases such as cancer, atherosclerosis, respiratory ailments, and even neuronal death [9]. Antioxidants are substances that delay or prevent the oxidation of cellular oxidisable substrates. They exert their effect by scavenging reactive oxygen species (ROS) or preventing the generation of ROS [10]. Synthetic antioxidant compounds such as butylated hydroxytoluene (BHT) and butylated hydroxyanisole (BHA) have potent antioxidant activity and are commonly used in processed foods. However, their use has been restricted because of their carcinogenicity and other toxic properties [11, 12]. Thus, in recent years, there has been considerable interest in natural antioxidants derived from biological materials because of their presumed safety and potential nutritional and therapeutic value.

A large number of publications have suggested that fruit polyphenols have immunomodulatory and anti-inflammatory properties, based on in vitro and animal studies [13]. Inflammation is a complicated physiological phenomenon that occurs when the immune system is activated to counter threats such as injury, infection, and stress. Macrophages play a unique role in the immune system because they not only elicit an innate immune response but also act as effector cells in inflammation and infection. When macrophages encounter bacterial endotoxin lipopolysaccharide (LPS), they can be stimulated to generate a variety of inflammatory mediators, such as nitric oxide (NO), tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), IL-6, prostaglandin E2 (PGE2), and adhesion molecules, to help eradicate the bacterial assault [14]. Generally, substances that inhibit the expression and activity of enzymes involved in the generation of inflammatory mediators such as NO (e.g., inducible NO synthase (iNOS)) in the mouse macrophage-like cell line RAW 264.7 are considered to possess immunomodulatory activity [15]. Since a variety of polyphenols exist in mango peels, further research on the use of mango peel extracts as immunomodulatory or anti-inflammatory agents is warranted.

Antibacterial agents are synthetic or natural compounds that interfere with the growth and division of bacteria. A number of studies have shown that pathogenic microorganisms in humans and various animal species have developed resistance to drugs. This drug resistance is due to the random or otherwise inappropriate use of commercial antimicrobial agents. As such, there is an urgent need for new antibacterial agents. In addition, synthetic antibiotics are known to induce side effects such as the emergence of resistant bacteria, skin irritation, organ damage, and immunohypersensitivity [16].
Accordingly, many studies have attempted to develop new agents with high antibacterial activity but with fewer or possibly even no side effects. There is a particular demand for antibacterial compounds from natural resources [17]. Plants produce a range of antimicrobial compounds in various parts such as bark, stalk, leaves, roots, flowers, pods, seeds, stems, hull, latex, and fruit rind [6]. Fruit peel is the outer covering of a fruit, which functions as a physical barrier. It also serves as a chemical barrier by virtue of the presence of many antimicrobial constituents, which protect the fruit from exposure to external pathogens or other factors that may tend to decrease the quality of the fruit. Therefore, fruit peels are good sources of natural antibacterial agents.

Bioactive compounds in mango peel are generally extracted via the following methods: extraction with 80% ethanol by sonication for 3 days at room temperature [18]; extraction performed three times with methanol, for 3 h each time [19]; extraction with 95% ethanol three times, for 72 h each time [20]; extraction with acetone or ethyl acetate for up to 20 h [21–23]; microwave-assisted extraction [24, 25]; or extraction with supercritical CO2 followed by pressurized ethanol [26]. However, these methods generally involve the use of a large volume of solvents, require a long extraction time, consume a lot of energy, are costly, and are sometimes not eco-friendly. The present study builds upon the research reported in our previous investigation [27]. In brief, we previously developed a compressional-puffing process that has been successfully implemented to increase the extraction yield of fucoidan from brown seaweed [27, 28] and to augment the extraction yields of total phenolics and total flavonoids from pine needles [29, 30]. Compressional-puffing can be utilized as a pretreatment step to disrupt the cellular structure of samples, thereby better enabling the release of bioactive compounds by solvent extraction [27]. In this study, compressional-puffing was utilized for pretreatment of mango peels, and the water extract (WE) and ethanol extract (EE) obtained from nonpuffed (NP) and compressional-puffed (CP) mango peels were compared. The phenolic compound composition and the free radical-scavenging, anti-inflammatory, and antibacterial activities of WE and EE from mango peels were also evaluated. To the best of the authors' knowledge, this is the first study to elucidate the free radical-scavenging, anti-inflammatory, and antibacterial activities of WE and EE extracted from compressional-puffed mango peels. The recovered WE and EE are expected to possess multifunctional activities providing a wide range of benefits. The utilization of mango peel will also help to minimize the generation of waste worldwide.

## 2. Materials and Methods

### 2.1. Materials

Folin-Ciocalteu's phenol reagent, gallic acid, protocatechuic acid, chlorogenic acid, p-hydroxybenzoic acid, pyrogallol, caffeic acid, mangiferin, epicatechin, p-coumaric acid, ferulic acid, epicatechin gallate (ECG), catechin gallate (CG), ellagic acid, rutin, quercetin, kaempferol, homogentisic acid, tannic acid, vanillic acid, 2,2-diphenyl-1-picrylhydrazyl (DPPH), sodium nitrite, LPS, dimethyl sulfoxide (DMSO), and 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid) diammonium salt (ABTS) were purchased from Sigma-Aldrich (St. Louis, MO, USA).
3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) was purchased from Calbiochem (San Diego, CA, USA). Dulbecco's modified Eagle's medium (DMEM), trypsin/EDTA, fetal bovine serum (FBS), penicillin, and streptomycin were purchased from Gibco Laboratories (Grand Island, NY, USA). Methanol, acetic acid, and potassium persulfate were obtained from Nihon Shiyaku Industries, Ltd. (Tokyo, Japan). All other reagents, if not declared otherwise, were purchased from Sigma-Aldrich (St. Louis, MO, USA) and were of analytical grade.

### 2.2. Mango Fruits and Peels

Six mango cultivars, produced in Tainan City, Taiwan, were utilized in this study. The varieties, namely, Jinhwang, Tainoung number 1 (TN1), Irwin, Yuwen, Haden, and Tu (Figure 1), were collected from a local grocery market in Xinhua District, Tainan City, Taiwan. The fruits were used after they had fully ripened. Peels were separated manually from the six varieties of mango fruits, oven-dried, and stored in aluminum bags at 4°C until use.

Figure 1 Appearance of various Taiwanese mango varieties. (1) Haden; (2) Tainoung number 1; (3) Tu; (4) Jinhwang; (5) Yuwen; (6) Irwin.

### 2.3. Compressional-Puffing Procedure

A compressional-puffing method [27, 28, 31] with minor modification was adopted to pretreat the mango peels. In brief, the dried peel samples were crumbled and sieved using a 20-mesh screen. The portion retained by the screen was collected and then compressional-puffed using a continuous compressional-puffing machine with the temperature set at 220°C. The corresponding mechanical compression pressure and steam pressure levels inside the chamber are listed in Table 1. After compressional-puffing, the peel samples were ground into fine particles and stored at 4°C for further extraction experiments.

Table 1 Process variables for compressional-puffing and extraction, and extraction yields for various Taiwanese mango peel extracts.

| Operational variables | | NPWE | CPWE | NPEE | CPEE |
|---|---|---|---|---|---|
| Mechanical compression | Pressure (kg/cm²) | 0 | 5 | 0 | 5 |
| | Number of compression times | 0 | 3 | 0 | 3 |
| Puffing | Temperature (°C) | 0 | 220 | 0 | 220 |
| | Pressure (kg/cm²) | 0 | 11 | 0 | 11 |
| | Time (sec) | 0 | 10 | 0 | 10 |
| Pretreatment | Solvent | 95% EtOH | 95% EtOH | NA* | NA |
| | Temperature (°C) | 25 | 25 | NA | NA |
| | Time (h) | 4 | 4 | NA | NA |
| Extraction | Solvent | ddH2O | ddH2O | 95% EtOH | 95% EtOH |
| | Temperature (°C) | 70 | 70 | 25 | 25 |
| | Time (h) | 1 | 1 | 4 | 4 |

| Extraction yield of extract (%)** | NPWE | CPWE | NPEE | CPEE |
|---|---|---|---|---|
| Jinhwang cultivar | 33.5 ± 0.4 cBC*** | 36.6 ± 2.3 bC | 23.4 ± 1.2 bA | 30.2 ± 1.0 cB |
| Tainoung number 1 cultivar | 29.5 ± 1.2 bA | 34.8 ± 1.0 bB | 29.2 ± 0.7 dA | 33.7 ± 0.9 dB |
| Irwin cultivar | 30.9 ± 0.9 bcB | 40.0 ± 2.2 bC | 22.6 ± 0.3 bA | 29.6 ± 0.4 cB |
| Yuwen cultivar | 31.2 ± 1.4 bcB | 37.0 ± 1.8 bC | 26.3 ± 0.9 cA | 37.4 ± 1.0 eC |
| Haden cultivar | 25.5 ± 1.5 aB | 28.6 ± 2.7 aB | 18.8 ± 0.8 aA | 20.4 ± 0.5 aA |
| Tu cultivar | 25.9 ± 0.3 aB | 29.1 ± 0.1 aD | 22.9 ± 0.5 bA | 27.0 ± 0.1 bC |

NA*: not applicable. **Extraction yield of extract (%) = (g solid extract, dry basis / g mango peel sample, dry basis) × 100. ***Values are mean ± SD (n = 3); values in the same column with different letters (a, b, c, d, e) and in the same row with different letters (A, B, C, D) are significantly different (p < 0.05).

### 2.4. Extraction Procedure

We followed the methods of Yang et al. (2017) [28]. Briefly, the nonpuffed and compressional-puffed peel samples were pulverized and sieved using a 20-mesh screen.
The portion that passed through the screen was collected and extracted with 95% ethanol (w/v = 1:10) for 4 h at 25°C with shaking. The resultant solution was then centrifuged at 9,170 ×g for 10 min and the supernatant was collected. NPEE (nonpuffed ethanol extract) and CPEE (compressional-puffed ethanol extract) were obtained after oven-drying the supernatant at 40°C. In addition, the precipitates remaining after the 95% ethanol extraction were further extracted with double-distilled water (w/v = 1:10) for 1 h at 70°C with shaking. The mixture was then centrifuged at 9,170 ×g for 10 min and the supernatant was collected. NPWE (nonpuffed water extract) and CPWE (compressional-puffed water extract) were obtained after oven-drying the supernatant at 50°C. All dried extracts were milled to fine particles and stored at 4°C for further analyses. The combined compressional-puffing pretreatment and extraction process is depicted in detail in Figure 2. The extraction yield was calculated using the following equation:

extraction yield (%) = (g_A / g_B) × 100, (1)

where g_A represents the dry weight of the extract and g_B is the dry weight of the mango peel sample.

Figure 2 Flowchart of the compressional-puffing process and extraction methods for NPEE, CPEE, NPWE, and CPWE.

### 2.5. Determination of Polyphenol Content

Polyphenol content was estimated by the Folin-Ciocalteu colorimetric method based on the procedure of Singleton and Rossi (1965) [32], using gallic acid as the standard.

### 2.6. High-Performance Liquid Chromatography (HPLC) Analysis of Total Phenolic Compound Composition

The separation of total phenolic compounds was performed by the method of Schieber et al. (2000) [33] using a Shimadzu HPLC system (Shimadzu, Kyoto, Japan) equipped with a UV-vis detector. A reversed-phase Inspire C18 column (250 mm × 4.6 mm, id 5 μm) purchased from Dikma Technologies (USA) was used for all chromatographic separations. The column was operated at 25°C. The mobile phase consisted of 2% (v/v) acetic acid in water (eluent A) and 0.5% acetic acid in water and acetonitrile (50:50, v/v; eluent B). The gradient program was as follows: 20–55% B (50 min), 55–100% B (10 min), and 100–20% B (5 min). The injection volume of all samples was 20 μl. Elution was monitored at 280 nm at a flow rate of 1 ml/min. Gallic acid, pyrogallol, protocatechuic acid, chlorogenic acid, p-hydroxybenzoic acid, caffeic acid, mangiferin, epicatechin, p-coumaric acid, ferulic acid, ECG, CG, ellagic acid, rutin, quercetin, kaempferol, homogentisic acid, tannic acid, and vanillic acid were used as standards for the HPLC analyses.

### 2.7. DPPH Radical-Scavenging Activity

The scavenging activity of the DPPH radical in the samples was determined using the method described previously [28, 34]. In brief, 50 μl of mango peel extract (concentrations ranging from 0 to 300 μg/ml for the Tainoung number 1 and Haden cultivars; 0–600 μg/ml for the Jinhwang and Tu cultivars; and 0–900 μg/ml for the Irwin and Yuwen cultivars) was added to 200 μl of 0.1 mM DPPH solution (in methanol). The mixture was shaken vigorously for 1 min and left to stand for 30 min in the dark at room temperature. After the reaction, the absorbance of all sample solutions was measured at 517 nm using an ELISA reader (PowerWave 340, BioTek Instruments, Winooski, VT, USA).
The radical-scavenging activity was calculated as the percentage inhibition using the following equation:

DPPH radical-scavenging (%) = (1 − A_sample / A_control) × 100, (2)

where A_sample is the absorbance of the methanol solution of DPPH with the tested sample and A_control is the absorbance of the methanol solution of DPPH without the sample.

### 2.8. ABTS Radical Cation-Scavenging Activity

The ABTS radical cation-scavenging assay was performed according to the method described previously [28, 34]. The ABTS∙+ solution was produced by mixing 5 ml of 7 mM ABTS solution with 88 μl of 140 mM potassium persulfate and allowing the mixture to stand in the dark for 16 h at room temperature before use. The ABTS∙+ solution was diluted with 95% ethanol so that its absorbance at 734 nm was adjusted to 0.70 ± 0.05. To determine the scavenging activity, 100 μl of diluted ABTS∙+ solution was mixed with 100 μl of mango peel extract (concentrations ranging from 0 to 100 μg/ml for the Tainoung number 1 and Haden cultivars; 0–300 μg/ml for the Irwin, Yuwen, and Tu cultivars; and 0–500 μg/ml for the Jinhwang cultivar) and the mixture was allowed to react at room temperature for 6 min. After the reaction, the absorbance of all sample solutions was measured at 734 nm using an ELISA reader (PowerWave 340, BioTek Instruments, Winooski, VT, USA). The blank was prepared in the same manner, except that distilled water was used instead of the sample. The scavenging activity of ABTS∙+ was calculated using the following equation:

ABTS radical cation-scavenging (%) = (1 − A_sample / A_control) × 100, (3)

where A_sample is the absorbance of ABTS with the tested sample and A_control is the absorbance of ABTS without the sample.

### 2.9. Cell Line and Culture

The murine macrophage cell line RAW 264.7 was obtained from the Bioresource Collection and Research Center, Food Industry Research and Development Institute (FIRDI, Hsinchu, Taiwan). The cells were grown in DMEM supplemented with 10% FBS and 100 U/ml penicillin-streptomycin solution at 37°C in a humidified chamber with 5% CO2. The medium was changed every two days.

### 2.10. Measurement of Cell Viability

The MTT assay was used to evaluate cell viability. Briefly, RAW 264.7 cells (2 × 10⁵ cells/ml in a 96-well plate) were plated in culture medium and incubated for 24 h at 37°C with 5% CO2 in a humidified atmosphere. The medium was removed and fresh serum-free medium containing different concentrations of mango peel extracts (0 to 25 μg/ml for CPEE of TN1 and CPWE of TN1) was added. After 24 h of incubation at 37°C with 5% CO2, the MTT reagent (0.1 mg/ml) was added. After incubating at 37°C for 4 h, the MTT reagent was removed and DMSO (100 μl) was added to each well and thoroughly mixed by pipetting to dissolve the MTT-formazan crystals. The absorbance was then determined with an ELISA reader (PowerWave 340, BioTek Instruments, Winooski, VT, USA) at a wavelength of 570 nm. The cell viability (%) was calculated using the following equation:

cell viability (%) = (T / C) × 100, (4)

where T is the absorbance of the test sample and C is the absorbance of the control.

### 2.11. Measurement of Nitric Oxide in Culture Media

RAW 264.7 cells (2 × 10⁵ cells/ml) were seeded in a 96-well flat-bottom plate for 24 h at 37°C with 5% CO2.
The culture medium was removed and replaced with fresh medium containing the tested samples at various concentrations prior to challenge with 1 μg/ml of LPS. The nitrite concentration was measured in the culture supernatant after 24 h of coincubation. In brief, 50 μl of the culture supernatant was added to a 96-well plate, 100 μl of Griess reagent was added to each well, and the plate was allowed to stand for 10 min at room temperature. The absorbance at 540 nm was measured using an ELISA reader (PowerWave 340, BioTek Instruments, Winooski, VT, USA), and the quantification of nitrite was standardized against NaNO2 at 0–100 μM concentrations [35].

### 2.12. Zone of Inhibition

Five bacterial strains were used to test the antibacterial activity of the mango peel extracts: three Gram-negative bacteria (Escherichia coli ATCC 11775, Salmonella typhimurium ATCC 13311, and Vibrio parahaemolyticus ATCC 17802) and two Gram-positive bacteria (Staphylococcus aureus ATCC 12600 and Bacillus cereus ATCC 14579), obtained from the Culture Collection and Research Center of the Food Industry Research and Development Institute, Hsinchu, Taiwan. Antibacterial activity was measured using the standard agar disc diffusion method [36]. In brief, E. coli, S. typhimurium, S. aureus, and B. cereus were grown in tryptic soy broth (TSB) medium (Difco Laboratories, Detroit, MI, USA) and V. parahaemolyticus was grown in TSB medium + 3% NaCl for 24 h at 37°C, and 0.1 ml of each bacterial culture at the proper cell density was spread on tryptic soy agar (TSA, Difco Laboratories, Detroit, MI, USA) plate surfaces (3% NaCl was added to the TSA for V. parahaemolyticus). Paper discs (8 mm in diameter) were placed on the agar medium and loaded with 50 μl containing 2 mg of mango peel extract (4%, w/v, in 0.05 M acetate buffer, pH 6.0). Control paper discs were prepared by infusing them with 50 μl of Antibiotic-Antimycotic Solution (containing 10,000 units/ml penicillin, 10 mg/ml streptomycin, and 25 μg/ml amphotericin) (Corning, Corning, NY, USA) or 50 μl of 0.05 M acetate buffer. The plates were incubated at 37°C for 24 h. After 24 h, the antibacterial activity of the extracts against the test bacteria was observed as a growth-free zone of inhibition around the respective disc, and the inhibition diameters were measured.

### 2.13. Statistical Analysis

Experiments were performed at least three times. Values represent the means ± standard deviation (SD). Statistical analyses were done using the Statistical Package for the Social Sciences (SPSS). The results were analyzed using one-way analysis of variance (ANOVA), followed by Duncan's multiple range tests. p < 0.05 was considered statistically significant. Correlation analyses were performed using the square of Pearson's correlation coefficient (R²).
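To make the assay arithmetic of equations (1)–(4) concrete, the sketch below implements them together with an SC50 estimate. This is a minimal sketch, not the authors' code: the function names and example readings are assumptions, and the paper does not state its exact SC50 fitting method, so linear interpolation across the dose-response points is used here as a common convention.

```python
# Hedged sketch of the assay math in equations (1)-(4) plus SC50 estimation.
import numpy as np

def extraction_yield_pct(g_extract_dry: float, g_peel_dry: float) -> float:
    """Eq. (1): extraction yield (%) on a dry-weight basis."""
    return g_extract_dry / g_peel_dry * 100.0

def scavenging_pct(a_sample: float, a_control: float) -> float:
    """Eqs. (2)/(3): DPPH or ABTS radical-scavenging (%)."""
    return (1.0 - a_sample / a_control) * 100.0

def cell_viability_pct(a_test: float, a_control: float) -> float:
    """Eq. (4): MTT cell viability (%)."""
    return a_test / a_control * 100.0

def sc50(concentrations_ug_ml, scavenging_percent) -> float:
    """Concentration scavenging 50% of the radical, by linear interpolation
    over the measured dose-response points (assumed convention)."""
    return float(np.interp(50.0, scavenging_percent, concentrations_ug_ml))

# Example with made-up absorbances for one dilution series:
conc = [0, 25, 50, 100, 200, 300]                    # µg/ml
scav = [scavenging_pct(a, 0.80) for a in
        [0.80, 0.66, 0.55, 0.40, 0.22, 0.12]]        # fake A517 readings
print(f"SC50 ≈ {sc50(conc, scav):.0f} µg/ml")        # -> 100 µg/ml
```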
## 3. Results and Discussion

### 3.1. Effects of Mango Varieties, Compressional-Puffing, and Extraction Methods on Extraction Yields of Peel Extracts

Six varieties of mango fruits, namely, Jinhwang, Tainoung number 1 (TN1), Irwin, Yuwen, Haden, and Tu, were collected from a local grocery market in Xinhua District, Tainan City, Taiwan. Peels were separated manually and oven-dried until the moisture content reached 4–7% (wet basis). The dried peel samples were crumbled and sieved using a 20-mesh screen, and the portion retained by the screen was collected and compressional-puffed according to the technique developed previously [27]. Compressional-puffing applies a mechanical compression force of approximately 5 kg/cm² to the sample three times before puffing, which distinguishes it from the conventional puffing-gun process. The puffing temperature was set at 220°C, and the corresponding pressure level inside the chamber was found to be 11 kg/cm² (Table 1). The NP and CP peel samples were ground and sieved using a 20-mesh screen.
The portion passing through the screen was collected, and the bioactive compounds were then extracted with either ethanol or hot water, as shown in Figure 2. In a preliminary experiment, we extracted a puffed peel sample directly with 70°C hot water and found that the extract, after being dried, formed a stone-like hard mass, which stuck tightly to the inner surfaces of the container and was difficult to dislodge. Thus, direct 70°C hot water extraction was not adopted in the present study. After extraction, four peel extracts, namely, NPWE (nonpuffed water extract), CPWE (compressional-puffed water extract), NPEE (nonpuffed ethanol extract), and CPEE (compressional-puffed ethanol extract), were obtained according to their puffing pretreatments and extraction methods for each mango cultivar (Figure 2). The yields of these extracts are given in Table 1. Comparing the extraction yields among the different mango varieties for these four extracts, we found that the yields were similar across the tested cultivars, except that the Haden and Tu cultivars had relatively lower extraction yields. Thus, the peels of the Jinhwang, TN1, Irwin, and Yuwen cultivars, with their higher extract yields, would have advantages for commercial production. It has been reported that compressional-puffing ruptures the structure of the puffed samples and thereby augments the extraction yield of crude fucoidan from brown algae [27, 28] and increases the extraction yields of total phenolics and total flavonoids from pine needles [29, 30]. In the present study, we also found that compressional-puffing could rupture the structure of mango peel (data not shown) and increase the extraction yields of both CPWE and CPEE as compared to NPWE and NPEE, respectively (Table 1). Therefore, compressional-puffing can also be used effectively on mango peels to facilitate the release of bioactive compounds by simple extraction operations. A comparison of the extraction yields of water and ethanol extraction revealed that water extraction tended to give higher yields, and a higher extract yield is advantageous for commercial production. In addition, previous reports revealed that the composition of mango peel extract is complex; it may contain polyphenols, flavonoids, carotenoids, vitamin E, vitamin C, pectin, unsaturated fatty acids, and other biologically active components that positively influence health [25, 37–39]. Mango peel extract has also exhibited biological functions such as antioxidant properties [25, 39] and inhibition of HeLa human cervical carcinoma cell proliferation [38]. Generally, phenolic compounds are the major bioactive components of mango peels [18], and these have exhibited antioxidant activity and an antiproliferative effect on HeLa cells [25, 37–39]. Thus, the phenolic compound composition of our mango peel extracts and its effects on biological functions warrant further examination. Taken together, peel extracts from the Jinhwang, TN1, Irwin, and Yuwen cultivars had higher extraction yields than those of the Haden and Tu cultivars. Compressional-puffing pretreatment resulted in a worthwhile incremental increase in the extraction yields of mango peel extracts. Water extraction tended to give higher extract yields than ethanol extraction, which would be beneficial for commercial production.
The phenolic compound composition and biological functions of the mango peel extracts require further characterization.

### 3.2. Polyphenol Contents and Free Radical-Scavenging Activities of Peel Extracts from Various Mango Cultivars

Phenolic compounds are reported to be the major bioactive components of mango peels [18]. In the present study, the four peel extracts (NPWE, CPWE, NPEE, and CPEE) from the six mango cultivars were used to determine polyphenol contents by the Folin-Ciocalteu colorimetric method. The results presented in Table 2 show that the peel extracts from the TN1 cultivar possessed the highest amount of total phenolic compounds among the peel extracts tested. It is therefore reasonable to postulate that peel extracts of TN1 may exhibit high biological activities, warranting further investigation. Moreover, a comparison of the polyphenol contents of NPWE and CPWE in all mango cultivars revealed that the polyphenol content of CPWE was higher than that of NPWE (Table 2), indicating that compressional-puffing could increase the polyphenol content of the water extracts in all mango cultivars. However, in the case of the ethanol extracts, only the CPEEs from Jinhwang and TN1 had higher polyphenol contents than the corresponding NPEEs (Table 2). Moreover, for all mango cultivars, the polyphenol contents of the ethanol extracts were higher than those of the water extracts (Table 2), indicating that ethanol extraction was effective for extracting polyphenols. Polyphenols are well known to exhibit antioxidant activity owing to their ability to scavenge free radicals via hydrogen or electron donation and the reactivity of the phenol moiety [40]. Accordingly, the antioxidant capacities of NPWE, CPWE, NPEE, and CPEE of the six mango peels were characterized using DPPH and ABTS radical-scavenging assays. DPPH is a stable free radical and is widely used to evaluate antioxidant activity in a relatively short time compared to other methods [41]. The SC50 values (the concentration of mango peel extract capable of scavenging 50% of the DPPH radical) of the peel extracts (NPWE, CPWE, NPEE, and CPEE) from the six mango cultivars are presented in Table 2. As shown in Table 2, all peel extracts from TN1 exhibited the strongest DPPH radical-scavenging activity among the mango cultivars, and the most potent was CPEE of TN1, with an SC50 value of 41.7 ± 1.3 μg/ml. Kim et al. (2010) reported that the SC50 value of the DPPH radical-scavenging activity of Irwin mango peel ethanol extract was about 40 μg/ml [18], similar to the SC50 value of CPEE of TN1 reported here. A comparison of the DPPH radical-scavenging activities of the CPWE group with those of the NPWE group revealed that compressional-puffing could increase the DPPH radical-scavenging activities of the peel extracts (Table 2). Moreover, the DPPH radical-scavenging activity of all EE groups (NPEE and CPEE) was greater than that of the WE groups (NPWE and CPWE), which appeared to be positively correlated with the higher polyphenol content of the EE groups shown in Table 2. Regarding the scavenging of ABTS∙+, the relatively long-lived ABTS∙+ is decolorized during the reaction with a hydrogen-donating antioxidant [42].
The SC50 values (the concentration of mango peel extract capable of scavenging 50% of the ABTS radical cation) of the peel extracts (NPEE, CPEE, NPWE, and CPWE) from the six mango cultivars are also presented in Table 2. The results show that, among the extracts from the six mango cultivars, the peel extracts from TN1 exhibited the strongest ABTS radical cation-scavenging activity, and the SC50 value of the most potent, CPEE of TN1, was 13.0 ± 0.8 μg/ml. Kim et al. (2010) reported that the SC50 value of the ABTS radical cation-scavenging activity of Irwin mango peel ethanol extract was about 200 μg/ml [18], indicating a weaker ABTS radical cation-scavenging capacity than that of our CPEE of TN1. Regarding NPWE and CPWE, compressional-puffing increased the ABTS radical cation-scavenging activity of the CPWE of the mango cultivars, similar to the finding for DPPH radical-scavenging activity. All EEs (NPEE and CPEE) had greater ABTS radical cation-scavenging activity than the WEs (NPWE and CPWE) (Table 2). To better understand the relationship between the polyphenol contents and the free radical-scavenging activities of the peel extracts, correlation analysis was performed; the results are shown in Figure 3. A high correlation between the polyphenol contents of the peel extracts and their corresponding free radical-scavenging activities (DPPH and ABTS) was found for NPWE, CPWE, NPEE, and CPEE, consistent with previously reported observations [43]. In summary, the peel extracts from TN1 had the highest amount of total phenolic compounds and possessed the strongest DPPH and ABTS free radical-scavenging activities. For the water extracts, compressional-puffing tended to increase the total phenolic content of the CPWEs and produced an incremental increase in free radical-scavenging activities compared to the NPWEs. For the ethanol extracts, only the CPEE of TN1 had a higher total phenolic content and higher free radical-scavenging activities than the NPEE of TN1. Moreover, the ethanol extracts generally had a higher total phenolic content and greater free radical-scavenging activities than the water extracts. Therefore, both CPWE and CPEE of the TN1 cultivar warrant further analyses of their phenolic compound composition and the storage stability of their antioxidant capacity, as well as their anti-inflammatory and antibacterial activities.

Table 2 Polyphenol content, DPPH radical-scavenging activity, and ABTS radical cation-scavenging activity of extracts from various Taiwanese mango peels.
| Polyphenols (%)* | NPWE | CPWE | NPEE | CPEE |
|---|---|---|---|---|
| Jinhwang cultivar | 1.40 ± 0.11 aA**** | 2.11 ± 0.24 aB | 5.31 ± 0.25 aC | 9.13 ± 0.16 cD |
| Tainoung number 1 cultivar | 15.9 ± 0.9 eA | 16.6 ± 1.1 eA | 23.5 ± 0.4 eB | 28.5 ± 0.7 eC |
| Irwin cultivar | 3.09 ± 0.18 bA | 2.92 ± 0.19 aA | 7.06 ± 0.29 bC | 5.07 ± 0.11 aB |
| Yuwen cultivar | 2.36 ± 0.25 bA | 4.63 ± 0.90 bB | 7.21 ± 0.05 bD | 6.26 ± 0.05 bC |
| Haden cultivar | 6.41 ± 0.20 dA | 7.31 ± 0.19 cB | 18.9 ± 0.3 dD | 13.4 ± 0.3 dC |
| Tu cultivar | 5.25 ± 0.27 cA | 10.1 ± 1.6 dB | 14.7 ± 0.2 cD | 13.0 ± 0.5 dC |

| DPPH, SC50 values (μg/ml)** | NPWE | CPWE | NPEE | CPEE |
|---|---|---|---|---|
| Jinhwang cultivar | 499 ± 7 fD | 368 ± 13 fC | 197 ± 12 dA | 251 ± 0 fB |
| Tainoung number 1 cultivar | 57.0 ± 2.2 aC | 67.0 ± 2.2 aD | 46.0 ± 1.4 aB | 41.7 ± 1.3 aA |
| Irwin cultivar | 368 ± 11 eC | 255 ± 2 dB | 195 ± 9 dA | 222 ± 8 eA |
| Yuwen cultivar | 324 ± 3 dD | 303 ± 5 eC | 165 ± 5 cA | 206 ± 4 dB |
| Haden cultivar | 124 ± 3 bD | 101 ± 5 bC | 69 ± 5 bA | 86 ± 4 bB |
| Tu cultivar | 183 ± 2 cD | 158 ± 5 cC | 78.3 ± 4.9 bA | 96.7 ± 4.7 cB |
| Vitamin C | 11.3 ± 0.1 | | | |

| ABTS, SC50 values (μg/ml)*** | NPWE | CPWE | NPEE | CPEE |
|---|---|---|---|---|
| Jinhwang cultivar | 186 ± 0 eD | 139 ± 0 eC | 70.0 ± 0.0 fB | 54.0 ± 3.3 cA |
| Tainoung number 1 cultivar | 28.2 ± 3.8 aC | 23.3 ± 0.5 aB | 15.7 ± 0.9 aA | 13.0 ± 0.8 aA |
| Irwin cultivar | 113 ± 7 cC | 101 ± 6 dC | 59.0 ± 0.8 eA | 76.3 ± 2.8 eB |
| Yuwen cultivar | 137 ± 2 dC | 102 ± 2 dB | 52.0 ± 0.9 dA | 62.0 ± 1.7 dA |
| Haden cultivar | 55.3 ± 1.7 bC | 37.3 ± 2.4 bB | 27.3 ± 0.9 cA | 30.7 ± 1.7 bA |
| Tu cultivar | 115 ± 5 cD | 77.3 ± 3.4 cC | 24.7 ± 1.3 bA | 34.0 ± 1.6 bB |
| Vitamin C | 3.58 ± 0.07 | | | |

*Polyphenols (%) = (g / g solid extract, dry basis) × 100. **SC50 values (concentration of mango peel extract capable of scavenging 50% of the DPPH radical) for DPPH radical-scavenging of the different mango peel extracts. ***SC50 values (concentration of mango peel extract capable of scavenging 50% of the ABTS radical cation) for ABTS radical cation-scavenging of the different mango peel extracts. ****Values are mean ± SD (n = 3); values in the same column with different letters (a, b, c, d, e, f) and in the same row with different letters (A, B, C, D) are significantly different (p < 0.05).

Figure 3 Association between polyphenol content and DPPH/ABTS radical-scavenging activities of mango peel extracts. (a) NPWE; (b) CPWE; (c) NPEE; (d) CPEE. SC50: concentration for scavenging 50% of DPPH or ABTS free radicals.

### 3.3. Analysis of Phenolic Compound Composition, Storage Stability of Antioxidant Capacity, Anti-Inflammatory Activity, and Antibacterial Activity in CPWE and CPEE of TN1 Cultivar

Peel extracts of the TN1 cultivar had the highest amount of total phenolic compounds and the strongest free radical-scavenging activities. Moreover, CPWE and CPEE from TN1 had higher extraction yields and greater polyphenol contents than NPWE and NPEE from TN1. Therefore, the phenolic compound composition of CPWE and CPEE from TN1 was analyzed by RP-HPLC coupled with a UV-vis detector. The results are shown in Figure 4 and Table 3. As seen in Figure 4, seven phenolic compounds, namely, gallic acid, pyrogallol, chlorogenic acid, p-hydroxybenzoic acid, p-coumaric acid, ECG, and CG, were tentatively identified in CPWE and CPEE of TN1 by HPLC analysis. Table 3 shows the quantitative phenolic compound composition of the CPWE and CPEE of TN1. Both CPWE and CPEE of TN1 contained large amounts of p-hydroxybenzoic acid, gallic acid, and pyrogallol and smaller amounts of chlorogenic acid, CG, p-coumaric acid, and ECG.
A comparison of the phenolic compound composition of CPWE and CPEE revealed that CPEE of TN1 had greater amounts of p-hydroxybenzoic acid, gallic acid, pyrogallol, chlorogenic acid, CG, p-coumaric acid, and ECG than CPWE (Table 3). These results are consistent with the data shown in Table 2, which show that CPEE of TN1 has a higher total phenolic content than CPWE of TN1. We found that p-hydroxybenzoic acid was the predominant phenolic compound detected (up to 3313 ± 2 mg/100 g peel weight, dry basis) in CPEE of TN1, and this is supported by other studies reporting that p-hydroxybenzoic acid can be detected in mango extracts [44]. The concentrations of gallic acid in CPWE and CPEE of TN1 were 579 ± 72 and 1052 ± 1 mg/100 g peel weight, dry basis, respectively. These values are considerably higher than those reported previously for the ethanol extract of mango peel, with an average gallic acid concentration of 152.20 ± 0.14 mg/100 g mango peel, dry weight [45]. Previous studies found that pyrogallol could be detected in the ethanolic extract of mango kernel (the mango tested was purchased from an Egyptian local market) at a concentration of 1337.9 ± 0.31 mg/100 g mango kernel, dry weight, but was absent from the ethanolic extract of mango peel [45]. However, we detected pyrogallol in CPWE and CPEE of TN1 at concentrations of 566 ± 55 and 930 ± 90 mg/100 g peel weight, dry basis, respectively. We speculate that this difference may be due to the different mango varieties tested. Structurally, p-hydroxybenzoic acid, gallic acid, and pyrogallol are monophenolic compounds, which exhibit antioxidant activity owing to their hydrogen-donating or electron-donating properties [46]. Therefore, the high free radical-scavenging activities of CPWE and CPEE of TN1 may be attributed to their high contents of p-hydroxybenzoic acid, gallic acid, and pyrogallol. Besides phenolic compounds, previous studies have reported that synergistic effects of combinations of phytochemicals may also produce beneficial biological functions, such as inhibition of the proliferation of human cancer cells [38, 47]. Thus, the synergistic effects of the constituents of CPWE and CPEE of TN1 on biological functions warrant further investigation. The storage stability of an antioxidant agent is important for its potential industrial application. Here, we evaluated the storage stability of vitamin C, CPWE of TN1, and CPEE of TN1 by the DPPH radical-scavenging assay. The test sample powders were redissolved in double-distilled water at various concentrations, the sample solutions were stored at room temperature for 1, 2, 4, and 8 hours, and the corresponding DPPH radical-scavenging activities were then determined. The data presented in Figure 5(a) show that the well-known natural antioxidant vitamin C dramatically lost its DPPH radical-scavenging activity over 1–8 hours of storage. In contrast, the DPPH radical-scavenging activities of CPWE of TN1 and CPEE of TN1 were not obviously changed after 1–8 hours of storage (Figures 5(b) and 5(c)). These findings clearly indicate that the mango peel extracts exhibited high storage stability in terms of antioxidant activity. Fruit polyphenols have been reported to possess immunomodulatory and anti-inflammatory properties in in vitro and animal studies [13].
NO is an inflammatory mediator induced by inflammatory cytokines or bacterial LPS in various cell types, including macrophages [48]. Samples with NO-inhibitory activity therefore have the potential to act as anti-inflammatory agents. CPEE and CPWE from TN1 were tested for anti-inflammatory activity by examining their effects on NO production in LPS-stimulated RAW264.7 macrophages. Neither CPEE nor CPWE appreciably affected the viability of RAW264.7 cells at the tested concentrations of 6.25–25 μg/ml in the presence of 1 μg/ml LPS (Figure 6(a)). As shown in Figure 6(b), treatment of RAW264.7 cells with 1 μg/ml LPS increased NO production from 3.11 ± 0.25 μM to 12.8 ± 0.1 μM. When the cells were treated with 1 μg/ml LPS in the presence of various concentrations of CPEE, NO production was significantly decreased from 12.8 ± 0.1 μM to 9.54 ± 0.08 μM, whereas in the presence of CPWE, NO production was only slightly reduced. These results indicate that CPEE of TN1 had appreciable anti-inflammatory activity and may therefore have potential as a natural and safe agent for protecting human health by modulating the immune system.

Previous studies have demonstrated that extracts with high polyphenol content exhibit high antibacterial activity [49]. Accordingly, we evaluated the antibacterial activity of CPEE and CPWE of TN1 by the disc diffusion method. Five bacteria, three Gram-negative (E. coli, S. typhimurium, and V. parahaemolyticus) and two Gram-positive (S. aureus and B. cereus), were used to assess antibacterial properties. As can be seen in Figures 7(a)–7(f), both CPEE and CPWE of TN1 exhibited antibacterial activity against all five bacteria tested. The Gram-negative bacteria were more sensitive than the Gram-positive ones to CPEE and CPWE of TN1 (Figure 7(f)). In addition, for four of the five bacteria (all except V. parahaemolyticus), CPEE exhibited higher antibacterial activity than CPWE. This may be attributed to the higher polyphenol content of CPEE (Table 2) and is consistent with previous findings [50]. Interestingly, against V. parahaemolyticus, CPEE was less active than CPWE. We speculate that this may be due to the presence of 3% NaCl in the V. parahaemolyticus medium; however, further experiments are needed to elucidate the mechanism. In summary, the present study demonstrated that CPEE and CPWE from TN1 had high contents of phenolic compounds, possessed good and stable free radical-scavenging activities, and exhibited anti-inflammatory and antibacterial activities. CPEE of TN1 showed the strongest antioxidant, anti-inflammatory, and antibacterial properties and thus has potential for use in the food, cosmetics, and nutraceutical industries.
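As a worked example of how the Figure 6(b) concentrations translate into an inhibition percentage (our own arithmetic; the paper reports only the absolute NO values, and subtracting the untreated baseline is our assumption):

```python
# Worked example (not a value reported in the paper): expressing the CPEE
# effect on NO as percent inhibition of the LPS-induced increase, assuming
# the untreated baseline is subtracted from both measurements.
baseline = 3.11   # uM NO, untreated RAW264.7 cells
lps = 12.8        # uM NO, 1 ug/ml LPS
lps_cpee = 9.54   # uM NO, 1 ug/ml LPS + CPEE of TN1

inhibition = (lps - lps_cpee) / (lps - baseline) * 100
print(f"NO inhibition by CPEE: {inhibition:.1f} %")  # ~33.6 %
```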
Table 3. Phenolic compound composition in the CPWE and CPEE of the Tainoung number 1 cultivar.

| Compound | CPWE (mg/100 g)* | CPEE (mg/100 g) |
| --- | --- | --- |
| p-Hydroxybenzoic acid | 1863 ± 318 | 3313 ± 2 |
| Gallic acid | 579 ± 72 | 1052 ± 1 |
| Pyrogallol | 566 ± 55 | 930 ± 90 |
| Chlorogenic acid | 125 ± 8 | 245 ± 7 |
| Catechin gallate (CG) | 125 ± 43 | 189 ± 52 |
| p-Coumaric acid | 68.9 ± 9.4 | 131 ± 0 |
| Epicatechin gallate (ECG) | 32.0 ± 3.9 | 50.8 ± 7.0 |

*The concentration of each phenolic compound is expressed as mg/100 g peel weight, dry basis.

Figure 4. (a) High-performance liquid chromatography of peel extracts (CPWE and CPEE) of the Tainoung number 1 cultivar; (b) high-performance liquid chromatography of polyphenol standards: gallic acid (1), pyrogallol (2), chlorogenic acid (3), p-hydroxybenzoic acid (4), p-coumaric acid (5), ECG (6), and CG (7).

Figure 5. DPPH-scavenging activities of vitamin C, CPWE of TN1, and CPEE of TN1 under different storage times. (a) Vitamin C; (b) CPWE of TN1; (c) CPEE of TN1.

Figure 6. (a) Effects of CPEE of TN1, CPWE of TN1, and LPS on the viability of RAW 264.7 cells. (b) Effects of CPEE of TN1, CPWE of TN1, and LPS on NO secretion in RAW 264.7 cells. Data are means ± SD of triplicate samples. Bars with different letters are significantly different (p < 0.05).

Figure 7. Zones of inhibition of CPEE of TN1 and CPWE of TN1 at a concentration of 4% (w/v) in 0.05 M acetate buffer, pH 6.0, against (a) Escherichia coli, (b) Salmonella typhimurium, (c) Vibrio parahaemolyticus, (d) Staphylococcus aureus, and (e) Bacillus cereus. In each dish, A, B, C, and D represent antibiotic, acetate buffer, CPEE of TN1, and CPWE of TN1, respectively. (f) Bar graph summarizing four separate antibacterial experiments, showing the zone of inhibition by treatment. Values are expressed as mean ± SD (n = 4). Means sharing at least one common letter do not differ significantly (p > 0.05).
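As a quick plausibility check on Table 3 above (our own tally, not a figure reported in the study), summing the seven quantified phenolics gives the total identified phenolic load of each extract:

```python
# Our own tally of the Table 3 means (mg/100 g peel weight, dry basis);
# the ratio confirms that CPEE of TN1 carries roughly 1.8x the identified
# phenolic load of CPWE of TN1.
table3 = {
    "p-hydroxybenzoic acid": (1863, 3313),
    "gallic acid":           (579, 1052),
    "pyrogallol":            (566, 930),
    "chlorogenic acid":      (125, 245),
    "catechin gallate":      (125, 189),
    "p-coumaric acid":       (68.9, 131),
    "epicatechin gallate":   (32.0, 50.8),
}
total_cpwe = sum(v[0] for v in table3.values())
total_cpee = sum(v[1] for v in table3.values())
print(total_cpwe, total_cpee, total_cpee / total_cpwe)  # ~3359, ~5911, ~1.76
```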
## 4. Conclusion

In this study, we employed a compressional-puffing pretreatment and two extraction methods to extract bioactive compounds from the peels of six Taiwanese mango cultivars. The compressional-puffing process increased the extraction yields and polyphenol contents of the peel extracts. Ethanol extracts of the peels had higher total phenolic contents and greater free radical-scavenging activities than the corresponding water extracts. The polyphenol contents of the extracts correlated positively with their free radical-scavenging activities. Among the extracts, CPEE of TN1 exhibited the strongest antioxidant, anti-inflammatory, and antibacterial properties. It is therefore suggested as a natural, safe, and stable antioxidant agent with anti-inflammatory and antibacterial properties that may have a wide range of applications in food, cosmetics, and nutraceuticals. Future studies on the polyphenol composition and biological activities of mango peel extracts after in vitro digestion, as well as investigations of their in vivo biological activities, are warranted.

---

*Source: 1025387-2018-02-21.xml*
---

# Free Radical-Scavenging, Anti-Inflammatory, and Antibacterial Activities of Water and Ethanol Extracts Prepared from Compressional-Puffing Pretreated Mango (Mangifera indica L.) Peels

**Authors:** Chun-Yung Huang; Chia-Hung Kuo; Chien-Hui Wu; Ai-Wei Kuan; Hui-Ru Guo; Yu-Hua Lin; Po-Kai Wang

**Journal:** Journal of Food Quality (2018)

**Category:** Agricultural Sciences

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2018/1025387
---

## Abstract

During the processing of mango, a huge amount of peel is generated, which is environmentally problematic. In the present study, a compressional-puffing process was adopted to pretreat the peels of various mango cultivars, and the bioactive compounds of the mango peels were then extracted with water or ethanol. The phenolic compound compositions as well as the free radical-scavenging, anti-inflammatory, and antibacterial activities of water extract (WE) and ethanol extract (EE) from nonpuffed (NP) and compressional-puffed (CP) mango peels were further evaluated. It was found that compressional-puffing could increase the yield of extracts obtained from most mango varieties and could augment the polyphenol content of extracts from the Jinhwang and Tainoung number 1 (TN1) cultivars. The WE and EE from TN1 exhibited the highest polyphenol content and the greatest free radical-scavenging activities among the mango cultivars tested. Seven phenolic compounds (gallic acid, pyrogallol, chlorogenic acid, p-hydroxybenzoic acid, p-coumaric acid, ECG, and CG) were detected in CPWE (compressional-puffed water extract) and CPEE (compressional-puffed ethanol extract) from TN1, and the antioxidant stability of both CPWE and CPEE was higher than that of vitamin C. Further biological experiments revealed that CPEE from TN1 possessed the strongest anti-inflammatory and antibacterial activities, and thus it is recommended as a multibioactive agent, which may have applications in the food, cosmetic, and nutraceutical industries.

---

## Body

## 1. Introduction

Mango (Mangifera indica L.) is recognized as one of the most economically productive fruits in tropical and subtropical areas throughout the globe. Mango has excellent nutritional value and health-promoting properties. A variety of studies have shown high concentrations of antioxidants including ascorbic acid, carotenoids, and phenolic compounds in mango [1]. Mango fruit is the main edible part and is usually processed into various products such as puree, nectar, jam, leather, pickles, chutney, frozen mango, dehydrated products, and canned slices. During the processing of mango, a huge amount of peel is generated, constituting approximately 15–20% of the mango fruit [2]. Mango peel is a waste by-product, and its disposal may have a substantial impact on the environment. Previous studies reported that mango peel contains a variety of valuable compounds such as polyphenols, carotenoids, enzymes, and dietary fiber [2]. Extracts from mango peel also exhibit antioxidant activity [3], anti-inflammatory activity [4], protection against membrane protein degradation and morphological changes in rat erythrocytes caused by hydrogen peroxide (H2O2) [5], antibacterial activity [6], and anticancer activity [7]. Hence, the utilization of mango peels may be an economical means of ameliorating the problem of waste disposal from mango production factories, as well as converting a by-product into material for food, cosmetic, and pharmaceutical industrial uses.

Free radicals, including the superoxide anion radical (O2∙−), hydroperoxyl radical (HO2∙), hydroxyl radical (HO∙), peroxyl radical (ROO∙), and alkoxyl radical (RO∙), are defined as molecules or atoms with one or more unpaired electrons and are often involved in human diseases [8].
Many studies have shown that free radicals in living organisms cause oxidative damage to molecules such as lipids, proteins, and nucleic acids, and such damage is implicated in many diseases, including cancer, atherosclerosis, respiratory ailments, and even neuronal death [9]. Antioxidants are substances that delay or prevent the oxidation of cellular oxidisable substrates. They exert their effect by scavenging reactive oxygen species (ROS) or preventing the generation of ROS [10]. Synthetic antioxidant compounds such as butylated hydroxytoluene (BHT) and butylated hydroxyanisole (BHA) have potent antioxidant activity and are commonly used in processed foods. However, their use has been restricted because of their carcinogenicity and other toxic properties [11, 12]. Thus, in recent years, there has been considerable interest in natural antioxidants derived from biological materials because of their presumed safety and potential nutritional and therapeutic value.

A large number of publications have suggested that fruit polyphenols possess immunomodulatory and anti-inflammatory properties, based on in vitro and animal studies [13]. Inflammation is a complicated physiological phenomenon that occurs when the immune system is activated to counter threats such as injury, infection, and stress. Macrophages play a unique role in the immune system because they not only elicit an innate immune response but also act as effector cells in inflammation and infection. When macrophages encounter the bacterial endotoxin lipopolysaccharide (LPS), they can be stimulated to generate a variety of inflammatory mediators such as nitric oxide (NO), tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), IL-6, prostaglandin E2 (PGE2), and adhesion molecules to help eradicate the bacterial assault [14]. Generally, substances that inhibit the expression and activity of enzymes involved in the generation of inflammatory mediators such as NO (e.g., inducible NO synthase (iNOS)) in the mouse macrophage-like cell line RAW 264.7 are considered to possess immunomodulatory activity [15]. Since a variety of polyphenols exist in mango peels, further research on the use of mango peel extracts as immunomodulatory or anti-inflammatory agents is warranted.

Antibacterial agents are synthetic or natural compounds that interfere with the growth and division of bacteria. A number of studies have shown that pathogenic microorganisms in humans and various animal species have developed resistance to drugs. This drug resistance is due to the indiscriminate or otherwise inappropriate use of commercial antimicrobial agents. As such, there is an urgent need for new antibacterial agents. In addition, synthetic antibiotics have been known to induce side effects such as the emergence of resistant bacteria, skin irritation, organ damage, and immunohypersensitivity [16]. Accordingly, many studies have attempted to develop new agents with high antibacterial activity but with fewer, or possibly even no, side effects. There is a particular demand for antibacterial compounds from natural resources [17]. Plants produce a range of antimicrobial compounds in various parts such as bark, stalk, leaves, roots, flowers, pods, seeds, stems, hull, latex, and fruit rind [6]. Fruit peel is the outer covering of a fruit, which functions as a physical barrier.
It also serves as a chemical barrier by virtue of its many antimicrobial constituents, which protect the fruit from external pathogens or other factors that may decrease fruit quality. Therefore, fruit peels are good sources of natural antibacterial agents.

Bioactive compounds in mango peel are generally extracted via the following methods: extraction with 80% ethanol by sonication for 3 days at room temperature [18]; extraction performed three times with methanol, for 3 h per time [19]; extraction with 95% ethanol three times, 72 h per time [20]; extraction with acetone or ethyl acetate for up to 20 h [21–23]; microwave-assisted extraction [24, 25]; or extraction with supercritical CO2 followed by pressurized ethanol [26]. However, these methods generally involve large volumes of solvents, require long extraction times, consume considerable energy, are costly, and are sometimes not eco-friendly. The present study builds upon the research reported in our previous investigation [27]. In brief, we previously developed a compressional-puffing process that has been successfully implemented to increase the extraction yield of fucoidan from brown seaweed [27, 28] and to augment the extraction yields of total phenolics and total flavonoids from pine needles [29, 30]. Compressional-puffing can be utilized as a pretreatment step to disrupt the cellular structure of samples, thereby better enabling the release of bioactive compounds by solvent extraction [27]. In this study, compressional-puffing was utilized for pretreatment of mango peels, and the water extract (WE) and ethanol extract (EE) obtained from nonpuffed (NP) and compressional-puffed (CP) mango peels were compared. The phenolic compound composition and the free radical-scavenging, anti-inflammatory, and antibacterial activities of WE and EE from mango peels were also evaluated. To the best of the authors' knowledge, this is the first study to elucidate the free radical-scavenging, anti-inflammatory, and antibacterial activities of WE and EE extracted from compressional-puffed mango peels. The recovered WE and EE are expected to possess multifunctional activities providing a wide range of benefits. The utilization of mango peel will also help minimize waste generation worldwide.

## 2. Materials and Methods

### 2.1. Materials

Folin-Ciocalteu's phenol reagent, gallic acid, protocatechuic acid, chlorogenic acid, p-hydroxybenzoic acid, pyrogallol, caffeic acid, mangiferin, epicatechin, p-coumaric acid, ferulic acid, epicatechin gallate (ECG), catechin gallate (CG), ellagic acid, rutin, quercetin, kaempferol, homogentisic acid, tannic acid, vanillic acid, 2,2-diphenyl-1-picrylhydrazyl (DPPH), sodium nitrite, LPS, dimethyl sulfoxide (DMSO), and 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid) diammonium salt (ABTS) were purchased from Sigma-Aldrich (St. Louis, MO, USA). 3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) was purchased from Calbiochem (San Diego, CA, USA). Dulbecco's modified Eagle's medium (DMEM), trypsin/EDTA, fetal bovine serum (FBS), penicillin, and streptomycin were purchased from Gibco Laboratories (Grand Island, NY, USA). Methanol, acetic acid, and potassium persulfate were obtained from Nihon Shiyaku Industries, Ltd. (Tokyo, Japan). All other reagents, if not otherwise stated, were purchased from Sigma-Aldrich (St. Louis, MO, USA) and were of analytical grade.
### 2.2. Mango Fruits and Peels

Six mango cultivars, produced in Tainan City, Taiwan, were utilized in this study. The varieties, namely, Jinhwang, Tainoung number 1 (TN1), Irwin, Yuwen, Haden, and Tu (Figure 1), were collected from a local grocery market in Xinhua District, Tainan City, Taiwan. The fruits were used after they had completed ripening. Samples of peels were separated manually from the six varieties of mango fruits and were then oven-dried and stored in aluminum bags at 4°C until use.

Figure 1. Appearance of various Taiwanese mango varieties. (1) Haden; (2) Tainoung number 1; (3) Tu; (4) Jinhwang; (5) Yuwen; (6) Irwin.

### 2.3. Compressional-Puffing Procedure

A compressional-puffing method [27, 28, 31] with minor modification was adopted to pretreat the mango peels. In brief, the dried peel samples were crumbled and sieved using a 20-mesh screen. The portion retained by the screen was collected and then compressional-puffed using a continuous compressional-puffing machine with the temperature set at 220°C. The corresponding mechanical compression pressure and steam pressure levels inside the chamber are listed in Table 1. After compressional-puffing, the peel samples were ground into fine particles and stored at 4°C for further extraction experiments.

Table 1. Process variables for compressional-puffing and extraction, and extraction yields for various Taiwanese mango peel extracts.

| Step | Operational variable | NPWE | CPWE | NPEE | CPEE |
| --- | --- | --- | --- | --- | --- |
| Mechanical compression | Pressure (kg/cm²) | 0 | 5 | 0 | 5 |
| | Number of compression times | 0 | 3 | 0 | 3 |
| Puffing | Temperature (°C) | 0 | 220 | 0 | 220 |
| | Pressure (kg/cm²) | 0 | 11 | 0 | 11 |
| | Time (sec) | 0 | 10 | 0 | 10 |
| Pretreatment | Solvent | 95% EtOH | 95% EtOH | NA* | NA |
| | Temperature (°C) | 25 | 25 | NA | NA |
| | Time (h) | 4 | 4 | NA | NA |
| Extraction | Solvent | ddH2O | ddH2O | 95% EtOH | 95% EtOH |
| | Temperature (°C) | 70 | 70 | 25 | 25 |
| | Time (h) | 1 | 1 | 4 | 4 |

Extraction yield of extract (%)**:

| Cultivar | NPWE | CPWE | NPEE | CPEE |
| --- | --- | --- | --- | --- |
| Jinhwang | 33.5 ± 0.4 cBC*** | 36.6 ± 2.3 bC | 23.4 ± 1.2 bA | 30.2 ± 1.0 cB |
| Tainoung number 1 | 29.5 ± 1.2 bA | 34.8 ± 1.0 bB | 29.2 ± 0.7 dA | 33.7 ± 0.9 dB |
| Irwin | 30.9 ± 0.9 bcB | 40.0 ± 2.2 bC | 22.6 ± 0.3 bA | 29.6 ± 0.4 cB |
| Yuwen | 31.2 ± 1.4 bcB | 37.0 ± 1.8 bC | 26.3 ± 0.9 cA | 37.4 ± 1.0 eC |
| Haden | 25.5 ± 1.5 aB | 28.6 ± 2.7 aB | 18.8 ± 0.8 aA | 20.4 ± 0.5 aA |
| Tu | 25.9 ± 0.3 aB | 29.1 ± 0.1 aD | 22.9 ± 0.5 bA | 27.0 ± 0.1 bC |

*NA: not applicable. **Extraction yield of extract (%) = (g solid extract, dry basis / g mango peel sample, dry basis) × 100. ***Values are mean ± SD (n = 3); values in the same column with different letters (a–e) and in the same row with different letters (A–D) are significantly different (p < 0.05).
### 2.4. Extraction Procedure

We followed the methods of Yang et al. (2017) [28]. Briefly, the nonpuffed and compressional-puffed peel samples were pulverized and sieved using a 20-mesh screen. The portion passing through the screen was collected and extracted with 95% ethanol (w/v = 1:10) for 4 h at 25°C with shaking. The resultant solution was then centrifuged at 9,170 ×g for 10 min and the supernatant was collected. NPEE (nonpuffed ethanol extract) and CPEE (compressional-puffed ethanol extract) were thus obtained after oven-drying the supernatant at 40°C. In addition, the precipitates remaining after 95% ethanol extraction were further extracted with double-distilled water (w/v = 1:10) for 1 h at 70°C with shaking. The mixture was then centrifuged at 9,170 ×g for 10 min and the supernatant was collected. NPWE (nonpuffed water extract) and CPWE (compressional-puffed water extract) were obtained after oven-drying the supernatant at 50°C. All dried extracts were milled to fine particles and stored at 4°C for further analyses. The combined compressional-puffing pretreatment and extraction process is depicted in detail in Figure 2. The extraction yield was calculated using the following equation:

(1) extraction yield (%) = (gA / gB) × 100,

where gA represents the dry mass of the extract and gB is the weight of the mango peel sample on a dry basis.

Figure 2. Flowchart of the compressional-puffing process and extraction methods for NPEE, CPEE, NPWE, and CPWE.
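A minimal sketch of the Eq. (1) calculation is given below; the gram values are placeholders rather than measurements from the study.

```python
# Minimal sketch of the extraction-yield calculation in Eq. (1); the input
# weights here are illustrative placeholders, not data from the paper.
def extraction_yield(extract_dry_g: float, peel_dry_g: float) -> float:
    """Extraction yield (%) = (g extract, dry basis / g peel, dry basis) x 100."""
    return extract_dry_g / peel_dry_g * 100

print(f"{extraction_yield(3.37, 10.0):.1f} %")  # 33.7 %, cf. CPEE of TN1 in Table 1
```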
### 2.5. Determination of Polyphenol Content

Polyphenol content was estimated by the Folin-Ciocalteu colorimetric method, based on the procedure of Singleton and Rossi (1965) [32], using gallic acid as the standard.

### 2.6. High-Performance Liquid Chromatography (HPLC) Analysis of Total Phenolic Compound Composition

The separation of total phenolic compounds was performed by the method of Schieber et al. (2000) [33] using a Shimadzu HPLC system (Shimadzu, Kyoto, Japan) equipped with a UV-vis detector. A reversed-phase Inspire C18 column (250 mm × 4.6 mm, id 5 μm) purchased from Dikma Technologies (USA) was used for all chromatographic separations. The column was operated at 25°C. The mobile phase consisted of 2% (v/v) acetic acid in water (eluent A) and 0.5% acetic acid in water plus acetonitrile (50:50, v/v; eluent B). The gradient program was as follows: 20–55% B (50 min), 55–100% B (10 min), and 100–20% B (5 min). The injection volume of all samples was 20 μl. Detection was performed at 280 nm at a flow rate of 1 ml/min. Gallic acid, pyrogallol, protocatechuic acid, chlorogenic acid, p-hydroxybenzoic acid, caffeic acid, mangiferin, epicatechin, p-coumaric acid, ferulic acid, ECG, CG, ellagic acid, rutin, quercetin, kaempferol, homogentisic acid, tannic acid, and vanillic acid were used as standards for the HPLC analyses.

### 2.7. DPPH Radical-Scavenging Activity

The scavenging activity of the DPPH radical was determined using the method described previously [28, 34]. In brief, 50 μl of mango peel extract (concentrations ranging from 0 to 300 μg/ml for Tainoung number 1 and Haden cultivars; 0–600 μg/ml for Jinhwang and Tu cultivars; and 0–900 μg/ml for Irwin and Yuwen cultivars) was added to 200 μl of 0.1 mM DPPH solution (in methanol). The mixture was shaken vigorously for 1 min and left to stand for 30 min in the dark at room temperature. After the reaction, the absorbance of all sample solutions was measured at 517 nm using an ELISA reader (PowerWave 340, BioTek Instruments, Winooski, VT, USA). The radical-scavenging activity was calculated as the percentage inhibition using the following equation:

(2) DPPH radical-scavenging (%) = (1 − A_sample / A_control) × 100,

where A_sample is the absorbance of the methanol solution of DPPH with the tested sample and A_control is the absorbance of the methanol solution of DPPH without the sample.

### 2.8. ABTS Radical Cation-Scavenging Activity

The ABTS radical cation-scavenging assay was performed according to the method described previously [28, 34]. The ABTS∙+ solution was produced by mixing 5 ml of 7 mM ABTS solution with 88 μl of 140 mM potassium persulfate and allowing the mixture to stand in the dark for 16 h at room temperature before use. The ABTS∙+ solution was diluted with 95% ethanol so that its absorbance at 734 nm was adjusted to 0.70 ± 0.05. To determine the scavenging activity, 100 μl of diluted ABTS∙+ solution was mixed with 100 μl of mango peel extract (concentrations ranging from 0 to 100 μg/ml for Tainoung number 1 and Haden cultivars; 0–300 μg/ml for Irwin, Yuwen, and Tu cultivars; and 0–500 μg/ml for Jinhwang cultivar) and the mixture was allowed to react at room temperature for 6 min. After the reaction, the absorbance of all sample solutions was measured at 734 nm using an ELISA reader (PowerWave 340, BioTek Instruments, Winooski, VT, USA). The blank was prepared in the same manner, except that distilled water was used instead of the sample. The scavenging activity of ABTS∙+ was calculated using the following equation:

(3) ABTS radical cation-scavenging (%) = (1 − A_sample / A_control) × 100,

where A_sample is the absorbance of ABTS with the tested sample and A_control is the absorbance of ABTS without the sample.
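The following sketch illustrates how Eqs. (2) and (3) can be applied to a dilution series and how an SC50 estimate can be read off; the absorbance values are illustrative, and linear interpolation is our assumption about how SC50 was obtained from the dose-response data.

```python
# Minimal sketch (assumed analysis, not the authors' code): computing the
# scavenging percentage of Eqs. (2)/(3) and estimating SC50 by linear
# interpolation over a dilution series with illustrative absorbances.
import numpy as np

def scavenging_pct(a_sample: np.ndarray, a_control: float) -> np.ndarray:
    """Scavenging (%) = (1 - A_sample / A_control) x 100, as in Eqs. (2) and (3)."""
    return (1.0 - a_sample / a_control) * 100

conc = np.array([0, 25, 50, 100, 200, 300])            # ug/ml, illustrative series
a517 = np.array([0.80, 0.66, 0.52, 0.33, 0.12, 0.05])  # DPPH absorbances, illustrative
pct = scavenging_pct(a517, a_control=0.80)

# Concentration at which scavenging crosses 50%, interpolated linearly
# (pct must be monotonically increasing for np.interp, as it is here).
sc50 = np.interp(50.0, pct, conc)
print(f"SC50 ~ {sc50:.0f} ug/ml")  # ~82 ug/ml for these illustrative readings
```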
### 2.9. Cell Line and Culture

The murine macrophage cell line RAW 264.7 was obtained from the Bioresource Collection and Research Center, Food Industry Research and Development Institute (FIRDI, Hsinchu, Taiwan). The cells were grown in DMEM supplemented with 10% FBS and 100 U/ml penicillin-streptomycin solution at 37°C in a humidified chamber with 5% CO2. The medium was changed every two days.

### 2.10. Measurement of Cell Viability

The MTT assay was used to evaluate cell viability. Briefly, RAW 264.7 cells (2 × 10⁵/ml in a 96-well plate) were plated in culture medium and incubated for 24 h at 37°C with 5% CO2 in a humidified atmosphere. The medium was removed and fresh serum-free medium containing different concentrations of mango peel extracts (0–25 μg/ml for CPEE of TN1 and CPWE of TN1) was added. After 24 h of incubation at 37°C with 5% CO2, the MTT reagent (0.1 mg/ml) was added. After incubating at 37°C for 4 h, the MTT reagent was removed, and DMSO (100 μl) was added to each well and thoroughly mixed by pipetting to dissolve the MTT-formazan crystals. The absorbance was then determined with an ELISA reader (PowerWave 340, BioTek Instruments, Winooski, VT, USA) at a wavelength of 570 nm. Cell viability (%) was calculated using the following equation:

(4) cell viability (%) = (T / C) × 100,

where T is the absorbance of the test and C is the absorbance of the control.

### 2.11. Measurement of Nitric Oxide in Culture Media

RAW 264.7 cells (2 × 10⁵ cells/ml) were seeded in a 96-well flat-bottom plate for 24 h at 37°C with 5% CO2. The culture medium was removed and replaced with fresh medium containing the tested samples at various concentrations prior to challenge with 1 μg/ml of LPS. The nitrite concentration was measured in the culture supernatant after 24 h of coincubation. In brief, 50 μl of the culture supernatant was added to a 96-well plate, 100 μl of Griess reagent was added to each well, and the plate was allowed to stand for 10 min at room temperature. The absorbance at 540 nm was measured using an ELISA reader (PowerWave 340, BioTek Instruments, Winooski, VT, USA), and the quantification of nitrite was standardized against NaNO2 at 0–100 μM concentrations [35].

### 2.12. Zone of Inhibition

Five bacteria were used to test the antibacterial activity of the mango peel extracts. These were three Gram-negative bacteria (Escherichia coli ATCC 11775, Salmonella typhimurium ATCC 13311, and Vibrio parahaemolyticus ATCC 17802) and two Gram-positive bacteria (Staphylococcus aureus ATCC 12600 and Bacillus cereus ATCC 14579), which were obtained from the Culture Collection and Research Center of the Food Industry Research and Development Institute, Hsinchu, Taiwan. Antibacterial activity was measured using the standard agar diffusion disc method [36]. In brief, E. coli, S. typhimurium, S. aureus, and B. cereus were grown in tryptic soy broth (TSB) medium (Difco Laboratories, Detroit, MI, USA), and V. parahaemolyticus was grown in TSB medium + 3% NaCl, for 24 h at 37°C; 0.1 ml of each bacterial culture at an appropriate cell density was then spread on tryptic soy agar (TSA, Difco Laboratories, Detroit, MI, USA) plate surfaces (3% NaCl was added to the TSA for V. parahaemolyticus). Paper discs (8 mm in diameter) were placed on the agar medium and loaded with 50 μl containing 2 mg of mango peel extract (4%, w/v, in 0.05 M acetate buffer, pH 6.0). Control paper discs were prepared by infusing with 50 μl of Antibiotic-Antimycotic Solution (containing 10,000 units/ml penicillin, 10 mg/ml streptomycin, and 25 μg/ml amphotericin) (Corning, Corning, NY, USA) or 50 μl of 0.05 M acetate buffer. The plates were incubated at 37°C for 24 h, after which the antibacterial activity of the extracts against the test bacteria was assessed from the growth-free zone of inhibition around each disc, and the inhibition diameters were measured.

### 2.13. Statistical Analysis

Experiments were performed at least three times. Values represent means ± standard deviation (SD). Statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS). The results were analyzed using one-way analysis of variance (ANOVA), followed by Duncan's Multiple Range test. p < 0.05 was considered statistically significant. Correlation analyses were performed using the square of Pearson's correlation coefficient (R²).
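A minimal reconstruction of the Section 2.13 workflow is sketched below with illustrative triplicates; note that SciPy does not provide Duncan's multiple range test, so Tukey's HSD is used here as a stand-in for the post hoc step.

```python
# Minimal sketch of the Section 2.13 analysis (our own reconstruction): a
# one-way ANOVA across the four extract groups using illustrative triplicate
# yields, followed by a Tukey HSD post hoc comparison as a stand-in for
# Duncan's multiple range test (which is not available in SciPy).
from scipy import stats

npwe = [29.5, 30.3, 28.8]  # illustrative triplicates (n = 3), not study data
cpwe = [34.8, 35.6, 33.9]
npee = [29.2, 29.8, 28.6]
cpee = [33.7, 34.5, 32.9]

f, p = stats.f_oneway(npwe, cpwe, npee, cpee)
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")  # significant if p < 0.05

post_hoc = stats.tukey_hsd(npwe, cpwe, npee, cpee)  # pairwise group comparisons
print(post_hoc)
```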
Samples of peels were separated manually from six varieties of mango fruits and were then oven-dried and stored in aluminum bags at 4°C until use.Figure 1 Appearance of various Taiwanese mango varieties. (1) Haden; (2) Tainoung number 1; (3) Tu; (4) Jinhwang; (5) Yuwen; (6) Irwin. ## 2.3. Compressional-Puffing Procedure A compressional-puffing method [27, 28, 31] with minor modification was adopted to pretreat mango peels. In brief, the dried peel samples were crumbled and sieved using a 20-mesh screen. The portion retained by the screen was collected and then compressional-puffed using a continuous compressional-puffing machine with the temperature set at 220°C. The corresponding mechanical compression pressure and steam pressure levels inside the chamber are listed in Table 1. After the compressional-puffing, the peel samples were ground into fine particles and stored at 4°C for further extraction experiments.Table 1 Process variables for compressional-puffing and extraction and extraction yields for various Taiwanese mango peel extracts. Operational variables NPWE CPWE NPEE CPEE Mechanical compression Pressure (kg/cm2) 0 5 0 5 Number of compression times 0 3 0 3 Puffing Temperature (°C) 0 220 0 220 Pressure (kg/cm2) 0 11 0 11 Time (sec) 0 10 0 10 Pretreatment Solvent 95% EtOH 95% EtOH N A ∗ NA Temperature (°C) 25 25 NA NA Time (h) 4 4 NA NA Extraction Solvent ddH2O ddH2O 95% EtOH 95% EtOH Temperature (°C) 70 70 25 25 Time (h) 1 1 4 4 Extraction yield of extract% ∗ ∗ NPWE CPWE NPEE CPEE Jinhwang cultivar 33.5 ± 0.4 c B C ∗ ∗ ∗ 36.6 ± 2.3 b C 23.4 ± 1.2 b A 30.2 ± 1.0 c B Tainoung number 1 cultivar 29.5 ± 1.2 b A 34.8 ± 1.0 b B 29.2 ± 0.7 d A 33.7 ± 0.9 d B Irwin cultivar 30.9 ± 0.9 b c B 40.0 ± 2.2 b C 22.6 ± 0.3 b A 29.6 ± 0.4 c B Yuwen cultivar 31.2 ± 1.4 b c B 37.0 ± 1.8 b C 26.3 ± 0.9 c A 37.4 ± 1.0 e C Haden cultivar 25.5 ± 1.5 a B 28.6 ± 2.7 a B 18.8 ± 0.8 a A 20.4 ± 0.5 a A Tu cultivar 25.9 ± 0.3 a B 29.1 ± 0.1 a D 22.9 ± 0.5 b A 27.0 ± 0.1 b C N A ∗: not applicable. ∗ ∗Extraction yield of extract (%) = ( g s o l i d e x t r a c t , d r y b a s i s / g m a n g o p e e l s a m p l e , d r y b a s i s ) × 100. ∗ ∗ ∗Values are mean ± SD ( n = 3 ); values in the same column with different letters (in a, b, c, d, and e) and in the same row with different letters (in A, B, C, and D) are significantly different ( p < 0.05 ). ## 2.4. Extraction Procedure We followed the methods of Yang et al. (2017) [28]. Briefly, the nonpuffed and compressional-puffed peel samples were pulverized and sieved using a 20-mesh screen. The portion passed through the screen was collected and extracted by 95% ethanol (w/v = 1 : 10) for 4 h at 25°C with shaking. The resultant solution was then centrifuged at 9,170 ×g for 10 min and the supernatant was collected. NPEE (nonpuffed ethanol extract) and CPEE (compressional-puffed ethanol extract) were thus obtained after oven-drying the supernatant at 40°C. In addition, the precipitates after 95% ethanol extraction were further extracted by double-distilled water (w/v = 1 : 10) for 1 h at 70°C with shaking. Then the mixture was centrifuged at 9,170 ×g for 10 min and the supernatant was collected. NPWE (nonpuffed water extract) and CPWE (compressional-puffed water extract) were obtained after oven-drying the supernatant at 50°C. All dried extracts were milled to fine particles and stored at 4°C for further analyses. The combined compressional-puffing pretreatment and extraction process is depicted in detail in Figure 2. 
The extraction yield was calculated using the following equation:(1) e x t r a c t i o n y i e l d % = g A / g B × 100 ,where g A represents the dry mass weight of the extract and g B is the weight of the mango peel sample on a dry basis.Figure 2 Flowchart of the compressional-puffing process and extraction methods for NPEE, CPEE, NPWE, and CPWE. ## 2.5. Determination of Polyphenol Content Polyphenol content was estimated by the Folin-Ciocalteu colorimetric method based on the procedure of Singleton and Rossi (1965) [32] and using gallic acid as the standard agent. ## 2.6. High-Performance Liquid Chromatography (HPLC) Analysis of Total Phenolic Compound Composition The separation of total phenolic compounds was performed by the method of Schieber et al. (2000) [33] and using a Shimadzu HPLC system (Shimadzu, Kyoto, Japan) equipped with a UV-vis detector. A reversed-phase Inspire C18 column (250 mm × 4.6 mm, id 5 μm) purchased from Dikma Technologies (USA) was used for all chromatographic separations. The column was operated at 25°C. The mobile phase consisted of 2% (v/v) acetic acid in water (eluent A), 0.5% acetic acid in water, and acetonitrile (50 : 50, v/v; eluent B). The gradient program was as follows: 20–55% B (50 min), 55–100% B (10 min), and 100–20% B (5 min). The injection volume of all samples was 20 μl. The spectra were monitored at 280 nm and performed at a flow rate of 1 ml/min. Gallic acid, pyrogallol, protocatechuic acid, chlorogenic acid,p-hydroxybenzoic acid, caffeic acid, mangiferin, epicatechin,p-coumaric acid, ferulic acid, ECG, CG, ellagic acid, rutin, quercetin, kaempferol, homogentisic acid, tannic acid, and vanillic acid were used as standards for HPLC analyses. ## 2.7. DPPH Radical-Scavenging Activity The scavenging activity of the DPPH radical in the samples was determined using the method described previously [28, 34]. In brief, 50 μl of mango peel extract (concentrations ranging from 0 to 300 μg/ml for Tainoung number 1 and Haden cultivars; 0–600 μg/ml for Jinhwang and Tu cultivars; and 0–900 μg/ml for Irwin and Yuwen cultivars) was added to 200 μl 0.1 mM DPPH solution (in methanol). The mixture was shaken vigorously for 1 min and left to stand for 30 min in the dark at room temperature. After the reaction, the absorbance of all sample solutions was then measured at 517 nm using an ELISA reader (PowerWave 340, BioTek Instruments, Winooski, VT, USA). The radical-scavenging activity was calculated as the percentage inhibition using the following equation:(2) D P P H r a d i c a l - s c a v e n g i n g % = 1 - A s a m p l e A c o n t r o l × 100 ,where A s a m p l e is the absorbance of the methanol solution of DPPH with tested samples and A c o n t r o l represents the absorbance of the methanol solution of DPPH without the sample. ## 2.8. ABTS Radical Cation-Scavenging Activity The ABTS radical cation-scavenging activity was performed according to the method described previously [28, 34]. The ABTS∙+ solution was produced by mixing 5 ml of 7 mM ABTS solution with 88 μl of 140 mM potassium persulfate and allowing the mixture to stand in the dark for 16 h at room temperature before use. The ABTS∙+ solution was diluted with 95% ethanol so that its absorbance at 734 nm was adjusted to 0.70 ± 0.05. 
## 2.8. ABTS Radical Cation-Scavenging Activity

The ABTS radical cation-scavenging assay was performed according to the method described previously [28, 34]. The ABTS•+ solution was produced by mixing 5 ml of 7 mM ABTS solution with 88 μl of 140 mM potassium persulfate and allowing the mixture to stand in the dark for 16 h at room temperature before use. The ABTS•+ solution was diluted with 95% ethanol so that its absorbance at 734 nm was adjusted to 0.70 ± 0.05. To determine the scavenging activity, 100 μl of diluted ABTS•+ solution was mixed with 100 μl of mango peel extract (concentrations ranging from 0 to 100 μg/ml for the Tainoung number 1 and Haden cultivars; 0–300 μg/ml for the Irwin, Yuwen, and Tu cultivars; and 0–500 μg/ml for the Jinhwang cultivar) and the mixture was allowed to react at room temperature for 6 min. After the reaction, the absorbance of all sample solutions was measured at 734 nm using an ELISA reader (PowerWave 340, BioTek Instruments, Winooski, VT, USA). The blank was prepared in the same manner, except that distilled water was used instead of the sample. The scavenging activity of ABTS•+ was calculated using the following equation:

$$\text{ABTS radical cation-scavenging}\ (\%) = \left(1 - \frac{A_{\text{sample}}}{A_{\text{control}}}\right) \times 100, \quad (3)$$

where $A_{\text{sample}}$ is the absorbance of ABTS with the tested sample and $A_{\text{control}}$ represents the absorbance of ABTS without the sample.

## 2.9. Cell Line and Culture

The murine macrophage cell line RAW 264.7 was obtained from the Bioresource Collection and Research Center, Food Industry Research and Development Institute (FIRDI, Hsinchu, Taiwan). The cells were grown in DMEM supplemented with 10% FBS and 100 U/ml penicillin-streptomycin solution at 37°C in a humidified chamber with 5% CO₂. The medium was changed every two days.

## 2.10. Measurement of Cell Viability

The MTT assay was used to evaluate cell viability. Briefly, RAW 264.7 cells (2 × 10⁵/ml in a 96-well plate) were plated with culture medium and incubated for 24 h at 37°C with 5% CO₂ in a humidified atmosphere. The medium was removed and fresh serum-free medium containing different concentrations of mango peel extracts (0 to 25 μg/ml for the CPEE and CPWE of TN1) was added. After 24 h of incubation at 37°C with 5% CO₂, the MTT reagent (0.1 mg/ml) was added. After incubating at 37°C for 4 h, the MTT reagent was removed, DMSO (100 μl) was added to each well, and the contents were thoroughly mixed by pipetting to dissolve the MTT-formazan crystals. The absorbance was then determined with an ELISA reader (PowerWave 340, BioTek Instruments, Winooski, VT, USA) at a wavelength of 570 nm. The cell viability (%) was calculated using the following equation:

$$\text{cell viability}\ (\%) = \frac{T}{C} \times 100, \quad (4)$$

where $T$ is the absorbance in the test and $C$ is the absorbance for the control.

## 2.11. Measurement of Nitric Oxide in Culture Media

RAW 264.7 cells (2 × 10⁵ cells/ml) were seeded in a 96-well flat-bottom plate for 24 h at 37°C with 5% CO₂. The culture medium was removed and replaced with fresh medium containing the tested samples at various concentrations prior to challenge with 1 μg/ml of LPS. The nitrite concentration was measured in the culture supernatant after 24 h of coincubation. In brief, 50 μl of the cultured supernatant was added to a 96-well plate, 100 μl of Griess reagent was added to each well, and the plate was allowed to stand for 10 min at room temperature. The absorbance at 540 nm was measured using an ELISA reader (PowerWave 340, BioTek Instruments, Winooski, VT, USA), and the quantification of nitrite was standardized with NaNO₂ at 0–100 μM concentrations [35].
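As a rough sketch of how the NaNO₂ standardization can be applied, the Griess response is assumed to be linear over 0–100 μM and fitted by least squares; all readings below are hypothetical, not data from the study.

```python
import numpy as np

# Hypothetical NaNO2 standards (0-100 uM) and their A540 readings
std_conc = np.array([0.0, 12.5, 25.0, 50.0, 100.0])   # uM
std_abs  = np.array([0.05, 0.11, 0.17, 0.29, 0.53])   # A540

slope, intercept = np.polyfit(std_conc, std_abs, 1)   # linear standard curve

def nitrite_uM(a540: float) -> float:
    """Convert a Griess A540 reading to nitrite (uM) via the standard curve."""
    return (a540 - intercept) / slope

print(f"{nitrite_uM(0.20):.1f} uM")  # ~31 uM for this hypothetical reading
```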
## 2.12. Zone of Inhibition

Five bacteria were tested for antibacterial activity of the mango peel extracts. These were three Gram-negative bacteria (Escherichia coli ATCC 11775, Salmonella typhimurium ATCC 13311, and Vibrio parahaemolyticus ATCC 17802) and two Gram-positive bacteria (Staphylococcus aureus ATCC 12600 and Bacillus cereus ATCC 14579), which were obtained from the Culture Collection and Research Center of the Food Industry Research and Development Institute, Hsinchu, Taiwan. Antibacterial activity was measured using the standard agar disc diffusion method [36]. In brief, E. coli, S. typhimurium, S. aureus, and B. cereus were grown in tryptic soy broth (TSB) medium (Difco Laboratories, Detroit, MI, USA) and V. parahaemolyticus was grown in TSB medium + 3% NaCl for 24 h at 37°C, and 0.1 ml of each bacterial culture at the proper cell density was spread on tryptic soy agar (TSA, Difco Laboratories, Detroit, MI, USA) plate surfaces (3% NaCl was added to TSA for V. parahaemolyticus). A paper disc (8 mm in diameter) was placed on the agar medium and loaded with 50 μl of solution containing 2 mg of mango peel extract (4%, w/v, in 0.05 M acetate buffer, pH 6.0). Control paper discs were prepared by infusing them with 50 μl of Antibiotic-Antimycotic Solution (containing 10,000 units/ml penicillin, 10 mg/ml streptomycin, and 25 μg/ml amphotericin) (Corning, Corning, NY, USA) or 50 μl of 0.05 M acetate buffer. The plates were incubated at 37°C for 24 h. After 24 h, the antibacterial activity of the extracts against the test bacteria was assessed from the growth-free zone of inhibition around the respective disc, and the inhibition diameters were measured.

## 2.13. Statistical Analysis

Experiments were performed at least three times. Values represent the means ± standard deviation (SD). Statistical analyses were done using the Statistical Package for the Social Sciences (SPSS). The results were analyzed using one-way analysis of variance (ANOVA), followed by Duncan's Multiple Range tests. p < 0.05 was considered statistically significant. Correlation analyses were performed using the square of Pearson's correlation coefficient (R²).
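The study used SPSS; as a rough open-source analogue, SciPy covers the one-way ANOVA and Pearson R² steps (Duncan's multiple range post hoc test has no SciPy equivalent and is omitted here). This is a minimal sketch: the triplicate yields are hypothetical, and pairing polyphenol content with reciprocal SC50 as a potency proxy is an illustrative assumption, not the authors' exact procedure.

```python
from scipy import stats

# Hypothetical triplicate extraction yields (%) for three extracts of one cultivar
npwe = [33.1, 33.5, 33.9]
cpwe = [36.0, 36.6, 37.2]
npee = [23.0, 23.4, 23.8]

f_stat, p_value = stats.f_oneway(npwe, cpwe, npee)  # one-way ANOVA
print(f"F = {f_stat:.1f}, p = {p_value:.4g}")       # p < 0.05 -> groups differ

# Pearson correlation between polyphenol content (Table 2, NPWE column) and
# an antioxidant potency proxy (reciprocal DPPH SC50); squared to give R^2
polyphenol = [1.40, 15.9, 3.09, 2.36, 6.41, 5.25]          # %
potency    = [1/499, 1/57.0, 1/368, 1/324, 1/124, 1/183]   # 1/SC50 (ml/ug)
r, _ = stats.pearsonr(polyphenol, potency)
print(f"R^2 = {r**2:.2f}")
```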
## 3. Results and Discussion

### 3.1. Effects of Mango Varieties, Compressional-Puffing, and Extraction Methods on Extraction Yields of Peel Extracts

Six varieties of mango fruits, namely, Jinhwang, Tainoung number 1 (TN1), Irwin, Yuwen, Haden, and Tu, were collected from a local grocery market in Xinhua District, Tainan City, Taiwan. Samples of peels were separated manually and the peels were oven-dried until the moisture content reached 4–7% (wet basis). The dried peel samples were crumbled and sieved using a 20-mesh screen, and the portion retained by the screen was collected and compressional-puffed according to the technique developed previously [27]. Compressional-puffing applies a mechanical compression force of approximately 5 kg/cm² to the sample three times before puffing, which accounts for the difference between compressional-puffing and the conventional puffing gun process. The puffing temperature was set at 220°C, and the corresponding pressure level inside the chamber was found to be 11 kg/cm² (Table 1). The NP and CP peel samples were ground and sieved using a 20-mesh screen. The portion passing through the screen was collected, and the bioactive compounds were then extracted with either ethanol or hot water as shown in Figure 2. In a preliminary experiment, we extracted the puffed peel samples directly with 70°C hot water and found that the dried extract exhibited a stone-like hard structure, which stuck tightly to the inner surfaces of the container and was difficult to dislodge. Thus, direct 70°C hot water extraction was not adopted in the present study. After extraction, four peel extracts, namely, NPWE (nonpuffed water extract), CPWE (compressional-puffed water extract), NPEE (nonpuffed ethanol extract), and CPEE (compressional-puffed ethanol extract), were obtained according to their puffing pretreatments and extraction methods for each mango cultivar (Figure 2). The yields of these extracts are given in Table 1. Comparing the extraction yields of these four extracts among the different mango varieties, the yields for the tested cultivars were similar, except that the Haden and Tu cultivars had relatively lower extraction yields. Thus, the peels of the Jinhwang, TN1, Irwin, and Yuwen cultivars, with their higher extract yields, would have advantages for commercial production. It was reported that compressional-puffing could rupture the structure of the puffed samples and thereby augment the extraction yield of crude fucoidan from brown algae [27, 28] and increase the extraction yields of total phenolics and total flavonoids from pine needles [29, 30]. In the present study, we also found that compressional-puffing could rupture the structure of mango peel (data not shown) and increase the extraction yields of both CPWE and CPEE as compared to NPWE and NPEE, respectively (Table 1). Therefore, compressional-puffing can also be applied effectively to mango peels to facilitate the release of bioactive compounds by simple extraction operations. A comparison of the extraction yields of water and ethanol extraction revealed that water extraction tended to give higher yields, and a higher extract yield is advantageous for commercial production. In addition, previous reports revealed that the composition of mango peel extract is complex; it may contain polyphenols, flavonoids, carotenoids, vitamin E, vitamin C, pectin, unsaturated fatty acids, and other biologically active components that positively influence health [25, 37–39]. Mango peel extract has also exhibited biological functions such as antioxidant properties [25, 39] and inhibition of HeLa human cervical carcinoma cell proliferation [38]. Generally, phenolic compounds are the major bioactive components of mango peels [18], and these have exhibited antioxidant activity and an antiproliferative effect on HeLa cells [25, 37–39]. Thus, the phenolic compound composition of our mango peel extracts and their effects on biological functions warrant further examination. Taken together, peel extracts from the Jinhwang, TN1, Irwin, and Yuwen cultivars had higher extraction yields than those of the Haden and Tu cultivars. Compressional-puffing pretreatment produced a worthwhile incremental increase in the extraction yields of mango peel extracts. Water extraction tended to give higher extract yields than ethanol extraction, which would be beneficial for commercial production. The phenolic compound composition and biological functions of the mango peel extracts require further characterization.
### 3.2. Polyphenol Contents and Free Radical-Scavenging Activities of Peel Extracts from Various Mango Cultivars

Phenolic compounds are reported to be the major bioactive components of mango peels [18]. In the present study, the four peel extracts (NPWE, CPWE, NPEE, and CPEE) from six mango cultivars were assayed for polyphenol content by the Folin-Ciocalteu colorimetric method. The results presented in Table 2 show that the peel extracts from the TN1 cultivar possessed the highest amounts of total phenolic compounds among all peel extracts. Thus, it is reasonable to postulate that the TN1 peel extracts may exhibit high biological activities, and further investigation is therefore warranted. Moreover, a comparison of the polyphenol contents of NPWE and CPWE in all mango cultivars revealed that the polyphenol content of CPWE was higher than that of NPWE (Table 2), indicating that compressional-puffing could increase the polyphenol content of the water extracts in all mango cultivars. However, in the case of the ethanol extracts, only the CPEEs from Jinhwang and TN1 had higher polyphenol contents than the corresponding NPEEs (Table 2). Moreover, for all mango cultivars, the polyphenol contents of the ethanol extracts were higher than those of the water extracts (Table 2), indicating that ethanol extraction was effective for extracting polyphenols. Polyphenols are well known to exhibit antioxidant activity due to their ability to scavenge free radicals via hydrogen or electron donation and the reactivity of the phenol moiety [40]. Accordingly, the antioxidant capacities of the NPWE, CPWE, NPEE, and CPEE of the six mango peels were characterized using DPPH and ABTS radical-scavenging assays. DPPH is a stable free radical and is widely used to evaluate antioxidant activity in a relatively short time compared to other methods [41]. The SC50 values (the concentration of mango peel extract capable of scavenging 50% of the DPPH radical) of the peel extracts (NPWE, CPWE, NPEE, and CPEE) from the six mango cultivars are presented in Table 2. As shown in Table 2, the peel extracts from TN1 exhibited the strongest DPPH radical-scavenging activity among the mango cultivars, and the most potent was the CPEE of TN1, with an SC50 value of 41.7 ± 1.3 μg/ml. Kim et al. (2010) reported that the SC50 value for the DPPH radical-scavenging activity of Irwin mango peel ethanol extract was about 40 μg/ml [18], similar to the SC50 value of the CPEE of TN1 reported here. A comparison of the DPPH radical-scavenging activities of the CPWE group with those of the NPWE group revealed that compressional-puffing could increase the DPPH radical-scavenging activities of the peel extracts (Table 2). Moreover, the DPPH radical-scavenging activity of the EE groups (NPEE and CPEE) was greater than that of the WE groups (NPWE and CPWE), which appeared to be positively correlated with the higher polyphenol contents of the EE groups shown in Table 2. Regarding the scavenging of ABTS•+, the relatively long-lived ABTS•+ is decolorized during the reaction with a hydrogen-donating antioxidant [42]. The SC50 values (the concentration of mango peel extract capable of scavenging 50% of the ABTS radical cation) of the peel extracts (NPEE, CPEE, NPWE, and CPWE) from the six mango cultivars are also presented in Table 2.
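The paper reports SC50 values read from dose-response curves; one common way to estimate such a value is linear interpolation of the curve at 50% scavenging, sketched below with a hypothetical dose-response (the authors' exact curve-fitting procedure is not specified in the text).

```python
import numpy as np

def sc50(conc_ug_ml, scavenging):
    """Estimate SC50 (ug/ml) by linear interpolation of the dose-response
    curve at 50% scavenging; assumes scavenging increases with dose."""
    return float(np.interp(50.0, scavenging, conc_ug_ml))

# Hypothetical dose-response for one extract
conc = [0, 25, 50, 100, 200, 300]   # ug/ml
scav = [0, 18, 35, 61, 85, 93]      # % scavenging
print(f"SC50 ~ {sc50(conc, scav):.0f} ug/ml")  # -> ~79 ug/ml
```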
The results show that, among the extracts from the six mango cultivars, the peel extracts from TN1 exhibited the strongest ABTS radical cation-scavenging activity, and the SC50 value of the most potent extract, the CPEE of TN1, was 13.0 ± 0.8 μg/ml. Kim et al. (2010) reported that the SC50 value for the ABTS radical cation-scavenging activity of Irwin mango peel ethanol extract was about 200 μg/ml [18], which is less effective than our CPEE of TN1. Regarding NPWE and CPWE, compressional-puffing increased the ABTS radical cation-scavenging activity of the CPWE of the mango cultivars, similar to the finding for DPPH radical-scavenging activity. All EEs (NPEE and CPEE) had greater ABTS radical cation-scavenging activity than the WEs (NPWE and CPWE) (Table 2). To better understand the relationship between the polyphenol contents and free radical-scavenging activities of the peel extracts, correlation plots were constructed, and the results are shown in Figure 3. A high correlation between the polyphenol contents of the peel extracts and their corresponding free radical-scavenging activities (DPPH and ABTS) was found for NPWE, CPWE, NPEE, and CPEE, consistent with previously reported observations [43]. In summary, the peel extracts from TN1 had the highest amounts of total phenolic compounds and the strongest DPPH and ABTS free radical-scavenging activities. For the water extracts, compressional-puffing tended to increase the total phenolic content of the CPWEs and produced an incremental increase in free radical-scavenging activities compared to the NPWEs. For the ethanol extracts, only the CPEE of TN1 had a higher total phenolic content and higher free radical-scavenging activities than the NPEE of TN1. Moreover, the ethanol extracts generally had higher total phenolic contents and greater free radical-scavenging activities than the water extracts. Therefore, both the CPWE and CPEE of the TN1 cultivar warrant further analyses of their phenolic compound composition, the storage stability of their antioxidant capacity, and their anti-inflammatory and antibacterial activities.

Table 2 Polyphenol content, DPPH radical-scavenging activity, and ABTS radical cation-scavenging activity of extracts from various Taiwanese mango peels.
| Polyphenols (%)* | NPWE | CPWE | NPEE | CPEE |
|---|---|---|---|---|
| Jinhwang cultivar | 1.40 ± 0.11 a A**** | 2.11 ± 0.24 a B | 5.31 ± 0.25 a C | 9.13 ± 0.16 c D |
| Tainoung number 1 cultivar | 15.9 ± 0.9 e A | 16.6 ± 1.1 e A | 23.5 ± 0.4 e B | 28.5 ± 0.7 e C |
| Irwin cultivar | 3.09 ± 0.18 b A | 2.92 ± 0.19 a A | 7.06 ± 0.29 b C | 5.07 ± 0.11 a B |
| Yuwen cultivar | 2.36 ± 0.25 b A | 4.63 ± 0.90 b B | 7.21 ± 0.05 b D | 6.26 ± 0.05 b C |
| Haden cultivar | 6.41 ± 0.20 d A | 7.31 ± 0.19 c B | 18.9 ± 0.3 d D | 13.4 ± 0.3 d C |
| Tu cultivar | 5.25 ± 0.27 c A | 10.1 ± 1.6 d B | 14.7 ± 0.2 c D | 13.0 ± 0.5 d C |

| DPPH, SC50 values (μg/ml)** | NPWE | CPWE | NPEE | CPEE |
|---|---|---|---|---|
| Jinhwang cultivar | 499 ± 7 f D | 368 ± 13 f C | 197 ± 12 d A | 251 ± 0 f B |
| Tainoung number 1 cultivar | 57.0 ± 2.2 a C | 67.0 ± 2.2 a D | 46.0 ± 1.4 a B | 41.7 ± 1.3 a A |
| Irwin cultivar | 368 ± 11 e C | 255 ± 2 d B | 195 ± 9 d A | 222 ± 8 e A |
| Yuwen cultivar | 324 ± 3 d D | 303 ± 5 e C | 165 ± 5 c A | 206 ± 4 d B |
| Haden cultivar | 124 ± 3 b D | 101 ± 5 b C | 69 ± 5 b A | 86 ± 4 b B |
| Tu cultivar | 183 ± 2 c D | 158 ± 5 c C | 78.3 ± 4.9 b A | 96.7 ± 4.7 c B |
| Vitamin C | 11.3 ± 0.1 | | | |

| ABTS, SC50 values (μg/ml)*** | NPWE | CPWE | NPEE | CPEE |
|---|---|---|---|---|
| Jinhwang cultivar | 186 ± 0 e D | 139 ± 0 e C | 70.0 ± 0.0 f B | 54.0 ± 3.3 c A |
| Tainoung number 1 cultivar | 28.2 ± 3.8 a C | 23.3 ± 0.5 a B | 15.7 ± 0.9 a A | 13.0 ± 0.8 a A |
| Irwin cultivar | 113 ± 7 c C | 101 ± 6 d C | 59.0 ± 0.8 e A | 76.3 ± 2.8 e B |
| Yuwen cultivar | 137 ± 2 d C | 102 ± 2 d B | 52.0 ± 0.9 d A | 62.0 ± 1.7 d A |
| Haden cultivar | 55.3 ± 1.7 b C | 37.3 ± 2.4 b B | 27.3 ± 0.9 c A | 30.7 ± 1.7 b A |
| Tu cultivar | 115 ± 5 c D | 77.3 ± 3.4 c C | 24.7 ± 1.3 b A | 34.0 ± 1.6 b B |
| Vitamin C | 3.58 ± 0.07 | | | |

*Polyphenols (%) = (g/g solid extract, dry basis) × 100. **SC50 values (concentration of mango peel extract capable of scavenging 50% of the DPPH radical) for DPPH radical-scavenging of the different mango peel extracts. ***SC50 values (concentration of mango peel extract capable of scavenging 50% of the ABTS radical cation) for ABTS radical cation-scavenging of the different mango peel extracts. ****Values are mean ± SD (n = 3); values in the same column with different letters (a, b, c, d, e, and f) and in the same row with different letters (A, B, C, and D) are significantly different (p < 0.05).

Figure 3 Association between polyphenol content and DPPH/ABTS radical-scavenging activities of mango peel extracts. (a) NPWE; (b) CPWE; (c) NPEE; (d) CPEE. SC50: concentration for scavenging 50% of DPPH or ABTS free radicals.

### 3.3. Analysis of Phenolic Compound Composition, Storage Stability of Antioxidant Capacity, Anti-Inflammatory Activity, and Antibacterial Activity in CPWE and CPEE of TN1 Cultivar

The peel extracts of the TN1 cultivar had the highest amounts of total phenolic compounds and the strongest free radical-scavenging activities. Moreover, the CPWE and CPEE from TN1 had higher extraction yields and greater polyphenol contents than the NPWE and NPEE from TN1. Therefore, the phenolic compound composition of the CPWE and CPEE from TN1 was analyzed by RP-HPLC coupled with a UV-vis detector. The results are shown in Figure 4 and Table 3. In Figure 4, it can be seen that seven phenolic compounds, namely, gallic acid, pyrogallol, chlorogenic acid, p-hydroxybenzoic acid, p-coumaric acid, ECG, and CG, were tentatively identified in the CPWE and CPEE of TN1 by HPLC analysis. Table 3 shows the quantitative data on the phenolic compound composition of the CPWE and CPEE of TN1. Both the CPWE and CPEE of TN1 contained large amounts of p-hydroxybenzoic acid, gallic acid, and pyrogallol and smaller amounts of chlorogenic acid, CG, p-coumaric acid, and ECG.
A comparison of the phenolic compound composition of the CPWE and CPEE revealed that the CPEE of TN1 had greater amounts of p-hydroxybenzoic acid, gallic acid, pyrogallol, chlorogenic acid, CG, p-coumaric acid, and ECG than the CPWE (Table 3). These results are consistent with the data shown in Table 2, which show that the CPEE of TN1 has a higher total phenolic content than the CPWE of TN1. We found that p-hydroxybenzoic acid was the predominant phenolic compound detected (up to 3313 ± 2 mg/100 g peel weight, dry basis) in the CPEE of TN1, in line with other studies reporting that p-hydroxybenzoic acid can be detected in extracts of mango cultivars [44]. The concentrations of gallic acid in the CPWE and CPEE of TN1 were 579 ± 72 and 1052 ± 1 mg/100 g peel weight, dry basis, respectively. These values are considerably higher than those reported previously for an ethanol extract of mango peel, with an average gallic acid concentration of 152.20 ± 0.14 mg/100 g mango peel, dry weight [45]. Previous studies reported that pyrogallol can be detected in the ethanolic extract of mango kernel (the mango tested was purchased from an Egyptian local market) at a concentration of 1337.9 ± 0.31 mg/100 g mango kernel, dry weight, but was absent from the ethanolic extract of mango peel [45]. However, we detected pyrogallol in the CPWE and CPEE of TN1 at concentrations of 566 ± 55 and 930 ± 90 mg/100 g peel weight, dry basis, respectively. We speculate that this difference may be due to the different mango varieties tested. Structurally, p-hydroxybenzoic acid, gallic acid, and pyrogallol are monophenolic compounds, which exhibit antioxidant activity owing to their hydrogen-donating or electron-donating properties [46]. Therefore, the high free radical-scavenging activities of the CPWE and CPEE of TN1 may be attributed to their high contents of p-hydroxybenzoic acid, gallic acid, and pyrogallol. Besides individual phenolic compounds, previous studies have reported that synergistic effects of combinations of phytochemicals may also produce beneficial biological functions, such as inhibition of the proliferation of human cancer cells [38, 47]. Thus, the synergistic effects of the constituents of the CPWE and CPEE of TN1 on biological functions warrant further investigation. The storage stability of an antioxidant agent is important for its potential industrial application. Here, we evaluated the storage stability of vitamin C, the CPWE of TN1, and the CPEE of TN1 by the DPPH radical-scavenging assay. The test sample powders were redissolved in double-distilled water at various concentrations, the sample solutions were stored at room temperature for 1, 2, 4, and 8 hours, and the corresponding DPPH radical-scavenging activities were then determined. The data presented in Figure 5(a) show that the well-known natural antioxidant vitamin C dramatically lost its DPPH radical-scavenging activity over 1–8 hours of storage. In contrast, the DPPH radical-scavenging activities of the CPWE and CPEE of TN1 did not change appreciably over 1–8 hours of storage (Figures 5(b) and 5(c)). These findings indicate that the mango peel extracts exhibited high storage stability in terms of antioxidant activity. Fruit polyphenols have been reported to possess immunomodulatory and anti-inflammatory properties in in vitro and animal studies [13].
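As a brief sketch of how the storage-stability readout can be reduced to a percent-of-initial-activity figure: the time points below match the experiment (with an assumed 0 h baseline), but the activity values are hypothetical and merely mimic the qualitative trend in Figure 5.

```python
import numpy as np

# Hypothetical DPPH scavenging (%) over room-temperature storage
hours    = np.array([0, 1, 2, 4, 8])                   # 0 h baseline assumed
extract  = np.array([62.0, 61.5, 61.8, 60.9, 61.2])    # mango peel extract
vitamin_c = np.array([62.0, 48.0, 39.0, 27.0, 15.0])   # vitamin C control

retained = extract / extract[0] * 100.0    # % of initial activity retained
print(retained.round(1))  # the extract stays near 100%, unlike vitamin C
```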
NO is an inflammatory mediator induced by inflammatory cytokines or bacterial LPS in various cell types, including macrophages [48]. Samples with NO-inhibitory activity thus potentially possess anti-inflammatory activity. The CPEE and CPWE from TN1 were tested for anti-inflammatory activity by investigating their effects on NO production in LPS-induced RAW 264.7 macrophages. Neither CPEE nor CPWE appreciably affected the viability of RAW 264.7 cells at the tested concentrations of 6.25–25 μg/ml in the presence of 1 μg/ml LPS (Figure 6(a)). As shown in Figure 6(b), when RAW 264.7 cells were treated with 1 μg/ml LPS, NO production increased from 3.11 ± 0.25 μM to 12.8 ± 0.1 μM. Moreover, when RAW 264.7 cells were treated with 1 μg/ml LPS in the presence of various concentrations of CPEE, NO production was significantly decreased from 12.8 ± 0.1 μM to 9.54 ± 0.08 μM, whereas in the presence of various concentrations of CPWE, NO production was only slightly reduced. These results indicate that the CPEE of TN1 had apparent anti-inflammatory activity and thus may have potential as a natural and safe agent for protecting human health by modulating the immune system. Previous studies have demonstrated that extracts with high polyphenol contents exhibit high antibacterial activity [49]. We therefore evaluated the antibacterial activity of the CPEE and CPWE of TN1 by the disc diffusion method. Five bacteria, three Gram-negative (E. coli, S. typhimurium, and V. parahaemolyticus) and two Gram-positive (S. aureus and B. cereus), were used to assess the antibacterial properties. As can be seen in Figures 7(a)–7(f), both the CPEE and CPWE of TN1 exhibited antibacterial activity against all five bacteria tested. The Gram-negative bacteria were more sensitive than the Gram-positive ones to the CPEE and CPWE of TN1 (Figure 7(f)). In addition, for these five bacteria, except V. parahaemolyticus, the CPEE exhibited higher antibacterial activity than the CPWE. These results may be attributed to the higher polyphenol content of the CPEE (Table 2), which is also consistent with previous findings [50]. Interestingly, against V. parahaemolyticus, the CPEE had less antibacterial activity than the CPWE. We speculate that this may be due to the presence of 3% NaCl in the medium used for V. parahaemolyticus; however, further experimental studies are needed to elucidate the mechanism. In summary, the present study demonstrated that the CPEE and CPWE from TN1 had high amounts of phenolic compounds, possessed good and stable free radical-scavenging activities, and exhibited anti-inflammatory and antibacterial activities. The CPEE of TN1 exhibited the strongest antioxidant, anti-inflammatory, and antibacterial properties and thus has potential for use in the food, cosmetics, and nutraceutical industries.

Table 3 Phenolic compound composition of the CPWE and CPEE of the Tainoung number 1 cultivar.
| Compound | CPWE (mg/100 g)* | CPEE (mg/100 g) |
|---|---|---|
| p-Hydroxybenzoic acid | 1863 ± 318 | 3313 ± 2 |
| Gallic acid | 579 ± 72 | 1052 ± 1 |
| Pyrogallol | 566 ± 55 | 930 ± 90 |
| Chlorogenic acid | 125 ± 8 | 245 ± 7 |
| Catechin gallate (CG) | 125 ± 43 | 189 ± 52 |
| p-Coumaric acid | 68.9 ± 9.4 | 131 ± 0 |
| Epicatechin gallate (ECG) | 32.0 ± 3.9 | 50.8 ± 7.0 |

*The concentration of each phenolic compound is expressed as mg/100 g peel weight, dry basis.

Figure 4 (a) High-performance liquid chromatography of peel extracts (CPWE and CPEE) of the Tainoung number 1 cultivar; (b) high-performance liquid chromatography of polyphenol standards: gallic acid (1), pyrogallol (2), chlorogenic acid (3), p-hydroxybenzoic acid (4), p-coumaric acid (5), ECG (6), and CG (7).

Figure 5 DPPH scavenging activities of vitamin C, CPWE of TN1, and CPEE of TN1 after different storage times. (a) Vitamin C; (b) CPWE of TN1; (c) CPEE of TN1.

Figure 6 (a) Effects of CPEE of TN1, CPWE of TN1, and LPS on the cell viability of RAW 264.7 cells. (b) Effects of CPEE of TN1, CPWE of TN1, and LPS on NO secretion in RAW 264.7 cells. The data are the means ± SD of triplicate samples. Bars with different letters are significantly different (p < 0.05).

Figure 7 Zone of inhibition of CPEE of TN1 and CPWE of TN1 at a concentration of 4%, w/v, in 0.05 M acetate buffer, pH 6.0, against (a) Escherichia coli, (b) Salmonella typhimurium, (c) Vibrio parahaemolyticus, (d) Staphylococcus aureus, and (e) Bacillus cereus. In each dish, A, B, C, and D represent antibiotic, acetate buffer, CPEE of TN1, and CPWE of TN1, respectively. (f) The bar graph summarizes four separate antibacterial experiments and shows the zone of inhibition according to treatment. Values are expressed as mean ± SD (n = 4). Means sharing at least one common letter do not differ significantly (p > 0.05).
## 4. Conclusion

In this study, we employed a compressional-puffing pretreatment process and two extraction methods to extract bioactive compounds from the peels of six Taiwanese mango cultivars. The compressional-puffing process increased the extraction yields and polyphenol contents of the peel extracts. Ethanol extracts of the peels had higher amounts of total phenolic compounds and greater free radical-scavenging activities than water extracts. The polyphenol contents of the extracts correlated positively with their free radical-scavenging activities. Among these extracts, the CPEE of TN1 exhibited the strongest antioxidant, anti-inflammatory, and antibacterial properties. It is thus suggested as a natural, safe, and stable antioxidant agent with anti-inflammatory and antibacterial properties, which may have a wide range of applications in food, cosmetics, and nutraceuticals. Future studies on the polyphenol composition and biological activities of mango peel extracts after an in vitro digestion, as well as investigations of the in vivo biological activities of mango peel extracts, are warranted.

---

*Source: 1025387-2018-02-21.xml*
# Conservative Management of an Iatrogenic Esophageal Tear in Kenya

**Authors:** Peter Waweru; David Mwaniki
**Journal:** Case Reports in Surgery (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/102540

---

## Abstract

Since its description over 250 years ago, diagnosis of esophageal perforation remains challenging, its management controversial, and its mortality high. This rare, devastating, mostly iatrogenic condition can quickly lead to severe complications and death due to an overwhelming inflammatory response to gastric contents in the mediastinum. Diagnosis is made with the help of esophagograms, and although such tears have traditionally been managed via an aggressive surgical approach, recent reports emphasize a shift in favor of nonoperative care, which nevertheless remains controversial. We here present a case of an iatrogenic esophageal tear resulting from a routine esophagoscopy in a 50-year-old lady presenting with dysphagia. The esophageal tear, almost missed, was eventually managed successfully by conservative means, thanks to a relatively early diagnosis.

---

## Body

## 1. Introduction

Esophageal perforation is a rare, devastating, and often life-threatening clinical condition [1] typically resulting from endoscopic procedures [2]. This condition remains difficult to diagnose and manage and can quickly cause death without alarm [3], owing to its nonspecific and varied clinical symptomatology [1]. While surgery has been the mainstay of treatment, nonoperative management approaches for this condition are becoming more and more common [4], but they remain controversial. We present a case of an iatrogenic esophageal perforation that developed after a diagnostic esophagoscopy in a female patient with odynophagia, and the subsequent conservative treatment after an almost missed diagnosis. In view of the recent but controversial emphasis on nonoperative treatment, this case is presented to add to the repertoire of success stories, thus encouraging nonoperative care, even in developing countries.

## 2. Case Report

A 50-year-old lady presented with dysphagia, odynophagia, and regurgitation of foods. Although an esophagogastroduodenoscopy (OGD) done previously had shown gastroesophageal reflux disease (GERD), resolving esophagitis, and gastritis, this new-onset dysphagia warranted further examination. A barium swallow and computed tomography (CT) scans of the postnasal space and chest were all normal. An indirect laryngoscopy was attempted but was unsuccessful due to a strong gag reflex, and consequently a direct laryngoscopy and esophagoscopy were done. These investigations revealed laryngeal erythema and gastric fundal erosion with no other abnormalities. After esophagoscopy, she was successfully reversed, observed in the postanesthetic care unit, and eventually discharged to the ward in stable condition. In the ward, she suddenly developed severe epigastric pains, respiratory distress, and difficulty in speaking, for which she was given intravenous (IV) esomeprazole 80 mg and Buscopan (hyoscine butylbromide) 40 mg for what appeared to be an acute exacerbation of gastritis. She was also started on oxygen. There being minimal improvement, she was immediately transferred to the intensive care unit, where close monitoring and oxygen therapy were continued. Further investigations included an electrocardiogram (ECG) and echocardiogram, which were both normal, and a CT scan of the chest, which revealed severe basal pneumonia.
A gastrografin swallow was finally done (Figure 1) and showed leakage of the contrast into the mediastinum and left pleural cavity.

Figure 1 Gastrografin swallow showing leak of contrast into the left mediastinum and left pleural cavity.

Following the diagnosis of an esophageal perforation, a decision was made to manage the patient nonoperatively, considering the relatively early diagnosis (a few hours after esophagoscopy). A chest drain was inserted percutaneously and a nasogastric tube (NGT) was inserted to rest the esophagus and drain the gastric contents. She was kept nil per os (NPO) and was started on broad-spectrum IV antibiotics, oxygen, IV proton pump inhibitors, IV fluids, and analgesics. A follow-up gastrografin swallow done on day 12 after esophagoscopy showed notably reduced leakage (Figure 2).

Figure 2 Follow-up gastrografin swallow showing reduced leakage.

Later, a repeat OGD was carefully performed on day 14 to review the status of the injury and showed a contracting 2 cm tear at 30 cm in the posterior wall. The patient showed good progress on conservative management and was transferred to the ward on day 15. Feeding was gradually advanced from total parenteral nutrition to feeding via the NGT, then oral sips, and finally solid meals before she was discharged home after about one month in stable condition.

## 3. Discussion

Esophageal perforation, reported as early as the 18th century (Hermann Boerhaave, 1724) [5], is a rare and often grave clinical condition [4] with high mortality rates of over 40%, especially in septic patients [6]. While the true incidence is unclear [4], the majority of esophageal rupture cases (up to 59%) are iatrogenic [1], resulting from esophagoscopy [2], despite the actual risk of esophageal perforation during endoscopy being low [2, 7]. Boerhaave syndrome, a spontaneous esophageal rupture with no preexisting pathology, accounts for about 15% of cases [8]. Foreign-body ingestion accounts for 12% of cases, trauma 9%, operative injury 2%, tumors 1%, and other causes 2% [8]. Thoracic esophageal perforations occur frequently [1, 8] and can lead to serious complications and death without alarm [3, 9], owing to the mediastinal contamination that ensues soon after the perforation [7]. This contamination, which is exacerbated by the negative intrathoracic pressure that draws esophageal contents into the mediastinum [10], evokes an overwhelming inflammatory response [11] leading to mediastinitis, initially chemical, followed by bacterial invasion and severe mediastinal necrosis [7]. Eventually, sepsis ensues, leading to multiple-organ failure and death [3, 4]. The extent of this inflammation (mediastinitis), and thus the morbidity and mortality of esophageal perforation, depends not only on the cause and location of the perforation but also on the time interval between onset and access to appropriate treatment [3, 12]. It has been shown that early detection reduces mortality by over 50% [11] and that treatment delays over 24 hours increase mortality significantly [13]. Unfortunately, prompt diagnosis continues to be demanding for most clinicians [5]. Diagnosis of esophageal perforation is challenging owing to a nonspecific and varied clinical presentation [1] that mimics a myriad of other disorders, such as myocardial infarction and peptic ulcer perforation [14]. Patients may present with any combination of nonspecific signs and symptoms, including fever, tachycardia, tachypnea, acute-onset chest pain, dysphagia, vomiting, and shortness of breath [4, 6, 15].
A high index of suspicion is therefore needed for recognition of esophageal perforation [5]. Once it is suspected, patients should be evaluated quickly with a combination of radiographs and esophagograms [8, 14]. Accurate diagnosis may, however, require additional investigations, including computed tomography and flexible esophagoscopy [7, 12]. Treatment of esophageal perforations remains a challenge [13], and the appropriate management is controversial [9]. Traditionally, surgery has been the mainstay of treatment [14], but recent reports emphasize a shift in treatment strategies, with nonoperative approaches becoming more common [4, 9]. It has been shown that, with careful patient selection, nonoperative management can be the treatment of choice for esophageal perforations [6] with good outcomes [9, 12, 15, 16]. Altorjay et al. [17] and others have suggested criteria for the selection of nonoperative treatment, including early perforations (or a contained leak if diagnosis is delayed); a leak draining back into the esophagus; nonseptic patients; perforation not involving a neoplasm, the abdominal esophagus, or a distal obstruction; and the availability of an experienced thoracic surgeon and contrast studies. When these established guidelines are followed, survival rates of up to 100% have been reported [7, 9, 15]. Patients selected for nonoperative treatment are started on broad-spectrum antibiotics, intravenous fluids, oxygen therapy, adequate analgesia, and gastric acid suppression and are kept nil by mouth in an intensive care unit [4, 18]. A nasogastric tube is placed to clear gastric contents and limit further contamination [9], and mediastinal contamination is drained percutaneously or radiologically [18] via chest tubes, thereby converting the esophageal perforation to an esophagocutaneous fistula that heals in a manner similar to other gastrointestinal fistulae [6]. Apart from observation, the range of conservative management is growing, with the increasing use of endoscopic stents, clips, vacuum sponge therapy, and fibrin glue application [8, 12] in selected patients. Notably, though, even with meticulous patient selection, up to 20% of patients develop multiple complications within 24 hours and require surgical intervention [2, 7]. In our patient, the diagnosis of an iatrogenic esophageal perforation was made relatively early, and a multidisciplinary team chose conservative treatment given that the patient was not septic and had no contraindications to this approach. Treatment was instituted without complications, achieving good results. While there are few such reports from resource-limited settings, conservative management should be considered in hospitals that have the institutional capacity to support it. --- *Source: 102540-2015-07-14.xml*
# Expression Profiling Using a cDNA Array and Immunohistochemistry for the Extracellular Matrix Genes FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 in Colorectal Carcinoma Progression and Dissemination **Authors:** Suzana Angelica Silva Lustosa; Luciano de Souza Viana; Renato José Affonso; Sandra Regina Morini Silva; Marcos Vinicius Araujo Denadai; Silvia Regina Caminada de Toledo; Indhira Dias Oliveira; Delcio Matos **Journal:** The Scientific World Journal (2014) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2014/102541 --- ## Abstract Colorectal cancer dissemination depends on extracellular matrix genes related to remodeling and degradation of the matrix structure. This investigation intended to evaluate the association of FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 gene and protein expression levels in tumor tissue with clinical and histopathological neoplastic parameters of cancer dissemination. The expression associations between ECM molecules and the selected epithelial markers EGFR, VEGF, Bcl-2, P53, and KI-67 were also examined in 114 patients with colorectal cancer who underwent primary tumor resection. Quantitative real-time PCR and immunohistochemistry tissue microarray methods were performed on samples from the primary tumors. The gene expression results showed that the ITGA-3 and ITGB-5 genes were overexpressed in tumors with lymph node and distant metastasis (stage III/IV tumors compared with stage I/II tumors). The MMP-2 gene showed significant overexpression in mucinous-type tumors, and MMP-9 was overexpressed in tumors of the villous adenocarcinoma histologic type. The ECM genes MMP-9 and ITGA-3 showed a significant expression correlation with the epithelial marker EGFR. Overexpression of the extracellular matrix genes ITGA-3 and ITGB-5 is associated with advanced-stage tumors, and the genes MMP-2 and MMP-9 are overexpressed in mucinous and villous adenocarcinoma type tumors, respectively. Overexpression of the epithelial marker EGFR was shown to be associated with expression of the ECM genes MMP-9 and ITGA-3. --- ## Body ## 1. Introduction Studies have shown that alterations in genes that regulate basic cell functions such as cell-cell adhesion and ECM-cell adhesion are followed by penetration of the basement membrane, destroying the physical structure of the tissue [1]. Alterations in the expression of adhesion molecules can influence tumor aggressiveness, resulting in local infiltrative growth and metastasis. Thus, the basement membrane and the ECM jointly represent two important physical barriers to malignant invasion, and their degradation by metalloproteinase enzymes may have an important role in tumor progression and metastatic dissemination [2–4]. Other researchers, however, have reported that, in general, the expression levels of integrins alpha 3 and alpha 5 are reduced in many colorectal carcinomas (CRCs) [5, 6]. Some authors have recently demonstrated that integrin inhibition, at any point of action, may lead to the inhibition of tumor progression. Therefore, integrin inhibition may represent a pharmacological target for cancer treatment and prevention through the suppression of cell migration and invasion and, via the induction of apoptosis, also through the blocking of tumor angiogenesis and metastasis [7]. In most human cancers, metalloproteinase expression and activity levels are high compared with normal tissue, and this has also been demonstrated in colorectal adenocarcinomas [8, 9].
From these results, several researchers have analyzed the possibility that metalloproteinase expression and activity levels can be used as tumor markers, with the aim of preventing tumor growth, invasion, and metastasis [10, 11]. Studies have explored the hypothesis that MMP-9 functions as a key regulator of the malignant phenotype in patients with colorectal tumors presenting with overexpression of this protease relative to the adjacent normal tissues. In this context, MMP-9 is the main agent of cancer cell invasion and metastasis in the epithelial and stromal cells of the primary colorectal tumor. In addition, human colorectal cancer cells have the ability to synthesize and secrete MMP-9. This effect, associated with the induction of proteolytic functions in the pericellular space, promotes the development of metastasis. Hence, the MMP-9 present in tumor epithelial cells can represent a specific target for the diagnosis and treatment of metastatic CRC. Recently, Viana et al. reported that the expression of the genes SPARC, SPP1, FN-1, ITGA-5, and ITGAV correlates with common parameters of progression and dissemination in CRC and that overexpression of the ITGAV gene and protein correlates with an increased risk of perineural invasion. Moreover, according to these authors, the strong correlation of IHC expression between ITGAV and EGFR suggests an interaction between these two signaling pathways [12]. Denadai et al., in 2013, also showed that increased expression levels of ITGA-6 and ITGAV are related to venous invasion and neural infiltration, respectively, while overexpression of ITGB-5 and ITGA-3 is associated with stage III (TNM) disease, and overexpression of ITGA-5 correlates with the presence of mucinous-type malignant neoplasias [13]. The authors concluded that follow-up studies, preferably with a controlled prospective design, are necessary to establish the roles of such genes as potential biomarkers to predict disease extent or outcome and possibly contribute to the management of CRC patients. According to Nowell, in 2002, tumors become more clinically and biologically aggressive over time; this has been termed "tumor progression" and includes, among other properties, invasion and metastasis, as well as more efficient escape from host immune regulation. Molecular techniques have shown that tumors expand as a clone, from a single altered cell through sequential somatic genetic changes, generating increasingly aggressive subpopulations within the expanding clone. So far, multiple types of genes have been identified, and although they differ among tumors, they provide potential specific targets for important new therapies [14]. This study aimed to evaluate the relationship of the expression levels of selected ECM genes and proteins, FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9, with CRC progression and dissemination and with that of P53, Bcl-2, KI-67, EGFR, and VEGF, as it has been shown by several authors that proliferation, apoptosis, and cell migration are regulated by cell-cell interaction and extracellular matrix components. It is also worth noting that the growth factors EGF and VEGF are usually stored in the ECM and can be activated and released after ECM modulation [15–17]. ## 2. Methods ### 2.1. Patients and Tumor Samples We studied 114 patients with stage I–IV CRC who underwent primary tumor resection at the Fundação Pio XII, Barretos Cancer Hospital, between August 2006 and July 2009.
All patients were eligible for the analysis of the expression of the genes of interest through real-time PCR and immunohistochemistry (IHC) assays using the tissue microarray (TMA) technique. The median follow-up was 30 months at the time of this report. The ethical use of human tissue for research was approved by the institutional review board, and the design of this study followed the principles of the Declaration of Helsinki and also complied with the principles of good clinical practice. This study was also approved by the Ethics Committee of the Barretos Cancer Hospital and UNIFESP-Escola Paulista de Medicina, São Paulo, Brazil. In this study, we included patients of both genders aged >18 years. Patients who had received neoadjuvant treatment (chemotherapy or radiotherapy) were excluded. In all patients, tumor tissue was sampled during surgery and cryopreserved, and paraffin blocks were available for further histopathological analysis. Patients without primary CRC site resection were excluded, as were patients with a previous or current diagnosis of another primary malignancy at any location of the body other than nonmelanoma skin cancer or in situ carcinoma of the cervix. Patients with a known history of familial CRC were also excluded. The chromosomal and microsatellite instability statuses were not assessed. Sixty-three patients were male (55.3%), and 51 were female (44.7%). The median patient age was 60 years (range 24–83); 58 patients (50.9%) were over 60 years of age. Concerning the location of the primary tumor, the right colon was affected in 41 cases (36.0%) and the left colon in 41 cases (36.0%), and the rectum was the primary tumor site in 32 cases (28.0%). Twenty-five (21.9%) patients were considered as TNM stage I, 39 (34.2%) as TNM stage II, 34 (29.8%) as TNM stage III, and 16 (14.0%) as TNM stage IV. The most frequent site of metastasis was the liver (9 patients), followed by the peritoneum (3 patients), lungs (2 patients), and ovary (2 patients). Table 1 shows the distribution of patients according to the covariable categorization.

Table 1: Characteristics of the 114 patients included in the study.

| Variable | n | % |
|---|---|---|
| Age: <60 years / >60 years | 56 / 58 | 49.1 / 50.9 |
| Gender: female / male | 51 / 63 | 44.7 / 55.3 |
| Primary tumor site: right colon / left colon / rectum | 41 / 41 / 32 | 36.0 / 36.0 / 28.0 |
| Synchronous tumor: no / yes | 112 / 2 | 98.2 / 1.8 |
| Histological classification: adenocarcinoma NOS / mucinous / villous | 81 / 18 / 15 | 71.0 / 15.8 / 13.2 |
| Cell differentiation: well / moderate / poor / undifferentiated | 9 / 91 / 14 / 0 | 7.9 / 79.8 / 12.3 / 0 |
| Venous invasion: absent / present | 93 / 21 | 81.6 / 18.4 |
| Lymphatic vessel invasion: absent / present | 91 / 23 | 79.8 / 20.2 |
| Perineural invasion: absent / present | 106 / 8 | 93.0 / 7.0 |
| Peritumoral lymphocyte infiltration: absent / present | 21 / 93 | 18.4 / 81.6 |
| Resection margin status: positive / negative | 0 / 114 | 0 / 100.0 |
| Lymph nodes dissected: median (range) | 17* (3–67) | |
| Tumor stage (T): T1 / T2 / T3 / T4 | 5 / 27 / 71 / 11 | 4.4 / 23.7 / 62.3 / 9.6 |
| Nodal stage: N0 / N1 / N2 | 67 / 25 / 22 | 58.8 / 21.9 / 19.3 |
| Distant metastasis: absent / present | 98 / 16 | 85.9 / 14.1 |
| Site of distant metastasis: absent / liver / peritoneum / lungs / ovary | 98 / 9 / 3 / 2 / 2 | 85.9 / 7.9 / 2.6 / 1.8 / 1.8 |
| Clinical stage: I / II / III / IV | 25 / 39 / 34 / 16 | 21.9 / 34.2 / 29.8 / 14.0 |

*28 patients had <12 lymph nodes dissected or analyzed.
### 2.2. Outcome Measures The patients were classified according to the following clinical and pathological characteristics: age group (<60 or >60 years), gender (male versus female), site of the primary tumor (right colon versus left colon versus rectum), histological classification (adenocarcinoma not otherwise specified versus mucinous adenocarcinoma), tumor grade (low (grades I and II) versus high (grades III and IV)), and peritumoral lymphocyte infiltration (presence versus absence). Histological characteristics commonly associated with tumor dissemination and progression were categorized as follows: venous invasion (presence versus absence); lymphatic vessel invasion (presence versus absence); perineural invasion (presence versus absence); degree of tumor invasion into the organ wall (T1-2 versus T3-4, AJCC 2002, 6th edition); lymph node metastasis (presence versus absence); distant metastases (presence versus absence); and TNM staging (I-II versus III-IV, AJCC 2002, 6th edition). We hypothesized that ECM molecules may be associated with CRC progression and dissemination; therefore, differences in ECM marker expression with respect to the categorization of each of the histological covariates mentioned above were analyzed using both reverse transcription- (RT-) PCR and TMA. ### 2.3. RNA Extraction and cDNA Synthesis by RT-PCR Cryopreserved samples were embedded in medium for frozen tissue specimens (Tissue-Tek OCT; Sakura Finetek, Torrance, CA, USA) and fitted into a cryostat (CM1850 UV; Leica Microsystems, Nussloch, Germany) for histological analysis. Slides mounted with sections of 4 μm thickness were subjected to hematoxylin-eosin staining (Merck, Darmstadt, Germany) and analyzed by a pathologist to ensure that the selected samples represented the general tumor histology and were free of necrosis or calcifications. The areas of interest were identified microscopically and marked for macrodissection. These slides were used as guides to select and cut tissues in the cryostat. For each sample, sterile individual scalpel blades were used. After discarding areas inappropriate for RNA extraction, the tissue was mechanically macerated with liquid nitrogen and transferred to 1.5 mL RNase- and DNase-free microtubes containing 1,000 μL TRIzol (Invitrogen, Carlsbad, CA, USA). RNA was extracted according to the manufacturer's instructions, and RNA quantification was performed using a spectrophotometer (Thermo Scientific NanoDrop 2000). The quality and integrity of the RNA were verified by the presence of the 28S and 18S bands on a 1% agarose gel stained with ethidium bromide. RNA was purified using the RNeasy mini kit (Qiagen, Valencia, CA, USA) following the manufacturer's recommendations, diluted with 30 μL of RNase- and DNase-free water (Qiagen), quantified spectrophotometrically at a wavelength of 260 nm (NanoVue; GE Healthcare, Chicago, IL, USA), and stored at −80°C until use. RT-PCR was performed using the SuperScript III First-Strand Synthesis SuperMix (Invitrogen), as recommended by the manufacturer. The reaction was performed in a 20 μL final volume containing 2 μg of total RNA with oligo(dT)20 as a primer. The transcription phase was performed in a thermal cycler (Mastercycler ep Gradient S; Eppendorf, Hamburg, Germany), and the cDNA was stored at −20°C for future reactions. ### 2.4. Analysis of the Genes of Interest For each sample, an ECM and adhesion molecule PCR array (PAHS-013; SABiosciences, Qiagen) plate was used.
A mixture was prepared containing 1,275 μL of buffer with SYBR Green (2× SABiosciences RT2 qPCR Master Mix), 1,173 μL RNase-free H2O, and 102 μL of the cDNA sample. Next, 25 μL aliquots were added to each well of the 96-well plate. The reactions were performed in a thermal cycler (ABI 7500; Applied Biosystems, Foster City, CA, USA) according to the following protocol: 95°C for 10 min, followed by 40 cycles of 95°C for 15 s and 60°C for 1 min. Data analysis was performed using the analysis method provided at http://pcrdataanalysis.sabiosciences.com/pcr/arrayanalysis.php. Gene expression was classified as "high" or "low," considering the level of expression obtained after grouping patients by the covariates of interest; that is, after categorizing patients into the control or interest groups according to the covariates studied, gene expression was determined in both groups. ### 2.5. TMA Block Construction Original paraffin blocks were sectioned at a 4 μm thickness and stained with hematoxylin-eosin. All sections were reviewed to confirm the CRC diagnosis, and the histopathologic findings were reevaluated. A map was prepared using a spreadsheet containing the locations and identification of the tissue samples for the construction of the TMA block. The map also guided further readings of the IHC reactions. With the aid of Beecher equipment (Beecher Instruments, Silver Spring, MD, USA), the TMA blocks were prepared according to the manufacturer's specifications in the following steps: marking of the selected area in the respective paraffin block; use of the equipment to create a hollow space in the recipient block; extraction of a cylindrical tissue core, measuring 1 mm in diameter, from the previously selected area of interest in the donor block; transfer of the cylindrical tissue core obtained from the donor block to the hollow space previously created in the recipient block; progression, in fractions of millimeters, to new positions within the recipient block, thereby creating a collection of tissue samples in a matrix arrangement; and assessment of the final quality of the block for storage. For adhesion of the TMA block sections onto the slides, an adhesive tape system was used (Instrumedics, Hackensack, NJ, USA). The samples were cut to a thickness of 4 μm, and a small roller was used to press each section onto the tape. The tape with the attached histological section was then placed on a resin-coated slide (part of the adhesive system kit) and pressed with the same roller for better adherence. Afterwards, the slides were placed under UV light for 20 min and were subsequently exposed to a solvent solution (TPC) for 20 additional minutes. The slides were dried, and the tapes were removed. The slides were then paraffin-embedded and stored under ideal cooling conditions. ### 2.6. IHC Technique The TMA block sections were mounted onto glass slides coated with silane (3-aminopropyltriethoxysilane) and dried for 30 min at 37°C. The paraffin was removed with xylene, and the sections were rehydrated through a series of graded alcohols. Endogenous peroxidase activity was blocked by incubating the sections in a methanol bath containing 3% hydrogen peroxide for 20 min, followed by washing in distilled water. The sections were initially submitted to heat-induced epitope retrieval using citrate buffer (pH 9.0) in an uncovered pressure cooker (Eterna; Nigro, Araraquara, Brazil). The slides were immersed in the buffer solution, and the pressure cooker was closed with the safety valve open.
Once the saturated steam was released, the safety valve was lowered until full pressurization was achieved. After 4 min under full pressurization, the closed pressure cooker was placed under running water for cooling. After removing the lid, the slides were washed in distilled running water. Blocking of the endogenous peroxidase was performed with 3% H2O2 (10 vol), with 3 washes of 10 min duration each. The slides were again washed in distilled running water and then in phosphate-buffered saline (10 mM; pH 7.4) for 5 min. Afterwards, the primary antibody was applied, and the slides were incubated overnight at 8°C. ### 2.7. Primary Antibodies The primary monoclonal antibodies used were obtained from Abcam Inc. (Cambridge, MA, USA) and were as follows: anti-integrin alpha 3, which reacts with human (IHC-FoFr or Fr) and mouse IgG1 isotypes (clone F35 177-1; 100 μg; ab20140); anti-integrin beta 5, which reacts with human (IHC-P or Fr) and mouse IgG2a isotypes (ab93943); anti-MMP-2, rabbit IgG isotype (ab52756); and anti-MMP-9 (ab76003; clone EP1254). All antibodies were used at a 1:400 dilution. In addition, the following non-ECM primary antibodies were used in this study: anti-p53 (IgG2b class; clone DO-7; 1:300; M7001; DAKOCytomation, Glostrup, Denmark); anti-Bcl-2 (mouse IgG1 isotype; clone 124; 1:600; M0887; DAKOCytomation); anti-VEGF (mouse IgG1 isotype; clone VG 1; 1:100; M7273; DAKOCytomation); anti-KI-67 (mouse IgG1 isotype; clone MIB-1; 1:500; M7240; DAKOCytomation); and anti-EGFR (mouse IgG1 isotype; clone EGFR-25; NCL-EGFR-384; Novocastra, Newcastle, UK). The positive controls used for IHC analysis were normal human kidney tissue for FN-1; human tonsil for ITGA-3, ITGB-5, VEGF, KI-67, P53, and Bcl-2; and placenta for EGFR. ### 2.8. Immunostaining Analysis A preliminary test was performed to identify the optimal antibody concentration and to select positive and negative controls using the dilution data supplied by the manufacturer. After the primary antibody was washed off with phosphate-buffered saline, the slides were incubated with biotin-free polymer from the Advance visualization system (DAKO) for 30 min. A freshly prepared solution containing 1 drop of 3,3′-diaminobenzidine tetrahydrochloride (DAB; Sigma, St. Louis, MO, USA) in 1 mL of substrate (DAKO) was applied for 5 min to each slide. The DAB solution was removed by washing with distilled water. The slides were counterstained with hematoxylin, dehydrated in ethanol, cleared in xylene, and mounted using Entellan [18–20]. Tissue expression of FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 was categorized dichotomously as "high expression" or "low expression" according to the "quick score" method [21, 22]. This scoring system uses a combination of the percentage of stained cells (P) and the staining intensity (I), and the "quick score" is calculated by multiplying both values. The scores used for the percentage of stained tumor cells were as follows: 0 points (absence of stained cells); 1 point (1–25% of stained cells); 2 points (26–50% of stained cells); and 3 points (>50% of stained cells). The scores used for the staining intensity were as follows: 1 point (mild intensity); 2 points (moderate intensity); and 3 points (intense staining). As a result, the expression of a gene product in tumor cells was considered high (overexpressed) when the final score was >4 (P × I > 4), and markers that presented a final score <4 were considered to have low expression.
The stroma and the tumor cells were not treated separately during the IHC analysis, and only the level of expression of the markers on tumor cells was considered for scoring. The validation of the different expression levels of the genes detected by real-time PCR analysis was performed by verifying the protein expression related to each gene by IHC. Thus, for each gene (fibronectin, integrins, and metalloproteases) with increased or reduced expression in the array tracing, the corresponding protein was analyzed by the antigen-antibody reaction (IHC) on the TMA slides. Confirmation of the increase in protein expression by IHC validates the molecular finding of the RT-PCR tracing. ### 2.9. Statistical Analyses Statistical associations between the gene and protein expression levels of FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 and the clinicopathological factors were determined using the nonparametric Mann-Whitney U test for quantitative variables and the chi-square test for qualitative variables, that is, frequencies and proportions. When the chi-square test assumptions were not met, Fisher's exact test was used. To measure the association between the ECM markers FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 and the non-ECM markers EGFR, VEGF, P53, Bcl-2, and KI-67 (ordinal variables), the Spearman correlation coefficient was used [23]. The significance level was set at 5% (P<0.05), and the data were analyzed using SPSS software, version 15.0 (SPSS Inc., Chicago, IL, USA). The Shapiro-Wilk test was used to verify whether the data had a normal distribution.
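To make the two decision rules above concrete, the sketch below restates them in Python. This is our illustration rather than the authors' code (the original analysis was run in SPSS), and the small-expected-count criterion for switching from the chi-square test to Fisher's exact test is our assumption, since the paper does not state which chi-square assumption was checked.

```python
# Illustrative sketch only; the study's actual analyses were performed in SPSS.
from scipy.stats import chi2_contingency, fisher_exact

def percentage_points(stained_pct: float) -> int:
    """Score the percentage of stained tumor cells (P): 0 for no staining,
    1 for 1-25%, 2 for 26-50%, and 3 for >50%."""
    if stained_pct == 0:
        return 0
    if stained_pct <= 25:
        return 1
    return 2 if stained_pct <= 50 else 3

def quick_score(stained_pct: float, intensity_points: int) -> str:
    """Multiply the percentage score (P) by the intensity score (I, 1-3);
    a product above 4 dichotomizes the marker as 'high', otherwise 'low'."""
    return "high" if percentage_points(stained_pct) * intensity_points > 4 else "low"

def association_p_value(table_2x2) -> float:
    """Chi-square test on a 2x2 marker-by-covariate table, falling back to
    Fisher's exact test when any expected count is below 5 (assumed criterion)."""
    _, p, _, expected = chi2_contingency(table_2x2)
    if (expected < 5).any():
        _, p = fisher_exact(table_2x2)
    return p

# A marker staining 60% of tumor cells (P = 3) at moderate intensity (I = 2)
# scores 3 * 2 = 6 > 4 and is therefore read as "high" expression.
print(quick_score(60, 2))

# 2x2 table of MMP-9 expression (rows: high/low) by TNM stage
# (columns: I-II / III-IV), using the counts reported in Table 4.
print(association_p_value([[56, 3], [8, 47]]))
```

Because the quick score multiplies two small ordinal scales, the >4 cutoff can only be exceeded by combinations such as strong staining in a small fraction of cells or moderate staining in most cells, which is what makes the dichotomization robust to minor variation in either component.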
## 3. Results ### 3.1. FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 ECM Gene Expression Levels Analysis of the expression levels of the genes of interest according to the covariates studied, as determined through real-time PCR, showed low expression of the FN-1 gene in patients <60 years of age compared with those ≥60 years of age (P=0.022). The ITGA-3 and ITGB-5 gene expression levels in the tumor tissue as determined using RT-PCR were not considered significant when analyzed with regard to the different measures of tumor dissemination outcome, except for those related to TNM staging and the degree of cell differentiation. ITGA-3 gene expression showed a significance level of P=0.016 and a fold regulation of 2.58 when comparing TNM stage III/IV with TNM stage I/II tumors. With regard to the ITGB-5 gene, reduced expression was observed in the grade III cell differentiation group compared with the GI and GII group (P=0.04, fold change of −2.11), and increased expression of this gene was observed in TNM stage III/IV versus TNM stage I/II tumors (P=0.029, fold change of 1.33). Table 2 shows the distribution of the significant results of the RT-PCR expression of the genes of interest.

Table 2: Distribution of the expression levels of the FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 ECM genes, with significance levels of P<0.05 and fold change, and the clinicopathological variables associated with the genetic tracing by RT-PCR.

| Gene | P value | Fold change | Clinicopathological parameter | Comparison |
|---|---|---|---|---|
| FN-1 | 0.022 | −3.07 | Age (years) | <60 × ≥60 |
| ITGA-3 | 0.016 | 2.58 | TNM | TNM III × TNM I |
| ITGB-5 | 0.04 | −2.11 | Degree of cell differentiation | GII × GI |
| ITGB-5 | 0.029 | 1.33 | TNM | TNM III × TNM I |
| MMP-2 | 0.015 | 2.17 | Histological type | Mucinous × tubular |
| MMP-2 | 0.04 | −1.2 | Peritumoral lymphocyte infiltration | With × without |
| MMP-2 | 0.039 | −2.11 | Age | >60 × ≤60 |
| MMP-9 | 0.014 | 1.13 | Histological type | Villous × tubular |

The expression levels of the MMP-2 and MMP-9 genes in the tumor tissue as determined using RT-PCR were not considered significant when analyzed with regard to the different measures of tumor dissemination outcome, except for those related to the mucinous and villous histological types and the parameter of venous invasion. Thus, MMP-2 gene expression differed significantly between mucinous and nonmucinous carcinomas (P=0.001) and in patients aged over 60 years (P<0.0001). With regard to the tumor expression of the MMP-9 gene, increased expression was noted in tumors with TNM stage III and IV compared with TNM stage I and II (P=0.0001), in tumors with venous invasion compared with those without venous invasion (P<0.001), and in carcinomas with a villous component compared with carcinomas without a villous component (P<0.0001).
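For readers unfamiliar with the fold-change convention in Table 2, the short sketch below shows how relative-expression values of this kind are conventionally derived from qPCR Ct values via the 2^(−ΔΔCt) method, and how down-regulation is reported as a negative reciprocal. The paper delegates this computation to the SABiosciences web tool, so this is an assumed reconstruction for illustration; the ΔCt inputs are invented.

```python
# Assumed reconstruction of the 2^(-ddCt) convention behind the fold-change /
# fold-regulation values in Table 2; the study itself used the SABiosciences
# web tool, so this sketch is illustrative rather than the authors' pipeline.

def fold_change(delta_ct_interest: float, delta_ct_control: float) -> float:
    """Relative expression 2^(-ddCt), where each delta Ct is
    Ct(target gene) - Ct(reference gene), averaged within a patient group."""
    delta_delta_ct = delta_ct_interest - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

def fold_regulation(fc: float) -> float:
    """Report down-regulation symmetrically: fold changes below 1 are shown
    as the negative reciprocal (e.g., 0.33 becomes about -3)."""
    return fc if fc >= 1.0 else -1.0 / fc

# Invented delta Ct values chosen so the result mirrors FN-1 in Table 2,
# where patients <60 years showed a fold regulation of about -3.07 vs. >=60.
fc = fold_change(delta_ct_interest=6.20, delta_ct_control=4.58)
print(round(fc, 3), round(fold_regulation(fc), 2))  # -> 0.325 -3.07
```

The negative-reciprocal convention is why Table 2 mixes values such as 2.58 and −3.07 on the same scale: both express a roughly threefold difference, in opposite directions.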
Tables 3, 4, 5, 6, 7, 8, and 9 show the results of the immunohistochemical expression of the ECM genes and the non-ECM molecular markers according to the outcome measures: degree of tumor cell differentiation, tumor TNM classification, peritumoral lymphocytic infiltration, venous invasion, perineural invasion, and type of tumor (tubular, mucinous, and villous).

Table 3: Distribution of the expression levels by IHC of the proteins corresponding to the ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and the EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the degree of cell differentiation of CRC (n=114).

| Marker | Grade I+II high (n=100) | Grade I+II low | Grade III high (n=14) | Grade III low | P value |
|---|---|---|---|---|---|
| EGFR | 63 (63.0%) | 37 (37.0%) | 6 (42.9%) | 8 (57.1%) | 0.159 |
| VEGF | 49 (49.0%) | 51 (51.0%) | 6 (42.9%) | 8 (57.1%) | 0.778 |
| KI-67 | 46 (46.0%) | 54 (54.0%) | 3 (21.4%) | 11 (78.6%) | 0.093 |
| P53 | 41 (41.0%) | 59 (59.0%) | 5 (35.7%) | 9 (64.3%) | 0.778 |
| Bcl-2 | 50 (50.0%) | 50 (50.0%) | 7 (50.0%) | 7 (50.0%) | 1.000 |
| ITGB-5 | 57 (57.0%) | 43 (43.0%) | 6 (42.9%) | 8 (57.1%) | 0.394 |
| ITGA-3 | 50 (50.0%) | 50 (50.0%) | 5 (35.7%) | 9 (64.3%) | 0.397 |
| MMP-2 | 48 (48.0%) | 52 (52.0%) | 5 (35.7%) | 9 (64.3%) | 0.568 |
| MMP-9 | 53 (53.0%) | 47 (47.0%) | 6 (42.9%) | 8 (57.1%) | 0.572 |
| FN-1 | 73 (73.0%) | 27 (27.0%) | 8 (57.1%) | 6 (42.9%) | 0.225 |

Table 4: Distribution of the expression levels by IHC of the proteins corresponding to the ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and the EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the TNM staging of CRC (n=114).
| Marker | Stage I+II high (n=64) | Stage I+II low | Stage III+IV high (n=50) | Stage III+IV low | P value |
|---|---|---|---|---|---|
| EGFR | 64 (100.0%) | 0 (0.0%) | 5 (10.0%) | 45 (90.0%) | 0.000 |
| VEGF | 27 (42.2%) | 37 (57.8%) | 28 (56.0%) | 22 (44.0%) | 0.186 |
| KI-67 | 38 (59.4%) | 26 (40.6%) | 11 (22.0%) | 39 (78.0%) | 0.000 |
| P53 | 35 (54.7%) | 29 (45.3%) | 11 (22.0%) | 39 (78.0%) | 0.000 |
| Bcl-2 | 29 (45.3%) | 35 (54.7%) | 28 (56.0%) | 22 (44.0%) | 0.345 |
| ITGB-5 | 49 (76.6%) | 15 (23.4%) | 14 (28.0%) | 36 (72.0%) | 0.000 |
| ITGA-3 | 53 (82.8%) | 11 (17.2%) | 2 (4.0%) | 48 (96.0%) | 0.000 |
| MMP-2 | 28 (43.8%) | 36 (56.3%) | 25 (50.0%) | 25 (50.0%) | 0.572 |
| MMP-9 | 56 (87.5%) | 8 (12.5%) | 3 (6.0%) | 47 (94.0%) | 0.000 |
| FN-1 | 44 (68.8%) | 20 (31.3%) | 37 (74.0%) | 13 (26.0%) | 0.677 |

Table 5: Distribution of the expression levels by IHC of the proteins corresponding to the ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and the EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the peritumoral lymphocyte infiltrate in CRC (n=114).
Peritumoral lymphocyte infiltrate EGFR Total P value High Low Absence N 13 8 21 1.000 % 61.9% 38.1% 100.0% Presence N 56 37 93 % 60.2% 39.8% 100.0% Total N 69 45 114 % 60.5% 39.5% 100.0% Peritumoral lymphocyte infiltrate VEGF Total Pvalue High Low Absence 0.635 N 9 12 21 % 42.9% 57.1% 100.0% Presence N 46 47 93 % 49.5% 50.5% 100.0% Total N 55 59 114 % 48.2% 51.8% 100.0% Peritumoral lymphocyte infiltrate KI-67 Total Pvalue High Low Absence N 11 10 21 0.343 % 52.4% 47.6% 100.0% Presence N 38 55 93 % 40.9% 59.1% 100.0% Total N 49 65 114 % 43.0% 57.0% 100.0% Peritumoral lymphocyte infiltrate P53 Total Pvalue High Low Absence 1.000 N 8 13 21 % 38.1% 61.9% 100.0% Presence N 38 55 93 % 40.9% 59.1% 100.0% Total N 46 68 114 % 40.4% 59.6% 100.0% Peritumoral lymphocyte infiltrate Bcl-2 Total Pvalue High Low Absence 0.629 N 9 12 21 % 42.9% 57.1% 100.0% Presence N 48 45 93 % 51.6% 48.4% 100.0% Total N 57 57 114 % 50.0% 50.0% 100.0% Peritumoral lymphocyte infiltrate ITGB-5 Total Pvalue High Low Absence 1.000 N 12 9 21 % 57.1% 42.9% 100.0% Presence N 51 42 93 % 54.8% 45.2% 100.0% Total N 63 51 114 % 55.3% 44.7% 100.0% Peritumoral lymphocyte infiltrate ITGA-3 Total Pvalue High Low Absence 0.227 N 13 8 21 % 61.9% 38.1% 100.0% Presence N 42 51 93 % 45.2% 54.8% 100.0% Total N 55 59 114 % 48.2% 51.8% 100.0% Peritumoral lymphocyte infiltrate MMP-2 Total Pvalue High Low Absence 1.000 N 10 11 21 % 47.6% 52.4% 100.0% Presence N 43 50 93 % 46.2% 53.8% 100.0% Total N 53 61 114 % 46.5% 53.5% 100.0% Peritumoral lymphocyte infiltrate MMP-9 Total Pvalue High Low Absence 1.000 N 11 10 21 % 52.4% 47.6% 100.0% Presence N 48 45 93 % 51.6% 48.4% 100.0% Total N 59 55 114 % 51.8% 48.2% 100.0% Peritumoral lymphocyte infiltrate FN-1 Total Pvalue High Low Absence 0.180 N 12 9 21 % 57.1% 42.9% 100.0% Presence N 69 24 93 % 74.2% 25.8% 100.0% Total N 81 33 114 % 71.1% 28.9% 100.0%Table 6 The distribution of expression levels per IHC of proteins corresponding to ITGB-5, ITGA-3, MMP-2, and MMP9 ECM genes and FN-1 and EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the presence of venous invasion in CRC (n=114). 
Venous invasion EGFR Total P value High Low Absence 0.000 N 65 28 93 % 69.9% 30.1% 100.0% Presence N 4 17 21 % 19.0% 81.0% 100.0% Total N 69 45 114 % 60.5% 39.5% 100.0% Venous invasion VEGF Total Pvalue High Low Absence 0.810 N 44 49 93 % 47.3% 52.7% 100.0% Presence N 11 10 21 % 52.4% 47.6% 100.0% Total N 55 59 114 % 48.2% 51.8% 100.0% Venous invasion KI-67 Total Pvalue High Low Absence 0.055 N 44 49 93 % 47.3% 52.7% 100.0% Presence N 5 16 21 % 23.8% 76.2% 100.0% Total N 49 65 114 % 43.0% 57.0% 100.0% Venous invasion P53 Total Pvalue High Low Absence 0.029 N 42 51 93 % 45.2% 54.8% 100.0% Presence N 4 17 21 % 19.0% 81.0% 100.0% Total N 46 68 114 % 40.4% 59.6% 100.0% Venous invasion Bcl-2 Total Pvalue High Low Absence 1.000 N 46 47 93 % 49.5% 50.5% 100.0% Presence N 11 10 21 % 52.4% 47.6% 100.0% Total N 57 57 114 % 50.0% 50.0% 100.0% Venous invasion ITGB-5 Total Pvalue High Low Absence 0.231 N 54 39 93 % 58.1% 41.9% 100.0% Presence N 9 12 21 % 42.9% 57.1% 100.0% Total N 63 51 114 % 55.3% 44.7% 100.0% Venous invasion ITGA-3 Total Pvalue High Low Absence 0.000 N 52 41 93 % 55.9% 44.1% 100.0% Presence N 3 18 21 % 14.3% 85.7% 100.0% Total N 55 59 114 % 48.2% 51.8% 100.0% Venous invasion MMP-2 Total Pvalue High Low Absence 0.631 N 42 51 93 % 45.2% 54.8% 100.0% Presence N 11 10 21 % 52.4% 47.6% 100.0% Total N 53 61 114 % 46.5% 53.5% 100.0% Venous invasion MMP-9 Total Pvalue High Low Absence 0.001 N 55 38 93 % 59.1% 40.9% 100.0% Presence N 4 17 21 % 19.0% 81.0% 100.0% Total N 59 55 114 % 51.8% 48.2% 100.0% Venous invasion FN-1 Total Pvalue High Low Absence 0.006 N 61 32 93 % 65.6% 34.4% 100.0% Presence N 20 1 21 % 95.2% 4.8% 100.0% Total N 81 33 114 % 71.1% 28.9% 100.0%Table 7 The distribution of expression levels per IHC of proteins corresponding to ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and EGFR, VEGF, Ki-67, P53, and Bcl-2 molecules as per the presence of perineural invasion in CRC (n=114). 
Perineural invasion EGFR Total P value High Low Absence 0.260 N 66 40 106 % 62.3% 37.7% 100.0% Presence N 3 5 8 % 37.5% 62.5% 100.0% Total N 69 45 114 % 60.5% 39.5% 100.0% Perineural invasion VEGF Total Pvalue High Low Absence 0.027 N 48 58 106 % 45.3% 54.7% 100.0% Presence N 7 1 8 % 87.5% 12.5% 100.0% Total N 55 59 114 % 48.2% 51.8% 100.0% Perineural invasion KI-67 Total Pvalue High Low Absence 0.462 N 47 59 106 % 44.3% 55.7% 100.0% Presence N 2 6 8 % 25.0% 75.0% 100.0% Total N 49 65 114 % 43.0% 57.0% 100.0% Perineural invasion P53 Total Pvalue High Low Absence 0.470 N 44 62 106 % 41.5% 58.5% 100.0% Presence N 2 6 8 % 25.0% 75.0% 100.0% Total N 46 68 114 % 40.4% 59.6% 100.0% Perineural invasion Bcl-2 Total Pvalue High Low Absence 0.716 N 52 54 106 % 49.1% 50.9% 100.0% Presence N 5 3 8 % 62.5% 37.5% 100.0% Total N 57 57 114 % 50.0% 50.0% 100.0% Perineural invasion ITGB-5 Total Pvalue High Low Absence 0.463 N 60 46 106 % 56.6% 43.4% 100.0% Presence N 3 5 8 % 37.5% 62.5% 100.0% Total N 63 51 114 % 55.3% 44.7% 100.0% Perineural invasion ITGA-3 Total Pvalue High Low Absence 0.717 N 52 54 106 % 49.1% 50.9% 100.0% Presence N 3 5 8 % 37.5% 62.5% 100.0% Total N 55 59 114 % 48.2% 51.8% 100.0% Perineural invasion MMP-2 Total Pvalue High Low Absence 0.999 N 49 57 106 % 46.2% 53.8% 100.0% Presence N 4 4 8 % 50.0% 50.0% 100.0% Total N 53 61 114 % 46.5% 53.5% 100.0% Perineural invasion MMP-9 Total Pvalue High Low Absence 0.479 N 56 50 106 % 52.8% 47.2% 100.0% Presence N 3 5 8 % 37.5% 62.5% 100.0% Total N 59 55 114 % 51.8% 48.2% 100.0% Perineural invasion FN-1 Total Pvalue High Low Absence 0.102 N 73 33 106 % 68.9% 31.1% 100.0% Presence N 8 0 8 % 100.0% 0.0% 100.0% Total N 81 33 114 % 71.1% 28.9% 100.0%Table 8 The distribution of expression levels per IHC of proteins corresponding to ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the villous and tubular types of CRC (n=114). 
Tumor histologic type:tubular × villous EGFR Total P value High Low Tubular 0.579 N 52 29 81 % 64.2% 35.8% 100.0% Villous N 9 7 16 % 56.3% 43.8% 100.0% Total N 61 36 97 % 62.9% 37.1% 100.0% Tumor histologic type:tubular × villous VEGF Total Pvalue High Low Tubular 1.000 N 41 40 81 % 50.6% 49.4% 100.0% Villous N 8 8 16 % 50.0% 50.0% 100.0% Total N 49 48 97 % 50.5% 49.5% 100.0% Tumor histologic type:tubular × villous KI-67 Total Pvalue High Low Tubular 0.783 N 36 45 81 % 44.4% 55.6% 100.0% Villous N 6 10 16 % 37.5% 62.5% 100.0% Total N 42 55 97 % 43.3% 56.7% 100.0% Tumor histologic type:tubular × villous P53 Total Pvalue High Low Tubular 1.000 N 32 49 81 % 39.5% 60.5% 100.0% Villous N 6 10 16 % 37.5% 62.5% 100.0% Total N 38 59 97 % 39.2% 60.8% 100.0% Tumor histologic type:tubular × villous Bcl-2 Total Pvalue High Low Tubular 0.277 N 37 44 81 % 45.7% 54.3% 100.0% Villous N 10 6 16 % 62.5% 37.5% 100.0% Total N 47 50 97 % 48.5% 51.5% 100.0% Tumor histologic type:tubular × villous ITGB-5 Total Pvalue High Low Tubular 1.000 N 44 37 81 % 54.3% 45.7% 100.0% Villous N 9 7 16 % 56.3% 43.8% 100.0% Total N 53 44 97 % 54.6% 45.4% 100.0% Tumor histologic type:tubular × villous ITGA-3 Total Pvalue High Low Tubular 1.000 N 40 41 81 % 49.4% 50.6% 100.0% Villous N 8 8 16 % 50.0% 50.0% 100.0% Total N 48 49 97 % 49.5% 50.5% 100.0% Tumor histologic type:tubular × villous MMP-2 Total Pvalue High Low Tubular 0.585 N 44 37 81 % 54.3% 45.7% 100.0% Villous N 7 9 16 % 43.8% 56.3% 100.0% Total N 51 46 97 % 52.6% 47.4% 100.0% Tumor histologic type:tubular × villous MMP-9 Total Pvalue High Low Tubular 0.000 N 49 32 81 % 60.5% 39.5% 100.0% Villous N 1 15 16 % 6.3% 93.8% 100.0% Total N 50 47 97 % 51.5% 48.5% 100.0% Tumor histologic type:tubular × villous FN-1 Total Pvalue High Low Tubular 1.000 N 57 24 81 % 70.4% 29.6% 100.0% Villous N 11 5 16 % 68.8% 31.3% 100.0% Total N 68 29 97 % 70.1% 29.9% 100.0%Table 9 The distribution of expression levels per IHC of proteins corresponding to ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per mucinous and tubular tumor types of CRC (n=114). 
Tumor histologic type:mucinous × tubular EGFR Total P value High Low Mucinous 0.273 N 8 9 17 % 47.1% 52.9% 100.0% Tubular N 52 29 81 % 64.2% 35.8% 100.0% Total N 60 38 98 % 61.2% 38.8% 100.0% Tumor histologic type:mucinous × tubular VEGF Total Pvalue High Low Mucinous 0.294 N 6 11 17 % 35.3% 64.7% 100.0% Tubular N 41 40 81 % 50.6% 49.4% 100.0% Total N 47 51 98 % 48.0% 52.0% 100.0% Tumor histologic type:mucinous × tubular KI-67 Total Pvalue High Low Mucinous 1.000 N 7 10 17 % 41.2% 58.8% 100.0% Tubular N 36 45 81 % 44.4% 55.6% 100.0% Total N 43 55 98 % 43.9% 56.1% 100.0% Tumor histologic type:mucinous × tubular P53 Total Pvalue High Low Mucinous 0.596 N 8 9 17 % 47.1% 52.9% 100.0% Tubular N 32 49 81 % 39.5% 60.5% 100.0% Total N 40 58 98 % 40.8% 59.2% 100.0% Tumor histologic type:mucinous × tubular Bcl-2 Total Pvalue High Low Mucinous 0.425 N 10 7 17 % 58.8% 41.2% 100.0% Tubular N 37 44 81 % 45.7% 54.3% 100.0% Total N 47 51 98 % 48.0% 52.0% 100.0% Tumor histologic type:mucinous × tubular ITGB-5 Total Pvalue High Low Mucinous 0.793 N 10 7 17 % 58.8% 41.2% 100.0% Tubular N 44 37 81 % 54.3% 45.7% 100.0% Total N 54 44 98 % 55.1% 44.9% 100.0% Tumor histologic type:mucinous × tubular ITGA-3 Total Pvalue High Low Mucinous 0.600 N 7 10 17 % 41.2% 58.8% 100.0% Tubular N 40 41 81 % 49.4% 50.6% 100.0% Total N 47 51 98 % 48.0% 52.0% 100.0% Tumor histologic type:mucinous × tubular MMP-2 Total Pvalue High Low Mucinous 0.001 N 2 15 17 % 11.8% 88.2% 100.0% Tubular N 44 37 81 % 54.3% 45.7% 100.0% Total N 46 52 98 % 46.9% 53.1% 100.0% Tumor histologic type:mucinous × tubular MMP-9 Total Pvalue High Low Mucinous 0.596 N 9 8 17 % 52.9% 47.1% 100.0% Tubular N 49 32 81 % 60.5% 39.5% 100.0% Total N 58 40 98 % 59.2% 40.8% 100.0% Tumor histologic type:mucinous × tubular FN-1 Total Pvalue High Low Mucinous 0.771 N 13 4 17 % 76.5% 23.5% 100.0% Tubular N 57 24 81 % 70.4% 29.6% 100.0% Total N 70 28 98 % 71.4% 28.6% 100.0%Table10 shows the distribution in absolute numbers and percentages by IHC of proteins corresponding to the ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and the EGFR, VEGF, KI-67, P53, and Bcl-2 molecules, according to the expression degrees rated as low and high of the colorectal adenocarcinoma (n=114).Table 10 The distribution of expression levels per IHC of proteins corresponding to ITGB-5, ITGA-3, MMP-2, MMP9, and FN-1 ECM genes and EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the degrees of high and low expression of CRC (n=114). Markers Low expression High expression n (%) n (%) FN-1 81 (71.1) 33 (28.9) ITGA-3 55 (48.2) 59 (51.8) ITGB-5 63 (55.3) 51 (44.7) MMP-2 53 (46.5) 61 (53.5) MMP-9 59 (51.8) 55 (48.2) P53 46 (40.4) 68 (59.6) Bcl-2 57 (50.0) 57 (50.0) KI-67 49 (43.0) 65 (57.0) EGFR 69 (60.5) 45 (39.5) VEGF 55 (48.2) 59 (51.8)With regard to the correlation of ECM genes IHC expression levels with the non-ECM molecules P53, Bcl-2, VEGF, KI-67, and EGFR, this study showed that FN-1 expression did not correlate with any ECM marker or non-ECM molecule studied. Expression of ITGA-3 showed a weak correlation with EGFR (r=0.74; P=0.000), and ITGB-5 expression displayed a regular correlation (r=0.42; P=0.000) with EGFR. MMP-2 expression did not correlate with the non-EMC molecules studied. MMP-9 was shown to have a strong expression correlation (r=0.76; P=0.000). 
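Which statistical test produced the P values in Tables 3 to 10 is not restated in this section; for a dichotomized IHC expression level compared across a two-level clinicopathological variable, a Fisher's exact test on the 2x2 contingency table is a standard choice. The sketch below is a hedged illustration under that assumption, not the authors' documented procedure; it reproduces the flavor of the EGFR-by-TNM-stage comparison from Table 4 and assumes `scipy` is available.

```python
# Minimal sketch: association between dichotomized IHC expression and a
# two-level clinicopathological variable, assuming a Fisher's exact test
# (the paper's Methods are not restated in this section).
from scipy.stats import fisher_exact

#                 high  low
table_egfr_tnm = [[64,   0],   # TNM I + II   (n = 64)
                  [5,   45]]   # TNM III + IV (n = 50)

odds_ratio, p_value = fisher_exact(table_egfr_tnm, alternative="two-sided")
print(f"odds ratio = {odds_ratio}, P = {p_value:.3g}")
# A P value this small would be reported as 0.000 in the tables above.
```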
With regard to the correlation of the IHC expression levels of the ECM genes with the non-ECM molecules P53, Bcl-2, VEGF, KI-67, and EGFR, this study showed that FN-1 expression did not correlate with any of the ECM markers or non-ECM molecules studied. ITGA-3 expression showed a moderate correlation with EGFR (r=0.74; P=0.000), and ITGB-5 expression showed a regular correlation with EGFR (r=0.42; P=0.000). MMP-2 expression showed no more than weak correlations with the non-ECM molecules studied, whereas MMP-9 expression correlated strongly with EGFR (r=0.76; P=0.000).

Table 11 shows the distribution of the Spearman correlation coefficients between the ECM genes and the non-ECM molecular markers studied.

Table 11. Distribution of Spearman correlation coefficients (r), two-tailed model for significant associations (P<0.05), between the immunohistochemical expression of the ECM genes FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 and the epithelial markers EGFR, VEGF, KI-67, P53, and Bcl-2 in CRC (n=114).

|        | FN-1 | ITGA-3 | ITGB-5 | MMP-2 | MMP-9 | EGFR  | VEGF  | KI-67 | P53   | Bcl-2 |
|--------|------|--------|--------|-------|-------|-------|-------|-------|-------|-------|
| FN-1   | 1    | —      | —      | —     | —     | —     | —     | —     | —     | —     |
| ITGA-3 | —    | 1      | 0.48   | —     | 0.65  | 0.74  | —     | —     | —     | —     |
| ITGB-5 | —    | 0.48   | 1      | —     | 0.43  | 0.42  | —     | —     | —     | —     |
| MMP-2  | —    | —      | —      | 1     | —     | −0.20 | −0.01 | —     | −0.19 | —     |
| MMP-9  | —    | 0.65   | 0.43   | —     | 1     | 0.76  | —     | 0.30  | 0.22  | —     |
| EGFR   | —    | 0.74   | 0.42   | −0.20 | 0.76  | 1     | —     | 0.37  | 0.33  | —     |
| VEGF   | —    | —      | —      | −0.01 | —     | —     | 1     | —     | —     | 0.33  |
| KI-67  | —    | —      | —      | —     | 0.30  | 0.37  | —     | 1     | 0.87  | —     |
| P53    | —    | —      | —      | −0.19 | 0.22  | 0.33  | —     | 0.87  | 1     | —     |
| Bcl-2  | —    | —      | —      | —     | —     | —     | 0.33  | —     | —     | 1     |

0 = no correlation; 0–0.25 = weak; 0.25–0.50 = regular; 0.50–0.75 = moderate; >0.75 = strong; 1 = perfect correlation.
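The entries in Table 11 are Spearman rank correlation coefficients, graded by the verbal scale in the footnote. The sketch below shows how such a coefficient and its grade can be computed; the two score vectors are hypothetical stand-ins (1 = high, 0 = low), not the study's data.

```python
# Illustrative sketch: Spearman correlation between two markers' dichotomized
# IHC scores, graded by the scale in the Table 11 footnote. The score vectors
# are hypothetical, not the study's data.
from scipy.stats import spearmanr

def grade(r: float) -> str:
    """Verbal grade per the Table 11 footnote, applied to |r|."""
    a = abs(r)
    if a == 0:
        return "no correlation"
    if a <= 0.25:
        return "weak"
    if a <= 0.50:
        return "regular"
    if a <= 0.75:
        return "moderate"
    if a < 1:
        return "strong"
    return "perfect"

# Hypothetical paired scores for two markers across ten tumors.
mmp9 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
egfr = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]

rho, p = spearmanr(mmp9, egfr)
print(f"r = {rho:.2f} ({grade(rho)}), P = {p:.3f}")
```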
Table 12 correlates the RT-PCR and IHC results for the differentially expressed extracellular matrix genes and their corresponding proteins in the 114 patients with colorectal adenocarcinoma. The immunohistochemical findings were used to validate the correspondence between transcript and protein: whereas RT-PCR provides a gene-level screening tool, the immunohistochemical technique identifies the protein corresponding to each differentially expressed gene.

Table 12. Correlation between real-time PCR and immunohistochemistry for the differentially expressed extracellular matrix genes in 114 patients with colorectal adenocarcinoma (P<0.05).

| Gene | Classification | RT-PCR fold change | RT-PCR P value | Parameter analysed | IHC validation | IHC P value |
|------|----------------|--------------------|----------------|--------------------|----------------|-------------|
| MMP-2 | ECM proteases | 2.17 | 0.01 | Mucinous × tubular | No | 0.001 |
| MMP-2 | | −1.2 | 0.04 | Peritumoral lymphocyte infiltrate (+ × −) | No | 1.000 |
| MMP-2 | | −2.11 | 0.03 | Age (>60 yr × <60 yr) | No | 0.000 |
| FN-1 | Other adhesion molecules/collagens and ECM structural constituents | −3.07 | 0.02 | Age (<60 × ≥60 yr) | No | 1.000 |
| ITGB-5 | Transmembrane molecules/cell-matrix adhesion | −2.11 | 0.04 | Degree of cell differentiation (I, II × III) | No | 0.394 |
| ITGB-5 | | 1.33 | 0.02 | TNM (I, II × III, IV) | Yes | 0.000 |
| ITGA-3 | Transmembrane molecules | 2.58 | 0.01 | TNM (I, II × III, IV) | Yes | 0.000 |
| MMP-9 | ECM proteases | 1.13 | 0.01 | Villous × tubular | Yes | 0.000 |

IHC: immunohistochemistry; RT-PCR: reverse transcription polymerase chain reaction.

## 4. Discussion

### 4.1. The Possible Role of the ECM in CRC Dissemination

Carcinogenesis and tumor progression are complex processes, traditionally characterized as a cascade of phenomena that remains to be fully elucidated. Within this cascade of carcinogenesis, progression, and dissemination, the initially transformed tumor cell must acquire the ability to invade the surrounding tissues, which characterizes its malignant nature. To do so, these cells must detach from their adhesive interactions in the epithelium, penetrate the basement membrane, degrade the ECM, and migrate into the subjacent interstitial stroma. At this point, the tumor cell enters the blood and lymphatic streams and disseminates systemically.
In the intestine in particular, the basement membrane separates the epithelial tissue from the connective tissue, and a histopathological hallmark of intestinal tumors is the loss of basement membrane integrity [24]. The ECM is composed of a large variety of structural molecules, such as collagens, noncollagenous glycoproteins, and proteoglycans, which play a complex role in regulating cell behavior by influencing cell development, growth, survival, migration, signal transduction, structure, and function [25, 26]. The degradation of the constituents of the basement membrane and the ECM by proteolytic enzymes, usually metalloproteinases, can therefore represent a fundamental step in tumor progression and metastasis [4].

In recent decades, research in cancer biology has focused extensively on the role of ECM constituents during tumor progression. Some proteins located in specific domains of the ECM play a critical role in keeping cells linked to matrix elements and to the basement membrane, and they also participate in matrix-cell signaling cascades. Information from the ECM is transmitted to the cell, mainly through integrin molecules, to activate, for example, cytokines, growth factors, and intracellular adaptor molecules; it can thereby significantly affect many processes, such as cell cycle progression, cell migration, and differentiation. The interaction between the biophysical properties of the cell and the ECM establishes a dynamic reciprocity, generating a sequence of reactions in which a complex network of proteases, sulfatases, and possibly other enzymes releases and activates several signaling pathways in a highly specific and localized manner. ECM homeostasis is therefore a delicate balance between the biosynthesis of its proteins, their structural organization, biosignaling, and the degradation of its elements [7].

### 4.2. The Methods Used for Tracking and Identifying ECM Genes

The simplicity of the PCR array makes it suitable for routine research: it is a reliable tool for analyzing the expression of a panel of genes specific to a particular pathology, offering high sensitivity and a broad dynamic range. The SuperArray kit (PAHs-031A-24, Ambriex) for ECM and cell adhesion molecules allowed the analysis of the expression of 84 genes important for cell-cell and cell-matrix interactions, including genes encoding basement membrane constituents and collagens. Using RT-PCR, the expression of a group of gene transcripts involved in the progression and dissemination of colorectal adenocarcinoma could be analyzed quickly, simply, and reliably across several staging phases. Various studies have applied this method to different malignant neoplastic diseases [18, 27], including investigations of angiogenesis [19], apoptosis [20], and the cell cycle [28].
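The fold changes and negative "fold regulation" values reported for the PCR array (e.g., −3.07 for FN-1 in Table 12) follow the usual reporting convention for such arrays. A common computation, assumed here since the Methods are not restated in this section, is the 2^(−ΔΔCt) method, with downregulation expressed as the negative reciprocal of a fold change below 1; the sketch below uses made-up Ct values purely for illustration.

```python
# Sketch of the 2^(-ddCt) convention commonly used for PCR arrays.
# The Ct values below are hypothetical; this section does not restate
# the study's actual quantification procedure.

def fold_regulation(ct_gene_tumor: float, ct_ref_tumor: float,
                    ct_gene_normal: float, ct_ref_normal: float) -> float:
    """2^(-ddCt) fold change, reported as fold regulation
    (negative reciprocal when the gene is downregulated)."""
    delta_ct_tumor = ct_gene_tumor - ct_ref_tumor      # normalize to reference gene
    delta_ct_normal = ct_gene_normal - ct_ref_normal
    fold_change = 2 ** -(delta_ct_tumor - delta_ct_normal)
    return fold_change if fold_change >= 1 else -1 / fold_change

# Example: a gene whose expression drops roughly 3-fold in tumor tissue
# (comparable in magnitude to the -3.07 reported for FN-1 in Table 12).
print(fold_regulation(ct_gene_tumor=26.6, ct_ref_tumor=18.0,
                      ct_gene_normal=25.0, ct_ref_normal=18.0))
# ~ -3.03: the gene is about 3-fold downregulated.
```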
The TMA technique, described by Kononen et al. in 1998, is widely accepted in the literature. The concept is extremely simple: a large number of tissue samples are grouped in a single paraffin block, which allows the expression of molecular markers to be studied on a large scale using stored material. TMAs have advantages over traditional sections, such as a reduction in the reagents and time required to perform the reactions, and the standardization of the reactions facilitates the comparative interpretation of research cases [29, 30]. The use of monoclonal antibodies for IHC examination of the TMA, together with in situ hybridization techniques, allows the differential tissue expression of the protein corresponding to each gene to be detected in a simplified manner (the validation step of the screening techniques), with more rigorous technical standardization, hence minimizing the possibility of measurement bias.

### 4.3. The Results

In this study, increased expression of the integrin alpha 3 and beta 5 genes was observed in locally advanced or metastatic tumors (stages III and IV) relative to stage I and II tumors, which have not metastasized to lymph nodes or other sites; this finding was confirmed at the protein level by the TMA technique. Other correlations, such as those with histologic type and with venous and perineural invasion, were also significant, further reinforcing the possible role of integrins in the progression and dissemination of colorectal adenocarcinoma.

Integrin alpha 3 is usually expressed in normal tissues and in various tumors. Studies evaluating the expression of integrin alpha 3 in primary colon cancer and the corresponding liver metastases have shown that almost 27.5% of primary tumors had increased integrin alpha 3 expression relative to the metastatic tumor. In the present study, gene expression analysis showed a significant difference in integrin alpha 3 expression, a result validated by immunohistochemical analysis of protein expression; greater expression was observed in the TNM III and IV groups than in the TNM I and II groups, suggesting a possible relation of integrin alpha 3 to more advanced stages of colorectal cancer. Significant differences in ITGA-3 protein levels were also found between tumors with and without venous invasion; however, this finding was not supported by the RT-PCR gene expression levels.

In 1999, Haler et al. studied the expression levels of integrins alpha 2, 3, 5, and 6 by IHC in cell lines of liver-metastatic colorectal carcinoma; increased expression of integrins alpha 2 and alpha 3 was observed, particularly with regard to the dissemination potential of CRC [31]. Jinka et al. recently reported similar results in a larger set of malignant tumors [7]. In the present study, the ITGB-5 gene showed significantly higher expression levels in the TNM III and IV stage groups than in the TNM I and II groups; these data were confirmed by the protein expression analysis using immunohistochemistry. When ITGB-5 gene expression was compared with the degree of cell differentiation, a significant difference was observed in the grade III group relative to the grade I and II groups; however, the protein expression of ITGB-5 showed no significant differences between the degrees of cell differentiation.

A recent study using cell cultures of human breast cancer and normal epithelial tissue demonstrated a role of integrin beta 5 in tumor progression and invasion through changes in adhesion, cell structure, and differentiation; inhibition of this integrin significantly reduced breast carcinoma cell invasion [32]. It has also been reported that integrin expression levels may vary considerably between normal and tumor tissue.
Notably, integrins alpha v beta 5 and alpha v beta 6 are generally expressed at low levels or are undetectable in normal adult human epithelium but are highly expressed in some tumors, correlating with more advanced stages of disease [33]. It is possible that increased expression of integrins alpha v beta 3 and alpha v beta 5 promotes the binding of tumor cells to provisional matrix proteins, such as vitronectin, fibrinogen, von Willebrand factor, osteopontin, and fibronectin, that are deposited in the tumor microenvironment, thereby facilitating endothelial angiogenesis [34].

Among the genes studied, the overexpression of the following metalloproteinases in tumor tissue can be correlated with at least one clinicopathological variable: MMP-1, MMP-2, MMP-9, MMP-11, and MMP-16. The metalloproteinases MMP-2 and MMP-9, the subjects of our research, have been reported by many authors as essential to the process of tumor dissemination and progression. These proteins degrade type IV collagen, the main component of the basement membrane. In several studies comparing MMP-2 expression levels in CRC with clinicopathological variables, strong expression of this enzyme was significantly associated with TNM stages III and IV [35, 36], tumor size and venous invasion, lymph node metastasis [35, 37], and distant metastasis [35, 36, 38].

MMP-2, known as gelatinase A, degrades not only type IV collagen but also collagen types V, VII, and X, as well as fibronectin, laminin, and elastin, all components of the ECM [35]. MMP-2 expression has therefore been investigated in several cancer types, including colorectal adenocarcinoma, in which it is significantly increased in tumor tissue compared with nontumor tissue [39]. In our study, MMP-2 gene and protein expression levels correlated with clinicopathological variables such as the mucinous histological type with signet ring cells versus adenocarcinoma NOS (not otherwise specified); our results indicate that MMP-2 has potential as a prognostic CRC marker, in agreement with other published studies.

MMP-9, known as gelatinase B, promotes the degradation of type IV collagen, an important component of the basement membrane, which is crucial for the invasion of malignant tumors through proteolysis of the ECM during CRC progression and metastasis [40]. There is therefore substantial interest in studying MMP-9 expression in CRC as a prognostic marker. Several studies have shown increased expression of MMP-9 in CRC, with significant associations with clinicopathological variables such as stages III and IV (TNM/Dukes C and D) [38–41], lymph node metastasis [40, 42], distant metastasis [37–40], peritumoral inflammatory infiltrate [43], and degrees of cell differentiation II and III [38, 40, 41, 44, 45]. In this study, MMP-9 expression was significantly more frequent in the villous histological type, which, according to the literature, has a better prognosis than adenocarcinoma NOS [46].

### 4.4. Correlation of IHC Expression of the ECM Genes of Interest FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 with the Non-ECM Molecules EGFR, VEGF, P53, Bcl-2, and KI-67

According to Viana et al., in 2013, ECM components interact with non-ECM molecules in CRC carcinogenesis, progression, and dissemination.
One of the goals of our study was to evaluate the correlation between the expression of ECM components and that of P53, Bcl-2, KI-67, EGFR, and VEGF, because proliferation, apoptosis, and cell migration are known to be regulated by cell-cell interactions and by cell-ECM components. It is also worth noting that growth factors (e.g., EGF and VEGF) are usually stored in the ECM and can be activated and released upon ECM modulation [12, 15]. In this study, we found a strong correlation between the expression of the MMP-9 and ITGA-3 genes and the epithelial marker EGFR, whereas no relationship could be demonstrated between the tumor expression of MMP-2, FN-1, or ITGB-5 and the non-ECM molecules VEGF, KI-67, P53, and Bcl-2.
## 5. Conclusions

In CRC, the overexpression of the ITGA-3 and ITGB-5 genes and of their proteins was associated with stages of lymph node dissemination and distant metastasis, whereas the overexpression of the MMP-2 and MMP-9 genes and their proteins was associated with the mucinous and villous histological types, respectively. Overactivity of the epithelial marker EGFR (epidermal growth factor receptor) was shown to be associated with the expression of the ECM genes MMP-9 and ITGA-3.

---

*Source: 102541-2014-03-04.xml*
# Expression Profiling Using a cDNA Array and Immunohistochemistry for the Extracellular Matrix Genes FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 in Colorectal Carcinoma Progression and Dissemination

**Authors:** Suzana Angelica Silva Lustosa; Luciano de Souza Viana; Renato José Affonso; Sandra Regina Morini Silva; Marcos Vinicius Araujo Denadai; Silvia Regina Caminada de Toledo; Indhira Dias Oliveira; Delcio Matos

**Journal:** The Scientific World Journal (2014)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2014/102541
--- ## Abstract Colorectal cancer dissemination depends on extracellular matrix genes related to remodeling and degradation of the matrix structure. This investigation aimed to evaluate the association of FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 gene and protein expression levels in tumor tissue with clinical and histopathological parameters of cancer dissemination. The expression associations between ECM molecules and the selected epithelial markers EGFR, VEGF, Bcl-2, P53, and KI-67 were also examined in 114 patients with colorectal cancer who underwent primary tumor resection. Quantitative real-time PCR and immunohistochemistry tissue microarray methods were performed in samples from the primary tumors. The gene expression results showed that the ITGA-3 and ITGB-5 genes were overexpressed in tumors with lymph node and distant metastasis (stage III/IV tumors compared with stage I/II tumors). The MMP-2 gene showed significant overexpression in mucinous-type tumors, and MMP-9 was overexpressed in villous adenocarcinoma histologic type tumors. The ECM genes MMP-9 and ITGA-3 showed a significant expression correlation with the epithelial marker EGFR. The overexpression of the extracellular matrix genes ITGA-3 and ITGB-5 is associated with advanced-stage tumors, and the genes MMP-2 and MMP-9 are overexpressed in mucinous and villous adenocarcinoma type tumors, respectively. Overactivity of the epithelial marker EGFR was shown to be associated with the expression of the ECM genes MMP-9 and ITGA-3. --- ## Body ## 1. Introduction Studies have shown that alterations in genes that regulate basic cell functions such as cell-cell adhesion and ECM-cell adhesion are followed by penetration of the basal membrane, destroying the physical structure of the tissue [1]. Alterations in the expression of adhesion molecules can influence tumor aggressiveness, resulting in local infiltrative growth and metastasis. Thus, the basal membrane and the ECM jointly represent two important physical barriers to malignant invasion, and their degradation by metalloproteinase enzymes may have an important role in tumor progression and metastatic dissemination [2–4]. Other researchers, however, have reported that, in general, the expression levels of integrins alpha 3 and alpha 5 are reduced in many colorectal carcinomas (CRCs) [5, 6]. Some authors have recently demonstrated that integrin inhibition, at any point of action, may lead to inhibition of tumor progression. Therefore, integrin inhibition may represent a pharmacological target for cancer treatment and prevention through the suppression of cell migration and invasion and the induction of apoptosis, as well as through the blocking of tumor angiogenesis and metastasis [7]. In most human cancers, metalloproteinase expression and activity levels are high compared with normal tissue, and this has also been demonstrated in colorectal adenocarcinomas [8, 9]. From these results, several researchers have analyzed the possibility that metalloproteinase expression and activity levels can be used as tumor markers, aiming to prevent tumor growth, invasion, and metastasis [10, 11]. Studies have explored the hypothesis that MMP-9 functions as a key regulator of the malignant phenotype in patients with colorectal tumors presenting with overexpression of this protease relative to the adjacent normal tissues. In this context, MMP-9 is the main agent of cancer cell invasion and metastasis in the epithelial and stromal cells of the primary colorectal tumor.
In addition, human colorectal cancer cells have the ability to synthesize and secrete MMP-9. This effect, associated with the induction of proteolytic functions in the pericellular space, drives the development of metastasis. Hence, the MMP-9 present in tumor epithelial cells can represent a specific target for the diagnosis and treatment of metastatic CRC. Recently, Viana et al. reported that the expression of the genes SPARC, SPP1, FN-1, ITGA-5, and ITGAV correlates with common parameters of progression and dissemination in CRC and that overexpression of the ITGAV gene and protein correlates with an increased risk of perineural invasion. Moreover, according to these authors, the strong correlation of IHC expression between ITGAV and EGFR suggests an interaction between these two signaling pathways [12]. Denadai et al., in 2013, also showed that increased expression levels of ITGA-6 and ITGAV are related to venous invasion and neural infiltration, respectively, while overexpression of ITGB-5 and ITGA-3 is associated with stage III (TNM), and overexpression of ITGA-5 correlates with the presence of mucinous-type malignant neoplasias [13]. The authors concluded that follow-up studies, preferably with a controlled prospective design, are necessary to establish the roles of such genes as potential biomarkers to predict disease extent or outcome and possibly contribute to the management of CRC patients. According to Nowell, in 2002, tumors become more clinically and biologically aggressive over time; this has been termed “tumor progression” and includes, among other properties, invasion and metastasis, as well as more efficient escape from host immune regulation. Molecular techniques have shown that tumors expand as a clone from a single altered cell, with sequential somatic genetic changes generating increasingly aggressive subpopulations within the expanding clone. So far, multiple types of genes have been identified, and they differ in different tumors, but they provide potential specific targets for important new therapies [14]. This study aimed to evaluate the relationship of the expression levels of selected ECM genes and proteins, FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9, with CRC progression and dissemination and with the expression of P53, Bcl-2, KI-67, EGFR, and VEGF, as it has been shown by several authors that proliferation, apoptosis, and cell migration are regulated by cell-cell interactions and extracellular matrix components. It is also worth noting that the growth factors EGF and VEGF are usually stored in the ECM and can be activated and released after ECM modulation [15–17]. ## 2. Methods ### 2.1. Patients and Tumor Samples We studied 114 patients with stage I–IV CRC who underwent primary tumor resection at the Fundação Pio XII, Barretos Cancer Hospital, between August 2006 and July 2009. All patients were eligible for the analysis of the expression of the genes of interest through real-time PCR and immunohistochemistry (IHC) assays using the tissue microarray (TMA) technique. The median follow-up was 30 months at the time of this report. The ethical use of human tissue for research was approved by the institutional review board, and the design of this study followed the principles of the Declaration of Helsinki and also complied with the principles of good clinical practice. This study was also approved by the Ethics Committee of the Barretos Cancer Hospital and UNIFESP-Escola Paulista de Medicina, São Paulo, Brazil. In this study, we included patients of both genders aged >18 years.
The patients who had received neoadjuvant treatment (chemotherapy or radiotherapy) were excluded. In all patients, tumor tissue was sampled during surgery and cryopreserved, and paraffin blocks were available for further histopathological analysis. The patients without primary CRC site resection were excluded, as were patients with a previous or current diagnosis of another primary malignancy in any location of the body other than nonmelanoma skin cancer or in situ carcinoma of the cervix. The patients with a known history of familial CRC were also excluded. The chromosomal and microsatellite instability statuses were not assessed. Sixty-three patients were male (55.3%), and 51 were female (44.7%). The median patient age was 60 years (range 24–83); 58 patients (50.9%) were over 60 years of age. Concerning the location of the primary tumor, the right colon was affected in 41 cases (36.0%), the left colon in 41 cases (36.0%), and the rectum was the primary tumor site in 32 cases (28.0%). Twenty-five (21.9%) patients were considered as TNM stage I, 39 (34.2%) as TNM stage II, 34 (29.8%) as TNM stage III, and 16 (14.0%) as TNM stage IV. The most frequent site for metastasis was the liver (9 patients), followed by the peritoneum (3 patients), lungs (2 patients), and ovary (2 patients). Table 1 shows the distribution of patients according to the covariable categorization.

Table 1. Characteristics of the 114 patients included in the study.

| Variable | Category | n | % |
| --- | --- | --- | --- |
| Age | <60 years | 56 | 49.1 |
| | >60 years | 58 | 50.9 |
| Gender | Female | 51 | 44.7 |
| | Male | 63 | 55.3 |
| Primary tumor site | Right colon | 41 | 36.0 |
| | Left colon | 41 | 36.0 |
| | Rectum | 32 | 28.0 |
| Synchronous tumor | No | 112 | 98.2 |
| | Yes | 2 | 1.8 |
| Histological classification | Adenocarcinoma SOE | 81 | 71.0 |
| | Adenocarcinoma mucinous | 18 | 15.8 |
| | Adenocarcinoma villous | 15 | 13.2 |
| Grading: cell differentiation | Well differentiated | 9 | 7.9 |
| | Moderate | 91 | 79.8 |
| | Poor | 14 | 12.3 |
| | Undifferentiated | 0 | 0 |
| Venous invasion | Absent | 93 | 81.6 |
| | Present | 21 | 18.4 |
| Lymphatic vessel invasion | Absent | 91 | 79.8 |
| | Present | 23 | 20.2 |
| Perineural invasion | Absent | 106 | 93.0 |
| | Present | 8 | 7.0 |
| Peritumoral lymphocyte infiltration | Absent | 21 | 18.4 |
| | Present | 93 | 81.6 |
| Resection margin status | Positive | 0 | 0 |
| | Negative | 114 | 100.0 |
| Lymph nodes dissected | Median (range) | 17* | 3–67 |
| Tumor stage: TNM | T1 | 5 | 4.4 |
| | T2 | 27 | 23.7 |
| | T3 | 71 | 62.3 |
| | T4 | 11 | 9.6 |
| Nodal stage | N0 | 67 | 58.8 |
| | N1 | 25 | 21.9 |
| | N2 | 22 | 19.3 |
| Distant metastasis | Absent | 98 | 85.9 |
| | Present | 16 | 14.1 |
| Site of distant metastasis | Absent | 98 | 85.9 |
| | Liver | 9 | 7.9 |
| | Peritoneum | 3 | 2.6 |
| | Lungs | 2 | 1.8 |
| | Ovary | 2 | 1.8 |
| Clinical stage | I | 25 | 21.9 |
| | II | 39 | 34.2 |
| | III | 34 | 29.8 |
| | IV | 16 | 14.0 |

*28 patients had <12 lymph nodes dissected or analyzed.

### 2.2. Outcome Measures
The patients were classified according to the following clinical and pathological characteristics: age group (<60 or >60 years), gender (male versus female), site of the primary tumor (right colon versus left colon versus rectum), histological classification (adenocarcinoma not otherwise specified versus mucinous adenocarcinoma), tumor grade (low (grades I and II) versus high (grades III and IV)), and peritumoral lymphocyte infiltration (presence versus absence). Histological characteristics commonly associated with tumor dissemination and progression were categorized as follows: venous invasion (presence versus absence); lymphatic vessel invasion (presence versus absence); perineural invasion (presence versus absence); degree of tumor invasion into the organ wall (T1-2 versus T3-4, AJCC 2002, 6th edition); lymph node metastasis (presence versus absence); distant metastases (presence versus absence); and TNM staging (I-II versus III-IV, AJCC 2002, 6th edition). We hypothesized that ECM molecules may be associated with CRC progression and dissemination; therefore, differences in ECM marker expression with respect to the categorization of each of the histological covariates mentioned above were analyzed using both reverse transcription- (RT-) PCR and TMA. ### 2.3. RNA Extraction and cDNA Synthesis by RT-PCR Cryopreserved samples were embedded in medium for frozen tissue specimens (Tissue-Tek OCT; Sakura Finetek, Torrance, CA, USA) and sectioned in a cryostat (CM1850 UV; Leica Microsystems, Nussloch, Germany) for histological analysis. Slides mounted with sections of 4 μm thickness were subjected to hematoxylin-eosin staining (Merck, Darmstadt, Germany) and analyzed by a pathologist to ensure that the selected samples represented the general tumor histology and were free of necrosis or calcifications. The areas of interest were identified microscopically and marked for macrodissection. These slides were used as guides to select and cut tissues in the cryostat. For each sample, sterile individual scalpel blades were used. After discarding areas inappropriate for RNA extraction, the tissue was mechanically macerated with liquid nitrogen and transferred to 1.5 mL RNase- and DNase-free microtubes containing 1,000 μL TRIzol (Invitrogen, Carlsbad, CA, USA). RNA was extracted according to the manufacturer’s instructions, and RNA quantification was performed using a spectrophotometer (Thermo Scientific NanoDrop 2000). The quality and integrity of the RNA were verified by the presence of the 28S and 18S bands in an agarose gel stained with 1% ethidium bromide. RNA was purified using the RNeasy mini kit (Qiagen, Valencia, CA, USA) following the manufacturer’s recommendations, diluted with 30 mL of RNase- and DNase-free water (Qiagen), quantified spectrophotometrically at a wavelength of 260 nm (NanoVue; GE Healthcare, Chicago, IL, USA), and stored at –80°C until use. RT-PCR was performed using the SuperScript III first-strand synthesis SuperMix (Invitrogen), as recommended by the manufacturer. The reaction was performed in a 20 μL final volume containing 2 μg of total RNA with oligo(dT)20 as a primer. The transcription phase was performed in a thermal cycler (Mastercycler ep Gradient S; Eppendorf, Hamburg, Germany), and the cDNA was stored at –20°C for future reactions. ### 2.4. Analysis of the Genes of Interest For each sample, an ECM and adhesion molecule PCR array (PAHS-013; SABiosciences, Qiagen) plate was used.
A mixture was prepared containing 1,275 μL of buffer with SYBR Green (2x Master Mix SABiosciences RT2 qPCR), 1,173 μL RNase-free H2O, and 102 μL of the cDNA sample. Next, 25 μL aliquots were added to each well of the 96-well plate. The reactions were performed in a thermal cycler (ABI 7500; Applied Biosystems, Foster City, CA, USA) according to the following protocol: 95°C for 10 min and 40 cycles of 95°C for 15 s and 60°C for 1 min. Data analysis was performed using the analysis method available at http://pcrdataanalysis.sabiosciences.com/pcr/arrayanalysis.php. Gene expression was classified as “high” or “low,” considering the level of expression obtained after grouping patients by the covariates of interest; that is, after categorizing patients into the control or interest groups according to the covariates studied, gene expression was determined in both groups. ### 2.5. TMA Block Construction Original paraffin blocks were sectioned at a 4 μm thickness and stained with hematoxylin-eosin. All sections were reviewed to confirm the CRC diagnosis, and the histopathologic findings were reevaluated. A map was prepared using a spreadsheet containing the locations and identification of tissue samples for the construction of the TMA block. The map also guided further readings of the IHC reactions. With the aid of Beecher™ equipment (Beecher Instruments, Silver Spring, MD, USA), the TMA blocks were prepared according to the manufacturer’s specifications in the following steps: marking of the selected area in the respective paraffin block; use of the equipment to create a hollow space in the recipient block; extraction of a cylindrical tissue core from the donor block, measuring 1 mm in diameter, from the previously selected area of interest; transfer of the cylindrical tissue core obtained from the donor block to the hollow space previously created in the recipient block; progression, in fractions of millimeters, to new positions within the recipient block, thereby creating a collection of tissue samples following a matrix arrangement; and assessment of the final quality of the block for storage. For adhesion of the TMA block sections onto the slides, an adhesive tape system was used (Instrumedics, Hackensack, NJ, USA). The samples were cut to a thickness of 4 μm, and a small roll was used to press the section onto the tape. The tape with the attached histological section was then placed on a resin-coated slide (part of the adhesive system kit) and pressed with the same roll for better adherence. Afterwards, the slides were placed under UV light for 20 min and were subsequently exposed to a solvent solution (TPC) for 20 additional minutes. The slides were dried, and the tapes were removed. Afterwards, the slides were paraffin-embedded and sent for storage under ideal cooling conditions. ### 2.6. IHC Technique The TMA block sections were mounted onto glass slides coated with silane (3-aminopropyltriethoxysilane) and dried for 30 min at 37°C. The paraffin was removed with xylene, and the sections were rehydrated through a series of graded alcohols. Endogenous peroxidase activity was blocked by incubating the sections in a methanol bath containing 3% hydrogen peroxide for 20 min, followed by washing in distilled water. The sections were initially submitted to heat-induced epitope retrieval using citrate buffer (pH 9.0) in an uncovered pressure cooker (Eterna; Nigro, Araraquara, Brazil). The slides were immersed in the buffer solution, and the pressure cooker was closed with the safety valve open.
Once the saturated steam was released, the safety valve was lowered until full pressurization was achieved. After 4 min under full pressurization, the closed pressure cooker was placed under running water for cooling. After removing the lid, the slides were washed in distilled running water. Blocking of the endogenous peroxidase was obtained with 3% H2O2 (10 vol), with 3 washes of 10 min duration each. The slides were again washed in distilled running water and then in phosphate-buffered saline (10 mM; pH 7.4) for 5 min. Afterwards, the primary antibody was applied, and the slides were incubated overnight at 8°C. ### 2.7. Primary Antibodies The primary monoclonal antibodies used were obtained from Abcam Inc. (Cambridge, MA, USA) and were as follows: anti-integrin alpha 3, which reacts with human (IHC-FoFr or Fr) and mouse IgG1 isotypes (clone F35 177-1; 100 μg; ab20140); anti-integrin beta 5, which reacts with human (IHC-P or Fr) and mouse IgG2a isotypes (ab93943); anti-MMP-2, rabbit IgG isotype (ab52756); and anti-MMP-9 (ab76003; clone EP1254). All antibodies were used at a 1:400 dilution. In addition, the following non-ECM primary antibodies were used in this study: anti-p53 (IgG2b class; clone DO-7; 1:300; M7001; DAKOCytomation, Glostrup, Denmark); anti-Bcl-2 (mouse IgG1 isotype; clone 124; 1:600; M0887; DAKOCytomation); anti-VEGF (mouse IgG1 isotype; clone VG 1; 1:100; M7273; DAKOCytomation); anti-KI-67 (mouse IgG1 isotype; clone MIB-1; 1:500; M7240; DAKOCytomation); and anti-EGFR (mouse IgG1 isotype; clone EGFR-25; NCLEGFR-384; Novocastra, Newcastle, UK). The positive controls used for IHC analysis were normal human kidney tissue for FN-1, human tonsils for ITGA-3, ITGB-5, VEGF, KI-67, P53, and Bcl-2, and placenta for EGFR. ### 2.8. Immunostaining Analysis A preliminary test was performed to identify the optimal antibody concentration and to select positive and negative controls using the dilution data supplied by the manufacturer. After washing off the primary antibody with phosphate-buffered saline, the slides were incubated with biotin-free polymer in the Advance™ visualization system (DAKO) for 30 min. A freshly prepared solution containing 1 drop of 3,3′-diaminobenzidine tetrahydrochloride (DAB; Sigma, St. Louis, MO, USA) with 1 mL of substrate (DAKO) was applied for 5 min on each slide. The DAB solution was removed by washing with distilled water. The slides were counterstained with hematoxylin, dehydrated in ethanol, cleared in xylene, and mounted using Entellan™ [18–20]. Tissue expression of FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 was categorized dichotomously as “high expression” or “low expression,” according to the “quick score” method [21, 22]. This scoring system uses a combination of the percentage of stained cells (P) and the staining intensity (I), and the “quick score” was calculated by multiplying both values. The scores used for the percentage of stained tumor cells were as follows: 0 points (absence of stained cells); 1 point (≤25% of stained cells); 2 points (26–50% of stained cells); and 3 points (>50% of stained cells). The scores used for the staining intensity were as follows: 1 point (mild intensity); 2 points (moderate intensity); and 3 points (intense staining). As a result, expression of a gene product in tumor cells was considered to be high (overexpressed) when the final score was >4 (P × I > 4), and markers that presented a final score <4 were considered to have low expression.
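To make the dichotomization rule above concrete, the following minimal Python sketch reproduces the “quick score” calculation (percentage points multiplied by intensity points, with products above 4 rated as high expression). The function names and example values are illustrative only and are not part of the study's software.

```python
def percent_points(percent_stained):
    """Score the percentage of stained tumor cells (Section 2.8):
    0 points for no staining, 1 for <=25%, 2 for 26-50%, 3 for >50%."""
    if percent_stained == 0:
        return 0
    if percent_stained <= 25:
        return 1
    if percent_stained <= 50:
        return 2
    return 3


def quick_score(percent_stained, intensity_points):
    """Quick score = P x I, where intensity_points is 1 (mild),
    2 (moderate), or 3 (intense); scores above 4 are rated 'high'."""
    score = percent_points(percent_stained) * intensity_points
    return "high" if score > 4 else "low"


# Example: 60% of tumor cells stained at moderate intensity -> 3 x 2 = 6
print(quick_score(60, 2))  # -> "high"
```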
The stroma and the tumor cells were not treated separately during IHC analysis, and only the level of expression of markers on tumor cells was considered for scoring. The validation of the different expression levels of the genes detected by real-time PCR analysis was performed by verifying the protein expression related to each gene by IHC. Thus, for each gene (fibronectin, integrins, and metalloproteases) with increased or reduced expression by array tracing, the corresponding protein was analyzed by the antigen-antibody reaction (IHC) in TMA slides. The confirmation of the increase in protein expression by IHC validates the molecular finding of the screening by RT-PCR. ### 2.9. Statistical Analyses Statistical associations between the gene and protein expression levels of FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 and the clinicopathological factors were determined using the nonparametric Mann-Whitney U test for quantitative variables and the chi-square test for qualitative variables, that is, frequencies and proportions. When the chi-square test assumptions were not met, Fisher’s exact test was used. To measure the association between the ECM markers FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 and the non-ECM markers EGFR, VEGF, P53, Bcl-2, and KI-67 (ordinal variables), the Spearman correlation coefficient was used [23]. The significance level was set at 5% (P<0.05), and the data were analyzed using the Statistical Package for the Social Sciences (SPSS) software (Chicago, IL, USA), version 15.0. The Shapiro-Wilk test was used to verify whether the data had a normal distribution.
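As an illustration of the categorical testing just described, the sketch below applies SciPy's chi-square test to one of the 2 × 2 high/low-expression crosstabs reported later in this paper (MMP-9 by venous invasion, Table 6), falling back to Fisher's exact test when the expected counts are small. The use of SciPy here is an assumption made for illustration; the study itself used SPSS 15.0.

```python
from scipy.stats import chi2_contingency, fisher_exact

# High/low MMP-9 IHC expression by venous invasion (counts from Table 6):
#                  high  low
table = [[55, 38],  # venous invasion absent (n = 93)
         [4, 17]]   # venous invasion present (n = 21)

chi2, p, dof, expected = chi2_contingency(table)
if (expected < 5).any():
    # Fall back to Fisher's exact test when the chi-square assumptions
    # are not met, as stated in Section 2.9.
    _, p = fisher_exact(table)

print(f"P = {p:.4f}")  # on the order of the P = 0.001 reported in Table 6
```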
## 3. Results ### 3.1. FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 ECM Gene Expression Levels The expression levels of the genes of interest according to the covariates studied through real-time PCR showed low expression of the FN-1 gene in patients <60 years of age compared with those ≥60 years of age (P=0.022). The ITGA-3 and ITGB-5 gene expression levels in the tumor tissue as determined using RT-PCR were not considered significant when analyzed with regard to the different measures of tumor dissemination outcome, except for those related to TNM staging and the degree of cell differentiation. ITGA-3 gene expression showed a significance level of P=0.016 and a fold regulation of 2.58 when comparing TNM III, IV versus TNM I, II stages. With regard to the ITGB-5 gene, a reduction in expression was observed in the grade III cell differentiation group when compared with the GI and GII group (P=0.04 and a fold change of –2.11), and an increase of this gene expression was observed in tumors of TNM III, IV versus TNM I, II stages (P=0.029 and a fold change of 1.33). Table 2 shows the distribution of the significant results of the RT-PCR expression of the genes of interest.

Table 2. Distribution of expression levels of the FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 ECM genes, with significance levels of P<0.05, fold change >2.0, and the clinicopathological variables associated with genetic screening by RT-PCR.

| Gene | P value | Fold change | Clinicopathological parameter | Comparison |
| --- | --- | --- | --- | --- |
| FN-1 | 0.022 | −3.07 | Age (years) | <60 × ≥60 |
| ITGA-3 | 0.016 | 2.58 | TNM | TNM III, IV × TNM I, II |
| ITGB-5 | 0.04 | −2.11 | Degree of cell differentiation | GIII × GI, GII |
| ITGB-5 | 0.029 | 1.33 | TNM | TNM III, IV × TNM I, II |
| MMP-2 | 0.015 | 2.17 | Histological type | Mucinous × tubular |
| MMP-2 | 0.04 | −1.2 | Peritumoral lymphocyte infiltration | With × without |
| MMP-2 | 0.039 | −2.11 | Age | >60 × ≤60 |
| MMP-9 | 0.014 | 1.13 | Histological type | Villous × tubular |

The expression levels of the MMP-2 and MMP-9 genes in the tumor tissue as determined using RT-PCR were not considered significant when analyzed with regard to the different measures of tumor dissemination outcome, except for those related to the mucinous and villous histological types and the venous invasion parameter. Therefore, MMP-2 gene expression was significantly different between mucinous and nonmucinous carcinomas (P=0.001) and in patients aged over 60 years (P<0.0001). With regard to the tumor expression of the MMP-9 gene, an increase of this gene expression was noted in tumors with TNM III and IV staging compared with TNM I and II staging (P=0.0001), in tumors with venous invasion compared with those without venous invasion (P<0.001), and in carcinomas with a villous component compared with carcinomas without a villous component (P<0.0001).
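The fold-change and fold-regulation values above were produced by the PCR-array analysis tool cited in Section 2.4. For orientation, the following is a minimal sketch of the standard 2^−ΔΔCt calculation on which such tools are commonly based; treating that method as the underlying computation is an assumption here, and the Ct values shown are invented for illustration, not data from this study.

```python
def fold_change_ddct(ct_gene_case, ct_ref_case, ct_gene_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt (Livak) method, from mean
    threshold cycles of the gene of interest and a reference gene."""
    ddct = (ct_gene_case - ct_ref_case) - (ct_gene_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)


def fold_regulation(fc):
    """PCR-array reporting convention: downregulation (fc < 1) is shown
    as a negative reciprocal, e.g. fc = 0.33 -> about -3.0."""
    return fc if fc >= 1.0 else -1.0 / fc


# Invented Ct values for illustration only:
fc = fold_change_ddct(ct_gene_case=24.1, ct_ref_case=18.0,
                      ct_gene_ctrl=22.7, ct_ref_ctrl=18.2)
print(round(fold_regulation(fc), 2))  # approximately -3.03 (downregulated)
```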
Tables 3, 4, 5, 6, 7, 8, and 9 show the results of immunohistochemical expression of the EMC genes and the non-ECM molecular markers according to the outcome measures degree of tumor cell differentiation, tumor TNM classification, peritumoral lymphocytic infiltration, venous invasion, perineural invasion, and type of tumor (tubular, mucinous, and villous).Table 3 Distribution of the expression levels PER IHC of proteins corresponding to ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the degree of cell differentiation of CRC (n=114). Grading tumor cell differentiation EGFR Total P value High Low I + II 0.159 N 63 37 100 % 63.0% 37.0% 100.0% III N 6 8 14 % 42.9% 57.1% 100.0% Total N 69 45 114 % 60.5% 39.5% 100.0% Grading tumor cell differentiation VEGF Total Pvalue High Low I + II 0.778 N 49 51 100 % 49.0% 51.0% 100.0% III N 6 8 14 % 42.9% 57.1% 100.0% Total N 55 59 114 % 48.2% 51.8% 100.0% Grading tumor cell differentiation KI-67 Total Pvalue High Low I + II 0.093 N 46 54 100 % 46.0% 54.0% 100.0% III N 3 11 14 % 21.4% 78.6% 100.0% Total N 49 65 114 % 43.0% 57.0% 100.0% Grading tumor cell differentiation P53 Total Pvalue High Low I + II 0.778 N 41 59 100 % 41.0% 59.0% 100.0% III N 5 9 14 % 35.7% 64.3% 100.0% Total N 46 68 114 % 40.4% 59.6% 100.0% Grading tumor cell differentiation Bcl-2 Total Pvalue High Low I + II 1.000 N 50 50 100 % 50.0% 50.0% 100.0% III N 7 7 14 % 50.0% 50.0% 100.0% Grading tumor cell differentiation ITGB-5 Total Pvalue High Low I + II 0.394 N 57 43 100 % 57.0% 43.0% 100.0% III N 6 8 14 % 42.9% 57.1% 100.0% Total N 63 51 114 % 55.3% 44.7% 100.0% Grading tumor cell differentiation ITGA-3 Total Pvalue High Low I + II 0.397 N 50 50 100 % 50.0% 50.0% 100.0% III N 5 9 14 % 35.7% 64.3% 100.0% Total N 55 59 114 % 48.2% 51.8% 100.0% Grading tumor cell differentiation MMP-2 Total Pvalue High Low I + II 0.568 N 48 52 100 % 48.0% 52.0% 100.0% III N 5 9 14 % 35.7% 64.3% 100.0% Total N 53 61 114 % 46.5% 53.5% 100.0% Grading tumor cell differentiation MMP-9 Total Pvalue High Low I + II 0.572 N 53 47 100 % 53.0% 47.0% 100.0% III N 6 8 14 % 42.9% 57.1% 100.0% Total N 59 55 114 % 51.8% 48.2% 100.0% Grading tumor cell differentiation FN-1 Total Pvalue High Low I + II 0.225 N 73 27 100 % 73.0% 27.0% 100.0% III N 8 6 14 % 57.1% 42.9% 100.0% Total N 81 33 114 % 71.1% 28.9% 100.0%Table 4 The distribution of expression levels per IHC of proteins corresponding to ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the degree of TNM staging of CRC (n=114). 
TNM tumor stage EGFR Total P value High Low I + II 0.000 N 64 0 64 % 100.0% 0.0% 100.0% III + IV N 5 45 50 % 10.0% 90.0% 100.0% Total N 69 45 114 % 60.5% 39.5% 100.0% TNM tumor stage VEGF Total Pvalue High Low I + II 0.186 N 27 37 64 % 42.2% 57.8% 100.0% III + IV N 28 22 50 % 56.0% 44.0% 100.0% Total N 55 59 114 % 48.2% 51.8% 100.0% TNM tumor stage KI-67 Total Pvalue High Low I + II 0.000 N 38 26 64 % 59.4% 40.6% 100.0% III + IV N 11 39 50 % 22.0% 78.0% 100.0% Total N 49 65 114 % 43.0% 57.0% 100.0% TNM tumor stage P53 Total Pvalue High Low I + II 0.000 N 35 29 64 % 54.7% 45.3% 100.0% III + IV N 11 39 50 % 22.0% 78.0% 100.0% Total N 46 68 114 % 40.4% 59.6% 100.0% TNM tumor stage Bcl-2 Total Pvalue High Low I + II 0.345 N 29 35 64 % 45.3% 54.7% 100.0% III + IV N 28 22 50 % 56.0% 44.0% 100.0% Total N 57 57 114 % 50.0% 50.0% 100.0% TNM tumor stage ITGB-5 Total Pvalue High Low I + II 0.000 N 49 15 64 % 76.6% 23.4% 100.0% III + IV N 14 36 50 % 28.0% 72.0% 100.0% Total N 63 51 114 % 55.3% 44.7% 100.0% TNM tumor stage ITGA-3 Total Pvalue High Low I + II 0.000 N 53 11 64 % 82.8% 17.2% 100.0% III + IV N 2 48 50 % 4.0% 96.0% 100.0% Total N 55 59 114 % 48.2% 51.8% 100.0% TNM tumor stage MMP-2 Total Pvalue High Low I + II 0.572 N 28 36 64 % 43.8% 56.3% 100.0% III + IV N 25 25 50 % 50.0% 50.0% 100.0% Total N 53 61 114 % 46.5% 53.5% 100.0% TNM tumor stage MMP-9 Total Pvalue High Low I + II 0.000 N 56 8 64 % 87.5% 12.5% 100.0% III + IV N 3 47 50 % 6.0% 94.0% 100.0% Total N 59 55 114 % 51.8% 48.2% 100.0% TNM tumor stage FN-1 Total Pvalue High Low I + II 0.677 N 44 20 64 % 68.8% 31.3% 100.0% III + IV N 37 13 50 % 74.0% 26.0% 100.0% Total N 81 33 114 % 71.1% 28.9% 100.0% Table 5 The distribution of expression levels per IHC of proteins corresponding to ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the peritumoral lymphocyte infiltrate in CRC (n=114).
Peritumoral lymphocyte infiltrate EGFR Total P value High Low Absence N 13 8 21 1.000 % 61.9% 38.1% 100.0% Presence N 56 37 93 % 60.2% 39.8% 100.0% Total N 69 45 114 % 60.5% 39.5% 100.0% Peritumoral lymphocyte infiltrate VEGF Total Pvalue High Low Absence 0.635 N 9 12 21 % 42.9% 57.1% 100.0% Presence N 46 47 93 % 49.5% 50.5% 100.0% Total N 55 59 114 % 48.2% 51.8% 100.0% Peritumoral lymphocyte infiltrate KI-67 Total Pvalue High Low Absence N 11 10 21 0.343 % 52.4% 47.6% 100.0% Presence N 38 55 93 % 40.9% 59.1% 100.0% Total N 49 65 114 % 43.0% 57.0% 100.0% Peritumoral lymphocyte infiltrate P53 Total Pvalue High Low Absence 1.000 N 8 13 21 % 38.1% 61.9% 100.0% Presence N 38 55 93 % 40.9% 59.1% 100.0% Total N 46 68 114 % 40.4% 59.6% 100.0% Peritumoral lymphocyte infiltrate Bcl-2 Total Pvalue High Low Absence 0.629 N 9 12 21 % 42.9% 57.1% 100.0% Presence N 48 45 93 % 51.6% 48.4% 100.0% Total N 57 57 114 % 50.0% 50.0% 100.0% Peritumoral lymphocyte infiltrate ITGB-5 Total Pvalue High Low Absence 1.000 N 12 9 21 % 57.1% 42.9% 100.0% Presence N 51 42 93 % 54.8% 45.2% 100.0% Total N 63 51 114 % 55.3% 44.7% 100.0% Peritumoral lymphocyte infiltrate ITGA-3 Total Pvalue High Low Absence 0.227 N 13 8 21 % 61.9% 38.1% 100.0% Presence N 42 51 93 % 45.2% 54.8% 100.0% Total N 55 59 114 % 48.2% 51.8% 100.0% Peritumoral lymphocyte infiltrate MMP-2 Total Pvalue High Low Absence 1.000 N 10 11 21 % 47.6% 52.4% 100.0% Presence N 43 50 93 % 46.2% 53.8% 100.0% Total N 53 61 114 % 46.5% 53.5% 100.0% Peritumoral lymphocyte infiltrate MMP-9 Total Pvalue High Low Absence 1.000 N 11 10 21 % 52.4% 47.6% 100.0% Presence N 48 45 93 % 51.6% 48.4% 100.0% Total N 59 55 114 % 51.8% 48.2% 100.0% Peritumoral lymphocyte infiltrate FN-1 Total Pvalue High Low Absence 0.180 N 12 9 21 % 57.1% 42.9% 100.0% Presence N 69 24 93 % 74.2% 25.8% 100.0% Total N 81 33 114 % 71.1% 28.9% 100.0%Table 6 The distribution of expression levels per IHC of proteins corresponding to ITGB-5, ITGA-3, MMP-2, and MMP9 ECM genes and FN-1 and EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the presence of venous invasion in CRC (n=114). 
Venous invasion EGFR Total P value High Low Absence 0.000 N 65 28 93 % 69.9% 30.1% 100.0% Presence N 4 17 21 % 19.0% 81.0% 100.0% Total N 69 45 114 % 60.5% 39.5% 100.0% Venous invasion VEGF Total Pvalue High Low Absence 0.810 N 44 49 93 % 47.3% 52.7% 100.0% Presence N 11 10 21 % 52.4% 47.6% 100.0% Total N 55 59 114 % 48.2% 51.8% 100.0% Venous invasion KI-67 Total Pvalue High Low Absence 0.055 N 44 49 93 % 47.3% 52.7% 100.0% Presence N 5 16 21 % 23.8% 76.2% 100.0% Total N 49 65 114 % 43.0% 57.0% 100.0% Venous invasion P53 Total Pvalue High Low Absence 0.029 N 42 51 93 % 45.2% 54.8% 100.0% Presence N 4 17 21 % 19.0% 81.0% 100.0% Total N 46 68 114 % 40.4% 59.6% 100.0% Venous invasion Bcl-2 Total Pvalue High Low Absence 1.000 N 46 47 93 % 49.5% 50.5% 100.0% Presence N 11 10 21 % 52.4% 47.6% 100.0% Total N 57 57 114 % 50.0% 50.0% 100.0% Venous invasion ITGB-5 Total Pvalue High Low Absence 0.231 N 54 39 93 % 58.1% 41.9% 100.0% Presence N 9 12 21 % 42.9% 57.1% 100.0% Total N 63 51 114 % 55.3% 44.7% 100.0% Venous invasion ITGA-3 Total Pvalue High Low Absence 0.000 N 52 41 93 % 55.9% 44.1% 100.0% Presence N 3 18 21 % 14.3% 85.7% 100.0% Total N 55 59 114 % 48.2% 51.8% 100.0% Venous invasion MMP-2 Total Pvalue High Low Absence 0.631 N 42 51 93 % 45.2% 54.8% 100.0% Presence N 11 10 21 % 52.4% 47.6% 100.0% Total N 53 61 114 % 46.5% 53.5% 100.0% Venous invasion MMP-9 Total Pvalue High Low Absence 0.001 N 55 38 93 % 59.1% 40.9% 100.0% Presence N 4 17 21 % 19.0% 81.0% 100.0% Total N 59 55 114 % 51.8% 48.2% 100.0% Venous invasion FN-1 Total Pvalue High Low Absence 0.006 N 61 32 93 % 65.6% 34.4% 100.0% Presence N 20 1 21 % 95.2% 4.8% 100.0% Total N 81 33 114 % 71.1% 28.9% 100.0%Table 7 The distribution of expression levels per IHC of proteins corresponding to ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and EGFR, VEGF, Ki-67, P53, and Bcl-2 molecules as per the presence of perineural invasion in CRC (n=114). 
Perineural invasion EGFR Total P value High Low Absence 0.260 N 66 40 106 % 62.3% 37.7% 100.0% Presence N 3 5 8 % 37.5% 62.5% 100.0% Total N 69 45 114 % 60.5% 39.5% 100.0% Perineural invasion VEGF Total Pvalue High Low Absence 0.027 N 48 58 106 % 45.3% 54.7% 100.0% Presence N 7 1 8 % 87.5% 12.5% 100.0% Total N 55 59 114 % 48.2% 51.8% 100.0% Perineural invasion KI-67 Total Pvalue High Low Absence 0.462 N 47 59 106 % 44.3% 55.7% 100.0% Presence N 2 6 8 % 25.0% 75.0% 100.0% Total N 49 65 114 % 43.0% 57.0% 100.0% Perineural invasion P53 Total Pvalue High Low Absence 0.470 N 44 62 106 % 41.5% 58.5% 100.0% Presence N 2 6 8 % 25.0% 75.0% 100.0% Total N 46 68 114 % 40.4% 59.6% 100.0% Perineural invasion Bcl-2 Total Pvalue High Low Absence 0.716 N 52 54 106 % 49.1% 50.9% 100.0% Presence N 5 3 8 % 62.5% 37.5% 100.0% Total N 57 57 114 % 50.0% 50.0% 100.0% Perineural invasion ITGB-5 Total Pvalue High Low Absence 0.463 N 60 46 106 % 56.6% 43.4% 100.0% Presence N 3 5 8 % 37.5% 62.5% 100.0% Total N 63 51 114 % 55.3% 44.7% 100.0% Perineural invasion ITGA-3 Total Pvalue High Low Absence 0.717 N 52 54 106 % 49.1% 50.9% 100.0% Presence N 3 5 8 % 37.5% 62.5% 100.0% Total N 55 59 114 % 48.2% 51.8% 100.0% Perineural invasion MMP-2 Total Pvalue High Low Absence 0.999 N 49 57 106 % 46.2% 53.8% 100.0% Presence N 4 4 8 % 50.0% 50.0% 100.0% Total N 53 61 114 % 46.5% 53.5% 100.0% Perineural invasion MMP-9 Total Pvalue High Low Absence 0.479 N 56 50 106 % 52.8% 47.2% 100.0% Presence N 3 5 8 % 37.5% 62.5% 100.0% Total N 59 55 114 % 51.8% 48.2% 100.0% Perineural invasion FN-1 Total Pvalue High Low Absence 0.102 N 73 33 106 % 68.9% 31.1% 100.0% Presence N 8 0 8 % 100.0% 0.0% 100.0% Total N 81 33 114 % 71.1% 28.9% 100.0%Table 8 The distribution of expression levels per IHC of proteins corresponding to ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the villous and tubular types of CRC (n=114). 
Tumor histologic type:tubular × villous EGFR Total P value High Low Tubular 0.579 N 52 29 81 % 64.2% 35.8% 100.0% Villous N 9 7 16 % 56.3% 43.8% 100.0% Total N 61 36 97 % 62.9% 37.1% 100.0% Tumor histologic type:tubular × villous VEGF Total Pvalue High Low Tubular 1.000 N 41 40 81 % 50.6% 49.4% 100.0% Villous N 8 8 16 % 50.0% 50.0% 100.0% Total N 49 48 97 % 50.5% 49.5% 100.0% Tumor histologic type:tubular × villous KI-67 Total Pvalue High Low Tubular 0.783 N 36 45 81 % 44.4% 55.6% 100.0% Villous N 6 10 16 % 37.5% 62.5% 100.0% Total N 42 55 97 % 43.3% 56.7% 100.0% Tumor histologic type:tubular × villous P53 Total Pvalue High Low Tubular 1.000 N 32 49 81 % 39.5% 60.5% 100.0% Villous N 6 10 16 % 37.5% 62.5% 100.0% Total N 38 59 97 % 39.2% 60.8% 100.0% Tumor histologic type:tubular × villous Bcl-2 Total Pvalue High Low Tubular 0.277 N 37 44 81 % 45.7% 54.3% 100.0% Villous N 10 6 16 % 62.5% 37.5% 100.0% Total N 47 50 97 % 48.5% 51.5% 100.0% Tumor histologic type:tubular × villous ITGB-5 Total Pvalue High Low Tubular 1.000 N 44 37 81 % 54.3% 45.7% 100.0% Villous N 9 7 16 % 56.3% 43.8% 100.0% Total N 53 44 97 % 54.6% 45.4% 100.0% Tumor histologic type:tubular × villous ITGA-3 Total Pvalue High Low Tubular 1.000 N 40 41 81 % 49.4% 50.6% 100.0% Villous N 8 8 16 % 50.0% 50.0% 100.0% Total N 48 49 97 % 49.5% 50.5% 100.0% Tumor histologic type:tubular × villous MMP-2 Total Pvalue High Low Tubular 0.585 N 44 37 81 % 54.3% 45.7% 100.0% Villous N 7 9 16 % 43.8% 56.3% 100.0% Total N 51 46 97 % 52.6% 47.4% 100.0% Tumor histologic type:tubular × villous MMP-9 Total Pvalue High Low Tubular 0.000 N 49 32 81 % 60.5% 39.5% 100.0% Villous N 1 15 16 % 6.3% 93.8% 100.0% Total N 50 47 97 % 51.5% 48.5% 100.0% Tumor histologic type:tubular × villous FN-1 Total Pvalue High Low Tubular 1.000 N 57 24 81 % 70.4% 29.6% 100.0% Villous N 11 5 16 % 68.8% 31.3% 100.0% Total N 68 29 97 % 70.1% 29.9% 100.0%Table 9 The distribution of expression levels per IHC of proteins corresponding to ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per mucinous and tubular tumor types of CRC (n=114). 
Tumor histologic type:mucinous × tubular EGFR Total P value High Low Mucinous 0.273 N 8 9 17 % 47.1% 52.9% 100.0% Tubular N 52 29 81 % 64.2% 35.8% 100.0% Total N 60 38 98 % 61.2% 38.8% 100.0% Tumor histologic type:mucinous × tubular VEGF Total Pvalue High Low Mucinous 0.294 N 6 11 17 % 35.3% 64.7% 100.0% Tubular N 41 40 81 % 50.6% 49.4% 100.0% Total N 47 51 98 % 48.0% 52.0% 100.0% Tumor histologic type:mucinous × tubular KI-67 Total Pvalue High Low Mucinous 1.000 N 7 10 17 % 41.2% 58.8% 100.0% Tubular N 36 45 81 % 44.4% 55.6% 100.0% Total N 43 55 98 % 43.9% 56.1% 100.0% Tumor histologic type:mucinous × tubular P53 Total Pvalue High Low Mucinous 0.596 N 8 9 17 % 47.1% 52.9% 100.0% Tubular N 32 49 81 % 39.5% 60.5% 100.0% Total N 40 58 98 % 40.8% 59.2% 100.0% Tumor histologic type:mucinous × tubular Bcl-2 Total Pvalue High Low Mucinous 0.425 N 10 7 17 % 58.8% 41.2% 100.0% Tubular N 37 44 81 % 45.7% 54.3% 100.0% Total N 47 51 98 % 48.0% 52.0% 100.0% Tumor histologic type:mucinous × tubular ITGB-5 Total Pvalue High Low Mucinous 0.793 N 10 7 17 % 58.8% 41.2% 100.0% Tubular N 44 37 81 % 54.3% 45.7% 100.0% Total N 54 44 98 % 55.1% 44.9% 100.0% Tumor histologic type:mucinous × tubular ITGA-3 Total Pvalue High Low Mucinous 0.600 N 7 10 17 % 41.2% 58.8% 100.0% Tubular N 40 41 81 % 49.4% 50.6% 100.0% Total N 47 51 98 % 48.0% 52.0% 100.0% Tumor histologic type:mucinous × tubular MMP-2 Total Pvalue High Low Mucinous 0.001 N 2 15 17 % 11.8% 88.2% 100.0% Tubular N 44 37 81 % 54.3% 45.7% 100.0% Total N 46 52 98 % 46.9% 53.1% 100.0% Tumor histologic type:mucinous × tubular MMP-9 Total Pvalue High Low Mucinous 0.596 N 9 8 17 % 52.9% 47.1% 100.0% Tubular N 49 32 81 % 60.5% 39.5% 100.0% Total N 58 40 98 % 59.2% 40.8% 100.0% Tumor histologic type:mucinous × tubular FN-1 Total Pvalue High Low Mucinous 0.771 N 13 4 17 % 76.5% 23.5% 100.0% Tubular N 57 24 81 % 70.4% 29.6% 100.0% Total N 70 28 98 % 71.4% 28.6% 100.0%Table10 shows the distribution in absolute numbers and percentages by IHC of proteins corresponding to the ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and the EGFR, VEGF, KI-67, P53, and Bcl-2 molecules, according to the expression degrees rated as low and high of the colorectal adenocarcinoma (n=114).Table 10 The distribution of expression levels per IHC of proteins corresponding to ITGB-5, ITGA-3, MMP-2, MMP9, and FN-1 ECM genes and EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the degrees of high and low expression of CRC (n=114). Markers Low expression High expression n (%) n (%) FN-1 81 (71.1) 33 (28.9) ITGA-3 55 (48.2) 59 (51.8) ITGB-5 63 (55.3) 51 (44.7) MMP-2 53 (46.5) 61 (53.5) MMP-9 59 (51.8) 55 (48.2) P53 46 (40.4) 68 (59.6) Bcl-2 57 (50.0) 57 (50.0) KI-67 49 (43.0) 65 (57.0) EGFR 69 (60.5) 45 (39.5) VEGF 55 (48.2) 59 (51.8)With regard to the correlation of ECM genes IHC expression levels with the non-ECM molecules P53, Bcl-2, VEGF, KI-67, and EGFR, this study showed that FN-1 expression did not correlate with any ECM marker or non-ECM molecule studied. Expression of ITGA-3 showed a weak correlation with EGFR (r=0.74; P=0.000), and ITGB-5 expression displayed a regular correlation (r=0.42; P=0.000) with EGFR. MMP-2 expression did not correlate with the non-EMC molecules studied. MMP-9 was shown to have a strong expression correlation (r=0.76; P=0.000). 
Table 11 shows the distribution of the Spearman correlation coefficients between the ECM genes and the non-ECM molecular markers studied.

Table 11. The distribution of Spearman (r) correlation coefficients, two-tailed model for significant associations (P<0.05), between the immunohistochemical expressions of the ECM genes FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 and the epithelial markers EGFR, VEGF, KI-67, P53, and Bcl-2 in CRC (n=114). FN-1 ITGA-3 ITGB-5 MMP-2 MMP-9 EGFR VEGF KI-67 P53 Bcl-2 FN1 1 — — — — — — — — — ITGA-3 — 1 0.48 — 0.65 0.74 — — — — ITGB-5 — 0.48 1 — 0.43 0.42 — — — — MMP-2 — — — 1 — −0.20 −0.01 — MMP-9 — 0.65 0.43 — 1 0.76 — 0.30 0.22 — EGFR — 0.74 0.42 — 0.76 1 — 0.37 0.33 — VEGF — — — — 1 — — 0.33 KI-67 — — — — 0.30 0.37 — 1 0.87 — P53 — — — −0.19 0.22 0.33 — 0.87 1 — Bcl-2 — — — — 0.33 — 1 0 = no correlation; 0–0.25 = weak; 0.25–0.50 = regular; 0.50–0.75 = moderate; >0.75 = strong; 1 = perfect correlation.

The results of the RT-PCR and IHC methods in the 114 patients with colorectal adenocarcinoma are shown in Table 12, where the differentially expressed extracellular matrix genes and the respective proteins are correlated. The findings of the immunohistochemistry technique validated the correlation between transcript and protein. While RT-PCR is regarded as a tool that provides gene-level screening, the immunohistochemical technique allows the identification of the protein corresponding to each differentially expressed gene.

Table 12. Real-time PCR and immunohistochemistry correlation of differentially expressed extracellular matrix genes in 114 patients with colorectal adenocarcinoma, P<0.05.

| Gene | Classification | Fold change (RT-PCR) | P value (RT-PCR) | Parameter analysed | IHC validation | P value (IHC) |
| --- | --- | --- | --- | --- | --- | --- |
| MMP-2 | ECM proteases | 2.17 | 0.01 | Mucinous × tubular | No | 0.001 |
| MMP-2 | ECM proteases | −1.2 | 0.04 | Peritumoral lymphocyte infiltrate (+/−) | No | 1.000 |
| MMP-2 | ECM proteases | −2.11 | 0.03 | Age (>60 yr × <60 yr) | No | 0.000 |
| FN-1 | Other adhesion molecules/collagens and ECM structural constituents | −3.07 | 0.02 | Age | No | 1.000 |
| ITGB-5 | Transmembrane molecules/cell-matrix adhesion | 2.11 | 0.04 | Grade of cell differentiation (I, II × III, IV) | No | 0.394 |
| ITGB-5 | Transmembrane molecules/cell-matrix adhesion | 1.33 | 0.02 | TNM (I, II × III, IV) | Yes | 0.000 |
| ITGA-3 | Transmembrane molecules | 2.58 | 0.01 | TNM (I, II × III, IV) | Yes | 0.000 |
| MMP-9 | ECM proteases | 1.13 | 0.01 | Villous × tubular | Yes | 0.000 |

IHC: immunohistochemistry; RT-PCR: reverse transcription polymerase chain reaction.
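As a small illustration of the correlation analysis summarized in Table 11 (see Section 2.9), the sketch below computes a Spearman coefficient with SciPy and maps it onto the strength bands quoted in the table's footnote. The use of SciPy is an assumption for illustration, and the ordinal score vectors are invented, not patient data from this study.

```python
from scipy.stats import spearmanr

def strength(r):
    """Map |r| onto the bands quoted in Table 11's footnote."""
    a = abs(r)
    if a == 0:
        return "no correlation"
    if a <= 0.25:
        return "weak"
    if a <= 0.50:
        return "regular"
    if a <= 0.75:
        return "moderate"
    return "strong"

# Invented ordinal IHC scores (0-9) for two markers in the same patients:
mmp9 = [7, 2, 6, 9, 1, 4, 6, 3, 8, 2]
egfr = [6, 1, 6, 9, 2, 3, 7, 2, 9, 1]

r, p = spearmanr(mmp9, egfr)
print(f"r = {r:.2f} ({strength(r)}), P = {p:.4f}")
```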
Table 2 shows the distribution of the significant RT-PCR expression results for the genes of interest.

Table 2. Distribution of expression levels of the FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 ECM genes, with significance levels of P<0.05 and fold change >2.0, and the clinicopathological variables associated with genetic tracing by RT-PCR.

| Gene | P value | Fold change | Clinicopathological parameter | Comparison |
|---|---|---|---|---|
| FN-1 | 0.022 | −3.07 | Age (years) | <60 × ≥60 |
| ITGA-3 | 0.016 | 2.58 | TNM | TNM III × TNM I |
| ITGB-5 | 0.04 | −2.11 | Degree of cell differentiation | GII × GI |
| | 0.029 | 1.33 | TNM | TNM III × TNM I |
| MMP-2 | 0.015 | 2.17 | Histological type | Mucinous × tubular |
| | 0.04 | −1.2 | Peritumoral lymphocyte infiltration | With × without |
| | 0.039 | −2.11 | Age | >60 × ≤60 |
| MMP-9 | 0.014 | 1.13 | Histological type | Villous × tubular |

The expression levels of the MMP-2 and MMP-9 genes in the tumor tissue as determined using RT-PCR were considered significant when analyzed with regard to the different measures of tumor dissemination outcome, particularly those related to the mucinous and villous histological types and to venous invasion. Thus, MMP-2 gene expression differed significantly between mucinous and nonmucinous carcinomas (P=0.001) and in patients aged over 60 years (P<0.0001). With regard to tumor expression of the MMP-9 gene, increased expression was noted in tumors with TNM III and IV staging compared with TNM I and II staging (P=0.0001), in tumors with venous invasion compared with those without (P<0.001), and in carcinomas with a villous component compared with carcinomas without one (P<0.0001).

Tables 3, 4, 5, 6, 7, 8, and 9 show the results of the immunohistochemical expression of the ECM genes and the non-ECM molecular markers according to the outcome measures: degree of tumor cell differentiation, tumor TNM classification, peritumoral lymphocytic infiltration, venous invasion, perineural invasion, and tumor histological type (tubular, mucinous, and villous).

Table 3. Distribution of the expression levels per IHC of proteins corresponding to the ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and the EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the degree of cell differentiation of CRC (n=114).
| Marker | Grade I + II high, n (%) | Grade I + II low, n (%) | Grade III high, n (%) | Grade III low, n (%) | P value |
|---|---|---|---|---|---|
| EGFR | 63 (63.0) | 37 (37.0) | 6 (42.9) | 8 (57.1) | 0.159 |
| VEGF | 49 (49.0) | 51 (51.0) | 6 (42.9) | 8 (57.1) | 0.778 |
| KI-67 | 46 (46.0) | 54 (54.0) | 3 (21.4) | 11 (78.6) | 0.093 |
| P53 | 41 (41.0) | 59 (59.0) | 5 (35.7) | 9 (64.3) | 0.778 |
| Bcl-2 | 50 (50.0) | 50 (50.0) | 7 (50.0) | 7 (50.0) | 1.000 |
| ITGB-5 | 57 (57.0) | 43 (43.0) | 6 (42.9) | 8 (57.1) | 0.394 |
| ITGA-3 | 50 (50.0) | 50 (50.0) | 5 (35.7) | 9 (64.3) | 0.397 |
| MMP-2 | 48 (48.0) | 52 (52.0) | 5 (35.7) | 9 (64.3) | 0.568 |
| MMP-9 | 53 (53.0) | 47 (47.0) | 6 (42.9) | 8 (57.1) | 0.572 |
| FN-1 | 73 (73.0) | 27 (27.0) | 8 (57.1) | 6 (42.9) | 0.225 |

Grade I + II n = 100; grade III n = 14 (total n = 114).

Table 4. The distribution of expression levels per IHC of proteins corresponding to the ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and the EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the TNM staging of CRC (n=114).
| Marker | TNM I + II high, n (%) | TNM I + II low, n (%) | TNM III + IV high, n (%) | TNM III + IV low, n (%) | P value |
|---|---|---|---|---|---|
| EGFR | 64 (100.0) | 0 (0.0) | 5 (10.0) | 45 (90.0) | 0.000 |
| VEGF | 27 (42.2) | 37 (57.8) | 28 (56.0) | 22 (44.0) | 0.186 |
| KI-67 | 38 (59.4) | 26 (40.6) | 11 (22.0) | 39 (78.0) | 0.000 |
| P53 | 35 (54.7) | 29 (45.3) | 11 (22.0) | 39 (78.0) | 0.000 |
| Bcl-2 | 29 (45.3) | 35 (54.7) | 28 (56.0) | 22 (44.0) | 0.345 |
| ITGB-5 | 49 (76.6) | 15 (23.4) | 14 (28.0) | 36 (72.0) | 0.000 |
| ITGA-3 | 53 (82.8) | 11 (17.2) | 2 (4.0) | 48 (96.0) | 0.000 |
| MMP-2 | 28 (43.8) | 36 (56.3) | 25 (50.0) | 25 (50.0) | 0.572 |
| MMP-9 | 56 (87.5) | 8 (12.5) | 3 (6.0) | 47 (94.0) | 0.000 |
| FN-1 | 44 (68.8) | 20 (31.3) | 37 (74.0) | 13 (26.0) | 0.677 |

TNM I + II n = 64; TNM III + IV n = 50 (total n = 114).

Table 5. The distribution of expression levels per IHC of proteins corresponding to the ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and the EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the peritumoral lymphocyte infiltrate in CRC (n=114).
| Marker | Absent high, n (%) | Absent low, n (%) | Present high, n (%) | Present low, n (%) | P value |
|---|---|---|---|---|---|
| EGFR | 13 (61.9) | 8 (38.1) | 56 (60.2) | 37 (39.8) | 1.000 |
| VEGF | 9 (42.9) | 12 (57.1) | 46 (49.5) | 47 (50.5) | 0.635 |
| KI-67 | 11 (52.4) | 10 (47.6) | 38 (40.9) | 55 (59.1) | 0.343 |
| P53 | 8 (38.1) | 13 (61.9) | 38 (40.9) | 55 (59.1) | 1.000 |
| Bcl-2 | 9 (42.9) | 12 (57.1) | 48 (51.6) | 45 (48.4) | 0.629 |
| ITGB-5 | 12 (57.1) | 9 (42.9) | 51 (54.8) | 42 (45.2) | 1.000 |
| ITGA-3 | 13 (61.9) | 8 (38.1) | 42 (45.2) | 51 (54.8) | 0.227 |
| MMP-2 | 10 (47.6) | 11 (52.4) | 43 (46.2) | 50 (53.8) | 1.000 |
| MMP-9 | 11 (52.4) | 10 (47.6) | 48 (51.6) | 45 (48.4) | 1.000 |
| FN-1 | 12 (57.1) | 9 (42.9) | 69 (74.2) | 24 (25.8) | 0.180 |

Infiltrate absent n = 21; present n = 93 (total n = 114).

Table 6. The distribution of expression levels per IHC of proteins corresponding to the ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and the EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the presence of venous invasion in CRC (n=114).
| Marker | Absent high, n (%) | Absent low, n (%) | Present high, n (%) | Present low, n (%) | P value |
|---|---|---|---|---|---|
| EGFR | 65 (69.9) | 28 (30.1) | 4 (19.0) | 17 (81.0) | 0.000 |
| VEGF | 44 (47.3) | 49 (52.7) | 11 (52.4) | 10 (47.6) | 0.810 |
| KI-67 | 44 (47.3) | 49 (52.7) | 5 (23.8) | 16 (76.2) | 0.055 |
| P53 | 42 (45.2) | 51 (54.8) | 4 (19.0) | 17 (81.0) | 0.029 |
| Bcl-2 | 46 (49.5) | 47 (50.5) | 11 (52.4) | 10 (47.6) | 1.000 |
| ITGB-5 | 54 (58.1) | 39 (41.9) | 9 (42.9) | 12 (57.1) | 0.231 |
| ITGA-3 | 52 (55.9) | 41 (44.1) | 3 (14.3) | 18 (85.7) | 0.000 |
| MMP-2 | 42 (45.2) | 51 (54.8) | 11 (52.4) | 10 (47.6) | 0.631 |
| MMP-9 | 55 (59.1) | 38 (40.9) | 4 (19.0) | 17 (81.0) | 0.001 |
| FN-1 | 61 (65.6) | 32 (34.4) | 20 (95.2) | 1 (4.8) | 0.006 |

Venous invasion absent n = 93; present n = 21 (total n = 114).

Table 7. The distribution of expression levels per IHC of proteins corresponding to the ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and the EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the presence of perineural invasion in CRC (n=114).
| Marker | Absent high, n (%) | Absent low, n (%) | Present high, n (%) | Present low, n (%) | P value |
|---|---|---|---|---|---|
| EGFR | 66 (62.3) | 40 (37.7) | 3 (37.5) | 5 (62.5) | 0.260 |
| VEGF | 48 (45.3) | 58 (54.7) | 7 (87.5) | 1 (12.5) | 0.027 |
| KI-67 | 47 (44.3) | 59 (55.7) | 2 (25.0) | 6 (75.0) | 0.462 |
| P53 | 44 (41.5) | 62 (58.5) | 2 (25.0) | 6 (75.0) | 0.470 |
| Bcl-2 | 52 (49.1) | 54 (50.9) | 5 (62.5) | 3 (37.5) | 0.716 |
| ITGB-5 | 60 (56.6) | 46 (43.4) | 3 (37.5) | 5 (62.5) | 0.463 |
| ITGA-3 | 52 (49.1) | 54 (50.9) | 3 (37.5) | 5 (62.5) | 0.717 |
| MMP-2 | 49 (46.2) | 57 (53.8) | 4 (50.0) | 4 (50.0) | 0.999 |
| MMP-9 | 56 (52.8) | 50 (47.2) | 3 (37.5) | 5 (62.5) | 0.479 |
| FN-1 | 73 (68.9) | 33 (31.1) | 8 (100.0) | 0 (0.0) | 0.102 |

Perineural invasion absent n = 106; present n = 8 (total n = 114).

Table 8. The distribution of expression levels per IHC of proteins corresponding to the ITGB-5, ITGA-3, MMP-2, MMP-9, and FN-1 ECM genes and the EGFR, VEGF, KI-67, P53, and Bcl-2 molecules as per the villous and tubular types of CRC (n=114).
| Marker | Tubular high, n (%) | Tubular low, n (%) | Villous high, n (%) | Villous low, n (%) | P value |
|---|---|---|---|---|---|
| EGFR | 52 (64.2) | 29 (35.8) | 9 (56.3) | 7 (43.8) | 0.579 |
| VEGF | 41 (50.6) | 40 (49.4) | 8 (50.0) | 8 (50.0) | 1.000 |
| KI-67 | 36 (44.4) | 45 (55.6) | 6 (37.5) | 10 (62.5) | 0.783 |
| P53 | 32 (39.5) | 49 (60.5) | 6 (37.5) | 10 (62.5) | 1.000 |
| Bcl-2 | 37 (45.7) | 44 (54.3) | 10 (62.5) | 6 (37.5) | 0.277 |
| ITGB-5 | 44 (54.3) | 37 (45.7) | 9 (56.3) | 7 (43.8) | 1.000 |
| ITGA-3 | 40 (49.4) | 41 (50.6) | 8 (50.0) | 8 (50.0) | 1.000 |
| MMP-2 | 44 (54.3) | 37 (45.7) | 7 (43.8) | 9 (56.3) | 0.585 |
| MMP-9 | 49 (60.5) | 32 (39.5) | 1 (6.3) | 15 (93.8) | 0.000 |
| FN-1 | 57 (70.4) | 24 (29.6) | 11 (68.8) | 5 (31.3) | 1.000 |

Tubular n = 81; villous n = 16 (total n = 97). Table 9, comparing the mucinous and tubular types, is shown above.
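The P values in Tables 3–9 come from tests on 2 × 2 contingency tables (marker high/low versus a two-level clinicopathological variable). The paper does not name the test used, so purely as an illustration, the sketch below applies Fisher's exact test to the MMP-9 tubular × villous counts from Table 8 above.

```python
from scipy.stats import fisher_exact

# 2x2 contingency table from Table 8 (MMP-9 IHC expression):
# rows = histologic type, columns = (high, low)
table = [[49, 32],   # tubular: 49 high, 32 low
         [1, 15]]    # villous: 1 high, 15 low

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, P = {p_value:.2e}")
# A two-sided P well below 0.001 is consistent with the P = 0.000
# reported in Table 8 (the exact value depends on the test chosen).
```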
## 4. Discussion

### 4.1. The Possible Role of the ECM in CRC Dissemination

Carcinogenesis and tumor progression are complex processes that involve a series of events, traditionally characterized as a cascade of phenomena, that require further investigation to be fully elucidated. It is assumed that, as a result of this cascade of carcinogenesis, progression, and dissemination, the initially transformed tumor cell must become able to invade the surrounding tissues, which characterizes its malignant nature. To do so, these cells must detach from their adhesive interactions in the epithelium, penetrate the basal membrane, degrade the ECM, and migrate into the underlying interstitial stroma. At this point, the tumor cell enters the blood and lymphatic streams, acquiring systemic dissemination.
In the intestine particularly, the basal membrane separates the epithelial tissue from the connective tissue, and a histopathological characteristic of intestinal tumors is the loss of basal membrane integrity [24]. The ECM is composed of a large variety of structural molecules, such as collagens, noncollagenous glycoproteins, and proteoglycans, that play a complex role in the regulation of cell behavior, influencing development, growth, survival, migration, signal transduction, structure, and function [25, 26]. Thus, the degradation of the elements constituting the basal membrane and ECM, mediated by certain proteolytic enzymes, usually metalloproteinases, can represent a fundamental step in tumor progression and metastasis [4]. In recent decades, research in the field of cancer biology has focused extensively on the role of ECM constituents during tumor progression. Some proteins located in specific domains of the ECM play a critical role in keeping the cells linked to matrix elements and the basal membrane, while also participating in matrix-cell signaling cascades. This information from the ECM is transmitted to the cells, mainly by means of integrin molecules, to activate, for example, cytokines, growth factors, and intracellular adaptor molecules. It can thereby significantly affect many different processes, such as cell cycle progression, cell migration, and differentiation. The interaction between the biophysical properties of the cell and the ECM establishes a dynamic reciprocity, generating a sequence of reactions involving a complex network of proteases, sulfatases, and possibly other enzymes that release and activate several signaling pathways in a very specific and localized manner. ECM homeostasis is therefore a delicate balance between protein biosynthesis, structural organization, biosignaling, and the degradation of the matrix elements [7].

### 4.2. The Methods Used for Tracking and Identifying ECM Genes

The simplicity of the PCR array allows it to be used in routine research, as it is a reliable tool for analyzing the expression of a panel of genes specific to a particular pathology, offering high sensitivity and a broad dynamic range. The SuperArray kit (PAHs-031A-24, Ambriex) for ECM and cell adhesion molecules allowed analysis of the expression of 84 genes important for cell-cell and cell-matrix interactions, including basal membrane and collagen gene constituents of the ECM. Using RT-PCR, it was possible to analyze, in a quick, simple, and reliable manner, the expression of a group of gene transcripts involved in the progression and dissemination of colorectal adenocarcinoma at several staging phases. Various studies have used this method in different types of malignant neoplastic disease [18, 27], with regard to angiogenesis [19], apoptosis [20], and the cell cycle [28]. The TMA technique, described by Kononen et al. in 1998, is widely accepted in the literature. This extremely simple concept consists of grouping a large number of tissue samples in a single paraffin block and allows the expression of molecular markers to be studied on a large scale using stored material. TMAs have advantages over traditional sections, such as a reduction in the reagents and time required to perform the reactions.
The standardization of the reactions has facilitated the comparative interpretation of research cases [29, 30]. The use of monoclonal antibodies with IHC examination of the TMA, together with in situ hybridization techniques, allows the detection of differential tissue expression of the protein corresponding to each gene (the validation step of the tracing techniques) in a simplified manner, with more elaborate technical standardization, hence minimizing the possibility of measurement biases.

### 4.3. The Results

In this study, an increase in the expression of the integrin alpha 3 and beta 5 genes was observed in locally advanced or metastatic tumors (stages III and IV) relative to stages I and II, which represent tumors without metastasis to lymph nodes and/or other sites, a finding confirmed by protein expression using the TMA technique. Other correlations, such as those with histologic type and venous and neural invasion, were also found to be significant, further reinforcing the possible role of integrins in tumor progression and dissemination in colorectal adenocarcinoma. Integrin alpha 3 is usually expressed in normal tissues and in various tumors. Studies evaluating the expression of integrin alpha 3 in primary colon cancer and its respective liver metastases have shown that almost 27.5% of primary tumors presented increased expression of integrin alpha 3 relative to the metastatic tumor. In the present study, while evaluating gene expression, there was a significant difference in the expression of integrin alpha 3, a result validated by the analysis of protein expression by the immunohistochemical method, and greater expression was observed in the TNM III and IV groups than in the TNM I and II groups, suggesting a possible relation of integrin alpha 3 with more advanced stages of colorectal cancer. Significant differences were also found in ITGA-3 protein levels between the presence and absence of venous invasion; however, this finding was not supported by the gene expression levels determined using RT-PCR. In 1999, Haler et al. studied the expression levels of integrins alpha 2, 3, 5, and 6 using IHC in cell lines of liver-metastatic colorectal carcinoma; increased expression of integrins alpha 2 and alpha 3 was observed, particularly with regard to the dissemination potential of CRC [31]. Jinka et al., in a recent study, observed the same results, albeit in a larger number of malignant tumors [7]. In the present study, the ITGB-5 gene showed significantly higher expression levels in the TNM III and IV stage groups than in the TNM I and II groups; these data were confirmed by the results of the protein expression analysis using immunohistochemistry. When comparing ITGB-5 gene expression with the degree of cell differentiation, a significant difference was observed in the grade III group compared with the grade I and II groups; however, when evaluating the protein expression of the ITGB-5 gene, there were no significant differences between the degrees of cell differentiation. A recent study using cell cultures of human breast cancer and normal epithelial tissue demonstrated a role of integrin beta 5 in tumor progression and invasion through changes in adhesion, cell structure, and differentiation, in which inhibition of this integrin significantly reduced breast carcinoma cell invasion [32]. It has also been reported that integrin expression levels may vary considerably between normal and tumor tissue.
Notably, integrins alpha v beta 5 and alpha v beta 6 are generally expressed at low levels, or are nondetectable, in normal human adult epithelium, and are highly expressed in some tumors, correlating with more advanced stages of disease [33]. It is possible that increased expression of integrins alpha v beta 3 and alpha v beta 5 promotes the binding of tumor cells to provisional matrix proteins, such as vitronectin, fibrinogen, von Willebrand factor, osteopontin, and fibronectin, that are deposited in the tumor microenvironment, facilitating endothelial angiogenesis [34]. Among the studied genes, overexpression of the following metalloproteinases in the tumor tissue could be correlated with at least one clinicopathological variable: MMP-1, MMP-2, MMP-9, MMP-11, and MMP-16. The MMP-2 and MMP-9 metalloproteinases, our research subjects, have been reported by many authors as essential in the process of tumor dissemination and progression. These proteins degrade the main component of the basal membrane, type IV collagen. In several studies comparing MMP-2 expression levels in CRC with clinicopathological variables, significance was observed for strong expression of this enzyme in stages III and IV (TNM) [35, 36], tumor size and venous invasion, lymph node metastasis [35, 37], and distant metastasis [35, 36, 38]. MMP-2, called gelatinase A, degrades not only type IV collagen but also other collagen types, such as V, VII, and X, as well as fibronectin, laminin, and elastin, which are components of the ECM [35]. Thus, MMP-2 expression has been investigated in several cancer types, including colorectal adenocarcinoma, as it is significantly increased in tumor tissue compared with nontumor tissue [39]. In our study, MMP-2 gene and protein expression levels correlated with clinicopathological variables such as the mucinous histological type with signet ring cells and adenocarcinoma NOS (not otherwise specified); our results indicate that MMP-2 has potential as a prognostic CRC marker, in agreement with other published studies. MMP-9, known as gelatinase B, promotes the degradation of type IV collagen, an important component of the basal membrane, which is crucial for the invasion of malignant tumors through proteolysis of the ECM during CRC progression and metastasis [40]. Thus, there is substantial interest in studying MMP-9 expression in CRC as a prognostic marker. Several studies in the literature have shown increased expression of MMP-9 in CRC, with significance with regard to clinicopathological variables such as stages III and IV (TNM/Dukes C and D) [38–41], lymph node metastasis [40, 42], distant metastasis [37–40], peritumoral inflammatory infiltrate [43], and degrees of cell differentiation II and III [38, 40, 41, 44, 45]. In this study, MMP-9 expression appeared significantly more frequently in the villous histological type, which, according to the literature, shows a better prognosis than adenocarcinomas NOS [46].

### 4.4. Correlation of IHC Expression of the ECM Genes of Interest FN-1, ITGA-3, ITGB-5, MMP-2, and MMP-9 with the Non-ECM Molecules EGFR, VEGF, P53, Bcl-2, and KI-67

According to Viana et al., in 2013, ECM components interact with non-ECM molecules in CRC carcinogenesis, progression, and dissemination.
One of the goals of our study was to evaluate the correlation of the expression of ECM components with that of P53, Bcl-2, KI-67, EGFR, and VEGF, because it is known that proliferation, apoptosis, and cell migration are regulated by cell-cell interactions and cell-ECM components. It is also worth noting that growth factors (e.g., EGF and VEGF) are usually stored in the ECM and can be activated and released after ECM modulation [12, 15]. In this study, we found that the correlations of the MMP-9 and ITGA-3 genes with the epithelial marker EGFR were strong, whereas no relationship between the tumor expression of MMP-2, FN-1, or ITGB-5 and the non-ECM molecules VEGF, KI-67, P53, and Bcl-2 could be demonstrated.
## 5. Conclusions

In CRC, overexpression of the ITGA-3 and ITGB-5 genes and of their proteins was associated with stages of lymph node dissemination and distant metastasis, whereas overexpression of the MMP-2 and MMP-9 genes and their proteins was associated with the mucinous and villous histological types, respectively. Overexpression of the epithelial marker EGFR (epidermal growth factor receptor) was shown to be associated with expression of the ECM genes MMP-9 and ITGA-3.

---
*Source: 102541-2014-03-04.xml*
2014
# Gold-Nanoparticle Decorated Graphene-Nanostructured Polyaniline Nanocomposite-Based Bienzymatic Platform for Cholesterol Sensing

**Authors:** Deepshikha Saini; Ruchika Chauhan; Pratima R. Solanki; T. Basu
**Journal:** ISRN Nanotechnology (2012)
**Publisher:** International Scholarly Research Network
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.5402/2012/102543

---

## Abstract

A novel nanobiocomposite bienzymatic amperometric cholesterol biosensor, coupling cholesterol oxidase (ChOx) and horseradish peroxidase (HRP), was developed based on a gold-nanoparticle decorated graphene-nanostructured polyaniline nanocomposite (NSPANI-AuNP-GR) film, which was electrochemically deposited onto an indium-tin-oxide (ITO) electrode from the NSPANI-AuNP-GR dispersion synthesized by an in situ polymerization technique. The gold-nanoparticle decorated graphene-nanostructured polyaniline nanocomposite offers efficient electron transfer between the underlying electrode and the enzyme active center. The bienzymatic nanocomposite bioelectrode ChOx-HRP/NSPANI-AuNP-GR/ITO exhibited higher sensitivity, wider linearity, and a lower Km value than the monoenzymatic bioelectrode (ChOx/NSPANI-AuNP-GR/ITO). The bienzyme-based nanobioelectrode offers wider linearity (35 to 500 mg/dL), higher sensitivity (0.42 μA mM−1), a low Km value of 0.01 mM, and higher accuracy in testing blood serum samples than the monoenzyme system. A mechanism of the overall biochemical reaction is proposed to explain the enhanced biosensing performance of the bienzyme system. The novelty of the electrode lies in its reusability, extended shelf life, and accuracy in testing blood serum samples.

---

## Body

## 1. Introduction

Graphene, one of the most exciting nanostructures of carbon, is a two-dimensional crystalline single layer of carbon atoms arranged in a honeycomb lattice [1–3]. Recently, it has received enormous interest in various areas of research, such as biosensors, bioelectronics, energy storage and conversion, drug delivery [4–7], molecular resolution sensors [8–10], ultrafast electronic devices [11], and electromechanical resonators [5], owing to its large specific surface area, extraordinary electrical and thermal conductivities [11, 12], high mechanical stiffness [13], good biocompatibility [14], and low manufacturing cost [15]. The high electrical and thermal conductivities of graphene originate from its extended long-range π-conjugation.

Among conducting polymers, polyaniline (PANI) is one of the most promising matrices for biosensor applications, owing to its simple and reversible acid/base doping/dedoping chemistry, which enables control over properties such as free volume [16], solubility [17], electrical conductivity [18], and optical activity [19]. In recent years, nanostructured polyaniline (NSPANI) has aroused much scientific interest, since it combines the properties of low-dimensional organic conductors and high-surface-area materials and offers the possibility of enhanced performance wherever a large interfacial area between NSPANI and its environment is required. Various strategies are employed for the synthesis of nanostructured polyaniline [20–22].

Noble metal nanoparticles are known to be excellent catalysts, owing to their high ratio of surface atoms with free valences to the total number of atoms in the cluster. Gold nanoparticles (AuNP), in particular, have many useful properties, such as high surface free energy, strong adsorption ability, good suitability for biological applications, and good conductivity [23, 24].
In addition, they can provide more binding sites and a more congenial microenvironment for biomolecule immobilization, retaining the bioactivity of the proteins and thereby prolonging the lifetime of the biosensor [25, 26]. Nanocomposites based on metal nanoparticles and exfoliated graphene nanosheets (GR), with their synergistic effects, have shown particular promise for biosensing, as they can play several interesting roles: (1) a biocompatible, enzyme-friendly platform; (2) fast electrocatalytic oxidation or reduction of the product generated during the biochemical recognition process at the electrode surface, reducing the overvoltage and avoiding interference from other coexisting electroactive species; and (3) an enhanced signal because of fast electron transfer and a large working surface area. Lu et al. reported a highly sensitive and selective amperometric glucose biosensor using exfoliated graphite nanoplatelets decorated with Pt and Pd nanoparticles [27]. Besides that, graphene-based nanocomposites have been exploited to fabricate enzyme-based (e.g., alcohol dehydrogenase) biosensors for glucose, alcohol, and so forth [28–33]. Biosensors based on graphene-encapsulated nanoparticle arrays have been successfully demonstrated for highly sensitive and selective detection of breast cancer biomarkers; the increased surface-to-volume ratio significantly lowered the detection limit (1 pM) for the target biomarkers [34]. A glucose electrochemical biosensor has also been reported based on zinc oxide nanoparticles (ZnO NPs) doped into graphene (GR) nanosheets; its linear response range lies between 0.1 and 20 μM, with a detection limit of 0.02 μM at a signal-to-noise ratio of 3 [35].

Cholesterol and its esters are essential constituents of all animal cells and are present in brain and nerve tissue. The level of cholesterol in serum is an important parameter in the diagnosis and prevention of heart disease. The development of electrochemical biosensors has received significant interest for the precise and convenient determination of cholesterol in serum and food samples [36]. However, the cholesterol biosensors developed so far suffer from poor reliability, poor shelf life, and low sensitivity. It is therefore necessary to address these challenges in order to fabricate a reliable cholesterol biosensor for clinical diagnosis. There are two key factors in the fabrication of a biosensor: first, the enzyme system and, second, the transducer matrix used to monitor the biosensor performance. In amperometric biosensors, cholesterol quantification is usually performed by measuring the current associated with the oxidation of hydrogen peroxide. One of the major drawbacks of such electrochemical biosensors is the overpotential necessary for the oxidation of H2O2, which can cause interference from other oxidizable species such as ascorbic acid (AA), uric acid (UA), and acetaminophen (AAP). To avoid interference, improved biosensors based on coupled enzyme reactions have been reported that detect hydrogen peroxide at low potential [37, 38]. In such cases, the primary product generated by the reaction of the analyte with the first enzyme is further converted by a second enzyme into products detectable by the transducer [39].
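For the ChOx-HRP pair developed in this work, that coupled-enzyme scheme corresponds to the standard reaction sequence sketched below (written schematically, without full stoichiometry, and with a generic redox mediator, since this passage does not specify one):

```latex
\begin{align*}
\text{cholesterol} + \mathrm{O_2}
  &\xrightarrow{\ \mathrm{ChOx}\ } \text{cholest-4-en-3-one} + \mathrm{H_2O_2}\\
\mathrm{H_2O_2} + \text{mediator}_{\mathrm{red}}
  &\xrightarrow{\ \mathrm{HRP}\ } \mathrm{H_2O} + \text{mediator}_{\mathrm{ox}}\\
\text{mediator}_{\mathrm{ox}} + \mathrm{e^-}\ (\text{electrode})
  &\longrightarrow \text{mediator}_{\mathrm{red}}
\end{align*}
```

Because the H2O2 is consumed enzymatically and the mediator is regenerated at the electrode, detection can proceed at a low applied potential, which is what suppresses the interference from AA, UA, and AAP mentioned above.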
Coupled enzyme reactions are also employed to filter out chemical signals by eliminating interference at the enzyme [40, 41].

In a previous communication, the authors reported the in situ preparation and characterization of a novel nanocomposite dispersion, NSPANI-AuNP-GR, based on NSPANI, graphene nanosheets, and gold nanoparticles, with high electrocatalytic activity [42]. It was observed that the NSPANI-AuNP-GR nanocomposite dispersion can be successfully electrodeposited on an ITO surface. In this paper, an attempt has been made to develop a reliable and reusable amperometric bienzymatic cholesterol biosensor based on the nanocomposite film on an ITO electrode for the estimation of free cholesterol. With commercial viability in mind, the performance of the developed electrochemical biosensor has been compared with the photometric technique and tested on blood serum samples from various pathology laboratories. The novelty of the sensor lies in the method of fabrication of the transducer matrix coupled with the enzyme system, its reusability, reliability, extended shelf life, and sensitivity, and its successful application to blood serum testing.

## 2. Experimental

### 2.1. Materials

Few-layered graphene (Quantum Materials Corporation, Bangalore), aniline (Sigma-Aldrich), sodium dodecyl sulphate (SDS) (Qualigens), ammonium persulfate ((NH4)2S2O8) (E-Merck), hydrochloric acid (Qualigens), and chloroauric acid (HAuCl4) (Sigma-Aldrich) were used in the present experiment. Cholesterol oxidase (ChOx; EC 1.1.3.6, from Pseudomonas fluorescens) with a specific activity of 24 U/mg and horseradish peroxidase (HRP; EC 1.11.1.7, 250 U/mg, from horseradish) were purchased from Sigma; potassium ferricyanide (K3[Fe(CN)6]), potassium ferrocyanide (K4[Fe(CN)6]), sodium dihydrogen orthophosphate (NaH2PO4), and disodium hydrogen orthophosphate (Na2HPO4) were purchased from Qualigens (India). Deionized water from a Millipore Milli-Q system was used in all cases to prepare aqueous solutions. The monomer was double distilled before polymerization.

### 2.2. Characterization and Measurement

Fourier transform infrared (FTIR) spectroscopic measurements were performed with a Perkin-Elmer FTIR spectrophotometer. Morphological imaging of the fabricated electrodes was obtained by scanning electron microscopy (LEO 440 model), and atomic force microscopy (AFM) was performed with a Park Systems XE-70 atomic force microscope in noncontact mode. Cyclic voltammetry and differential pulse voltammetry (DPV) measurements were conducted in phosphate buffer (50 mM, 0.9% NaCl) containing 5 mM [Fe(CN)6]3−/4− in a three-electrode cell consisting of Ag/AgCl as the reference, platinum (Pt) as the counter electrode, and ITO as the working electrode (0.25 cm2), using an Autolab potentiostat/galvanostat model AUT83945 (PGSTAT302N).

#### 2.2.1. Synthesis of the Gold-Nanoparticle-Decorated Graphene-Nanostructured Polyaniline Nanocomposite (NSPANI-AuNP-GR)

In a typical synthesis of the NSPANI-AuNP-GR nanodispersion, graphene was first dispersed into a dilute aqueous solution of sodium dodecyl sulphate (SDS) (0.02 M). The aniline solution in the dopant (0.02 M) was added to the aqueous SDS solution under stirring. The mixture was then placed in a low-temperature bath so that the temperature was maintained at 0 to 5°C. A 70 μL aqueous solution of 0.05 M HAuCl4 was added to the aqueous dispersion. An aqueous solution of the oxidizing agent, (NH4)2S2O8, in ice-cold water was then added to the above mixture. The polymerization was allowed to proceed for 3 to 4 h with stirring.
After that, the stirring was stopped and the mixture was kept under static conditions for 1–3 days at 277–278 K for the polymerization to complete. The NSPANI-AuNP-GR nanodispersion was thus prepared by in situ polymerization, as characterized in our previous communication [42].

### 2.3. Fabrication of NSPANI-AuNP-GR/ITO Electrodes

The NSPANI-AuNP-GR nanocomposite film was electrochemically deposited from the as-synthesized NSPANI-AuNP-GR nanocomposite dispersion onto ITO-coated glass plates by sweeping the potential from −200 mV to +1000 mV (versus Ag/AgCl) at a scan rate of 40 mV/s in a three-electrode cell consisting of Ag/AgCl as the reference, platinum (Pt) as the counter electrode, and ITO as the working electrode (0.25 cm2). The electrodeposition curves of NSPANI-AuNP-GR/ITO exhibit the characteristic electrochemistry of NSPANI [33], with the main peaks a and b corresponding to the transformation of the leucoemeraldine base (LB) to the emeraldine salt (ES) and of ES to the pernigraniline salt (PS), respectively. On the reverse scan, peaks b′ and a′ correspond to the conversion of PS to ES and of ES to LB, respectively. The presence of a small redox peak around +350 mV (C and C′) is associated with the formation of p-benzoquinone and hydroquinone as side products upon cycling the potential to +1000 mV. The increase in current density with successive scans suggests that the polymer film builds up on the electrode surface. The inset of Figure 1 shows the plot of the maximum anodic peak current versus the number of cycles. The maximum peak current was observed at 28 cycles, indicating continuous film deposition. It can also be observed that shifts in the peak potentials began to occur after a number of cycles; this may result from the increased resistance of the electrode as the deposited film becomes thicker. The current reached 0.261 mA cm−2 at 28 cycles during electrodeposition of the nanocomposite dispersion. On a further increase in the number of cycles, the anodic peak current decreases; this decrease is ascribed to degradation of the polymer film. In the present study, 28 cycles were used for film deposition for the biosensor application.

Figure 1: Electrodeposition of the NSPANI-AuNP-GR nanocomposite dispersion on the ITO electrode. Inset: plot of peak current against the number of cycles.

#### 2.3.1. Fabrication of ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO Nanobioelectrodes

The NSPANI-AuNP-GR/ITO electrode was treated with 10 μL of aqueous glutaraldehyde (0.1%) as a cross-linker. 10 μL of freshly prepared ChOx (1 mg/mL) was uniformly spread onto the glutaraldehyde-treated NSPANI-AuNP-GR/ITO electrode and kept in a humid chamber for 12 h at 4°C to fabricate the ChOx/NSPANI-AuNP-GR/ITO nanobioelectrode. Similarly, 10 μL of a freshly prepared 1:1 mixture of HRP (1 mg/mL) and ChOx (1 mg/mL) was uniformly spread onto a glutaraldehyde-treated NSPANI-AuNP-GR/ITO electrode and kept in a humid chamber for 12 h at 4°C to prepare the ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrode. The bioelectrodes were immersed in 5 mM phosphate buffer solution (pH 7.0) in order to wash out unbound enzyme from the electrode surface. When not in use, the electrodes were stored at 4°C in a refrigerator.
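As a quick worked check on the deposition protocol of Section 2.3, the sweep window and scan rate fix the time per voltammetric cycle, and hence the total deposition time for the 28 cycles used here:

```python
# Quick arithmetic check on the electrodeposition protocol in Section 2.3:
# potential swept from -200 mV to +1000 mV and back at 40 mV/s, 28 cycles.
window_V = 1.000 - (-0.200)          # one-way sweep width: 1.2 V
cycle_time_s = 2 * window_V / 0.040  # forward + reverse sweep at 40 mV/s
total_time_s = 28 * cycle_time_s

print(f"{cycle_time_s:.0f} s per cycle, "
      f"{total_time_s / 60:.0f} min for 28 cycles")  # 60 s/cycle, 28 min total
```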
### 2.5. Photometric Studies

Photometric measurements were conducted using a UV-visible spectrophotometer. Photometric experiments were carried out with cholesterol solutions in PBS buffer (50 mM, 0.9% NaCl, pH 7.4). To carry out the photometric enzymatic assay of the immobilized enzymes, the ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrodes were dipped in 3 mL of PBS solution containing 20 μL of HRP (1 mg dL−1), 20 μL of o-dianisidine dye, and 100 μL of cholesterol. The difference between the initial and final absorbance values at 500 nm, after 3 min of incubation with cholesterol, was recorded and plotted.
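As a worked illustration of this readout, the sketch below converts such an absorbance difference into a cholesterol estimate by inverting the linear photometric calibration reported later (Section 3.3) for the bienzyme electrode. The function name and the sample readings are hypothetical, not measured values.

```python
# Minimal sketch: turn a photometric reading (change in absorbance at 500 nm
# after 3 min) into a cholesterol estimate by inverting the bienzyme
# calibration of Section 3.3: dA = 0.022 + 0.00016 * C (mg/dL).

INTERCEPT = 0.022    # reported calibration intercept (absorbance units)
SLOPE = 0.00016      # reported slope, absorbance per (mg/dL) of cholesterol

def cholesterol_from_absorbance(a_initial: float, a_final: float) -> float:
    """Estimate cholesterol (mg/dL); valid in the reported 35-400 mg/dL range."""
    conc = ((a_final - a_initial) - INTERCEPT) / SLOPE
    if not 35.0 <= conc <= 400.0:
        raise ValueError(f"{conc:.0f} mg/dL is outside the calibrated range")
    return conc

print(cholesterol_from_absorbance(0.105, 0.159))  # illustrative: ~200 mg/dL
```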
## 3. Discussion

### 3.1. Characterization of NSPANI-AuNP-GR/ITO, ChOx/NSPANI-AuNP-GR/ITO, and ChOx-HRP/NSPANI-AuNP-GR Electrodes

(a) FT-IR Study. Figure 2 shows the FT-IR absorption spectra of the NSPANI-AuNP-GR/ITO (curve a), ChOx/NSPANI-AuNP-GR/ITO (curve b), and ChOx-HRP/NSPANI-AuNP-GR (curve c) electrodes. The FT-IR spectrum of the electrochemically deposited NSPANI-AuNP-GR/ITO nanocomposite (curve a) shows benzenoid and quinoid ring (C=C) stretching bands at 1447.6 cm−1 and 1560 cm−1, respectively. The peak at 3123 cm−1 is attributed to N–H stretching vibrations of NSPANI in the composite [43]. A peak at 1534 cm−1 due to the skeletal vibration of the graphene nanosheets is also observed in the spectrum of NSPANI-AuNP-GR/ITO (curve a) [44]. Apart from the above-mentioned functional groups, a peak appears at 655 cm−1 which may correspond to the Au–O–Au stretching vibration [44]. The presence of these peaks confirms the existence of NSPANI, graphene nanosheets, and AuNPs on the ITO electrode.
In the FT-IR spectra of the ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR bioelectrodes (Figures 2(b) and 2(c)), enzyme binding is indicated by the appearance of additional absorption bands at 1524 and 1630 cm−1, assigned to the carbonyl stretch (amide I band) and N–H bending (amide II band), respectively [45]. In addition, a broad band around 3560 cm−1 is attributed to the amide bonds present in ChOx [46].

Figure 2 FTIR spectra of (a) NSPANI-AuNP-GR/ITO (red), (b) ChOx/NSPANI-AuNP-GR/ITO (green), and (c) ChOx-HRP/NSPANI-AuNP-GR (blue) electrodes.

(b) SEM Study. SEM images of NSPANI-AuNP-GR/ITO (Figure 3(a)), ChOx/NSPANI-AuNP-GR/ITO (Figure 3(b)), and ChOx-HRP/NSPANI-AuNP-GR (Figure 3(c)) are shown in Figure 3. The electrodeposition of the NSPANI-AuNP-GR matrix on the ITO electrode is confirmed by the homogeneous rough surface (Figure 3(a)). The SEM image shows NSPANI deposited on few-layered graphene nanosheets, which provide a large surface area for the incorporation of metal nanoparticles, and reveals the uniform loading of AuNPs over the NSPANI-GR matrix (Figure 3(c)) [44]. The nanoscale surface roughness of the NSPANI-AuNP-GR nanocomposite film is suitable for the immobilization of biomolecules. Figures 3(b) and 3(c) show that the enzymes are uniformly distributed on the electrode surfaces: the surface morphologies of ChOx/NSPANI-AuNP-GR/ITO (Figure 3(b)) and ChOx-HRP/NSPANI-AuNP-GR (Figure 3(c)) show full coverage of the surface by the single-enzyme and bienzyme bioconjugates. The globular structures can be attributed to the covalently bound enzyme molecules, since most proteins and enzymes are globular [47, 48].

Figure 3 SEM images of (a) NSPANI-AuNP-GR/ITO, (b) ChOx/NSPANI-AuNP-GR/ITO, and (c) ChOx-HRP/NSPANI-AuNP-GR electrodes.

(c) AFM Study. AFM was employed to establish the thickness, surface morphology, and surface roughness of the NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO electrodes. The two-dimensional (2D) and three-dimensional (3D) AFM images (Figure 4(a)) reveal that the NSPANI-AuNP-GR/ITO film (2 × 2 μm scan) has a nanoporous morphology with a root-mean-square (RMS) roughness of about 29.3 nm, though the spherical particles appear partially distorted. The size of the spherical nanoparticles varies from 25 to 50 nm, with an average particle size of 35 nm. After the immobilization of ChOx-HRP, however, the surface morphology of the NSPANI-AuNP-GR/ITO film becomes smooth, the average particle size increases to 100 nm, and the roughness decreases to 6.1 nm, revealing that ChOx-HRP is adsorbed onto NSPANI-AuNP-GR/ITO (Figure 4(b)) via electrostatic interactions. The AFM image of the ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrode (2 × 2 μm scan) exhibits a well-arranged, uniform surface, indicating that the NSPANI-AuNP-GR/ITO film provides a favourable microenvironment for strong adsorption of ChOx-HRP in an orientation that retains its active configuration.

Figure 4 2D and 3D AFM images of (a) NSPANI-AuNP-GR/ITO and (b) ChOx-HRP/NSPANI-AuNP-GR electrodes.

(d) DPV Study. DPV experiments were conducted in phosphate buffer (50 mM, pH 7.0) containing 5 mM [Fe(CN)6]3−/4− in the range −0.4 to 1.2 V (Figure 5). The high maximum anodic peak current of 1.63 × 10−4 A for the NSPANI-AuNP-GR/ITO electrode (curve a) indicates the highly conducting nature of the electrode and enhanced electron transfer towards it.
The peak current decreases to 1.32 × 10−4 A (curve b) and 1.01 × 10−4 A (curve c) for the ChOx-HRP/NSPANI-AuNP-GR/ITO and ChOx/NSPANI-AuNP-GR/ITO bioelectrodes, respectively, indicating a slower redox process at the nanobioelectrodes due to the insulating character of the enzymes and confirming the immobilization of ChOx and ChOx-HRP on the NSPANI-AuNP-GR/ITO electrode. The peak current of ChOx-HRP/NSPANI-AuNP-GR/ITO (1.32 × 10−4 A, curve b) is higher than that of ChOx/NSPANI-AuNP-GR/ITO (1.01 × 10−4 A, curve c); the enhanced peak current indicates increased surface activity of the bioelectrode and a higher rate of electron transfer.

Figure 5 Differential pulse voltammetry of (a) NSPANI-AuNP-GR/ITO, (b) ChOx-HRP/NSPANI-AuNP-GR/ITO, and (c) ChOx/NSPANI-AuNP-GR/ITO electrodes.

### 3.2. Electrochemical Response Studies

The DPV curves of the ChOx/NSPANI-AuNP-GR/ITO (Figure 6(a)) and ChOx-HRP/NSPANI-AuNP-GR/ITO (Figure 6(b)) bioelectrodes were recorded in the range −0.4 to 1.2 V in phosphate buffer of pH 7.0 containing 5 mM [Fe(CN)6]3−/4− and cholesterol solutions of various concentrations (Figure 6). The change in current (ΔI) is plotted against the cholesterol concentration, and a linear relationship between cholesterol concentration and response current (ΔI) is observed for both the mono- and bienzyme-based nanobioelectrodes. The linear regression for the ChOx/NSPANI-AuNP-GR/ITO bioelectrode, used to detect cholesterol in the range 35–350 mg/dL (Figure 6(a)), follows ΔI (mA) = 0.12 + 0.0031 × [cholesterol] (mg dL−1), with a standard deviation of 99.4 μA and a correlation coefficient of 0.981; the sensitivity of this bioelectrode is 0.310 μA mg dL−1. The linear regression for ChOx-HRP/NSPANI-AuNP-GR/ITO (Figure 6(b)) follows ΔI (mA) = 0.36 + 0.0042 × [cholesterol] (mg dL−1), with a standard deviation of 65.2 μA and a correlation coefficient of 0.995. Furthermore, the ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrode exhibits a higher sensitivity of 0.42 μA mg dL−1 than the single-enzyme electrode (ChOx/NSPANI-AuNP-GR/ITO) and a wider linear range of 35–500 mg/dL. The response current and sensitivity are higher for the bienzymatic sensor than for the monoenzymatic nanobiosensor, suggesting effective HRP-catalyzed reduction of H2O2. All experiments were carried out in triplicate, and the results confirm the reproducibility of the system. The response times of ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO are 28 and 19 s, respectively, measured as the time taken to reach the steady-state current after applying a steady voltage of 250 mV to a 100 mg/dL cholesterol solution in pH 7.0 PBS containing 5 mM [Fe(CN)6]3−/4−.

Figure 6 Calibration curves of (a) ChOx/NSPANI-AuNP-GR/ITO and (b) ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrodes.
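A minimal sketch of this calibration workflow is shown below; it fits the straight line by least squares and inverts it to read an unknown sample. The data points are synthetic values generated from the reported bienzyme regression, so the sketch is illustrative only and not the authors' analysis code.

```python
import numpy as np

# Illustrative sketch (not the authors' code): fit the linear DPV calibration
# dI = a + b * C from (concentration, response) pairs, then invert it to read
# an unknown sample. The points below are synthetic, generated from the
# reported bienzyme calibration dI (mA) = 0.36 + 0.0042 * C (mg/dL).

conc = np.array([35, 100, 200, 300, 400, 500], dtype=float)   # mg/dL
d_i = 0.36 + 0.0042 * conc                                    # mA (synthetic)

b, a = np.polyfit(conc, d_i, 1)    # slope (mA per mg/dL) and intercept (mA)
print(f"calibration: dI = {a:.3f} + {b:.4f} * C")

def read_sample(delta_i_ma: float) -> float:
    """Invert the fitted calibration to estimate cholesterol in mg/dL."""
    return (delta_i_ma - a) / b

print(read_sample(1.20))   # -> 200 mg/dL, since 0.36 + 0.0042 * 200 = 1.20
```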
The enzyme-substrate kinetics parameter (Michaelis-Menten constant, Km), estimated from the Lineweaver-Burk plot, reflects the affinity of the enzyme for the analyte. Km depends both on the matrix and on the method of enzyme immobilization, which often induces conformational changes in the enzymes and hence different Km values. Km was determined from the slope and intercept of the plot of the reciprocal of the change in current versus the reciprocal of the cholesterol concentration, that is, the Lineweaver-Burk plot of 1/ΔI versus 1/C. The apparent Km values estimated in this way for ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO are 0.02 mM and 0.01 mM, respectively. The lower Km of the bienzyme system indicates a higher affinity for cholesterol, attributed to the immobilization of ChOx-HRP onto NSPANI-AuNP-GR/ITO enabling a faster biochemical reaction, and can be assigned to the uniform distribution of enzyme molecules on the nanocomposite film surface. The overall biochemical reaction at ChOx-HRP/NSPANI-AuNP-GR/ITO is given by (1)–(5) and Scheme 1:

(1) Cholesterol + O2 → cholest-4-en-3-one + H2O2
(2) H2O2 + HRP(Fe3+) → HRP-I(Fe4+) + H2O
(3) HRP-I(Fe4+) + [Fe(CN)6]4− → HRP-II(Fe4+) + [Fe(CN)6]3−
(4) HRP-II(Fe4+) + [Fe(CN)6]4− → HRP(Fe3+) + [Fe(CN)6]3−
(5) [Fe(CN)6]3− + e− → [Fe(CN)6]4− (at the electrode surface)

Scheme 1 Proposed biochemical reaction on the ChOx-HRP/NSPANI-AuNP-GR electrode.

### 3.3. Photometric Response Studies

The response characteristics of the ChOx-HRP/NSPANI-AuNP-GR/ITO and ChOx/NSPANI-AuNP-GR/ITO bioelectrodes were studied as a function of cholesterol concentration (Figure 7); the absorbance arising from the oxidized form of the dye increases linearly with cholesterol concentration for both bioelectrodes. The ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrode, in the range 35–400 mg/dL of cholesterol, follows the equation: change in absorbance = 0.022 + 0.00016 × [cholesterol] (mg/dL), with a standard deviation of 0.003, whereas the ChOx/NSPANI-AuNP-GR/ITO bioelectrode, in the range 35–350 mg/dL, follows: change in absorbance = 0.002 + 0.000088 × [cholesterol] (mg/dL), with a standard deviation of 0.0025. The apparent Michaelis-Menten constant (Km) was again estimated from the Lineweaver-Burk plot, that is, the graph of the inverse of absorbance against the inverse of cholesterol concentration. The lower Km (0.012 mM) of the ChOx-HRP/NSPANI-AuNP-GR/ITO biosensor compared with the ChOx/NSPANI-AuNP-GR/ITO biosensor (0.023 mM) suggests that the NSPANI-AuNP-GR matrix facilitates the enzymatic reaction.

Figure 7 Photometric response of (a) ChOx/NSPANI-AuNP-GR/ITO and (b) ChOx-HRP/NSPANI-AuNP-GR/ITO nanobioelectrodes as a function of cholesterol concentration.
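The Lineweaver-Burk estimate described above amounts to a linear regression in reciprocal coordinates: since 1/v = (Km/Vmax)(1/C) + 1/Vmax, the apparent Km follows from the fitted slope and intercept. The sketch below illustrates this on synthetic Michaelis-Menten data with Km = 0.01 mM; the data are invented for demonstration, and it is not the authors' code.

```python
import numpy as np

# Illustrative Lineweaver-Burk estimate of the apparent Km (not the authors'
# code): regress 1/response against 1/concentration; the fitted slope equals
# Km/Vmax and the intercept equals 1/Vmax, so Km = slope / intercept.
# The points are synthetic, drawn from a Michaelis-Menten curve.

km_true, v_max = 0.01, 1.0                      # mM, arbitrary response units
c = np.array([0.005, 0.01, 0.02, 0.05, 0.1])    # substrate concentration, mM
v = v_max * c / (km_true + c)                   # Michaelis-Menten response

slope, intercept = np.polyfit(1.0 / c, 1.0 / v, 1)
km_est = slope / intercept
print(f"apparent Km ~ {km_est:.3f} mM, Vmax ~ {1 / intercept:.2f}")
```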
### 3.4. Studies of pH, Interference, Reusability, and Shelf Life of the Biosensors

(a) pH Studies. The response currents of the ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO nanobioelectrodes, studied over the pH range 6.0–7.8 (data not shown), show that both bioelectrodes exhibit maximum activity at around pH 7.0. At this pH the biomolecules retain their native structures and are not denatured. Thus, all cholesterol estimation experiments were conducted at the optimum pH of 7.0.

(b) Interference Studies. Interferents commonly present in blood, namely ascorbic acid (0.05 mM), glucose (5 mM), uric acid (0.1 mM), sodium ascorbate (0.05 mM), and urea (1 mM), were tested by DPV for both bioelectrodes (ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO) using cholesterol solution (100 mg/dL) mixed with each interferent in a 1:1 ratio. Figure 8 shows the effect of the interferents on the response of the two bioelectrodes: the first bar (cholesterol) shows the current obtained with 100 mg/dL cholesterol alone, and the remaining bars show the currents for the 1:1 mixtures of cholesterol and interferent. The percentage interference was calculated for each interferent using

(6) % interference = [(ΔIchol − ΔIinter)/ΔIchol] × 100,

where ΔIchol is the change in current obtained with 100 mg/dL cholesterol and ΔIinter is the change in current for the 1:1 mixture of cholesterol and interferent. A maximum interference of 6% is observed for the ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrode and 9% for the ChOx/NSPANI-AuNP-GR/ITO bioelectrode.

Figure 8 Interference study of (a) ChOx/NSPANI-AuNP-GR/ITO and (b) ChOx-HRP/NSPANI-AuNP-GR/ITO nanobioelectrodes.
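Equation (6) reduces to a one-line computation; the sketch below applies it to placeholder responses. The numerical values are invented for illustration; only the formula follows the text.

```python
# Minimal sketch of the interference figure of merit from (6): percentage
# change in response when an interferent is mixed 1:1 with 100 mg/dL
# cholesterol. The response values below are illustrative placeholders.

def percent_interference(di_chol: float, di_mix: float) -> float:
    """(6): % interference = (dI_cholesterol - dI_mixture) / dI_cholesterol * 100."""
    return (di_chol - di_mix) / di_chol * 100.0

di_cholesterol = 0.78                                            # mA, alone
mixtures = {"glucose": 0.76, "uric acid": 0.74, "urea": 0.77}    # mA, 1:1 mixes

for name, di_mix in mixtures.items():
    print(f"{name}: {percent_interference(di_cholesterol, di_mix):.1f}% interference")
```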
(c) Reusability Studies. A unique feature of both types of bioelectrode is their reusability (Figure 9), which is attributed to the composition of the transducer matrix. The ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrodes can be reused a number of times with 100% efficiency. Figure 9 shows the response of the same bioelectrode over 15 repeated tests with 100 mg/dL cholesterol in PBS (50 mM, 0.9% NaCl, 5 mM [Fe(CN)6]3−/4−) at room temperature (25°C). The reusability can be attributed to the robustness of the transducer matrix: the NSPANI-AuNP-GR matrix offers a favourable microenvironment that does not denature the enzymes, and the enhanced enzyme stability reflects the unique electrochemical properties and biocompatibility of the NSPANI-AuNP-GR/ITO electrode.

Figure 9 DPV curves for reusability testing (current versus potential plots with 100 mg/dL analyte; 8 repetitions shown): (a) ChOx/NSPANI-AuNP-GR/ITO and (b) ChOx-HRP/NSPANI-AuNP-GR/ITO nanobioelectrodes.

(d) Shelf Life Studies. The shelf lives of the ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrodes were determined by measuring the response current at regular intervals of one week for about two months (Figure 10); the bioelectrodes were stored at 4°C when not in use. The bioelectrodes were found to be stable for up to 12 weeks without any loss in activity.

Figure 10 Shelf life results for (a) ChOx/NSPANI-AuNP-GR/ITO and (b) ChOx-HRP/NSPANI-AuNP-GR/ITO nanobioelectrodes.

### 3.5. Comparative Evaluation of the Mono- and Bienzymatic Biosensors

Table 1 presents a comparative evaluation of the mono- and bienzymatic biosensor performance. The bienzymatic ChOx-HRP/NSPANI-AuNP-GR/ITO electrode exhibits better performance in terms of linearity, shelf life, response time, and sensitivity than the monoenzyme-based ChOx/NSPANI-AuNP-GR/ITO electrode. The immobilization of ChOx together with horseradish peroxidase (HRP) is thought either to help the protein assume a favourable orientation or to create conducting channels between the prosthetic groups and the electrode surface; both effects reduce the effective electron transfer distance and thereby facilitate charge transfer between the electrode and the enzyme [49].

Table 1 A comparative evaluation of single- and bienzymatic biosensor performance.

| S. no. | Characteristic | ChOx/NSPANI-AuNP-GR/ITO | ChOx-HRP/NSPANI-AuNP-GR/ITO |
|---|---|---|---|
| 1 | Linearity | 35–400 mg/dL | 35–500 mg/dL |
| 2 | Detection limit | 35 mg/dL | 25 mg/dL |
| 3 | Response time | 28 s | 19 s |
| 4 | Sensitivity | 3.10 μA mg dL−1 | 4.22 μA mg dL−1 |
| 5 | Km | 0.02 mM | 0.01 mM |
| 6 | Shelf life | 8 weeks | 8 weeks |

Table 2 compares the characteristics of the ChOx-HRP/NSPANI-AuNP-GR/ITO nanobioelectrode with those reported in the literature for ChOx-HRP and related systems. It is evident that the bienzymatic ChOx-HRP/NSPANI-AuNP-GR/ITO electrode offers unique characteristics with respect to reusability, shelf life, and its very low Km value of 0.01 mM.

Table 2 Characteristics of the ChOx-HRP/NSPANI-AuNP-GR/ITO nanobioelectrode alongside those reported in the literature.

| S. no. | Biosensor components | Characteristics | Reference |
|---|---|---|---|
| 1 | (Mat) ChOx-HRP/NSPANI-AuNP-GR/ITO; (E) ChOx-HRP; (M) ampero. versus Ag/AgCl | (L) up to 500 mg/dL; (S) 4.22 μA mg dL−1; (Km) 0.01 mM; (DL) 25 mg/dL; (RT) 19 s; (SL) 2 months | Present investigation |
| 2 | (Mat) ChOx/f-G/GC, ChOx/Au/f-G/GC; (E) ChOx; (M) ampero. versus Ag/AgCl | (L) up to 135 μM; (S) 314 nA/μM cm2; (SL) 1 month | [44] |
| 3 | (Mat) ChOx/NSPANI-SDS; (E) ChOx; (M) photometric | (L) 0.5–10.5 mM; (S) 9 mM; (Km) 1.32 mM; (RT) 59 s; (SL) 5 weeks | [49] |
| 4 | (Mat) GR-Pt nanoparticle hybrid material; (E) ChOx, ChEt; (M) ampero. versus Ag/AgCl | (L) up to 12 mM; (S) 2.07 ± 0.1 μA/μM/cm2; (Km) 5 mM; (DL) 0.2 μM | [44] |
| 5 | (Mat) GOx-HRP/MWCNT/PPY/ITO; (E) GOx, HRP; (M) ampero. versus Ag/AgCl | (L) 1–10 mM; (S) 13.8 mA/μM; (Km) 0.52 mM; (DL) 0.1 mM; (RT) 10 s; (SL) 5 weeks | [50] |
| 6 | (Mat) ChOx/NanoFe3O4/ITO; (E) ChOx; (M) ampero. versus Ag/AgCl | (L) 2.5–400 mg/dL; (S) 86 Ω/mg/dL/cm2; (Km) 0.8 mg/dL; (DL) 0.25 mg/dL; (RT) 25 s; (SL) 55 days | [51] |
| 7 | (Mat) ChEt-ChOx/MWCNT/SiO2-CHIT/ITO; (E) ChEt-ChOx; (M) ampero. versus Ag/AgCl | (L) 10–500 mg/dL; (S) 2.12 μA/mM; (Km) 0.052 mM; (DL) 0.1 mM; (RT) 10 s; (SL) 10 weeks | [52] |
| 8 | (Mat) ChOx/PANI-NS/ITO; (E) ChOx; (M) ampero. versus Ag/AgCl | (L) 25–500 mg/dL; (S) 1.3 × 10−3 mA mg−1 dL; (Km) 2.5 mM; (RT) 10 s; (SL) 12 weeks | [53] |

(Mat): material; (E): enzyme; (M): method; (DL): detection limit; (L): linearity; (SL): shelf life; (RT): response time; (S): sensitivity; (Km): Michaelis-Menten constant; (f-G): functionalized graphene nanoplatelets; (Au/f-G): gold-nanoparticle-decorated f-G; (NSPANI): nanostructured polyaniline; (SDS): sodium dodecyl sulphate; (MWCNT/PPY/ITO): carboxy-modified multiwalled carbon nanotube (MWCNT) and polypyrrole (PPY) nanocomposite film; (NanoFe3O4): nanostructured iron oxide; (SiO2): silica; (CHIT): chitosan; (PANI-NS): polyaniline nanospheres.

### 3.6. Blood Serum Testing

The response of the ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrodes to cholesterol in human blood serum was investigated by amperometric and photometric studies, and the results were compared.
Five serum samples obtained from a pathological laboratory were analyzed; Table 3 summarizes the results obtained with the ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO biosensors. Both bioelectrodes perform well in evaluating cholesterol in blood serum samples, which may be due to the high electrocatalytic activity of the NSPANI-AuNP-GR/ITO nanocomposite electrode. The amperometric determinations of free cholesterol in blood serum were compared with the photometric results, the latter being taken as the standard values of free blood cholesterol. The amperometric and photometric results for ChOx-HRP/NSPANI-AuNP-GR/ITO are very close to each other, with minimal error, while ChOx/NSPANI-AuNP-GR/ITO shows a comparatively higher deviation.

Table 3 Results for blood serum samples using the ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO biosensors (cholesterol values in mg/dL).

| Sample no. | ChOx (amperometric) | ChOx (photometric) | Error (%) | ChOx-HRP (amperometric) | ChOx-HRP (photometric) | Error (%) |
|---|---|---|---|---|---|---|
| 1 | 225 | 218 | 3 | 230 | 226 | 1.7 |
| 2 | 158 | 149 | 6 | 162 | 160 | 1.2 |
| 3 | 182 | 178 | 2 | 186 | 183 | 1.6 |
| 4 | 306 | 298 | 3 | 311 | 307 | 1.3 |
| 5 | 76 | 69 | 9 | 80 | 78 | 2.5 |

(ChOx = ChOx/NSPANI-AuNP-GR/ITO; ChOx-HRP = ChOx-HRP/NSPANI-AuNP-GR/ITO.)
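The Error (%) column of Table 3 can be reproduced as the relative deviation between the two readings; dividing by the amperometric value matches the tabulated figures. The sketch below uses the bienzyme rows of Table 3; the function name is ours.

```python
# Short sketch reproducing the Error (%) column of Table 3 as the relative
# deviation between the amperometric and photometric readings. Dividing by
# the amperometric value reproduces the tabulated figures. Data below are
# the bienzyme (ChOx-HRP) results from Table 3, in mg/dL.

samples = [(230, 226), (162, 160), (186, 183), (311, 307), (80, 78)]

for i, (ampero, photo) in enumerate(samples, start=1):
    error_pct = abs(ampero - photo) / ampero * 100.0
    print(f"sample {i}: error = {error_pct:.1f}%")   # 1.7, 1.2, 1.6, 1.3, 2.5
```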
## 4. Conclusion

Gold-nanoparticle-decorated graphene-nanostructured polyaniline nanobioelectrodes have been fabricated by electrodeposition from the NSPANI-AuNP-GR nanodispersion, synthesized via in situ polymerization, for the development of a reusable cholesterol biosensor.
Both the single-enzyme (ChOx) and bienzyme (ChOx-HRP) biosensors were prepared by covalent coupling through glutaraldehyde. The bienzyme-based nanocomposite bioelectrode (ChOx-HRP/NSPANI-AuNP-GR/ITO) offers better performance in terms of detection limit, sensitivity, and response time than the single-enzyme system, which is attributed to HRP acting alongside ChOx to enhance the overall biochemical reaction. It has been shown that this nanocomposite bioelectrode can be used to estimate cholesterol in blood serum samples. The unique features of the ChOx-HRP/NSPANI-AuNP-GR/ITO nanocomposite bioelectrode lie in the novel fabrication method, minimal interference, very low Km value, short response time, excellent reusability, and applicability to blood serum samples. The large specific surface area, excellent conductivity, and stable, reliable redox properties of the NSPANI-AuNP-GR nanocomposite film allow rapid electron transfer and enhance the current response of the immobilized enzymes. It would be interesting to utilize these nanocomposite electrodes for the development of other biosensors.

---
*Source: 102543-2012-11-14.xml*
102543-2012-11-14_102543-2012-11-14.md
64,153
Gold-Nanoparticle Decorated Graphene-Nanostructured Polyaniline Nanocomposite-Based Bienzymatic Platform for Cholesterol Sensing
Deepshikha Saini; Ruchika Chauhan; Pratima R. Solanki; T. Basu
ISRN Nanotechnology (2012)
Engineering & Technology
International Scholarly Research Network
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.5402/2012/102543
102543-2012-11-14.xml
--- ## Abstract A novel nanobiocomposite bienzymatic amperometric cholesterol biosensor, coupled with cholesterol oxidase (ChOx) and horseradish peroxidase (HRP), was developed based on the gold-nanoparticle decorated graphene-nanostructured polyaniline nanocomposite (NSPANI-AuNP-GR) film which was electrochemically deposited onto indium-tin-oxide (ITO) electrode from the nanocomposite (NSPANI-AuNP-GR) dispersion, as synthesized by in situ polymerization technique. The gold nanoparticle-decorated graphene-nanostructured polyaniline nanocomposite (NSPANI-AuNP-GR) offers an efficient electron transfer between underlining electrode and enzyme active center. The bienzymatic nanocomposite bioelectrodes ChOx-HRP/NSPANI-AuNP-GR/ITO have exhibited higher sensitivity, linearity, and lowerKm value than monoenzymatic bioelectrode (ChOx/NSPANI-AuNP-GR/ITO). It is inferred that bienzyme-based nanobioelectrodes offer wider linearity (35 to 500 mg/dL), higher sensitivity (0.42 μAmM−1), low km value of 0.01 mM and higher accuracy for testing of blood serum samples than monoenzyme system. Mechanism of the overall biochemical reaction has been proposed to illustrate the enhanced biosensing performance of the bienzyme system. The novelty of the electrode lies on reusability, extended shelf life, and accuracy of testing blood serum samples. --- ## Body ## 1. Introduction Graphene, one of the most exciting nanostructures of carbon, is a two-dimensional honeycomb crystalline single layer of carbon lattice [1–3]. Recently, it has received enormous interest in various areas of research, such as biosensors, bioelectronics, energy storage and conversion, drug delivery [4–7], molecular resolution sensors [8–10], ultrafast electronic devices, [11], and electromechanical resonators [5], owing to its large specific surface area, extraordinary electrical and thermal conductivities [11, 12], high mechanical stiffness [13], good biocompatibility [14], and low manufacturing cost [15]. The high electrical and thermal conductivities of graphene originate from the extended long-range π-conjugation.Out of all conducting polymers, polyaniline (PANI) is one of the promising matrixes for biosensor applications due to its simple and reversible acid/base doping/dedoping chemistry enabling control over properties such as free volume [16], solubility [17], electrical conductivity [18], and optical activity [19]. In recent years, nanostructured polyaniline (NSPANI) has aroused much scientific interest since it combines the properties of low-dimensional organic conductors and high-surface-area materials and offers the possibility of enhanced performance wherever a large interfacial area between NSPANI and its environment is required. Various strategies are employed for the synthesis of nanostructured polyaniline [20–22].Noble metal nanoparticles are known to be excellent catalysts, due to their high ratio of surface atoms with free valences to the cluster of total atoms. As well known, gold nanoparticles (AuNP) have many unique properties such as high surface free energy, strong adsorption ability, well suitability, and good conductivity [23, 24]. Besides, it can provide more binding sites and more congenial microenvironment for biomolecules immobilization to retain the bioactivity of the proteins, which can prolong the life time of biosensor [25, 26]. 
Nanocomposites based on metal nanoparticles and exfoliated graphene nanosheet (GR) with synergistic effect have exhibited particular promise in biosensing characteristics as they can play very interesting role such as (1) a biocompatible enzyme friendly platform, (2) fast electrocatalytic oxidation or reduction of the product generated during biochemical recognition process at the electrode surface to reduce overvoltage and avoid interference from other coexisting electroactive species, and (3) an enhanced signal because of its fast electron transfer and large working surface area. Lu et al. have reported highly sensitive and selective amperometric glucose biosensor using exfoliated graphite nanoplatelets decorated with Pt and Pd nanoparticles [27]. Beside that, graphene-based nanocomposites have been exploited to fabricate alcohol dehydrogenase biosensors for glucose, alcohol, and so forth [28–33]. Biosensors, based on graphene-encapsulated nanoparticle arrays, for highly sensitive and selective detection of breast cancer biomarkers are successfully demonstrated. The increased surface-to-volume ratio significantly has helped in lowering the detection limits (1 pM) for the target biomarkers [34]. A glucose electrochemical biosensor has been reported based on zinc oxide nanoparticles (ZnO NPs) doped in graphene (GR) nanosheets. The results show that the linear response range of the biosensor lies between 0.1 to 20 μM and the detection limit has been calculated as 0.02 μM at a signal-to-noise ratio of 3 [35].Cholesterol and its ester are essential constituents of all animal cells, and it is present in brain and nerve tissues. The level of cholesterol in serum is an important parameter in the diagnosis and prevention of heart diseases. The development of electrochemical biosensor received significant interest for precise and smart determination of cholesterol in serum and food sample [36]. However, the so far developed cholesterol biosensors suffer from reliability, poor shelf life, and low sensitivity. Therefore, it is needed to pay attention to the above challenges in order to fabricate a reliable cholesterol biosensor for clinical diagnosis. There are two key factors in the fabrication of a biosensor, firstly enzyme system and secondly transducer matrix to monitor biosensor performance. In amperometric biosensor, cholesterol quantification is usually performed by measurement of the current associated with the oxidation of hydrogen peroxide. One of the major drawbacks of electrochemical biosensor is the overpotential necessary for the oxidation of H2O2 which can cause interferences from other oxidizable species such as ascorbic acid (AA), uric acid (UA), and acetaminophen (AAP). To avoid interferences, some improved biosensors based on the coupled enzyme reactions have been reported to detect hydrogen peroxide at low potential [37, 38]. In such cases, the primary product that is produced by the reaction of the analyte with the first enzyme is further converted by a second enzyme to produce products detectable by a transducer [39]. Coupled enzyme reactions are also employed to filter out chemical signals by eliminating the interference on the enzyme [40, 41].In the previous communication, the authors have reported an in situ preparation and characterization of novel nanocomposite NSPANI-AuNP-GR dispersion based on NSPANI, graphene nanosheet and gold nanoparticles (NSPANI-AuNP-GR) with high electrocatalytic activity [42]. 
It has been observed out that the NSPANI-AuNP-GR nanocomposite dispersion can be successfully electrodeposited on the ITO surface. In the paper, attempt has been made to develop a reliable and reusable amperometric bienzymatic cholesterol biosensor based on nanocomposite film on ITO electrode for estimation of free cholesterol. In order to achieve the commercial viability, the developed electrochemical biosensor performance has been compared with photometric technique and tested on blood serum sample of various pathological labs. The novelty of the sensor lies on the method of fabrication of transducer matrix coupled with enzyme system, reusability, reliability, extended shelf life, sensitivity, and successful application to blood serum testing. ## 2. Experimental ### 2.1. Materials Few-layered graphene (Quantum Materials Corporation, Bangalore), aniline (Sigma-Aldrich), sodium dodecyl sulphate (SDS) (Qualigen),ammonium persulfate (NH4)2S2O8 (E-Merck), hydrochloric acid (Qualigen), and chloroauric acid HClO4 (Sigma-Aldrich) were used in the present experiment. Cholesterol oxidase (ChOx; EC 1.1.36, from Pseudomonas fluorescens) with specific activity of 24 U/mg and horseradish peroxidase (HRP, E.C1.11.1.7, 250 U/mg, from Horseradish) were purchased from Sigma, potassium ferricyanide (K3[Fe(CN)6]), potassium ferrocyanide (K4[Fe(CN)6]), sodium dihydrogen orthophosphate (NaH2PO4), and disodium hydrogen orthophosphate (Na2HPO4) were purchased from Qualigens (India). Deionized water from a Millipore-MilliQ was used in all cases to prepare aqueous solutions. Monomer was double distilled before polymerization. ### 2.2. Characterization and Measurement Fourier transform infrared spectroscopic (FTIR) measurements were performed with a Perkin-Elmer FTIR spectrophotometer. Morphological imaging of the fabricated electrodes was obtained by scanning electron microscope (LEO 440 Model), and atomic force microscopy (AFM) was performed by Park Systems XE-70 Atomic Force Microscope in noncontact mode. Cyclic voltammetry and differential pulse voltammetry (DPV) measurements were conducted in phosphate buffer (50 mM, 0.9% NaCl) containing 5 mM [Fe(CN)6]3−/4− in a three-electrode cell consisting of Ag/AgCl as reference, platinum (Pt) as counter electrode and ITO as a working electrode (0.25 cm2) using Autolab Potentiostat/Galvanostat Model AUT83945 (PGSTAT302N). #### 2.2.1. Synthesis of Gold-Nanoparticle-Decorated Graphene-Nanostructured Polyaniline Nanocomposite (NSPANI-AuNP-GR) Synthesis of NSPANI-AuNP-GR Nanodispersion In a typical synthesis, graphene was first dispersed into a dilute aqueous solution of sodium dodecyl sulphate (SDS) (0.02 M). The aniline solution in the dopant (0.02 M) was added to an aqueous solution of SDA under stirring condition. The mixture was then placed in the low temperature bath, so that the temperature was maintained at 0° to 5°C. 70μL aqueous solution of 0.05 M HAuCl4 was added into aqueous dispersion. An aqueous solution of the oxidizing agent, (NH4)2S2O8, in ice-cold water was added to the above mixture. The polymerization was allowed to proceed for 3 to 4 h with stirring. After that the stirring was stopped and the mixture was kept under static condition for 1–3 days at 277–278°K for polymerization to complete. Thus, NSPANI-AuNP-GR nanodispersion was prepared by in situ polymerization as characterized in our previous communication [42]. ### 2.3. 
The NSPANI-AuNP-GR nanocomposite film was electrochemically deposited from the as-synthesized NSPANI-AuNP-GR nanocomposite dispersion onto ITO-coated glass plates by sweeping the potential from −200 mV to +1000 mV (versus Ag/AgCl) at a scan rate of 40 mV/s in a three-electrode cell consisting of Ag/AgCl as reference, platinum (Pt) as counter electrode, and ITO as working electrode (0.25 cm2). The electrodeposition curves of NSPANI-AuNP-GR/ITO exhibit the characteristic electrochemistry of NSPANI [33], with the main peaks a and b corresponding to the transformation of leucoemeraldine base (LB) to emeraldine salt (ES) and of ES to pernigraniline salt (PS), respectively. On the reverse scan, peaks b′ and a′ correspond to the conversion of PS to ES and of ES to LB, respectively. A small redox peak around +350 mV (c and c′) is associated with the formation of p-benzoquinone and hydroquinone as side products upon cycling the potential to +1000 mV. The increase in current density with successive scans suggests that the polymer film builds up on the electrode surface. The inset of Figure 1 shows the plot of the maximum anodic peak current versus the number of cycles. The maximum peak current was observed at 28 cycles, indicating continuous film deposition up to that point; the current density reached 0.261 mA cm−2 at 28 cycles during electrodeposition of the nanocomposite dispersion. Shifts in the peak potentials began to occur after a number of cycles, which may result from the increased resistance of the electrode as the deposited film becomes thicker. On further increasing the number of cycles, the anodic peak current decreases, which is ascribed to degradation of the polymer film. In the present study, 28 cycles were therefore used for film deposition for the biosensor application.

Figure 1: Electrodeposition of NSPANI-AuNP-GR nanocomposite dispersion on the ITO electrode. Inset: plot of peak current against number of cycles.

#### 2.3.1. Fabrication of ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR Nanobioelectrodes

The NSPANI-AuNP-GR/ITO electrode was treated with 10 μL of aqueous glutaraldehyde (0.1%) as a cross-linker. To fabricate the ChOx/NSPANI-AuNP-GR/ITO nanobioelectrode, 10 μL of freshly prepared ChOx (1 mg/mL) was uniformly spread onto the glutaraldehyde-treated NSPANI-AuNP-GR/ITO electrode, which was then kept in a humid chamber for 12 h at 4°C. To prepare the ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrode, 10 μL of a freshly prepared 1:1 solution of HRP (1 mg/mL) and ChOx (1 mg/mL) was uniformly spread onto the glutaraldehyde-treated electrode and likewise kept in a humid chamber for 12 h at 4°C. The bioelectrodes were immersed in 5 mM phosphate buffer solution (pH 7.0) to wash out unbound enzyme from the electrode surface. When not in use, the electrodes were stored at 4°C in a refrigerator.

### 2.4. Preparation of Solutions

A stock solution of cholesterol was prepared in deionized water containing 10% Triton X-100 and stored at 4°C. This stock solution was further diluted to obtain cholesterol solutions of different concentrations. An o-dianisidine solution (1%) was prepared freshly in deionized water. Buffers of various pH values were prepared by dissolving different ratios of sodium dihydrogen orthophosphate (NaH2PO4) and disodium hydrogen orthophosphate (Na2HPO4) in Millipore water.
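As an aside, the calibration standards described in Section 2.4 are obtained by serial dilution of the Triton X-100 cholesterol stock. The following minimal Python sketch illustrates the C1V1 = C2V2 arithmetic; the 500 mg/dL stock concentration and the 5 mL final volume are illustrative assumptions, not values from the protocol:

```python
# Hypothetical dilution calculator for a cholesterol calibration series.
# Stock concentration and final volume are assumed, not taken from the paper.
STOCK_MG_DL = 500.0   # assumed cholesterol stock concentration (mg/dL)
FINAL_ML = 5.0        # assumed final volume of each standard (mL)

def stock_volume_ml(target_mg_dl: float) -> float:
    """Volume of stock needed so that C1*V1 = C2*V2."""
    if not 0 < target_mg_dl <= STOCK_MG_DL:
        raise ValueError("target must be positive and not exceed the stock")
    return target_mg_dl * FINAL_ML / STOCK_MG_DL

for c in (35, 100, 200, 350, 500):  # mg/dL, spanning the reported linear range
    v = stock_volume_ml(c)
    print(f"{c:>3} mg/dL standard: {v:.2f} mL stock + {FINAL_ML - v:.2f} mL diluent")
```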
### 2.5. Photometric Studies

Photometric measurements were conducted using a UV-visible spectrophotometer. Photometric experiments were carried out with cholesterol solutions in PBS buffer (50 mM, 0.9% NaCl, pH 7.4). To carry out the photometric enzymatic assay of the immobilized enzymes, the ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrodes were dipped in 3 mL of PBS solution containing 20 μL of HRP (1 mg dL−1), 20 μL of o-dianisidine dye, and 100 μL of cholesterol. The difference between the initial and final absorbance values at 500 nm after 3 min of incubation with cholesterol was recorded and plotted.
## 3. Discussion

### 3.1. Characterization of NSPANI-AuNP-GR/ITO, ChOx/NSPANI-AuNP-GR/ITO, and ChOx-HRP/NSPANI-AuNP-GR Electrodes

(a) FT-IR Study. Figure 2 shows the FT-IR absorption spectra of the NSPANI-AuNP-GR/ITO (curve a), ChOx/NSPANI-AuNP-GR/ITO (curve b), and ChOx-HRP/NSPANI-AuNP-GR (curve c) electrodes. The FT-IR spectrum of the electrochemically deposited NSPANI-AuNP-GR/ITO nanocomposite (curve a) shows benzenoid and quinoid ring stretching bands (C=C) at 1447.6 cm−1 and 1560 cm−1. A peak at 3123 cm−1 is attributed to –N–H stretching vibrations of NSPANI in the composite [43]. A peak at 1534 cm−1 due to the skeletal vibration of the graphene nanosheet is also observed [44] in the FT-IR spectrum of NSPANI-AuNP-GR/ITO (curve a). Apart from the above-mentioned functional groups, a peak appears at 655 cm−1 which may correspond to the stretching vibration of Au–O–Au [44]. The presence of these peaks confirms the existence of NSPANI, graphene nanosheets, and AuNPs on the ITO electrode. In the FT-IR spectra of the ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR bioelectrodes (Figures 2(b) and 2(c)), enzyme binding is indicated by the appearance of additional absorption bands at 1630 and 1524 cm−1, assigned to the carbonyl stretch (amide I band) and N–H bending (amide II band), respectively [45].
Also, a broad band around 3560 cm−1 is attributed to the amide bonds present in ChOx [46].

Figure 2: FTIR spectra of (a) NSPANI-AuNP-GR/ITO (red), (b) ChOx/NSPANI-AuNP-GR/ITO (green), and (c) ChOx-HRP/NSPANI-AuNP-GR (blue) electrodes.

(b) SEM Study. SEM images of NSPANI-AuNP-GR/ITO (Figure 3(a)), ChOx/NSPANI-AuNP-GR/ITO (Figure 3(b)), and ChOx-HRP/NSPANI-AuNP-GR (Figure 3(c)) are shown in Figure 3. The electrodeposition of the NSPANI-AuNP-GR matrix on the ITO electrode is confirmed by the homogeneous rough surface (Figure 3(a)). The SEM image shows NSPANI deposited on few-layered graphene nanosheets, which provide a large surface area for the incorporation of metal nanoparticles, and reveals the uniform loading of AuNPs over the NSPANI-GR matrix [44]. The nanoscale surface roughness of the NSPANI-AuNP-GR nanocomposite film is suitable for the immobilization of biomolecules. From Figures 3(b) and 3(c), it is found that the enzymes are uniformly distributed on the electrode surfaces: the surface morphology of ChOx/NSPANI-AuNP-GR/ITO (Figure 3(b)) and ChOx-HRP/NSPANI-AuNP-GR (Figure 3(c)) shows full coverage of the surface by the single- and bienzyme bioconjugates. The globular structures can be attributed to the covalently bound enzyme molecules, since most proteins and enzymes possess globular structures [47, 48].

Figure 3: SEM images of (a) NSPANI-AuNP-GR/ITO, (b) ChOx/NSPANI-AuNP-GR/ITO, and (c) ChOx-HRP/NSPANI-AuNP-GR electrodes.

(c) AFM Study. AFM was employed to establish the thickness, surface morphology, and surface roughness of the NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO electrodes. The two-dimensional (2D) and three-dimensional (3D) AFM images (Figure 4(a)) reveal that the NSPANI-AuNP-GR/ITO film (2 × 2 μm scan) shows a nanoporous morphology with a root-mean-square (RMS) roughness of about 29.3 nm, although the spherical shape of the particles appears partially distorted. The size of the spherical nanoparticles varies from 25 to 50 nm, with an average particle size of 35 nm. After the immobilization of ChOx-HRP, however, the surface morphology of the NSPANI-AuNP-GR/ITO film becomes smooth, the average particle size increases to 100 nm, and the roughness decreases to 6.1 nm, revealing that ChOx-HRP is adsorbed onto NSPANI-AuNP-GR/ITO (Figure 4(b)) via electrostatic interactions. The AFM image of the ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrode (2 × 2 μm scan) exhibits a well-arranged, uniform surface, indicating that the NSPANI-AuNP-GR/ITO film provides a desirable microenvironment for strong adsorption of ChOx-HRP in a particular orientation, wherein it retains a favorable configuration with more accessible active sites.

Figure 4: 2D and 3D AFM images of (a) NSPANI-AuNP-GR/ITO and (b) ChOx-HRP/NSPANI-AuNP-GR electrodes.

(d) DPV Study. DPV experiments were conducted in phosphate buffer (50 mM, pH 7.0) containing 5 mM [Fe(CN)6]3−/4− in the range −0.4 to 1.2 V (Figure 5). The high maximum anodic peak current of 1.63 × 10−4 A obtained for the NSPANI-AuNP-GR/ITO electrode (curve a) reflects the highly conducting nature of the electrode and enhanced electron transfer towards it.
The magnitude of the peak current decreases to 1.32 × 10−4 A (curve b) and 1.01 × 10−4 A (curve c) for the ChOx-HRP/NSPANI-AuNP-GR/ITO and ChOx/NSPANI-AuNP-GR/ITO bioelectrodes, respectively, indicating a slower redox process at the nanobioelectrodes due to the insulating character of the enzymes and confirming the immobilization of ChOx and ChOx-HRP on the NSPANI-AuNP-GR/ITO electrode. The peak current of ChOx-HRP/NSPANI-AuNP-GR/ITO (1.32 × 10−4 A, curve b) is higher than that of ChOx/NSPANI-AuNP-GR/ITO (1.01 × 10−4 A, curve c); this enhanced peak current indicates a more active bioelectrode surface and an increased rate of electron transfer.

Figure 5: Differential pulse voltammetry of (a) NSPANI-AuNP-GR/ITO, (b) ChOx-HRP/NSPANI-AuNP-GR/ITO, and (c) ChOx/NSPANI-AuNP-GR/ITO electrodes.

### 3.2. Electrochemical Response Studies

The DPV curves of the ChOx/NSPANI-AuNP-GR/ITO (Figure 6(a)) and ChOx-HRP/NSPANI-AuNP-GR/ITO (Figure 6(b)) bioelectrodes were recorded in the range −0.4 to 1.2 V in phosphate buffer of pH 7.0 containing 5 mM [Fe(CN)6]3−/4− and cholesterol solutions of various concentrations (Figure 6). The change in current (ΔI) is plotted against the cholesterol concentration, and a linear relationship between the cholesterol concentration and the increase in response current (ΔI) is observed for both the mono- and bienzyme-based nanobioelectrodes. The linear regression curve (Figure 6(a)) of the ChOx/NSPANI-AuNP-GR/ITO bioelectrode, used to detect cholesterol in the range 35–350 mg/dL, follows the equation ΔI (mA) = 0.12 (mA) + 0.0031 (mA/(mg/dL)) × cholesterol concentration (mg/dL), with 99.4 μA and 0.981 as the standard deviation and correlation coefficient, respectively; the sensitivity of this bioelectrode is thus 3.10 μA/(mg/dL). The linear fit of ChOx-HRP/NSPANI-AuNP-GR/ITO in Figure 6(b) is ΔI (mA) = 0.36 (mA) + 0.0042 (mA/(mg/dL)) × cholesterol concentration (mg/dL), with 65.2 μA and 0.995 as the standard deviation and correlation coefficient, respectively. The ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrodes therefore exhibit a higher sensitivity of 4.22 μA/(mg/dL) than the single-enzyme electrodes (ChOx/NSPANI-AuNP-GR/ITO), together with a wider linear range of 35–500 mg/dL. The higher response current and sensitivity of the bienzymatic sensor over the monoenzymatic one suggest effective reduction of H2O2 catalyzed by HRP. All experiments were carried out in triplicate, and the results demonstrate the reproducibility of the system. The response times of ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO are 28 and 19 s, respectively, determined as the time taken to reach the steady-state current after applying a constant voltage of 250 mV to a 100 mg/dL cholesterol solution in pH 7.0 PBS containing 5 mM [Fe(CN)6]3−/4−. (A short numerical sketch of this calibration is given below.)

Figure 6: Calibration curves of (a) ChOx/NSPANI-AuNP-GR/ITO and (b) ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrodes.
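To make the calibration concrete, the following Python sketch inverts the bienzyme regression line reported above to estimate a cholesterol concentration from a measured current change; the intercept and slope are the fitted values quoted in the text, while the example ΔI value is hypothetical:

```python
# Calibration constants from the ChOx-HRP/NSPANI-AuNP-GR/ITO linear fit
# quoted in Section 3.2; valid only within the 35-500 mg/dL linear range.
INTERCEPT_MA = 0.36          # mA
SLOPE_MA_PER_MG_DL = 0.0042  # mA per (mg/dL)

def cholesterol_mg_dl(delta_i_ma: float) -> float:
    """Invert dI = intercept + slope * C to recover the concentration."""
    c = (delta_i_ma - INTERCEPT_MA) / SLOPE_MA_PER_MG_DL
    if not 35 <= c <= 500:
        raise ValueError(f"{c:.0f} mg/dL lies outside the calibrated range")
    return c

# Hypothetical measured current change of 1.20 mA:
# (1.20 - 0.36) / 0.0042 = 200 mg/dL
print(f"{cholesterol_mg_dl(1.20):.0f} mg/dL")
```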
The enzyme-substrate kinetics parameter (the Michaelis–Menten constant, Km), estimated using the Lineweaver–Burk plot, reflects the affinity of the enzyme for the analyte. Note that Km depends both on the matrix and on the method of immobilization of the enzymes, which often induces conformational changes and hence different values of Km. The Km value was determined from the slope and intercept of the plot of the reciprocal of the change in current versus the reciprocal of the cholesterol concentration, that is, the Lineweaver–Burk plot of 1/ΔI versus 1/C. The apparent Michaelis–Menten constants (Km) estimated in this way for ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO are 0.02 mM and 0.01 mM, respectively. The lower Km of the bienzyme system indicates a higher affinity for cholesterol, attributed to the immobilization of ChOx-HRP onto NSPANI-AuNP-GR/ITO enabling a faster biochemical reaction; this can also be assigned to the uniform distribution of enzyme molecules on the NSPANI-AuNP-GR/ITO nanocomposite film surface. The overall biochemical reaction for ChOx-HRP/NSPANI-AuNP-GR is given by (1)–(5) and Scheme 1. (A numerical illustration of the Lineweaver–Burk estimate follows the reaction scheme below.)

Scheme 1: Proposed biochemical reactions at the ChOx-HRP/NSPANI-AuNP-GR/ITO electrode:

(1) Cholesterol + O2 → cholest-4-en-3-one + H2O2 (catalyzed by ChOx)

(2) H2O2 + HRP(Fe3+) → HRP-I(Fe4+) + H2O

(3) HRP-I(Fe4+) + [Fe(CN)6]4− → HRP-II(Fe4+) + [Fe(CN)6]3−

(4) HRP-II(Fe4+) + [Fe(CN)6]4− → HRP(Fe3+) + [Fe(CN)6]3−

(5) [Fe(CN)6]4− → [Fe(CN)6]3− + e− (at the electrode)
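For illustration, the Lineweaver–Burk procedure described above can be reproduced numerically. The sketch below fits 1/ΔI against 1/C for synthetic Michaelis–Menten data (the Vmax and Km used to generate the data are invented for the example, not measured values) and recovers Km as slope/intercept:

```python
import numpy as np

# Synthetic Michaelis-Menten response: dI = Vmax * C / (Km + C).
# Vmax and Km here are invented for the example, not measured values.
KM_TRUE, VMAX_TRUE = 0.01, 2.0                     # mM, mA
conc = np.array([0.005, 0.01, 0.02, 0.05, 0.1])    # cholesterol, mM
d_i = VMAX_TRUE * conc / (KM_TRUE + conc)          # response current, mA

# Lineweaver-Burk: 1/dI = (Km/Vmax) * (1/C) + 1/Vmax, a straight line
slope, intercept = np.polyfit(1.0 / conc, 1.0 / d_i, 1)
km_est = slope / intercept                # (Km/Vmax) / (1/Vmax) = Km
print(f"estimated Km = {km_est:.4f} mM")  # -> 0.0100 mM on noise-free data
```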
### 3.3. Photometric Response Studies

The response characteristics of the ChOx-HRP/NSPANI-AuNP-GR/ITO and ChOx/NSPANI-AuNP-GR/ITO bioelectrodes were also studied photometrically as a function of cholesterol concentration (Figure 7); for both bioelectrodes, the absorbance of the oxidized form of the dye increases linearly with the cholesterol concentration. The ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrode follows, over the range 35–400 mg/dL, the equation: change in absorbance = 0.022 + 0.00016 × cholesterol concentration (mg/dL), with a standard deviation of 0.003, whereas the ChOx/NSPANI-AuNP-GR/ITO bioelectrode follows, over the range 35–350 mg/dL, the equation: change in absorbance = 0.002 + 0.000088 × cholesterol concentration (mg/dL), with a standard deviation of 0.0025. The apparent Michaelis–Menten constant (Km) was again estimated from the Lineweaver–Burk plot of the inverse of the absorbance against the inverse of the cholesterol concentration. The lower Km of the ChOx-HRP/NSPANI-AuNP-GR/ITO biosensor (0.012 mM) compared with the ChOx/NSPANI-AuNP-GR/ITO biosensor (0.023 mM) suggests that the NSPANI-AuNP-GR matrix facilitates the enzymatic reaction.

Figure 7: Photometric response of (a) ChOx/NSPANI-AuNP-GR/ITO and (b) ChOx-HRP/NSPANI-AuNP-GR/ITO nanobioelectrodes as a function of cholesterol concentration.

### 3.4. Studies of pH, Interference, Reusability, and Shelf Life of the Biosensors

(a) pH Studies. The response currents of the ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO nanobioelectrodes, studied over the pH range 6.0–7.8 (data not shown), indicate that both bioelectrodes exhibit maximum activity at around pH 7.0, where the biomolecules retain their natural structure and do not become denatured. All experiments for cholesterol estimation were therefore conducted at the optimum pH of 7.0.

(b) Interference Studies. Different interferents commonly present in blood, namely, ascorbic acid (0.05 mM), glucose (5 mM), uric acid (0.1 mM), sodium ascorbate (0.05 mM), and urea (1 mM), were tested by DPV for both bioelectrodes (ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO) using cholesterol solution (100 mg/dL) mixed with the interferent in a 1:1 ratio. Figure 8 shows the effect of the interferents on the response of the two bioelectrodes: the first bar (cholesterol) shows the current obtained with 100 mg/dL cholesterol alone, and the remaining bars show the currents for the 1:1 cholesterol-interferent mixtures. The percentage interference was calculated using (6) (a short numerical check is given at the end of this subsection):

$$\%\,\text{Interference} = \frac{\Delta I_{\text{chol}} - \Delta I_{\text{inter}}}{\Delta I_{\text{chol}}} \times 100, \tag{6}$$

where ΔIchol is the change in current obtained with 100 mg/dL cholesterol and ΔIinter is the change in current for the 1:1 mixture of cholesterol and interferent. A maximum interference of 6% is observed for the ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrode and of 9% for the ChOx/NSPANI-AuNP-GR/ITO bioelectrode.

Figure 8: Interference study of (a) ChOx/NSPANI-AuNP-GR/ITO and (b) ChOx-HRP/NSPANI-AuNP-GR/ITO nanobioelectrodes.

(c) Reusability Studies. A distinctive feature of both types of bioelectrode is their reusability (Figure 9), which is attributed to the composition of the transducer matrix. The ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrodes can be reused a number of times with 100% efficiency: Figure 9 shows the response of the same bioelectrode tested 15 times with a 100 mg/dL cholesterol solution in PBS (50 mM, 0.9% NaCl, 5 mM [Fe(CN)6]3−/4−) at room temperature (25°C). This reusability indicates that the robust NSPANI-AuNP-GR matrix offers a favourable microenvironment that does not denature the enzymes, reflecting the enhanced stability of the immobilized enzymes and the electrochemical properties and biocompatibility of the NSPANI-AuNP-GR/ITO electrode.

Figure 9: DPV curves for reusability testing (current versus potential with 100 mg/dL analyte): (a) ChOx/NSPANI-AuNP-GR/ITO and (b) ChOx-HRP/NSPANI-AuNP-GR/ITO nanobioelectrodes.

(d) Shelf Life Studies. The shelf lives of the ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrodes were determined by measuring the response current at regular intervals of one week for about two months (Figure 10); the bioelectrodes were stored at 4°C when not in use and were found to be stable for up to 12 weeks without any loss in activity.

Figure 10: Shelf life of (a) ChOx/NSPANI-AuNP-GR/ITO and (b) ChOx-HRP/NSPANI-AuNP-GR/ITO nanobioelectrodes.
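The interference metric in (6) is straightforward to compute. In the following sketch the ΔI values are hypothetical placeholders chosen only to illustrate the arithmetic; they are not the measured currents behind Figure 8:

```python
# Percentage interference per equation (6); all current values below are
# hypothetical placeholders, not the measured data behind Figure 8.
def pct_interference(d_i_chol: float, d_i_inter: float) -> float:
    return (d_i_chol - d_i_inter) / d_i_chol * 100.0

d_i_cholesterol = 1.20  # mA, response to 100 mg/dL cholesterol alone (assumed)
mixtures = {            # mA, responses to 1:1 cholesterol-interferent mixtures (assumed)
    "ascorbic acid": 1.14,
    "glucose": 1.18,
    "uric acid": 1.16,
}
for name, d_i in mixtures.items():
    print(f"{name}: {pct_interference(d_i_cholesterol, d_i):.1f}% interference")
```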
### 3.5. Comparative Evaluation of the Mono- and Bienzymatic Biosensors

Table 1 presents a comparative evaluation of the mono- and bienzymatic biosensor performance. The bienzymatic ChOx-HRP/NSPANI-AuNP-GR/ITO electrode exhibits better performance in terms of linearity, detection limit, response time, and sensitivity than the monoenzyme-based ChOx/NSPANI-AuNP-GR/ITO electrode. Immobilizing ChOx together with horseradish peroxidase (HRP) is thought either to help the protein assume a favorable orientation or to open conducting channels between the prosthetic groups and the electrode surface; both effects reduce the effective electron transfer distance and thereby facilitate charge transfer between the electrode and the enzyme [49].

Table 1: Comparative evaluation of the single- and bienzymatic biosensor performance.

| S. no. | Characteristic | ChOx/NSPANI-AuNP-GR/ITO | ChOx-HRP/NSPANI-AuNP-GR/ITO |
|---|---|---|---|
| 1 | Linearity | 35–400 mg/dL | 35–500 mg/dL |
| 2 | Detection limit | 35 mg/dL | 25 mg/dL |
| 3 | Response time | 28 s | 19 s |
| 4 | Sensitivity | 3.10 μA/(mg/dL) | 4.22 μA/(mg/dL) |
| 5 | Km | 0.02 mM | 0.01 mM |
| 6 | Shelf life | 8 weeks | 8 weeks |

Table 2 sets the characteristics of the ChOx-HRP/NSPANI-AuNP-GR/ITO nanobioelectrode alongside those reported in the literature for related systems. It is evident that the bienzymatic ChOx-HRP/NSPANI-AuNP-GR/ITO electrode offers distinctive characteristics with respect to reusability, shelf life, and a very low Km of 0.01 mM.

Table 2: Characteristics of the ChOx-HRP/NSPANI-AuNP-GR/ITO nanobioelectrode and of systems reported in the literature.

| S. no. | Material (Mat) | Enzyme (E) | Method (M) | Characteristics | Reference |
|---|---|---|---|---|---|
| 1 | ChOx-HRP/NSPANI-AuNP-GR/ITO | ChOx-HRP | ampero. versus Ag/AgCl | (L) up to 500 mg/dL; (S) 4.22 μA/(mg/dL); (Km) 0.01 mM; (DL) 25 mg/dL; (RT) 19 s; (SL) 2 months | Present investigation |
| 2 | ChOx/f-G/GC, ChOx/Au/f-G/GC | ChOx | ampero. versus Ag/AgCl | (L) up to 135 μM; (S) 314 nA/μM cm2; (SL) 1 month | [44] |
| 3 | ChOx/NSPANI-SDS | ChOx | photometric | (L) 0.5–10.5 mM; (S) 9 mM; (Km) 1.32 mM; (RT) 59 s; (SL) 5 weeks | [49] |
| 4 | GR-Pt nanoparticle hybrid material | ChOx, ChEt | ampero. versus Ag/AgCl | (L) up to 12 mM; (S) 2.07 ± 0.1 μA/μM/cm2; (Km) 5 mM; (DL) 0.2 μM | [44] |
| 5 | GOx-HRP/MWCNT/PPY/ITO | GOx, HRP | ampero. versus Ag/AgCl | (L) 1–10 mM; (S) 13.8 mA/μM; (Km) 0.52 mM; (DL) 0.1 mM; (RT) 10 s; (SL) 5 weeks | [50] |
| 6 | ChOx/NanoFe3O4/ITO | ChOx | ampero. versus Ag/AgCl | (L) 2.5–400 mg/dL; (S) 86 Ω/mg/dL/cm2; (Km) 0.8 mg/dL; (DL) 0.25 mg/dL; (RT) 25 s; (SL) 55 days | [51] |
| 7 | ChEt-ChOx/MWCNT/SiO2-CHIT/ITO | ChEt-ChOx | ampero. versus Ag/AgCl | (L) 10–500 mg/dL; (S) 2.12 μA/mM; (Km) 0.052 mM; (DL) 0.1 mM; (RT) 10 s; (SL) 10 weeks | [52] |
| 8 | ChOx/PANI-NS/ITO | ChOx | ampero. versus Ag/AgCl | (L) 25–500 mg/dL; (S) 1.3 × 10−3 mA dL mg−1; (Km) 2.5 mM; (RT) 10 s; (SL) 12 weeks | [53] |

Abbreviations: (Mat): material; (E): enzyme; (M): method; (DL): detection limit; (L): linearity; (SL): shelf life; (RT): response time; (S): sensitivity; (Km): Michaelis–Menten constant; (f-G): functionalized graphene nanoplatelets; (Au/f-G): gold-nanoparticle-decorated f-G; (NSPANI): nanostructured polyaniline; (SDS): sodium dodecyl sulphate; (MWCNT/PPY/ITO): carboxy-modified multiwalled carbon nanotube (MWCNT) and polypyrrole (PPY) nanocomposite film; (NanoFe3O4): nanostructured iron oxide; (SiO2): silica; (CHIT): chitosan; (PANI-NS): polyaniline nanospheres.

### 3.6. Blood Serum Testing

The response of the ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO bioelectrodes to cholesterol in human blood serum was investigated by both amperometric and photometric studies, and the results were compared.
Five serum samples obtained from pathological labs were analyzed. Table 3 shows the results obtained for the blood serum samples using the ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO biosensors. Both bioelectrodes perform well in the evaluation of cholesterol in blood serum, which may be due to the high electrocatalytic activity of the NSPANI-AuNP-GR/ITO nanocomposite electrode. The amperometric determinations of free cholesterol in blood serum were compared with the photometric results, the latter being taken as the reference values of free blood cholesterol. For ChOx-HRP/NSPANI-AuNP-GR/ITO, the amperometric and photometric values agree closely with minimal error, whereas ChOx/NSPANI-AuNP-GR/ITO shows a comparatively larger deviation. (A short script recomputing the relative errors follows Table 3.)

Table 3: Free cholesterol (mg/dL) in blood serum samples determined with the ChOx/NSPANI-AuNP-GR/ITO and ChOx-HRP/NSPANI-AuNP-GR/ITO biosensors.

| Sample no. | ChOx electrode (amperometric) | ChOx electrode (photometric) | Error (%) | ChOx-HRP electrode (amperometric) | ChOx-HRP electrode (photometric) | Error (%) |
|---|---|---|---|---|---|---|
| 1 | 225 | 218 | 3 | 230 | 226 | 1.7 |
| 2 | 158 | 149 | 6 | 162 | 160 | 1.2 |
| 3 | 182 | 178 | 2 | 186 | 183 | 1.6 |
| 4 | 306 | 298 | 3 | 311 | 307 | 1.3 |
| 5 | 76 | 69 | 9 | 80 | 78 | 2.5 |
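As a consistency check, the relative errors in Table 3 can be recomputed from the paired readings. The sketch below takes the photometric value as the reference; the table's rounding (and possibly a different choice of denominator, which the paper does not state) explains small discrepancies with the printed Error (%) column:

```python
# Recompute the Error (%) column of Table 3, treating the photometric
# reading as the reference value (an assumption; the paper does not
# state the denominator explicitly). Values are (amperometric, photometric).
chox = [(225, 218), (158, 149), (182, 178), (306, 298), (76, 69)]
chox_hrp = [(230, 226), (162, 160), (186, 183), (311, 307), (80, 78)]

def rel_error_pct(amperometric: float, photometric: float) -> float:
    return abs(amperometric - photometric) / photometric * 100.0

for i, ((a1, p1), (a2, p2)) in enumerate(zip(chox, chox_hrp), start=1):
    print(f"sample {i}: ChOx {rel_error_pct(a1, p1):.1f}%  "
          f"ChOx-HRP {rel_error_pct(a2, p2):.1f}%")
```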
## 4. Conclusion

Gold-nanoparticle-decorated graphene-nanostructured polyaniline nanobioelectrodes have been fabricated by electrodeposition from the NSPANI-AuNP-GR nanodispersion synthesized via in situ polymerization, for the development of a reusable cholesterol biosensor.
Both the single-enzyme (ChOx) and the bienzyme (ChOx-HRP) biosensors were prepared by covalent coupling through glutaraldehyde. The bienzyme-based nanocomposite bioelectrode (ChOx-HRP/NSPANI-AuNP-GR/ITO) offers better performance in terms of detection limit, sensitivity, and response time than the single-enzyme system, which is attributed to the presence of HRP alongside ChOx enhancing the overall biochemical reaction. It has been shown that this nanocomposite bioelectrode can be used to estimate cholesterol in blood serum samples. The unique features of the ChOx-HRP/NSPANI-AuNP-GR/ITO nanocomposite bioelectrode lie in its novel fabrication, minimal interference, very low Km value, short response time, excellent reusability, and applicability to blood serum samples. The large specific surface area, excellent conductivity, and stable, reliable redox properties of the NSPANI-AuNP-GR nanocomposite film allow rapid electron transfer and enhance the current response of the immobilized enzymes. It would be interesting to utilize these nanocomposite electrodes for the development of other biosensors. --- *Source: 102543-2012-11-14.xml*
2012
# Wavelet-M-Estimation for Time-Varying Coefficient Time Series Models **Authors:** Xingcai Zhou; Fangxia Zhu **Journal:** Discrete Dynamics in Nature and Society (2020) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2020/1025452 --- ## Abstract This paper proposes wavelet-M-estimation for time-varying coefficient time series models by using a robust-type wavelet technique, which can adapt to local features of the time-varying coefficients and does not require smoothness of the unknown time-varying coefficients. The wavelet-M-estimation has the desired asymptotic properties and can be used to estimate conditional quantiles and to robustify the usual mean regression. Under mild assumptions, the Bahadur representation and the asymptotic normality of the wavelet-M-estimation are established. --- ## Body ## 1. Introduction The analysis of nonlinear and nonstationary time series, particularly with a time trend, has been very popular over the last two decades, because most time series data, especially in economics and finance, are nonlinear, nonstationary, or trending. Various nonlinear and nonstationary parametric, semiparametric, and nonparametric time series models have been proposed in the econometrics and statistics literature; see, for example, [1–6] and the references therein. One of the most attractive models is the time-varying coefficient time series model, formulated as follows:

$$Y_i = X_i^{T}\beta(t_i) + \varepsilon_i, \quad t_i = \frac{i}{n}, \quad i = 1, \ldots, n, \tag{1}$$

where $Y_i$ is the response, $\beta(\cdot) = (\beta_1(\cdot), \ldots, \beta_p(\cdot))^{T}$ is a $p$-dimensional vector of unspecified coefficient functions defined on $[0,1]$, $X_i = (X_{i1}, \ldots, X_{ip})^{T}$ is a $p$-dimensional random vector, and $\varepsilon_i$ is the random error.

Many smoothers have been proposed to estimate the time-varying coefficient $\beta(\cdot)$ in model (1), and the estimators have been analyzed in large-sample theory. Robinson [4] developed the Nadaraya–Watson method and showed the consistency and asymptotic normality of the local constant estimator under the assumptions that the time series $X_i$ is stationary α-mixing and the errors $\varepsilon_i$ are i.i.d. and independent of $X_i$. Cai [1] proposed the local linear approach and established asymptotic properties of the proposed estimators under α-mixing conditions without specifying the error distribution. Hoover et al. [7] gave smoothing spline and locally weighted polynomial methods for longitudinal data and presented their asymptotic properties. Li et al. [8] and Fan et al. [9] made statistical inference for the partially time-varying coefficient model and its errors-in-variables version, respectively. These estimators are all based on local least squares, which is efficient for Gaussian errors but may perform poorly in the presence of extreme outliers. In addition, these methods all rely on the important assumption that $\beta(\cdot)$ is highly smooth. In reality, this assumption may not be satisfied; in some practical areas, such as signal and image processing, objects are frequently inhomogeneous. More robust estimation methods are therefore required.

In this paper, we propose an M-type regression based on the wavelet technique, called wavelet-M-estimation (WME), for the time-varying coefficient time series model (1). There is considerable literature devoted to M-estimation for nonparametric regression models. Fan et al. [10] obtained asymptotic normality of the M-estimator for the local linear fit under independent observations. Hong [11] established a Bahadur representation for the local polynomial estimates in nonparametric M-regression under i.i.d.
random errors. Jiang and Mack [12] and Cai and Ould-Saïd [13] considered the local polynomial M-estimator and the local linear M-estimator for dependent observations and established asymptotic theory for the proposed estimators. For varying coefficient models, Tang and Cheng [14] showed asymptotic normality of local M-estimators for longitudinal data, and so on. However, the above works require smoothness of the function being estimated, for example, that the function has a continuous first or second derivative. With wavelets, such assumptions are relaxed considerably. Because wavelet bases can adapt to local features of curves in both the time and frequency domains, wavelets provide a technique for analyzing functions with discontinuities or sharp spikes; it is therefore natural to expect better estimators than the local kernel method in many cases. Great achievements have been made with wavelets in nonparametric models; see, for example, Antoniadis et al. [15]; Donoho and Johnstone [16]; Hall and Patil [17]; Härdle et al. [18]; Vidakovic [19]; Zhou and You [20]; Lu and Li [21]; and Zhou et al. [22]. To the best of our knowledge, however, M-type estimation based on the wavelet technique has not been developed for time-varying coefficient models. Using WME, we treat mean regression, median regression, quantile regression, and robust mean regression in one general formulation.

The article is organized as follows. Section 2 describes the wavelet analysis, the α-mixing sequence, and the wavelet-M-estimation for time-varying coefficients. Section 3 presents the Bahadur representation and asymptotic normality of the WME under the α-mixing stationary time series assumption and states some applications of the main results. Some technical lemmas and the proofs of the main results are given in Section 4.

## 2. Wavelet-M-Estimation

As a central notion in wavelet analysis, multiresolution analysis (MRA) plays an important role in constructing a wavelet basis. An MRA is a sequence of closed subspaces $V_m$, $m \in \mathbb{Z}$, of the square integrable function space $\mathcal{L}^2(\mathbb{R})$ satisfying the following properties:

(i) $V_m \subseteq V_{m+1}$ for $m \in \mathbb{Z}$, where $\mathbb{Z}$ denotes the set of integers;

(ii) $\bigcap_m V_m = \{0\}$ and $\overline{\bigcup_m V_m} = \mathcal{L}^2(\mathbb{R})$, where $\bar{A}$ denotes the closure of a set $A$;

(iii) the $V$-spaces are self-similar: $f(2^m x) \in V_m$ iff $f(x) \in V_0$ for $m \in \mathbb{Z}$;

(iv) there exists a scaling function $\phi \in V_0$ whose integer translates span the space $V_0$, that is, $V_0 = \{f \in \mathcal{L}^2(\mathbb{R}) : f(x) = \sum_k c_k \phi(x-k)\}$, and for which the set $\{\phi(\cdot-k), k \in \mathbb{Z}\}$ is an orthonormal basis of $V_0$.

By dilation and translation of $\phi(\cdot)$, we obtain $\phi_{m,k}(t) = 2^{m/2}\phi(2^m t - k)$, $m, k \in \mathbb{Z}$. From (iii) and (iv), $\{\phi_{m,k}, k \in \mathbb{Z}\}$ is an orthonormal basis of $V_m$. According to the Moore–Aronszajn theorem [23], $V_m$ is a reproducing kernel Hilbert space with kernel

$$K_m(t,s) = 2^m \sum_k \phi(2^m t - k)\,\phi(2^m s - k). \tag{2}$$

For any function $f \in V_m$,

$$f(t) = \int f(s)\,K_m(t,s)\,ds. \tag{3}$$

Denoting the kernel of $V_0$ by $K_0(t,s) = \sum_k \phi(t-k)\phi(s-k)$, we have $K_m(t,s) = 2^m K_0(2^m t, 2^m s)$. For more details, we refer to Vidakovic [19].

This motivates us to define a wavelet-M-estimator of $\beta(t)$ by

$$\hat{\beta}(t) = \arg\min_{b} \sum_{i=1}^{n} \rho\left(Y_i - X_i^{T} b\right) \int_{A_i} K_m(t,s)\,ds, \tag{4}$$

where $\rho(\cdot)$ is a given convex function and the $A_i$ are intervals that partition $[0,1]$ with $t_i \in A_i$. One way of defining the intervals $A_i = [s_{i-1}, s_i)$ is to take $s_0 = 0$, $s_n = 1$, and $s_i = (t_i + t_{i+1})/2$, $i = 1, \ldots, n-1$. As an alternative to (4), the WME of $\beta(t)$ can also be defined as the solution $b$ of

$$\sum_{i=1}^{n} \psi\left(Y_i - X_i^{T} b\right) X_i \int_{A_i} K_m(t,s)\,ds = \mathbf{0}, \tag{5}$$

where $\mathbf{0}$ is a $p$-dimensional zero vector. Equation (5) arises naturally by taking the partial derivatives of (4) with respect to $b$ and setting them to zero when $\rho(\cdot)$ is continuously differentiable, that is, $\psi(\cdot) = \rho'(\cdot)$.
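To make (4)–(5) concrete, the following Python sketch implements the wavelet weights $w_i(t) = \int_{A_i} K_m(t,s)\,ds$ for the Haar scaling function (chosen because its kernel has a closed form; Haar does not satisfy the smoothness condition (A4)(iii) below and is used purely for illustration) and computes the WME with the Huber ψ via iteratively reweighted least squares. The simulated model, the choice of m, and the Huber constant are all illustrative assumptions:

```python
import numpy as np

def haar_weights(t, grid, m):
    """w_i(t) = integral over A_i of K_m(t,s) ds for the Haar scaling function,
    for which K_m(t,s) = 2^m * 1{floor(2^m t) == floor(2^m s)}."""
    edges = np.concatenate(([0.0], (grid[:-1] + grid[1:]) / 2, [1.0]))  # A_i = [s_{i-1}, s_i)
    lo = np.floor(2**m * t) / 2**m                 # dyadic cell containing t
    hi = lo + 2.0**(-m)
    overlap = np.maximum(0.0, np.minimum(edges[1:], hi) - np.maximum(edges[:-1], lo))
    return 2**m * overlap

def wme(t, X, Y, grid, m=4, c=1.345, iters=50):
    """Wavelet-M-estimate of beta(t) per (4)-(5) with Huber psi, via IRLS."""
    w = haar_weights(t, grid, m)
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        r = Y - X @ b
        s = np.median(np.abs(r)) / 0.6745 + 1e-12             # robust scale
        omega = np.minimum(1.0, c / (np.abs(r) / s + 1e-12))  # Huber psi(r)/r
        W = w * omega
        b_new = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * Y))
        if np.max(np.abs(b_new - b)) < 1e-8:
            break
        b = b_new
    return b

# Simulated example of model (1): p = 2, beta_1(t) = sin(2*pi*t), beta_2(t) = t^2,
# heavy-tailed t(2) errors; all of these settings are illustrative assumptions.
rng = np.random.default_rng(0)
n = 1000
grid = np.arange(1, n + 1) / n                      # t_i = i/n
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.column_stack([np.sin(2 * np.pi * grid), grid**2])
Y = np.sum(X * beta, axis=1) + rng.standard_t(df=2, size=n)

t0 = 0.5
print("beta(0.5) true:", np.sin(2 * np.pi * t0), t0**2)
print("beta(0.5) WME :", wme(t0, X, Y, grid, m=4))
```

With the Haar choice, the estimator reduces to a robust local-constant fit over the dyadic cell containing t, so the recovered values are only approximate; smoother scaling functions satisfying (A4)(iii) would spread the weights over neighboring observations.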
In this paper, we apply a suitably chosen function $\psi(\cdot)$ in (5); this covers many interesting cases such as least-squares estimation, least absolute deviation estimation, and quantile regression. See the monographs of Huber and Ronchetti [24] and Koenker [25] for more details on the robustness of M-estimation and on quantile regression, respectively.

Before stating the main results, we give the definition of $\alpha$-mixing dependence, which is necessary to establish our asymptotic theory for trending time-varying coefficient time series models. Throughout, we assume that $(X_i, \varepsilon_i)$ is a stationary $\alpha$-mixing sequence. Recall that a sequence $\{\zeta_k, k \ge 1\}$ is said to be $\alpha$-mixing (or strong mixing) if the mixing coefficients

$$\alpha(m) = \sup\left\{ |P(A \cap B) - P(A)P(B)| : A \in \mathcal{F}_{-\infty}^k,\ B \in \mathcal{F}_{k+m}^{\infty} \right\} \tag{6}$$

converge to zero as $m \to \infty$, where $\mathcal{F}_l^k$ denotes the $\sigma$-field generated by $\{\zeta_i, l \le i \le k\}$. The notion of $\alpha$-mixing is widely adopted in the study of nonparametric regression models. It is reasonably weak and is known to be fulfilled for many stochastic processes, including many familiar linear and nonlinear time series models. We refer to the monographs of Doukhan [26] and Fan and Yao [27] for properties and further mixing conditions.

## 3. Asymptotic Theory

We first list the regularity conditions needed in the proofs of the theorems, although some of them might not be the weakest possible.

(A1) (i) The process $(X_i, \varepsilon_i)$ is strictly stationary $\alpha$-mixing with $\sum_{j \ge 1} j^a \alpha(j)^{\delta/(2+\delta)} < \infty$ for some $a > \delta/(2+\delta)$ and $\delta > 0$. (ii) $E\|X_1\|^{3+\delta} < \infty$.

(A2) $\rho(\cdot)$ is a convex function, and $\psi(\cdot)$ is any choice of the derivative of $\rho(\cdot)$. Denote by $D$ the set of discontinuity points of $\psi(\cdot)$. The common distribution function $F$ of the $\varepsilon_i$ satisfies $F(D) = 0$.

(A3) $\psi(\cdot)$ satisfies the following conditions:

(i) There exists some function $B_1(z)$ such that, as $u \to 0$,

$$E[\psi(\varepsilon_1 + u) \mid X_1 = z] = B_1(z)\,u + o(u), \tag{7}$$

where $B_1(z)$ is continuous in a neighborhood of $x$ with $B_1(x) \ne 0$.

(ii) With probability 1,

$$\max\left\{ E\left[\left(\psi(\varepsilon_1 + u) - \psi(\varepsilon_1)\right)^2 \mid X_1 = z\right],\ E\left[\left|\psi(\varepsilon_1 + u) - \psi(\varepsilon_1)\right|^{2+\delta} \mid X_1 = z\right] \right\} \le B_2(u) \tag{8}$$

holds uniformly for $z$ in a neighborhood of $x$, where $B_2(\cdot)$ is continuous at $u = 0$ with $B_2(u) = O(|u|)$.

(iii) Let $\gamma(z) = E[\psi^2(\varepsilon_1) \mid X_1 = z]$, and let $\gamma(z)$ be continuous in a neighborhood of $x$ with $\gamma(x) > 0$. Furthermore, $E[|\psi(\varepsilon_1)|^{2+\delta} \mid X_1 = z] < \infty$.

(iv) $\Gamma_x = E[\gamma(X_1) X_1 X_1^T]$ and $\Omega_x = E[B_1(X_1) X_1 X_1^T]$ are nonsingular matrices.

(A4) The time-varying coefficients $\beta_j(\cdot)$, $j = 1, \ldots, p$, and the scaling function $\phi(\cdot)$ in the wavelet kernel satisfy the following conditions:

(i) $\beta_j(\cdot)$ belongs to the Sobolev space of order $\nu > 1/2$.

(ii) $\beta_j(\cdot)$ satisfies a Lipschitz condition of order $\gamma > 0$.

(iii) $\phi$ has compact support, is in the Schwarz space of order $l > \nu$, and satisfies a Lipschitz condition of order $l$. Furthermore, $|\hat\phi(\xi) - 1| = O(\xi)$ as $\xi \to 0$, where $\hat\phi$ is the Fourier transform of $\phi$.

(A5) (i) For the design points, $\max_i |t_i - t_{i-1}| = O(n^{-1})$. (ii) For some Lipschitz function $\kappa(\cdot)$,

$$\rho_n = \max_i \left| s_i - s_{i-1} - \frac{\kappa(s_i)}{n} \right| = o(n^{-1}). \tag{9}$$

(A6) The tuning parameter $m$ satisfies: (i) $n 2^{-m} \to \infty$; (ii) $\sqrt{n 2^{-m}\left(n^{-\gamma} + \eta_m\right)}\,\log n = o(1)$; (iii) let $v^* = \min\{5/3,\ 2(1+\nu)/3,\ 1 + 2\gamma/3\} - \varepsilon_1$, with $\varepsilon_1 = 0$ for $\nu \ne 3/2$ and $\varepsilon_1 > 0$ for $\nu = 3/2$, and assume that $n 2^{-m v^*} = O(1)$.

Some remarks on the conditions are in order.

Remark 1. Condition (A1) comprises the standard requirements on the moments and the mixing coefficient of an $\alpha$-mixing sequence. It is well known that, among the various mixing conditions, for example, $\alpha$-, $\rho$-, and $\varphi$-mixing, $\alpha$-mixing is reasonably weak and can describe many stochastic processes, including many familiar linear and nonlinear time series models. (A1)(i) is a very common condition; see Cai et al. [28], Cai and Ould-Saïd [13], and Fan and Yao [27], among others.

Remark 2. Conditions (A2) and (A3) are often imposed to establish the large-sample theory of M-estimation in parametric or nonparametric models; see, for example, Bai et al. [29], Cai and Ould-Saïd [13], and Lin et al. [30]. They are mild and cover some well-known special cases, such as least-squares estimation, the Huber loss, and the quantile loss. Some special examples are given in the sketch below.
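As the special examples promised above, the following $(\rho, \psi)$ pairs, written as plain Python functions, transcribe the cases named in Remark 2 and used later in Corollaries 1 and 2: least squares, the quantile check loss, and the Huber loss; the Huber constant 1.345 is a conventional choice, not the paper's.

```python
# (rho, psi) pairs in the spirit of (A2)-(A3) -- transcribed from the special
# cases named in Remark 2 and Corollaries 1-2; c = 1.345 is a conventional,
# not paper-mandated, choice.
import numpy as np

def rho_ls(z):                 # mean regression (Corollary 1): rho(z) = z^2
    return z**2
def psi_ls(z):                 # psi(z) = 2z
    return 2.0 * z

def rho_quantile(z, tau):      # quantile regression (Corollary 2)
    return z * (tau - (z < 0))
def psi_quantile(z, tau):      # psi(z) = tau - 1{z < 0}: discontinuous only at 0,
    return tau - (z < 0).astype(float)   # so F(D) = 0 holds for continuous errors

def rho_huber(z, c=1.345):     # robust mean regression: quadratic center, linear tails
    a = np.abs(z)
    return np.where(a <= c, 0.5 * z**2, c * a - 0.5 * c**2)
def psi_huber(z, c=1.345):     # bounded psi => resistance to extreme outliers
    return np.clip(z, -c, c)
```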
Remark 3. Conditions (A4) and (A5) are mild regularity conditions for wavelet smoothing, which have been adopted by Antoniadis et al. [15], Zhou and You [20], and Zhou et al. [22]. In condition (A6), $m$ acts as a tuning parameter, as the bandwidth does for standard kernel smoothers; (A6)(i) and (ii) are needed for the Bahadur representation, and (A6)(i) and (iii) for the asymptotic normality, of the WME. If (A6)(iii) holds, it implies (A6)(ii). There is a wide range of options making (A6)(i) and (iii) work: for example, if $\nu = \gamma = 1$, then $v^* = 4/3$; taking furthermore $2^m = O(n^{4/5})$, (A6)(i) and (iii) hold.

Recall that $\hat\beta_n(t) = \hat b$ is based on (4). Let $\hat\theta_n = \sqrt{n/2^m}\,(\hat b - \beta(t))$ and, for generic $b$, $\theta = \sqrt{n/2^m}\,(b - \beta(t))$. Set $e_i(t) = X_i^T(\beta(t_i) - \beta(t))$. We can write the objective function in equation (4) as

$$\Theta_n(\theta;t) = \frac{n}{2^m}\sum_{i=1}^n \rho\left(\varepsilon_i + e_i(t) - \sqrt{\frac{2^m}{n}}\,X_i^T\theta\right)\int_{A_i}K_m(t,s)\,ds, \tag{10}$$

and denote

$$W_n(t) = \sqrt{\frac{n}{2^m}}\sum_{i=1}^n \psi(\varepsilon_i)\,X_i\int_{A_i}K_m(t,s)\,ds. \tag{11}$$

The first theorem is crucial for establishing the asymptotic properties of the WME.

Theorem 1. Under conditions (A1)–(A5)(i) and (A6)(i) and (ii), for any compact subset $K \subset \mathbb{R}^p$ we have

$$\sup_{\theta\in K}\sup_{t\in[0,1]}\left|\Theta_n(\theta;t) - \Theta_n(0;t) + W_n(t)^T\theta - \frac{1}{2}\theta^T\Omega_x\theta\right| = O_p(b_n), \tag{12}$$

where

$$b_n = \left\{\left(n^{-\gamma} + \eta_m\right)^{1/2} + \left(\frac{2^m}{n}\right)^{3/4}\right\}\left(\frac{2^m}{n}\right)^{-1/2}\log n. \tag{13}$$

If (A1)–(A5)(i) and (A6)(i) and (iii) hold, then

$$b_n = \left(\frac{2^m}{n}\right)^{1/4}\log n. \tag{14}$$

With the help of Theorem 1, we can establish the Bahadur representation of the WME.

Theorem 2. Under conditions (A1)–(A5)(i) and (A6)(i) and (ii), we have

$$\hat\beta(t) - \beta(t) = \Omega_x^{-1}\sum_{i=1}^n \psi(\varepsilon_i)\,X_i\int_{A_i}K_m(t,s)\,ds + O_p(d_n), \tag{15}$$

uniformly in $t\in[0,1]$, where

$$d_n = \left\{\left(n^{-\gamma} + \eta_m\right)^{1/4} + \left(\frac{2^m}{n}\right)^{3/8}\right\}\left(\frac{2^m}{n}\right)^{1/4}\log n. \tag{16}$$

If (A1)–(A5)(i) and (A6)(i) and (iii) hold, then

$$d_n = \left(\frac{2^m}{n}\right)^{5/8}\log n. \tag{17}$$

Remark 4. Theorem 2 gives the Bahadur representation of the WME for the time-varying coefficient time series model (1) and shows that the WME has Bahadur order $O_p((2^m/n)^{5/8}\log n)$. This is slightly weaker than the Bahadur order $O_p((\log\log n/(nh))^{3/4})$ of Hong [11], where $h \to 0$ is the bandwidth. However, Hong [11] required strong smoothness, namely second-order differentiability of the time-varying coefficient function $\beta(\cdot)$. We have greatly relaxed this assumption; our smoothness requirements on the function are much less restrictive, see conditions (A4)(i) and (ii).

With the help of Theorem 2, we can establish the asymptotic normality of the WME.

Theorem 3. Under conditions (A1)–(A5) and (A6)(i) and (iii), we have

$$\sqrt{\frac{n}{2^m}}\left(\hat\beta^d(t) - \beta(t)\right) \xrightarrow{\ D\ } N\left(0,\ \kappa(t)\,\omega_0^2\,\Omega_x^{-1}\Gamma_x\Omega_x^{-1}\right), \tag{18}$$

where $\hat\beta^d(t) = \hat\beta(t_m)$ with $t_m = \lfloor 2^m t\rfloor/2^m$, $\omega_0^2 = \int_0^1 E_0^2(0,u)\,du = \sum_{k\in\mathbb{Z}}\phi^2(k)$ with $E_0 = K_0$ the kernel of $V_0$, and $\lfloor u\rfloor$ denotes the largest integer not greater than $u$.

Remark 5. To obtain an asymptotic expansion of the variance and asymptotic normality, we need to consider an approximation to $\hat\beta(t)$ based on its values at dyadic points of order $m$, as Antoniadis et al. [15] have done. The main reason is that the variance of $\hat\beta(t)$ as a function of $t$ is unstable; this can be avoided by using the dyadic points $t_m$. See Antoniadis et al. [15] for details. From Theorems 2 and 3, we have the uniform weak consistency of the WME:

$$\sup_{t\in[0,1]}\left\|\hat\beta(t) - \beta(t)\right\| = O_p\left(\left(\frac{2^m}{n}\right)^{1/2}\right). \tag{19}$$

Next, we give some special cases as corollaries of Theorem 3.

Corollary 1. Let $\rho(z) = z^2$ and $\psi(z) = 2z$, which corresponds to mean regression and implies $B_1 = 2$, $\gamma(X_1) = E[\varepsilon_1^2 \mid X_1]$, $\Gamma_x = E[\gamma(X_1)X_1X_1^T]$, and $\Omega_x = E[X_1X_1^T]$. We have

$$\sqrt{\frac{n}{2^m}}\left(\hat\beta^d(t) - \beta(t)\right) \xrightarrow{\ D\ } N\left(0,\ \frac{1}{4}\,\kappa(t)\,\omega_0^2\,\Omega_x^{-1}\Gamma_x\Omega_x^{-1}\right). \tag{20}$$
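As a hedged numerical companion to Corollary 1, the following sketch estimates the sandwich matrix $\Omega_x^{-1}\Gamma_x\Omega_x^{-1}$ by plug-in moments; the residual-based estimate of $\gamma(\cdot)$, the treatment of $\kappa(t)$ and $\omega_0^2$ as 1 (true, e.g., for an equally spaced design and the Haar scaling function), and the omission of Corollary 1's constant factor are all our simplifying assumptions.

```python
# Plug-in sandwich covariance for the mean-regression case of Corollary 1 --
# a hedged sketch: gamma(.) is estimated by squared residuals, kappa(t) and
# omega_0^2 are taken as 1, and Corollary 1's constant factor is omitted.
import numpy as np

def sandwich_cov(X, resid):
    n = X.shape[0]
    omega = X.T @ X / n                            # plug-in Omega_x = E[X X^T]
    gamma = (X * resid[:, None] ** 2).T @ X / n    # plug-in Gamma_x
    omega_inv = np.linalg.inv(omega)
    return omega_inv @ gamma @ omega_inv           # Omega^-1 Gamma Omega^-1

# usage at a fixed t0: with beta_hat_t0 from the WME sketch above,
#   resid = Y - X @ beta_hat_t0
#   cov = sandwich_cov(X, resid)   # ~ shape of the asymptotic covariance in (20)
```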
Corollary 2. Let $\rho(z) = z(\tau - I(z<0))$ and $\psi(z) = \tau - I(z<0)$, which corresponds to quantile regression. Assume $P(\varepsilon_1 < 0 \mid X_1) = \tau$ a.s., and that $\varepsilon_1$ has a continuous positive conditional density $f_{\varepsilon|X}(\cdot)$ and cumulative distribution function $F_{\varepsilon|X}(\cdot)$ in a neighborhood of 0 given $X_1$. Thus, $B_1 = f_{\varepsilon|X}(0)$ and $\Gamma_x = \left(\tau^2 - (2\tau - 1)F_{\varepsilon|X}(0)\right)E[X_1X_1^T]$. Let $\Omega_x = E[X_1X_1^T]$. We have

$$\sqrt{\frac{n}{2^m}}\left(\hat\beta^d(t) - \beta(t)\right) \xrightarrow{\ D\ } N\left(0,\ \kappa(t)\,\omega_0^2\,f_{\varepsilon|X}^{-2}(0)\left(\tau^2 - (2\tau - 1)F_{\varepsilon|X}(0)\right)\Omega_x^{-1}\right). \tag{21}$$

Furthermore, if $X_i$ and $\varepsilon_i$ are mutually independent for $i = 1, \ldots, n$, we have

$$\sqrt{\frac{n}{2^m}}\left(\hat\beta^d(t) - \beta(t)\right) \xrightarrow{\ D\ } N\left(0,\ \tau(1-\tau)\,\kappa(t)\,\omega_0^2\,f_\varepsilon^{-2}(0)\,\Omega_x^{-1}\right), \tag{22}$$

where $f_\varepsilon(\cdot)$ is the density of $\varepsilon_1$.

## 4. Technical Lemmas and Proofs

In the following, $C$ denotes a positive constant which may change from line to line in the proofs.

Lemma 1 (see Antoniadis et al. [15] and Walter [31]). Suppose that (A4) holds, and write $E_m \equiv K_m$ for the kernel in (2). We have:

(i) $|E_0(t,s)| \le c_k/(1 + |t-s|)^k$ and $|E_m(t,s)| \le 2^m c_k/(1 + 2^m|t-s|)^k$, where $k$ is a positive integer and $c_k$ is a constant depending on $k$ only.

(ii) $\sup_{0\le t,s\le 1}|K_m(t,s)| = O(2^m)$.

(iii) $\sup_{0\le t\le 1}\int_0^1 |K_m(t,s)|\,ds \le c$, where $c$ is a positive constant.

(iv) $\int_0^1 K_m(t,s)\,ds \to 1$ uniformly in $t\in[0,1]$ as $m\to\infty$.

Lemma 2 (see Antoniadis et al. [15]). Suppose that (A4) and (A5)(i) hold and that $h(\cdot)$ satisfies (A4)(i) and (ii). Then

$$\sup_{0\le t\le 1}\left|h(t) - \sum_{i=1}^n h(t_i)\int_{A_i}K_m(t,s)\,ds\right| = O\left(n^{-\gamma}\right) + O\left(\eta_m\right), \tag{23}$$

where

$$\eta_m = \begin{cases} \left(1/2^m\right)^{\nu-1/2}, & \text{if } 1/2 < \nu < 3/2,\\ m/2^m, & \text{if } \nu = 3/2,\\ 1/2^m, & \text{if } \nu > 3/2. \end{cases} \tag{24}$$

Lemma 3 (see Lin and Lu [32]). Let $\{X_i, i\ge 1\}$ be an $\alpha$-mixing sequence, and let $X\in\mathcal{F}_{-\infty}^k$ and $Y\in\mathcal{F}_{k+n}^{\infty}$ with $E|X|^p<\infty$ and $E|Y|^q<\infty$, $1/p + 1/q < 1$. Then

$$|E(XY) - E(X)\,E(Y)| \le 10\left(E|X|^p\right)^{1/p}\left(E|Y|^q\right)^{1/q}\alpha(n)^{1 - 1/p - 1/q}. \tag{25}$$

Lemma 4 (see Pollard [33]). Let $\{\lambda_n(\theta), \theta\in\Theta\}$ be a sequence of random convex functions defined on a convex open subset $\Theta$ of $\mathbb{R}^d$. Suppose $\lambda(\cdot)$ is a real-valued function on $\Theta$ for which $\lambda_n(\theta)\to\lambda(\theta)$ in probability for each fixed $\theta\in\Theta$. Then, for each compact subset $K$ of $\Theta$,

$$\sup_{\theta\in K}\left|\lambda_n(\theta) - \lambda(\theta)\right| \to 0 \quad \text{in probability}. \tag{26}$$

For simplicity, we introduce some notation before the proofs. We are interested in the asymptotic behavior of $\theta$, which can be studied through the new objective function

$$G_n(\theta;t) = \Theta_n(\theta;t) - \Theta_n(0;t). \tag{27}$$

Furthermore, denote

$$R_n(\theta;t) = G_n(\theta;t) - E\,G_n(\theta;t) + W_n(t)^T\theta. \tag{28}$$

We rewrite $G_n(\theta;t)$ as

$$G_n(\theta;t) = E\,G_n(\theta;t) - W_n(t)^T\theta + R_n(\theta;t). \tag{29}$$

The following Lemmas 5–7 give the asymptotic behavior of $R_n(\theta;t)$, $E\,G_n(\theta;t)$, and $W_n(t)$, each uniformly in $t\in[0,1]$.

Lemma 5. Under the assumptions of Theorem 1, for fixed $\theta$ it holds that

$$\sup_{t\in[0,1]}\left|R_n(\theta;t)\right| = O_p\left(\left\{\left(\frac{n}{2^m}\right)^{1/2}\left(n^{-\gamma}+\eta_m\right)^{1/2} + \left(\frac{2^m}{n}\right)^{1/4}\right\}\log n\right), \tag{30}$$

where $R_n(\theta;t)$ is defined by (28).

Proof of Lemma 5. From the definition of $R_n(\theta;t)$, we have

$$R_n(\theta;t) = G_n(\theta;t) + W_n(t)^T\theta - E\left[G_n(\theta;t) + W_n(t)^T\theta\right] =: \frac{n}{2^m}\sum_{i=1}^n\left(U_i - E\,U_i\right)\int_{A_i}K_m(t,s)\,ds =: \frac{n}{2^m}\sum_{i=1}^n\left(\tilde U_i - E\,\tilde U_i\right), \tag{31}$$

where

$$\tilde U_i = U_i\int_{A_i}K_m(t,s)\,ds, \qquad U_i = \rho\left(\varepsilon_i + e_i(t) - \sqrt{\frac{2^m}{n}}X_i^T\theta\right) - \rho\left(\varepsilon_i + e_i(t)\right) + \sqrt{\frac{2^m}{n}}\,\psi(\varepsilon_i)X_i^T\theta. \tag{32}$$

By the convexity of $\rho(\cdot)$, we have

$$|U_i| \le \left|\psi\left(\varepsilon_i + e_i(t) - \sqrt{\frac{2^m}{n}}X_i^T\theta\right) - \psi(\varepsilon_i)\right|\left|e_i(t) - \sqrt{\frac{2^m}{n}}X_i^T\theta\right|. \tag{33}$$

Furthermore, we have

$$\operatorname{Var}\left(R_n(\theta;t)\right) = \left(\frac{n}{2^m}\right)^2\operatorname{Var}\left(\sum_{i=1}^n\tilde U_i\right) \le \left(\frac{n}{2^m}\right)^2\left[\sum_{i=1}^n E\tilde U_i^2 + 2\sum_{1\le i<j\le n}\left|\operatorname{Cov}\left(\tilde U_i,\tilde U_j\right)\right|\right]. \tag{34}$$

Let $a_n = (2^m/n)\left(n^{-\gamma}+\eta_m\right) + (2^m/n)^{5/2}$. From conditions (A1)(ii), (A3)(ii), and (A4)(ii) and Lemma 1, we have

$$\begin{aligned}\sum_{i=1}^n E\tilde U_i^2 &\le \sum_{i=1}^n E\left[B_2\left(e_i(t) - \sqrt{\tfrac{2^m}{n}}X_i^T\theta\right)\left(e_i(t) - \sqrt{\tfrac{2^m}{n}}X_i^T\theta\right)^2\right]\left(\int_{A_i}K_m(t,s)\,ds\right)^2\\ &\le C\sum_{i=1}^n E\left[\|X_1\|^3\left\|\beta(t_i)-\beta(t)\right\|^3 + \left(\tfrac{2^m}{n}\right)^{3/2}\|X_1\|^3\|\theta\|^3\right]\left(\int_{A_i}K_m(t,s)\,ds\right)^2\\ &= O\left(\frac{2^m}{n}\left(n^{-\gamma}+\eta_m\right) + \left(\frac{2^m}{n}\right)^{5/2}\right) = O(a_n).\end{aligned} \tag{35}$$

To bound the second term on the right-hand side of (34), we split it into two parts. Let $S_1 = \{(j,i): 1\le j-i\le d_n,\ 1\le i<j\le n\}$ and $S_2 = \{(j,i): 1\le i<j\le n\}\setminus S_1$, where $d_n\to\infty$ is a sequence of positive integers with $d_n = O(\log n)$ as $n\to\infty$. We have

$$\sum_{1\le i<j\le n}\left|\operatorname{Cov}\left(\tilde U_i,\tilde U_j\right)\right| = \sum_{S_1}\left|\operatorname{Cov}\left(\tilde U_i,\tilde U_j\right)\right| + \sum_{S_2}\left|\operatorname{Cov}\left(\tilde U_i,\tilde U_j\right)\right| =: J_{n1} + J_{n2}. \tag{36}$$

For $J_{n1}$, by (35) and the choice of $d_n$, we have

$$J_{n1} \le C\sum_{j=1}^{d_n}\sum_{i=1}^{n-j}\left|\operatorname{Cov}\left(\tilde U_i,\tilde U_{i+j}\right)\right| \le d_n\sum_{i=1}^n E\tilde U_i^2 = d_n\,O(a_n) = O(a_n\log n). \tag{37}$$

By conditions (A1)(ii), (A3)(ii), and (A4) and Lemmas 1 and 2, we have

$$E\left|\tilde U_i\right|^{2+\delta} \le C\,E\left|e_i(t) - \sqrt{\tfrac{2^m}{n}}X_i^T\theta\right|^{3+\delta}\left(\int_{A_i}K_m(t,s)\,ds\right)^{2+\delta} \le C\left[\left\|\beta(t_i)-\beta(t)\right\|^{3+\delta} + \left(\tfrac{2^m}{n}\right)^{(3+\delta)/2}\right]\left(\int_{A_i}K_m(t,s)\,ds\right)^{2+\delta}. \tag{38}$$
For $J_{n2}$, by (38), conditions (A1)(ii) and (A3)(ii), and Lemmas 1–3, we have

$$\begin{aligned}J_{n2} &\le C\sum_{j=d_n+1}^{n}\alpha(j)^{\delta/(2+\delta)}\sum_{i=1}^{n-d_n-1}\left(E\left|\tilde U_i\right|^{2+\delta}\right)^{1/(2+\delta)}\left(E\left|\tilde U_{i+d_n+1}\right|^{2+\delta}\right)^{1/(2+\delta)}\\ &\le C\sum_{j=d_n+1}^{n}\alpha(j)^{\delta/(2+\delta)}\sum_{i=1}^{n}\left[\left\|\beta(t_i)-\beta(t)\right\|^{3+\delta} + \left(\tfrac{2^m}{n}\right)^{(3+\delta)/2}\right]^{2/(2+\delta)}\left(\int_{A_i}K_m(t,s)\,ds\right)^2\\ &\le C\sum_{j=d_n+1}^{n}\alpha(j)^{\delta/(2+\delta)}\left[\frac{2^m}{n}\left(n^{-\gamma}+\eta_m\right) + \left(\frac{2^m}{n}\right)^{5/2}\right]\\ &\le C\,a_n\,(d_n+1)^{-a}\sum_{j=d_n+1}^{n}j^{a}\alpha(j)^{\delta/(2+\delta)} = o(a_n). \end{aligned} \tag{39}$$

From (36), (37), and (39), one gets $\sum_{1\le i<j\le n}|\operatorname{Cov}(\tilde U_i,\tilde U_j)| = O(a_n\log n)$. Combining this with (34) and (35), we have

$$\operatorname{Var}\left(R_n(\theta;t)\right) = O\left(\left(\frac{n}{2^m}\right)^2 a_n\log n\right), \tag{40}$$

uniformly in $t\in[0,1]$. Note that $E\,R_n(\theta;t) = 0$. We obtain

$$R_n(\theta;t) = O_p\left(\frac{n}{2^m}\sqrt{a_n\log n}\right) = O_p\left(\left\{\left(\frac{n}{2^m}\right)^{1/2}\left(n^{-\gamma}+\eta_m\right)^{1/2} + \left(\frac{2^m}{n}\right)^{1/4}\right\}\log n\right), \tag{41}$$

uniformly in $t\in[0,1]$.

Lemma 6. Under the assumptions of Theorem 1, for fixed $\theta$ it holds that

$$\sup_{t\in[0,1]}\left|E\,G_n(\theta;t) - \frac{1}{2}\theta^T\Omega_x\theta\right| = O\left(\sqrt{\frac{n}{2^m}}\left(n^{-\gamma}+\eta_m\right)\right) = o(1), \tag{42}$$

where $G_n(\theta;t)$ is defined by (27).

Proof of Lemma 6. By condition (A3)(i), Lemma 2, and Lemma 1 in Bai et al. [29], we have

$$\begin{aligned}E\,G_n(\theta;t) &= \frac{n}{2^m}\sum_{i=1}^n E\left\{E\left[\rho\left(\varepsilon_i + e_i(t) - \sqrt{\tfrac{2^m}{n}}X_i^T\theta\right) - \rho\left(\varepsilon_i + e_i(t)\right)\Bigm| X_i\right]\right\}\int_{A_i}K_m(t,s)\,ds\\ &= \frac{n}{2^m}\sum_{i=1}^n E\left[\left(\frac{1}{2}\,\frac{2^m}{n}\left(X_i^T\theta\right)^2 - e_i(t)\sqrt{\frac{2^m}{n}}\,X_i^T\theta\right)B_1(X_i)\right]\left(1+o(1)\right)\int_{A_i}K_m(t,s)\,ds\\ &= \frac{1}{2}\theta^T\left[\sum_{i=1}^n E\left(B_1(X_i)X_iX_i^T\right)\int_{A_i}K_m(t,s)\,ds\right]\theta\left(1+o(1)\right) - \sqrt{\frac{n}{2^m}}\,\theta^T\sum_{i=1}^n E\left[B_1(X_i)X_iX_i^T\right]\left(\beta(t_i)-\beta(t)\right)\int_{A_i}K_m(t,s)\,ds\\ &= \frac{1}{2}\theta^T\Omega_x\theta - \sqrt{\frac{n}{2^m}}\,O\left(n^{-\gamma}+\eta_m\right). \end{aligned} \tag{43}$$

This finishes the proof of Lemma 6.

Lemma 7. Under the assumptions of Theorem 3, it holds that

$$W_n^d(t) \xrightarrow{\ D\ } N\left(0,\ \omega_0^2\,\kappa(t)\,\Gamma_x\right), \tag{44}$$

where $W_n^d(t) = W_n(t_m)$ with $t_m = \lfloor 2^m t\rfloor/2^m$, and $W_n(t)$ is defined by (11).

Proof of Lemma 7. Recall that

$$W_n(t) = \sqrt{\frac{n}{2^m}}\sum_{i=1}^n\psi(\varepsilon_i)X_i\int_{A_i}K_m(t,s)\,ds. \tag{45}$$

Let $\xi_i = \sqrt{n/2^m}\,\psi(\varepsilon_i)X_i\int_{A_i}K_m(t,s)\,ds$. First, we calculate the variance–covariance matrix. Note that $E[\psi(\varepsilon_1)\mid X_1] = o(1)$ by condition (A3)(i). We have

$$\operatorname{Var}\left(W_n(t)\right) = \sum_{i=1}^n E\,\xi_i\xi_i^T + 2\sum_{1\le i<j\le n}\operatorname{Cov}\left(\xi_i,\xi_j\right). \tag{46}$$

By the same arguments as in the proof of Lemma 5, we obtain

$$\sum_{1\le i<j\le n}\operatorname{Cov}\left(\xi_i,\xi_j\right) = o(1). \tag{47}$$

We now treat the first term on the right-hand side of (46). Using the compact support and Lipschitz properties of $\phi$, one can show that $E_0(t,\cdot)$ is Lipschitz uniformly in $t$, so that

$$\sup_{t\in[0,1]}\left|E_0\left(2^m t, 2^m v_i\right) - E_0\left(2^m t, 2^m u_i\right)\right| = O\left(2^m n^{-1}\right), \tag{48}$$

and the Lipschitz property of $\kappa(\cdot)$ implies

$$\kappa(v_i) = \kappa(s_i) + O\left(n^{-1}\right), \tag{49}$$

where $u_i$ and $v_i$ belong to $A_i$. By condition (A3)(iii), (48), (49), and a standard calculation with $u_i, v_i \in A_i$, we have

$$\sum_{i=1}^n E\,\xi_i\xi_i^T = \sum_{i=1}^n E\left[E\left(\xi_i\xi_i^T\mid X_i\right)\right] = \frac{n}{2^m}\,\Gamma_x\sum_{i=1}^n\left(\int_{A_i}K_m(t,s)\,ds\right)^2 = O\left(\rho_n n + \frac{2^m}{n}\right) + 2^{-m}\,\Gamma_x\int_0^1 E_m^2(t,s)\,\kappa(s)\,ds. \tag{50}$$

Since $2^{-m}\int_0^1 E_m^2(t,s)\kappa(s)\,ds$ is unstable as a function of $t$, we compute it at dyadic points of order $m$. By Lemma 6.1 in Antoniadis et al. [15], we have

$$\lim_{m\to\infty} 2^{-m}\int_0^1 E_m^2\left(t_m,s\right)\kappa(s)\,ds = \kappa(t)\,\omega_0^2. \tag{51}$$

Combining (46), (47), and (50), we have

$$\operatorname{Var}\left(W_n^d(t)\right) = \omega_0^2\,\kappa(t)\,\Gamma_x + o(1). \tag{52}$$

Second, we show (44). Redefine

$$\xi_i = \sqrt{\frac{n}{2^m}}\,\psi(\varepsilon_i)X_i\int_{A_i}K_m(t,s)\,ds - E\left[\sqrt{\frac{n}{2^m}}\,\psi(\varepsilon_i)X_i\int_{A_i}K_m(t,s)\,ds\right]. \tag{53}$$

Noting that $E[\psi(\varepsilon_1)\mid X_1] = o(1)$, (44) reduces to

$$\sum_{i=1}^n\xi_i^d \xrightarrow{\ D\ } N\left(0,\ \omega_0^2\,\kappa(t)\,\Gamma_x\right), \tag{54}$$

where $\xi_i^d$ is defined like $\xi_i$ with $t_m$ in place of $t$. As $\psi(\cdot)$ is not necessarily bounded, we employ a truncation argument. Denote $\psi_L(\cdot) = \psi(\cdot)\,I\left(|\psi(\cdot)|\le L\right)$, define as before $\bar\xi_i^d = \sqrt{n/2^m}\,\psi_L(\varepsilon_i)X_i\int_{A_i}K_m(t_m,s)\,ds - E\left[\sqrt{n/2^m}\,\psi_L(\varepsilon_i)X_i\int_{A_i}K_m(t_m,s)\,ds\right]$, and let $\bar\Gamma_x$ be defined as before with $\gamma(\cdot)$ replaced by $E\left[\psi_L^2(\varepsilon_1)\mid X_1 = \cdot\,\right]$. Similarly to the proof of Theorem 2 in Cai et al. [28], using Doob's large-block and small-block technique, one can show that

$$\sum_{i=1}^n\bar\xi_i^d \xrightarrow{\ D\ } N\left(0,\ \omega_0^2\,\kappa(t)\,\bar\Gamma_x\right). \tag{55}$$

Let $\tilde\xi_i^d = \xi_i^d - \bar\xi_i^d$. To prove (54), by (55) it suffices to show that

$$\operatorname{Var}\left(\sum_{i=1}^n\tilde\xi_i^d\right) \to 0, \tag{56}$$

first as $n\to\infty$ and then as $L\to\infty$. Let $\tilde\Gamma_x$ be defined as before with $\gamma(\cdot)$ replaced by $E\left[\psi^2(\varepsilon_1)I\left(|\psi(\varepsilon_1)|>L\right)\mid X_1 = \cdot\,\right]$. With the same argument as for (46), one gets

$$\operatorname{Var}\left(\sum_{i=1}^n\tilde\xi_i^d\right) = \omega_0^2\,\kappa(t)\,\tilde\Gamma_x + o(1). \tag{57}$$
Since

$$\tilde\Gamma_x = E\left[E\left(\psi^2(\varepsilon_1)I\left(|\psi(\varepsilon_1)|>L\right)\mid X_1\right)X_1X_1^T\right] \le E\left[E\left(\frac{|\psi(\varepsilon_1)|^{2+\delta}}{L^{\delta}}\,I\left(|\psi(\varepsilon_1)|>L\right)\Bigm| X_1\right)X_1X_1^T\right] \le L^{-\delta}\,E\left[E\left(|\psi(\varepsilon_1)|^{2+\delta}\mid X_1\right)X_1X_1^T\right] \to 0 \tag{58}$$

as $L\to\infty$ by condition (A3)(iii), (56) holds. This completes the proof of Lemma 7.

Proof of Theorem 1. From Lemmas 5 and 6 and (29), for fixed $\theta$ we have

$$\sup_{t\in[0,1]}\left|G_n(\theta;t) - \frac{1}{2}\theta^T\Omega_x\theta + W_n(t)^T\theta\right| = O_p\left(\sqrt{\frac{n}{2^m}}\left(n^{-\gamma}+\eta_m\right) + \left\{\left(\frac{n}{2^m}\right)^{1/2}\left(n^{-\gamma}+\eta_m\right)^{1/2} + \left(\frac{2^m}{n}\right)^{1/4}\right\}\log n\right) = O_p(b_n) = o_p(1). \tag{59}$$

From Lemma 7, it is easy to see that $W_n(t)$ is stochastically bounded. Since the convex function $\lambda_n(\theta) = G_n(\theta;t) + W_n(t)^T\theta$ converges in probability to the convex function $\lambda(\theta) = \frac{1}{2}\theta^T\Omega_x\theta$, it follows from the convexity Lemma 4 that, for any compact set $K$,

$$\sup_{\theta\in K}\sup_{t\in[0,1]}\left|G_n(\theta;t) - \frac{1}{2}\theta^T\Omega_x\theta + W_n(t)^T\theta\right| = O_p(b_n) = o_p(1). \tag{60}$$

Notice that the convexity lemma strengthens the pointwise result to uniform convergence on compact subsets of $\mathbb{R}^p$. This completes the proof of Theorem 1.

Proof of Theorem 2. To obtain the Bahadur representation of the WME, the idea of the proof is to approximate $G_n(\theta;t)$ by a quadratic function whose minimizing value has a known asymptotic behavior, and then to show that $\hat\theta_n$ lies close enough to that minimizing value to share its asymptotic behavior. The first step is already done by Theorem 1 and Lemma 7. Let $\bar\theta_n = \Omega_x^{-1}W_n(t)$ and $c_n^2 = b_n\log n$. We now carry out the second step. The argument is complete if we show, for each $\varepsilon>0$, that

$$P\left(\left\|\hat\theta_n - \bar\theta_n\right\| > c_n\varepsilon\right) = o(1). \tag{61}$$

The argument is similar to the proof of Theorem 1 in Pollard [33], whose method we extend to obtain the Bahadur representation of the WME. By Theorem 1, the compact set $K$ can be chosen to contain a closed ball $B_n$ with center $\bar\theta_n$ and radius $c_n\varepsilon$ with probability arbitrarily close to one. This implies that

$$\Delta_n = c_n^{-2}\sup_{\theta\in B_n}\sup_{t\in[0,1]}\left|G_n(\theta;t) - \frac{1}{2}\theta^T\Omega_x\theta + W_n(t)^T\theta\right| = O_p\left(\frac{1}{\log n}\right) = o_p(1). \tag{62}$$

Now consider the behavior of $G_n(\theta;t)$ outside $B_n$. Suppose $\theta = \bar\theta_n + c_n\varrho\upsilon$ with $\varrho > \varepsilon$ and $\upsilon$ a unit vector, and define $\theta^*$ as the boundary point of $B_n$ that lies on the line segment from $\bar\theta_n$ to $\theta$, that is, $\theta^* = \bar\theta_n + c_n\varepsilon\upsilon$. The convexity of $G_n(\theta;t)$ and the definition of $\Delta_n$ imply

$$\frac{\varepsilon}{\varrho}\,G_n(\theta;t) + \left(1 - \frac{\varepsilon}{\varrho}\right)G_n(\bar\theta_n;t) \ge G_n(\theta^*;t) \ge \frac{1}{2}\theta^{*T}\Omega_x\theta^* - W_n(t)^T\theta^* - c_n^2\Delta_n \ge \frac{1}{2}\bar\theta_n^T\Omega_x\bar\theta_n - W_n(t)^T\bar\theta_n + c_n^2\left(\frac{1}{2}\varepsilon^2\upsilon^T\Omega_x\upsilon - \Delta_n\right) \ge G_n(\bar\theta_n;t) + c_n^2\left(\frac{1}{2}\varepsilon^2\upsilon^T\Omega_x\upsilon - 2\Delta_n\right). \tag{63}$$

It follows that

$$G_n(\theta;t) \ge G_n(\bar\theta_n;t) + c_n^2\left(\frac{1}{2}\varepsilon^2\upsilon^T\Omega_x\upsilon - 2\Delta_n\right), \tag{64}$$

uniformly in $t\in[0,1]$; moreover, the right-hand side of this inequality does not depend on $\theta$. Hence, for each $\varepsilon>0$ and $n$ large enough, we have

$$P\left(\inf_{\|\theta-\bar\theta_n\|>c_n\varepsilon}G_n(\theta;t) \ge G_n(\bar\theta_n;t) + c_n^2\left(\frac{1}{2}\varepsilon^2\upsilon^T\Omega_x\upsilon - 2\Delta_n\right)\right) \to 1, \tag{65}$$

which leads to $\|\hat\theta_n - \bar\theta_n\| \le c_n\varepsilon$ with probability tending to one: since $2\Delta_n < \frac{1}{2}\varepsilon^2\upsilon^T\Omega_x\upsilon$ holds with probability tending to one, the minimum of $G_n(\theta;t)$ cannot occur at any $\theta$ with $\|\theta - \bar\theta_n\| > c_n\varepsilon$. So (61) holds, which leads to

$$\sqrt{\frac{n}{2^m}}\left(\hat\beta(t) - \beta(t)\right) = \Omega_x^{-1}W_n(t) + O_p(c_n), \tag{66}$$

uniformly in $t\in[0,1]$. This completes the proof of Theorem 2.

Proof of Theorem 3. The result follows directly from Theorem 2 and Lemma 7. This completes the proof of Theorem 3.

---

*Source: 1025452-2020-09-03.xml*
# Challenges and Countermeasures of Arab Immigrants and International Trade in the Era of Big Data

**Authors:** Yi Huang; Miao Shao
**Journal:** Mathematical Problems in Engineering (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1025453

---

## Abstract

In recent years, the development of intelligent iteration technology and the use of big data processing technology have set off an upsurge, and the analysis and application of artificial intelligence algorithms have received more and more attention. In order to face the challenges of Arab migration and international trade, this paper constructs the basic structure of an Arab migration action imitation model. A simulated servo clustering algorithm based on big data and intelligent iteration is used; through the analysis of the pseudo servo clustering algorithm, an optimization model is established and a big data analysis system is formed. The paper focuses on the wide application of big data statistics to the construction of an Arab immigration and entrepreneurship data system. It studies and applies big data statistics and an intelligent iterative algorithm for Arab immigration behavior, focuses on the annual ladder degree of Arab immigrants, and constructs a pseudo servo cluster system based on the intelligent iterative algorithm. Finally, a simulation experiment verifies whether the clustering model can accurately retrieve the behavior of Arab immigrants in China. The era of big data provides good development opportunities for Arab immigrants and international trade, but it also poses severe challenges. The study provides a reference for strengthening the analysis of international trade and puts forward countermeasures in combination with the actual situation, so as to improve the efficiency of international trade management and promote the better implementation of international trade.

---

## Body

## 1. Introduction

With the development of science and technology, computer information technology is no longer limited to narrow information-processing applications: information technology and traditional industries are closely integrated, and emerging "Internet plus" industries keep appearing. Big data is a barrier-free information technology that ignores time and space. Once massive application information is used in international trade, a trade processor can directly select partners who meet its own conditions. Over the years, developed countries have dominated the international trade market with their advanced science, technology, and production experience, and have even formed monopolies in some electronics industries. A large amount of capital flows to developed countries, which in turn promotes their trade system innovation. This basic and superior condition has increased the total trade revenue between developed and developing countries. However, with the growth of big data and information technology, trade forms have become more flexible: international trade realizes product informatization, technology informatization, and transaction informatization. If developing countries accurately grasp trade information, analyze market data, and reform and adjust their own economic structures in a timely fashion, they can also gain advantages and benefits in international trade competition. Therefore, the background of big data provides new opportunities for developing countries.
As for big data technology, it is usually defined as analyzing certain quantitative statistical results of big data, and its analysis method is to mix them through some internal logical restrictions. A common and effective way to realize this analysis is the intelligent iterative method [1]. Arab immigrants in all time periods produce a large amount of data, and the public often focuses on their behavior after immigration; the common method of collecting behavior data is still the questionnaire flow survey [2]. Although the flow survey questionnaire has the advantages of simplicity and easy quantification, which helps analyze and process the survey results efficiently [3], its disadvantages are also obvious. For research on the whereabouts of international trade and employment, the questionnaire format cannot ensure research quality or stable recovery, owing to the subjective motivation, subjective thinking, and other factors of the respondents; together, these factors reduce the accuracy of analysis results [4]. Therefore, an Arabian immigration mimicry model based on questionnaire surveys has great limitations. Moreover, the financial data of Arab immigrants has a large rate of change and poor real-time coverage, so the common methods and models of the past lack coverage prediction for the various data nodes, producing overall errors that are difficult to discover as the prediction work develops; it is therefore necessary to study the construction model of pseudo nodes of Arab immigrants' financial international trade data [5]. Under this background, this paper studies and constructs the intelligent iterative servo cluster and big data analysis method under the pseudo model.

The innovation of this paper is to study the annual ladder of Arab immigrants and to build a pseudo servo cluster system based on the intelligent iterative algorithm. Big data statistics and the intelligent iterative algorithm for Arab immigration behavior are applied in the system. This paper focuses on widely applicable big data statistical methods for constructing an Arab immigration and entrepreneurship data system. The system ensures accuracy and predictability to a certain extent even when the sample data structure changes and the model is not reconstructed, so as to improve the efficiency of previous model simulation data analysis. The efficient application of the model has been fully developed in the system and effectively applied to the pseudo data query of the servo cluster.

This paper is divided into four parts. The first part is an overview of the state of research at home and abroad and the construction direction of this solution. The second part introduces the fields in which big data has not yet been applied and developed. The third part summarizes, on the basis of existing common technologies, the application of big data and intelligent iteration technology, the establishment of the pseudo servo cluster, and the construction of statistical pseudo analysis. The fourth part uses the established servo cluster system with integrated intelligent iteration technology to carry out statistical data practice on open data sets and runs a test process to check whether the predicted servo cluster system can accurately predict the immigration financial operations of Arab immigrants.
This paper analyzes the difficulty that China has traditionally been unable to imitate the international trade structure of Arab immigrants, and points out that intelligent iteration based on the imitation servo cluster, combined with a big data integrated system, can realize statistical simulation analysis that tracks and processes data on the destinations of Arab immigrants.

## 2. Related Work

Although relevant researchers have been trying for many years to analyze and study countermeasures in the era of big data, some problems remain unsolved, namely creating a financial scenario for the Arab migration model [6]. Zarrinpar et al. [7] proposed that, in the era of big data on Arab immigrants in China, the goose swarm algorithm should be used for optimization, finding the best function set and intelligently iterating the HGT framework to maximize the expectation of Arab immigrants. Razali et al. [8], based on the coupling cross-polarization method and the hierarchical polarization complex attached to the Kelvin discretization, proved through experiments that using the characteristic international trade of Arab immigrants theoretically supports the corresponding strategies of Arab immigrants and international trade. Li et al. [9] put forward the discrete normalization of hierarchically differentiated data followed by feature identification, combined with a coupled intelligent iterative operator, to extract the positive elements of the financial behavior of Arab immigrants in China; this showed that trade maximization is related to the positive behavior elements of Arab immigrants and effectively raised the error-free rate to 93%. Ma and Wang [10] analyzed the case of Arab immigrants through the cycles of means of transportation, nonhuman factors, geographical conditions, and other factors, innovatively combining a grey correlation algorithm to avoid the algorithm's adaptive failures, so that the no-difference characteristic value of the financial industry of Arab immigrants in China reached 0.7, making the method meet application requirements. Hulme [11], combining the nine characteristics of the expected migration of Arab immigrants in China, obtained analyzable big data through benchmarking and, using hierarchical differentiation, effectively increased the utilization rate of big data to 98%; by making full use of this conclusion, China's Arab immigrants adjusted corresponding planning cases in time and effectively improved the international financial evaluation index, which had a positive impact on GDP. Epanchin-Niell et al. [12] believe that if China's Arab immigrants build a blockchain differentiated data query center, statistical expectations about the financial behavior of China's Arab immigrants can be formed through the stability of blockchain, making that financial behavior credible. Wang et al. [13] carried out statistical-expectation big data work, collected the information of Arab immigrants at the gateway, made corresponding planning predictions on the expectation big data through hierarchical differentiation, trapezoidal prediction, and object stability methods, allocated analysis resources for the financial behavior of pseudo Arab immigrants, and realized high accuracy of big data statistical prediction in the integration of the pseudo data model.
Shu et al. [14] found that the singular discrete model can improve the construction efficiency of a big data system; through the layered complex subdatabase formed by the intelligent iteration of two terminals, they verified the efficiency improvement of the singular discrete method and achieved the layered construction of Arab immigrants and entrepreneurs, although mimicry remained difficult to achieve. Milsom et al. [15] propose building a database of Arab immigrant and international trade behavior, querying characteristic conditions, creating the best solution based on big data, creating a trusted field, finding the best working environment for Arab immigrant entrepreneurs to make decisions, and analyzing immigration trade status through the hierarchical tracking of big data. Quaglia et al. [16] screened data through a combined intelligent iteration strategy, separated the whereabouts data and behavioral eigenvalues of Arab immigrants in China, and then substituted them into the model for aggregate analysis to verify feasibility, realizing the digital iteration of Arab immigrants, though the accuracy of the mimicry is not yet sufficient to meet a universal standard.

To sum up, in the process of pseudo statistics and analysis of Arab migration and international trade, there is a lack of effective ways to obtain real-time data on Arab migration, the error of the analysis models is large, the Arab migration statistical information database is small, and the analysis models have limitations [17, 18]. As for research on the challenges and countermeasures faced by Arab immigrants and international trade, most current results are based on statistical data, and very few combine big data technology and artificial intelligence algorithms for diversified and convenient analysis [19–21]. On the other hand, big data analysis technology is mostly coupled with international trade in general and less combined with trade direction. In terms of algorithm structure, most intelligent iterative algorithms only fit static data, and there are few mimicry algorithm models specially designed for complex mimicry discrete data [22]. Therefore, in today's era of big data, research on the mimicry model of Arab immigration and international trade is of great significance.

## 3. Analysis Method of Pseudoservo Cluster Based on Intelligent Iterative Technology

### 3.1. Simulation Analysis of Intelligent Iterative Algorithm

An intelligent iterative algorithm can realize iterative estimation of data through iterative differentiation, hierarchical classification, and optimization. The key to intelligent iteration lies in the hierarchical conditions, the iterative analysis, and the system construction: through the limitation of hierarchical conditions, progressive iteration, and discrete classification, the system construction is realized. This process reflects the critical, non-negligible role of the hierarchical conditions in intelligent iteration [23]. A general intelligent iterative algorithm only focuses on the vicinity of the demand point; when the object data changes dynamically, the system must reload the complete set of objects, which prevents the system from completing the corresponding requirements within the specified time when analyzing the pseudo database [24]. A sketch of such a loop, and of this reload limitation, is given below.
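The paper never states this iterative loop as pseudocode; purely as a hedged illustration of the scheme described above (assign under a hierarchical condition, refine, stop when stable), the following sketch implements a generic k-means-style iterative clustering loop, with all names and tuning values being our assumptions. It also makes the criticized limitation visible: a change in the object set forces a full reload and re-run.

```python
# A generic "iterative clustering" loop -- a hedged k-means-style stand-in for
# the unspecified intelligent iterative algorithm; names and values are ours.
import numpy as np

def iterative_cluster(data, k, tol=1e-6, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(max_iter):
        # iterative analysis: assign each object to its nearest center
        d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # discrete classification: recompute each center from its members
        new_centers = np.array([
            data[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        # hierarchical (stopping) condition
        if np.linalg.norm(new_centers - centers) < tol:
            break
        centers = new_centers
    return labels, centers

pts = np.random.default_rng(1).normal(size=(200, 2))
labels, centers = iterative_cluster(pts, k=3)
# Limitation noted in the text: if the object set changes dynamically, the
# whole loop must be rerun on the reloaded, complete set of objects.
```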
Generally speaking, the scientific mimicry attribute has irreplaceable effectiveness for the application of the common methods of practical hierarchical analysis [25]. Recently, Arab immigration and international trade policies have been strengthened by the relevant departments; facing increasingly challenging international trade, the pseudo non-static behavior of Arab immigration and international trade has become more complex [26].

In this context, aiming at the difficulty of non-real-time analysis in the Arab immigration and international trade model, this paper strengthens the adaptability of servo systems that could previously process only non-dynamic, restrictive data, realizes a blockchain physical-and-chemical system combined with the pseudo state servo cluster system, and builds a more adaptive pseudo state model from the strange resistance of each batch of non-static Arab immigration and international trade data. In the dynamic surface control method for nonlinear systems, input nonlinearity affects the tracking performance of the closed-loop output signal.

Designing dynamic surface control algorithms to improve the transient and steady-state performance of closed-loop nonlinear systems has become one of the research topics in the control field. This paper studies the dynamic surface control of several classes of nonlinear systems. For nonlinear systems with unknown nonlinear links, the structural characteristics of the system are used: extended state observers, finite-time observers, and neural network observers are designed to observe the system state and unknown disturbance signals. On this basis, an output feedback dynamic surface controller, a dynamic surface controller based on a tracking differentiator and an extended state observer, a neural network dynamic surface controller, and an adaptive robust dynamic surface controller are designed. An improved dynamic surface control scheme for multi-motor drive servo systems is proposed to solve the problems of nonlinear links and disturbances in motor servo systems.

### 3.2. Process Demonstration of Intelligent Iterative Pseudo State Servo Cluster Algorithm

In order to simplify the process, the previous servo cluster algorithms are analyzed, summed up, and queued, and the data are discretized, classified, combined, and packaged. Facing the challenges of Arab immigration and international trade and the predictability of countermeasures, the servo cluster algorithm of this model introduces a process-class method with pseudo servo clusters, refines and composes the original basic layers, and then classifies them into classes, so as to realize the construction of the pseudo servo cluster simulation system. The process of analyzing data by the pseudo servo cluster is shown in Figure 1.

Figure 1. The process of analyzing data by servo clustering.

By extracting the elements in Figure 1, the adaptive iterative center exchange results of the servo cluster are differentiated in the mixed center level class, combined with the attraction of the mimicry model. When the servo cluster construction meets the combination conditions, this level class is disabled, so it will be dynamically replaced by the next random level.
### 3.2. Process Demonstration of Intelligent Iterative Pseudo State Servo Cluster Algorithm

In order to simplify the process, the previous servo cluster algorithms are analyzed, summarized, and queued; the data are discretized, classified, combined, and packaged; and, facing the challenges of Arab immigrants in international trade and the predictability of countermeasures, the servo cluster algorithm of this model introduces a process-class method of pseudo servo clusters, refines and recomposes the original basic layers, and then classifies them, so as to realize the construction of the pseudo servo cluster simulation system. The process of analyzing data by pseudo servo cluster is shown in Figure 1.

Figure 1: The process of analyzing data by servo clustering.

By extracting the elements in Figure 1, the adaptive iterative center-exchange results of the servo cluster are differentiated in the mixed center level class, combined with the attraction of the mimicry model. When the servo cluster construction meets the combination conditions, this level class is disabled and is dynamically replaced by the next random level. This means that modifying the condition of one servo stage is independent of the conditions of the other servo stages.

Let the servo cluster algorithm model extract levels $M$ and $N$; it is then easy to obtain the threshold gain relationship $\sum_{i=1}^{c} t_{m_i} > \sum_{i=1}^{c} t_{n_i}$ between $M$ and $N$, that is, within the threshold of the servo cluster classification level, cluster $M$ is better than cluster $N$. When the object has non-subtractive mimicry, if the servo cluster classification level is obtained, the attribute values corresponding to $M$ and $N$ are pushed to meet the conditions, and the characteristic quantity of the $E_o$ level is used, the following holds:

$$G_M > G_N \;\Leftrightarrow\; \bigcup_{i=1}^{t} p_m - E_o > \bigcup_{i=1}^{t} p_n - E_o \;\Leftrightarrow\; \bigcup_{i=1}^{t} p_m > \bigcup_{i=1}^{t} p_n \;\Leftrightarrow\; M > N. \tag{1}$$

Here $p$ denotes the forward feature level quantity, $t$ records the feasible value corresponding to the characteristic quantity of the series, the specific number of levels is indexed by $i$, and $\bigcup_{i=1}^{t} p$ represents the positive hierarchical embodiment of the corresponding level, i.e., the coupling result of the sum of the corresponding characteristic quantities of the levels. Under this condition, the general formula of $E$ is

$$E = \bigcup_{i=1}^{t} p_m + \frac{1}{n}\left(\log\delta + \log(\delta + 1)\right), \qquad \delta_1 = \frac{\sum_{i=1}^{t} r_i + \sum_{i=1}^{t} r_i^2}{\sum_{i=1}^{t} r_i}. \tag{2}$$

In this formula, $\delta \ge 1$ and $r$ reflect the coupling amount of the servo cluster in this layer, which covers the curvilinear synthetic value of the level value; the base of the logarithm is 7. When the pseudo servo cluster is introduced into the model, the layers corresponding to level $M$ and level $N$ also change by a certain amount. When the number of iterations of the two-level difference data $p$ corresponds to the feasible value $t$, and the value lies between the values corresponding to $M$ and $N$, the pseudo incremental complex coupling $E_1$ is

$$E_1 = \frac{\bigcup_{i=1}^{t} p_m + \frac{1}{n}\left(\log\delta_0 + \log(\delta_1 + 1)\right)}{\bigcup_{i=1}^{t} p_m + \frac{1}{n}\left(\log\delta_0 + \log(\delta_1 + 1) + \log(\delta_2 + 2)\right)}, \tag{3}$$

where

$$\delta_1 = \frac{\sum_{i=1}^{t} r_i + \sum_{i=1}^{t} r_i^2 + \sum_{i=1}^{t} r_i^3}{\sum_{i=1}^{t} r_i + \sum_{i=1}^{t} r_i^2}, \qquad \delta_2 = \frac{\sum_{i=1}^{t} r_i + \sum_{i=1}^{t} r_i^2 + \sum_{i=1}^{t} r_i^3 + \sum_{i=1}^{t} r_i^4}{\sum_{i=1}^{t} r_i + \sum_{i=1}^{t} r_i^2 + \sum_{i=1}^{t} r_i^3}. \tag{4}$$

After the corresponding substitution $h_k = \sum_{i=1}^{t} r_i^{\,k+1}$,

$$\delta_1 = \frac{h_0 + h_1 + h_2}{h_0 + h_1}, \qquad \delta_2 = \frac{h_0 + h_1 + h_2 + h_3}{h_0 + h_1 + h_2}. \tag{5}$$

After evolution, the difference coupling for the layered construction of pseudo information is obtained:

$$\Delta E = \bigcup_{i=1}^{t} p_m + \frac{1}{n}\left(\log\delta + \log(\delta + 1)\right) - \frac{\bigcup_{i=1}^{t} p_m + \frac{1}{n}\left(\log\delta_0 + \log(\delta_1 + 1)\right)}{\bigcup_{i=1}^{t} p_m + \frac{1}{n}\left(\log\delta_0 + \log(\delta_1 + 1) + \log(\delta_2 + 2)\right)}. \tag{6}$$

To obtain the parameter difference between the attribute values corresponding to level $M$ and level $N$, only the maximum value of the pseudo complex coupling needs to be calculated. Combined with the characteristics of $\delta_1$ and $\delta_2$, $\Delta E$ reaches its minimum value at

$$E_{\min} = \bigcup_{i=1}^{t} p_m + \frac{1}{n}\left(\log\delta_0 + \log(\delta_1 + 1)\right). \tag{7}$$

When the current value is first small and then large, $\Delta E$ reaches its maximum value at

$$E_{\max} = \bigcup_{i=1}^{t} p_m + \frac{1}{n}\left(\log\delta_0 + \log(\delta_1 + 1) + \log(\delta_2 + 2)\right). \tag{8}$$
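As a numeric illustration of (4)–(8), the sketch below evaluates $\delta_1$, $\delta_2$, $E_{\min}$, and $E_{\max}$ for toy values. The union term is modeled as a plain sum and all inputs are hypothetical, since the paper defines neither the union operator numerically nor any sample data.

```python
import math

def delta(h):
    """delta = (h_0 + ... + h_k) / (h_0 + ... + h_{k-1}), cf. Eq. (5)."""
    return sum(h) / sum(h[:-1])

def level_coupling(p, n, d0, d1, d2=None, base=7):
    """E-style level coupling, cf. Eqs. (7)-(8). The union over p is
    modeled as a plain sum (an assumption); the log base is 7 as stated
    in the text."""
    logs = math.log(d0, base) + math.log(d1 + 1, base)
    if d2 is not None:                   # Emax adds the log(d2 + 2) term
        logs += math.log(d2 + 2, base)
    return sum(p) + logs / n

r = [0.8, 0.6, 0.9]                                # toy coupling amounts r_i
h = [sum(x**k for x in r) for k in (1, 2, 3, 4)]   # h_k = sum_i r_i^(k+1)
d1, d2 = delta(h[:3]), delta(h)                    # Eq. (5)
p, n, d0 = [0.5, 0.7, 0.4], 3, 1.2                 # hypothetical level data
e_min = level_coupling(p, n, d0, d1)               # Eq. (7)
e_max = level_coupling(p, n, d0, d1, d2)           # Eq. (8)
print(d1, d2, e_min, e_max)
```

Since $\log(\delta_2 + 2)$ is positive, $E_{\max} \ge E_{\min}$ by construction, matching the roles the two bounds play in Section 3.3.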
### 3.3. Analysis Process of the Intelligent Iterative Mimicry Servo Cluster Algorithm in the Mimicry Model

When the analysis process imitates the incremental set, it is not necessary to reload the data and establish a different system; instead, the original system serves as the ontology, the data obtained from the previous training are modified, and the classification accuracy is improved through the servo cluster algorithm. When the mimicry dynamically changes the $k$ classification sets to the discrete servo cluster branch level, the new $k$ classification sets must satisfy

$$\frac{E_{\max}}{E_{\min}} \le \bigcup_{i=1}^{t} p_m + n\,(\delta_1 + \delta_2). \tag{9}$$

If the data meet the conditions of intelligent iterative engineering, inequality (9) reduces to

$$\frac{E_{\max}}{E_{\min}} \le \delta_1 + \delta_2 + y_{m,n}. \tag{10}$$

Based on the above formula, the value of $\Delta E$ constrains the new $k$:

$$\Delta E \le \sum_{i=0}^{\infty} \delta_i \,\frac{1}{y_{m,n}}, \tag{11}$$

or

$$k_{\max} = \sum_{i=0}^{\infty} \delta_i + y_{m,n}. \tag{12}$$

Expanding equation (12) yields the overlapping mimicry of the mimicry servo clusters. If the pseudo eigenvalue is combined with the judgment condition, a higher efficiency than generalization can be achieved, that is, a more efficient model establishment. Therefore, the hierarchical-classification mimicry and dynamic effect of the servo cluster system improve not only the efficiency but also the universality, so as to meet the corresponding update requirements.
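A minimal sketch of how conditions (10) and (12) could gate such a dynamic update follows; the function names and numbers are hypothetical, and the paper's infinite sum over $\delta_i$ is assumed to arrive as a finite decaying sequence.

```python
def accept_dynamic_update(e_max, e_min, d1, d2, y_mn):
    """Eq. (10): the new k classification sets are adopted -- without
    reloading the data -- iff Emax/Emin <= delta1 + delta2 + y_{m,n}."""
    return e_max / e_min <= d1 + d2 + y_mn

def k_max(deltas, y_mn):
    """Truncated form of Eq. (12): kmax = sum_i delta_i + y_{m,n}."""
    return sum(deltas) + y_mn

print(accept_dynamic_update(e_max=2.4, e_min=1.5, d1=0.9, d2=0.8, y_mn=0.2))
print(k_max([0.9, 0.45, 0.22, 0.11], y_mn=0.2))   # -> 1.88
```

When the check fails, the system falls back to rebuilding, so the inequality is what separates the incremental path of this subsection from the full reload criticized in Section 3.1.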
### 3.4. Prototype-Level Big Data Construction of the Intelligent Iterative Pseudostate Servo Cluster Algorithm

To complete the analysis task, the intelligent iterative simulation servo cluster is combined with the corresponding fusion algorithm for hierarchical classification under non-static operation. If the clutch levels and aggregation are simulated according to the level eigenvalues in the system process, the system performance can be studied quantitatively. The modular layers designed in this paper are the import layer, the analysis layer, and the export hybrid layer; a minimal sketch of this layering is given at the end of this subsection. The processing of Arab immigrant employment data at the big data level is shown in Figure 2.

Figure 2: The process of big data analysis of the employment data of Arab immigrants.

After analysis phase I ends, the hierarchical model loads the obtained levels into the database level module and into the level-class analysis of Arab immigrants and international trade. Foreign investment and trade play an important role in the modern economy of Arab countries. Owing to convenient transportation and the Arab business tradition, the Arab region has been an important place for world investment and trade since ancient times, and the increasingly developed commercial exchanges among Arab countries have also promoted their internal commodity exchanges and foreign trade. Arab countries carry out foreign investment activities according to national law, whose main provisions are as follows: integrated trading companies can only be operated by Arab residents; joint ventures established outside the Zishan trade zone must be held by Arab residents with a shareholding of more than 51%; when a foreign company establishes a branch or representative office in Arabia, it must be guaranteed by local residents and pay the guarantor a fee every year; representative offices established by foreign companies in Arabia may not directly engage in trade or other economic activities; and foreigners must conduct foreign trade in Arab countries in the name of local guarantors or agents.

In general, the level database displays the names and level classes in the large database, while the calculation module holds the original hierarchical data collection part. The layered mimicry servo cluster algorithm is based on the optimized servo cluster algorithm whose basic architecture was given above. The hierarchical classification set can be divided into class I, class II, and class III Arab immigrants; the behavior results of their financial operations are shown in Figure 3.

Figure 3: Simulation analysis of employment by Arab immigrants.

Figure 3 shows that, under the pseudo servo cluster algorithm, as the number of iterations increases intelligently, the selection rate of class I, class II, and class III Arab immigrants participating in pseudo international finance increases accordingly; the promotion rate of class II is relatively stable, and the unilateralization rates of class II and class III are similar, which matches recognized international financial and trade behavior. In the classification of the pseudo servo cluster algorithm, the development of the IHF platform provides substantial innovative help, yielding the intelligent iterative framework of UISG, which realizes non-static tracking of the pseudo data framework of Arab immigrants' finance and trade and gives simulation conclusions closer to actual conditions.

The hierarchical classification data structure of the system is based on the big data model of IOV, a commonly used framework model; its layer classes include the GIO layer, the EGU layer, and the PRO layer. This big data hierarchical classification data structure is used to simulate the international trade of class I, II, and III Arab immigrants, as shown in Figure 4.

Figure 4: Simulation analysis of international trade by Arab immigrants.

The trend structure in Figure 4 shows that, in the pseudo servo cluster model, when the number of iterations and the intelligent parameter level domestication are superimposed, the accurate correlation rate of the pseudo international trade selection of class I, class II, and class III Arab immigrants also improves; the correlation rates of class I and class II are similar in the early stage, and those of class II and class III are similar in the later stage. The reason is that the mimicry servo cluster model performs non-static analysis and integration of the original Arab immigration data, so the temporal influence is more pronounced for one class of Arab immigrants while the other two classes show a certain lag.

For the hierarchically classified pseudo data, the comprehensive model of the big data pseudo servo cluster is formalized, and the resulting hierarchical classification is applicable to hierarchical simulation prediction verification. The raw data for the simulation test come from real Arab immigrants and international trade in a certain place, used as the basic training data; the prediction results are shown in Figure 5.

Figure 5: Big data processing prediction analysis results based on mimetic servo clusters.

Figure 5 shows that, under the pseudo servo cluster algorithm, as the number of iteration levels increases, the original data and the predicted data differ to a certain extent, and their corresponding characteristic values show certain mutations, but the differences remain layered with the classification of iteration levels. The reason is that the level classes are classified within level classes, the real data contain unpredictable influencing factors, and the correlation is highlighted in the hierarchical process. These unpredictable influencing factors do not change with the number of iterations, so it can be concluded that the pseudo servo cluster model presented in this paper has good adaptability and accuracy for big data application and prediction.
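As promised above, here is a minimal sketch of the import/analysis/export-hybrid layering; the record schema, the grouping key, and the function names are all illustrative assumptions, since the paper only names the three layers.

```python
def import_layer(raw_records):
    """Admit records and discard those unusable for hierarchical analysis."""
    return [r for r in raw_records if r.get("year") is not None]

def analysis_layer(records):
    """Group records into the class I/II/III levels used in Figures 3-5."""
    levels = {"I": [], "II": [], "III": []}
    for r in records:
        levels[r["level"]].append(r)
    return levels

def export_hybrid_layer(levels):
    """Aggregate per-level statistics for downstream prediction."""
    return {name: len(group) for name, group in levels.items()}

records = [{"year": 2019, "level": "I"}, {"year": 2020, "level": "III"},
           {"year": None, "level": "II"}]
print(export_hybrid_layer(analysis_layer(import_layer(records))))
# -> {'I': 1, 'II': 0, 'III': 1}
```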
## 4. Result Analysis and Discussion

### 4.1. Retest Experiment and Data Analysis

To ensure the validity of the retest experiment, the pseudo model proposed above is substituted into the big data system for a retest. The preparation of the experiment requires that all prior notification parameters in the algorithm be reflected; for convenience, this paper selects the pseudo data of Arab immigration and international trade from 2015 to 2020 for review. Following the ideas elaborated above, more than 30 kinds of characteristics of Arab immigrants are selected, such as their studies, gender, grade, and immigration years. Taking the 2015 data on Arab immigrants and international trade as the base, the model predicts the destinations of China's Arab immigrants in 2016, and the prediction is compared with the actual 2016 situation to verify the accuracy of the algorithm. The pseudo data of each year are then fused with the base data to predict the following year and compute its accuracy, and so on.
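The year-by-year fuse-and-predict loop just described can be sketched as a rolling retest. The `predict` and `accuracy` callables stand in for the pseudo servo cluster model and its scoring, both unspecified in the paper, and the data layout is an assumption.

```python
def rolling_retest(yearly_data, predict, accuracy):
    """Predict year t+1 from all data up to year t, score it against the
    actual year t+1, then fuse the next year into the base -- the retest
    protocol described above."""
    years = sorted(yearly_data)
    base, scores = [], {}
    for t, t_next in zip(years, years[1:]):
        base.extend(yearly_data[t])                 # fuse year t into base
        scores[t_next] = accuracy(predict(base), yearly_data[t_next])
    return scores

# toy run: "predict" echoes the base, "accuracy" is set overlap
data = {2015: ["A", "B"], 2016: ["B", "C"], 2017: ["C"]}
overlap = lambda pred, real: len(set(pred) & set(real)) / len(real)
print(rolling_retest(data, predict=lambda base: base, accuracy=overlap))
# -> {2016: 0.5, 2017: 1.0}
```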
In this study, the data are analyzed many times and the results recorded. To verify that the proposed pseudo servo cluster algorithm effectively generates new feature attribute branches among all levels of the data structure, this study builds, based on the above experimental method, a retrieval line model system on the TensorFlow framework through the analysis module presented above. Three servo cluster intelligent iterative algorithms are studied experimentally: the pseudo servo (MS) cluster algorithm, the Q3 servo cluster, and the G5.1 servo cluster (the latter two being mainstream servo cluster algorithm models from recent research). The preliminary experimental analysis results are shown in Figure 6.

Figure 6: Preliminary analysis results of the servo cluster intelligent iteration algorithms.

Figure 6 shows that the three intelligent iterative servo cluster algorithms differ in the reliability of their predictions of the pseudo employment and international trade of China's Arab immigrants from 2013 to 2019. When the number of people is below 1000, the Q3 servo cluster algorithm has the highest prediction accuracy. Above 1000, the pseudo servo cluster algorithm has the highest data reliability and maintains a high level, while the Q3 servo cluster algorithm has the lowest reliability and a gradually declining accuracy. Above 2000, the prediction accuracy of all three decreases to varying degrees. This is because, during the experiment, the pseudo servo cluster algorithm keeps producing new subbranches; the maximum carrying capacity of its experimental data begins to decay from 1800 (i.e., the data are cleared and the pseudo analysis is re-performed), and its collected samples are well coupled with the base data samples. In effect, the pseudo servo cluster algorithm tests, judges, and classifies new pseudo samples under the "guidance" of the category training samples. The running times of the three servo clusters are shown in Figure 7.

Figure 7: Experimental time for all three types of decision trees.

Figure 7 shows the time required to process different amounts of data in the big data system for the three intelligent iterative servo cluster algorithms. Compared with the other two mainstream big data analysis servo cluster algorithms, the pseudo servo cluster algorithm is markedly better optimized, because its training data are constructed by addition to the base data: each time data mimicry is added, the algorithm does not rescan all the data but modifies the previously obtained tree branch structure. This avoids the high cost and inefficiency of rescanning all data on every change, and effectively supports the efficiency of the big data system in analyzing the characteristics of Arab immigrant employment and international trade.
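To illustrate the modify-instead-of-rescan behavior credited to the pseudo servo cluster in Figure 7, here is a toy incremental classifier; the single-feature branching rule, the class structure, and the sample format are assumptions made for illustration.

```python
from collections import defaultdict

class IncrementalBranches:
    """New samples update the statistics of one branch each, so adding a
    batch never triggers a rescan or rebuild of the whole structure."""
    def __init__(self):
        self.branch_counts = defaultdict(lambda: defaultdict(int))

    def add_samples(self, samples):
        for feature_value, label in samples:   # touch one branch per sample
            self.branch_counts[feature_value][label] += 1

    def classify(self, feature_value):
        counts = self.branch_counts.get(feature_value)
        return max(counts, key=counts.get) if counts else None

tree = IncrementalBranches()
tree.add_samples([("trade", "I"), ("finance", "II"), ("trade", "I")])
tree.add_samples([("trade", "III")])           # no rebuild, one branch touched
print(tree.classify("trade"))                  # -> 'I' (majority label)
```

A full-rescan variant would refit on the union of all batches at every call, which is the cost profile Figure 7 reports for the two baseline servo clusters.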
### 4.2. Result Analysis

The experimental results show that, when the hierarchical optimization conditions of the conceptual mimicry model are met, the existing servo cluster algorithm is improved through variable-level optimization, and the structural analysis of the employment destinations of Arab immigrants is completed in combination with the hierarchical fitting. To reflect the real situation of Arab immigrants and international trade, this study uses the actual details of Arab immigrants' participation in trade and international trade in a certain city; the calculation results are shown in Figure 8. Based on the data on Arab immigrants' employment and international trade from 2018 to 2020, the study then uses the pseudo servo cluster algorithm of the big data system to calculate the degree of Arab immigrant financial construction in that city in 2021; the comparison between the predicted and real results is shown in Figure 9.

Figure 8: Results of the employment and international trade mimesis analysis of Arab migrants from 2018 to 2020.

Figure 9: Prediction results and accuracy of the employment and international trade mimesis for Arab immigrants in 2021.

Figures 8 and 9 show that, with the continuous development of international trade and the continuous optimization of the international trade policies for Arab immigrants and of the relevant systems, the rate at which Arab immigrants in this Chinese city choose to participate in international trade has increased since 2019, the solution coefficient of international trade challenges has decreased, and the overall financial and international trade situation of Arab immigrants has tended to rise steadily, which is confirmed by the actual situation of the city. Moreover, the prediction accuracy of the algorithm for the financial participation rate, international trade rate, and other choice rates of Arab immigrants in the city in 2021 exceeds 90%, matching the whereabouts of Arab immigrants in the city in the first half of 2021 very closely. The simulation analysis model of Arab immigrants and international trade proposed in this study therefore has high accuracy and practical application value.

### 4.3. Challenges and Countermeasures of International Trade under the Background of Big Data

The pattern of trade globalization has developed continuously since its formation. Under the new situation of big data, international trade is more liberalized and more countries participate in trade. Through big data, information is collected and partners are sought; businessmen from different regions communicate to discuss product development, trade integration is enhanced, order processing becomes faster and more convenient, the industrial state after data and information optimization is better, and the connection of economic and trade processing is more natural.

Large multinational enterprises play a leading role in the development of the international trade market, and the harm caused by international trade appears mainly in specific trading trends; China's international trade, however, shows a very obvious improvement in materialized investment. These phenomena illustrate that global multinational enterprise investment will further advance the development of the international trade market. Second, the economies involved also have obvious conflicts of interest.
In fact, many conflicts arise in the specific development of each economy, though only in the areas that are affected. As global international trade is still a growing market, more and more trade activities are carried out among the various economies, and the attitude of countries toward international market competition has become increasingly intense, which can have a very negative impact on developing countries. Finally, international trade gradually tends toward diversification, a trend driven by the continuous development of network and information technology. In the international market, enterprises from different countries communicate with each other and seek partners that meet their own conditions, and they have become the decisive force of international trade. Among them, multinational enterprises still serve as the backbone with which their home countries explore the international trade market, break through the hidden restrictions in international trade, and carry out close cooperation between the trading enterprises of the two countries, aided by the basic conditions of big data. While at the forefront of the international trade market, multinational enterprises can continually seek cooperation opportunities, which greatly improves enterprise efficiency and creates more trade value.
## 5. Conclusion

In the current era of big data, the mining of data information, the improvement of production efficiency, and rapid economic growth have become the productivity growth points of international economy and trade, and big data information processing offers positive guidance and reference for both. To realize the analysis of countermeasures for the adjustment of Arab migration and international trade based on big data, this paper establishes a hierarchical mimicry integrated system based on big data and an intelligent iterative mimicry servo cluster. It first reviews the defects of current research on the challenges of Arab migration and international trade and, in view of these difficulties, presents a pseudo model building method using an intelligent iterative pseudo servo cluster algorithm in the context of big data. It then introduces the hierarchical classification method, the intelligent iteration technology, and the pseudo servo cluster algorithm model, and uses the resulting model to predict and analyze the financial problems of Arab immigrants. Finally, a retest of the proposed pseudo model on an actual database shows that the hierarchical classification based on big data and the pseudo servo cluster algorithm is highly adaptable; equipping a previously non-servo-cluster model with a servo cluster effectively improves the prediction efficiency and accuracy of the system and provides theoretical support for actual operation. This study, however, addresses only the construction of the hierarchical model at the mimicry model layer. The following conclusions can be drawn: the international trade governance system is a comprehensive and complex system engineering whose core is the multilateral trade mechanism of the World Trade Organization; the main stakeholders are economies at different levels of economic development, with both common interests and national demands; and the construction of the mechanism needs to take the interests of all parties into account, cooperate in governance with existing international organizations, and jointly respond to new global challenges.

---
*Source: 1025453-2022-07-16.xml*
1025453-2022-07-16_1025453-2022-07-16.md
63,041
Challenges and Countermeasures of Arab Immigrants and International Trade in the Era of Big Data
Yi Huang; Miao Shao
Mathematical Problems in Engineering (2022)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1025453
1025453-2022-07-16.xml
--- ## Abstract In recent years, the development of intelligent iteration technology and the use of big data processing technology have set off an upsurge, and the analysis and application of artificial intelligence algorithms have been paid more and more attention. In order to face the challenges of Arab migration and international trade, this paper constructs the basic structure of the Arab migration action imitation model. In this paper, a simulated servo clustering algorithm based on big data and intelligent iteration is used. Then, through the analysis of pseudo servo clustering algorithm, an optimization model is established, and a big data analysis system is formed. This paper focuses on the wide application of big data statistics to solve the construction of Arab immigration and entrepreneurship data system. This paper studies and applies the big data statistics and intelligent iterative algorithm of Arab immigration behavior, focuses on the annual ladder degree of Arab immigrants, and constructs a pseudo servo cluster system based on Intelligent iterative algorithm. Finally, the simulation experiment verifies whether the clustering model can accurately retrieve the behavior of Arab immigrants in China. The era of big data provides good development opportunities for Arab immigrants and international trade, but it also faces severe challenges. The study provides a reference for strengthening the analysis of international trade, and puts forward perfect countermeasures in combination with the actual situation, so as to improve the efficiency of international trade management and promote the better implementation of international trade. --- ## Body ## 1. Introduction With the development of science and technology, computer information technology is no longer limited to the application of information technology. The industries and are closely integrated. The emerging industries of “Internet plus” are emerging. Big data is a barrier-free information technology that ignores time and space. Once the massive application information is used in International trade, the trade processor can directly select partners who meet their own conditions. Over the years, developed countries have dominated the international trade market with their advanced science and technology and production experience, and even formed a monopoly in some electronic industries. A large amount of funds flow to developed countries, which in turn promotes their trade system innovation. This basic and superior condition has increased the total trade revenue between developed and developing countries. However, due to the increase of big data and information technology, trade forms become more flexible. International trade realizes product informatization, technology informatization, and transaction informatization. If developing countries accurately grasp trade information, analyze market data and timely reform and adjust their own economic structure, they can also gain advantages and benefits in international trade competition. Therefore, the background of big data provides new opportunities for the developing countries. As for big data technology, it is usually defined as analyzing some quantitative statistical results of big data, and its analysis method is to mix them through some internal logical restrictions. The common effective method to realize this analysis method is the intelligent iterative method [1]. Arab immigrants in all time periods produce a large amount of production capacity. 
The public often focuses on their behavior after immigration, and the common method of collecting behavior data is still questionnaire flow survey [2]. Although the flow survey questionnaire has the advantages of simplicity and easy quantification, it is helpful to analyze and process the survey results and make it completed more efficiently [3]. Its disadvantages are also obvious. For such research on the whereabouts of international trade and employment, the form of questionnaire will not ensure the quality of research and stable recovery due to the subjective motivation, subjective thinking, and other factors of the respondents. These comprehensive factors will reduce the accuracy of analysis results [4]. Therefore, the Arabian immigration mimicry model using questionnaire survey has great limitations. However, the financial data of Arab immigrants has a large change rate and poor real-time coverage, which leads to the lack of coverage prediction of various data nodes in the common methods and models in the past, resulting in the overall error that is difficult to find in the development of prediction work. Therefore, it is necessary to study the construction model of pseudo nodes of Arab immigrants’ financial international trade data. Reference [5]. Under this background, this paper studies and constructs the intelligent iterative servo cluster and big data analysis method under the pseudo model.The innovation of this paper is to study the annual ladder of Arab immigrants and build a pseudo servo cluster system based on Intelligent iterative algorithm. The big data statistics and intelligent iterative algorithm of Arab immigration behavior are applied in the system. This paper focuses on the extensive application methods of big data statistics to solve the construction of Arab immigration and entrepreneurship data system. The system is to ensure the accuracy and predictability to a certain extent even if the model is not reconstructed when the sample data structure changes, so as to improve the efficiency of the previous model simulation data analysis. The efficient application of the model has been fully developed in the system, and effectively applied to the pseudo data query of the servo cluster.This paper is mainly divided into four parts. The first part is an overview of the treatment status at home and abroad and the construction direction of this solution; The second part introduces the fields that have not yet applied and developed big data; The third part summarizes the application of big data, intelligent iteration technology, the establishment of pseudo servo cluster, and the construction of statistical pseudo analysis based on the existing common technologies; The fourth part, using the established servo cluster system with integrated intelligent iteration technology, carries out statistical data practice through open data sets, and carries out the test process to test whether the predicted servo cluster system can accurately predict the immigration financial operation of Arab immigrants. This paper analyzes the difficulties that China has traditionally been unable to imitate the international trade structure of Arab immigrants, and points out that the intelligent iteration based on the imitation servo cluster, combined with the big data integrated system, can realize the statistical simulation analysis of tracking and processing of Arab immigrants’ destination data. ## 2. 
Related Work Although relevant personnel have been trying to analyze and study countermeasures in the era of big data for many years, there are still some unsolved problems, that is, creating a financial scenario for the Arab migration model [6]. Zarrinpar et al. scholars proposed that in the era of big data of Arab immigrants in China, the goose swarm algorithm should be used to optimize accordingly, find the best function set, and intelligently iterate the HGT framework to maximize the expectation of Arab immigrants [7]. Razali et al. scholars, based on the coupling cross polarization method and the hierarchical polarization complex attached to the Kelvin discrete, have proved through experiments that the use of the characteristic international trade of Arab immigrants theoretically supports the corresponding strategies of Arab immigrants and international trade [8]. Li et al. scholars put forward the discrete normalization of hierarchical differentiation data, then carry out feature identification, and combine the coupling intelligent iterative operator to extract the positive elements of the financial behavior of Arab immigrants in China. This operation shows that the maximization of trade is related to the positive behavior elements of Arab immigrants, and effectively improves the error free rate to 93% [9]. Ma and Wang scholars analyzed the case of Arab immigrants by analyzing the cycle of the means of transportation, nonhuman factors, geographical conditions and other Arab immigrants, innovatively combined with the grey correlation algorithm to avoid the relevant adaptive failure of the algorithm, so as to achieve the characteristic value of no difference in the financial industry of Arab immigrants in China to 0.7, making this method meet the application requirements [10]. Hulme combined with the nine characteristics of expected migration of Arab immigrants in China, obtained analyzable big data through benchmarking, and effectively increased the utilization rate of big data to 98% by using hierarchical differentiation. By making full use of this conclusion, China’s Arab immigrants timely adjust the corresponding planning cases and effectively improve the international financial evaluation index, which has an effective positive impact on improving the GDP [11]. Epanchin-Niell et al. believe that if China's Arab immigrants build a blockchain differentiation data query center, they can have statistical expectations on China's Arab immigrants' financial behavior through the advantages of blockchain stability, so as to make the financial behavior of Arab immigrants credible [12]. Wang et al. carried out statistical expectation big data, collected the information of Arab immigrants at the gateway, made corresponding planning prediction on the expectation big data through hierarchical differentiation, trapezoidal prediction and object stability method, allocated analysis resources for the financial behavior of pseudo Arab immigrants, and realized the high accuracy of big data statistical prediction in the integration of pseudo data model [13]. Shu et al. found that the singular discrete model can improve the construction efficiency of big data system. Through the layered complex subdatabase formed by the intelligent iteration of two terminals, they verified the improvement of the efficiency of the singular discrete method and achieved the layered construction of Arab immigrants and entrepreneurs, but it is difficult to achieve mimicry [14]. Milsom et al. 
believe that building a database of Arab immigrants and international trade behavior, querying characteristic conditions, creating the best solution based on big data, creating a trusted field, finding the best environment for the working environment for Arab immigrants and entrepreneurs to make decisions, and analyzing the immigration trade status through hierarchical tracking of big data [15]. Quaglia et al. screened the data through the combined intelligent iteration strategy, separated the whereabouts data and behavioral eigenvalues of Arab immigrants in China, and then substituted them into the model for aggregate analysis to verify the feasibility, realizing the digital iteration of Arab immigrants, but the accuracy of mimicry is not enough to meet the universal standard [16].To sum up, it can be seen that in the process of pseudo statistics and analysis of Arab migration and international trade, there is a lack of effective way to obtain real-time data of Arab migration, the error of the analysis model is large, the Arab migration statistical information database is small, and the analysis model has limitations [17, 18]. For the research on the challenges and Countermeasures Faced by Arab immigrants and international trade, most of the current research results are based on statistical data, but very few combine big data technology and artificial intelligence algorithms for diversified and convenient analysis. References [19–21]. On the other hand, big data analysis technology is mostly coupled with international trade and less combined with trade direction. In terms of algorithm structure, most intelligent iterative algorithms only carry out Lee’s fitting for static data, and there are few mimicry algorithm models specially designed for complex mimicry discrete data [22]. Therefore, in today’s era of big data, it is of great significance to carry out the research on the mimicry model of Arab immigration and international trade. ## 3. Analysis Method of Pseudoservo Cluster Based on Intelligent Iterative Technology ### 3.1. Simulation Analysis of Intelligent Iterative Algorithm Intelligent iterative algorithm can realize iterative estimation of data through iterative differentiation, hierarchical classification, and optimization. The key of intelligent iteration lies in hierarchical conditions, iterative analysis, and system construction. Through the limitation of hierarchical conditions, progressive iteration and discrete classification, and then realize system construction. This process reflects the criticality and non-negligible of hierarchical conditions in the process of intelligent iteration [23]. The general intelligent iterative algorithm only focuses on the vicinity of the demand point. When the object data changes dynamically, the system must reload the complete set of objects. This leads to the failure of the system to complete the corresponding requirements within the specified time when analyzing the pseudo database [24]. Generally speaking, the scientific mimicry attribute has irreplaceable effectiveness for the application of the common methods of practical hierarchical analysis [25]. Recently, Arab immigration and international trade policies have been strengthened by relevant departments. 
Facing increasingly challenging international trade, the pseudo non-static behavior of Arab immigration and international trade has become more complex [26]. In this context, and aiming at the difficulty of non-real-time analysis in the Arab immigration and international trade model, this paper strengthens the adaptability of a servo system that previously could only process non-dynamic, restrictive data, realizes a blockchain-physicalized system combined with the pseudo-state servo cluster system, and builds a more adaptive pseudo-state model from the resistance of each batch of non-static Arab immigration and trade data.

In the dynamic surface control of nonlinear systems, input nonlinearity affects the tracking performance of the closed-loop output signal, so designing dynamic surface control algorithms that improve the transient and steady-state performance of closed-loop nonlinear systems has become an active research topic in the control field. For nonlinear systems with unknown nonlinear links, the structural characteristics of the system are exploited: an extended state observer, a finite-time observer, and a neural network observer are designed to observe the system state and the unknown disturbance signal. On this basis, an output-feedback dynamic surface controller, a dynamic surface controller based on a tracking differentiator and an extended state observer, a neural-network dynamic surface controller, and an adaptive robust dynamic surface controller are designed, and an improved dynamic surface control scheme for multimotor-drive servo systems is proposed to address the nonlinear links and disturbances in motor servo systems; a minimal sketch of the extended-state-observer idea appears below.
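To make the extended state observer mentioned above concrete, the following is a minimal, self-contained sketch; it is not taken from this paper, and the plant, the gains, and the bandwidth parameterization β₁ = 3ω₀, β₂ = 3ω₀², β₃ = ω₀³ are standard textbook choices used here purely for illustration. The observer estimates both the state and an unknown lumped disturbance of a second-order plant:

```python
import math

def eso_step(z, y, u, b0, omega0, dt):
    """One Euler step of a linear extended state observer (ESO).

    z = [z1, z2, z3] estimates position, velocity, and the lumped
    unknown disturbance f of the plant  x'' = f(t, x, x') + b0*u.
    Gains follow the common bandwidth parameterization.
    """
    beta1, beta2, beta3 = 3 * omega0, 3 * omega0**2, omega0**3
    e = z[0] - y  # innovation: estimate minus measurement
    dz1 = z[1] - beta1 * e
    dz2 = z[2] + b0 * u - beta2 * e
    dz3 = -beta3 * e
    return [z[0] + dt * dz1, z[1] + dt * dz2, z[2] + dt * dz3]

# Demo: plant x'' = f + u with a disturbance f(t) = -sin(t) that the
# observer never sees directly; it only measures x and knows u.
dt, b0, omega0 = 1e-3, 1.0, 20.0
x, v = 0.0, 0.0
z = [0.0, 0.0, 0.0]
for k in range(5000):
    t = k * dt
    f = -math.sin(t)                              # unknown to the observer
    u = 0.5                                       # arbitrary constant input
    x, v = x + dt * v, v + dt * (f + b0 * u)      # simulate the true plant
    z = eso_step(z, x, u, b0, omega0, dt)
print(f"true f = {-math.sin(5.0):+.3f}, estimated f = {z[2]:+.3f}")
```

The third observer state converges to the unmodeled disturbance, which is exactly the quantity the dynamic surface controllers discussed above compensate for.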
### 3.2. Process Demonstration of the Intelligent Iterative Pseudo-State Servo Cluster Algorithm

To simplify the process, the previous servo cluster algorithms are analyzed, summarized, and queued; the data are discretized, classified, combined, and packaged; and, facing the challenges of Arab immigration and international trade and the predictability of countermeasures, the servo cluster algorithm of this model introduces a process-class method of pseudo servo clusters, which refines and composes the original basic layers and then classifies them into level classes, so as to construct the pseudo servo cluster simulation system. The process of analyzing data with the pseudo servo cluster is shown in Figure 1.

Figure 1: The process of analyzing data by servo clustering.

By extracting the elements in Figure 1, the adaptive iterative center-exchange results of the servo cluster are differentiated in the mixed central level class, combined with the attraction of the mimicry model. When the servo cluster construction meets the combination conditions, the current level class is disabled and dynamically replaced by the next random level; that is, modifying the condition of one servo stage is independent of the conditions of the other servo stages.

If the servo cluster algorithm model extracts levels $M$ and $N$, the threshold gain relationship $\sum_{i=1}^{c} t_{m_i} > \sum_{i=1}^{c} t_{n_i}$ between $M$ and $N$ follows easily; that is, within the threshold of the servo cluster classification level, cluster $m$ is better than cluster $n$. When the object has non-subtractive mimicry, if the servo cluster classification level is given, the attribute values corresponding to $m$ and $n$ are pushed to meet the conditions, and the characteristic quantity of the $E_o$ level is used, the following is obtained:

$$G_M > G_N \;\Leftrightarrow\; \bigcup_{i=1}^{t} p_m - E_o > \bigcup_{i=1}^{t} p_n - E_o \;\Leftrightarrow\; \bigcup_{i=1}^{t} p_m > \bigcup_{i=1}^{t} p_n \;\Leftrightarrow\; M > N. \tag{1}$$

Here $p$ denotes the forward feature level quantity, $t$ records the feasible value corresponding to the characteristic quantity of the series, $i$ indexes the levels, and $\bigcup_{i=1}^{n} p$ represents the positive hierarchical embodiment of the corresponding level, i.e., the coupling result of the sum of the corresponding level characteristic quantities. Under this condition, the general formula for $E$ is

$$E = \bigcup_{i=1}^{t} p_m + \frac{1}{n}\bigl(\log\delta + \log(\delta+1)\bigr), \qquad \delta_1 = \frac{\sum_{i=1}^{t} r_i + \sum_{i=1}^{t} r_i^2}{\sum_{i=1}^{t} r_i}. \tag{2}$$

In (2), $\delta \ge 1$ and $r$ reflect the coupling amount of the servo cluster in this layer, which covers the curvilinear synthetic value of the level value; the base of the logarithm is 7. When the pseudo servo cluster is introduced into the model, the layers corresponding to levels $M$ and $N$ also change by a certain amount. When the number of iterations of the two-level difference data $p$ corresponds to the feasible value $t$, and the value lies between the values corresponding to $M$ and $N$, the pseudo incremental complex coupling $E_1$ is

$$E_1 = \frac{\bigcup_{i=1}^{t} p_m + \frac{1}{n}\bigl(\log\delta_0 + \log\delta_1 + 1\bigr)}{\bigcup_{i=1}^{t} p_m + \frac{1}{n}\bigl(\log\delta_0 + \log\delta_1 + 1\bigr) + \log\delta_2 + 2}, \tag{3}$$

where

$$\delta_1 = \frac{\sum_{i=1}^{t} r_i + \sum_{i=1}^{t} r_i^2 + \sum_{i=1}^{t} r_i^3}{\sum_{i=1}^{t} r_i + \sum_{i=1}^{t} r_i^2}, \qquad \delta_2 = \frac{\sum_{i=1}^{t} r_i + \sum_{i=1}^{t} r_i^2 + \sum_{i=1}^{t} r_i^3 + \sum_{i=1}^{t} r_i^4}{\sum_{i=1}^{t} r_i + \sum_{i=1}^{t} r_i^2 + \sum_{i=1}^{t} r_i^3}. \tag{4}$$

Writing $h_k = \sum_{i=1}^{t} r_i^{k+1}$ and substituting,

$$\delta_1 = \frac{h_0 + h_1 + h_2}{h_0 + h_1}, \qquad \delta_2 = \frac{h_0 + h_1 + h_2 + h_3}{h_0 + h_1 + h_2}. \tag{5}$$

After evolution, the difference coupling and layered construction of the pseudo information are obtained:

$$\Delta E = \bigcup_{i=1}^{t} p_m + \frac{1}{n}\bigl(\log\delta + \log(\delta+1)\bigr) - \frac{\bigcup_{i=1}^{t} p_m + \frac{1}{n}\bigl(\log\delta_0 + \log\delta_1 + 1\bigr)}{\bigcup_{i=1}^{t} p_m + \frac{1}{n}\bigl(\log\delta_0 + \log\delta_1 + 1\bigr) + \log\delta_2 + 2}. \tag{6}$$

To obtain the parameter difference between the attribute values of levels $M$ and $N$, only the maximum of the pseudo complex coupling needs to be calculated. Combined with the characteristics of $\delta_1$ and $\delta_2$, $\Delta E$ attains its minimum at

$$E_{\min} = \bigcup_{i=1}^{t} p_m + \frac{1}{n}\bigl(\log\delta_0 + \log\delta_1 + 1\bigr), \tag{7}$$

and, when the current value is first small and then large, its maximum at

$$E_{\max} = \bigcup_{i=1}^{t} p_m + \frac{1}{n}\bigl(\log\delta_0 + \log\delta_1 + 1 + \log\delta_2 + 2\bigr). \tag{8}$$

### 3.3. Analysis Process of the Intelligent Iterative Mimicry Servo Cluster Algorithm in the Mimicry Model

When the analysis process imitates the incremental set, it is not necessary to reload the data and build a different system; the original system serves as the ontology, the data obtained from previous training are modified, and the classification accuracy is improved through the servo cluster algorithm. When the mimicry dynamically changes the $k$ classification sets at the discrete servo cluster branch level, the new $k$ classification sets must satisfy

$$\frac{E_{\max}}{E_{\min}} \le \bigcup_{i=1}^{t} p_m + n(\delta_1 + \delta_2). \tag{9}$$

If the data meet the conditions of intelligent iterative engineering, (9) reduces to

$$\frac{E_{\max}}{E_{\min}} \le \delta_1 + \delta_2 + y_{m,n}. \tag{10}$$

Based on the above, the value of $\Delta E$ constrains the new $k$:

$$\Delta E \le \sum_{i=0}^{\infty} \delta_i \,\frac{1}{y_{m,n}}, \tag{11}$$

or

$$k_{\max} = \sum_{i=0}^{\infty} \delta_i + y_{m,n}. \tag{12}$$

Expanding equation (12) yields the overlapping mimicry of mimicry servo clusters. If the pseudo eigenvalue is combined with the judgment condition, a higher efficiency than generalization can be achieved, that is, a more efficient model establishment; a small numerical sketch of the quantities in (5), (7), and (8) follows.
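As a purely numerical illustration of the reconstructed formulas (5), (7), and (8), one might compute the layered coupling quantities as below. The values of $r_i$ are placeholders, the union term is treated as a plain scalar, and $\delta_0$ is set equal to $\delta_1$; none of these choices is fixed by the text, so this is a sketch under stated assumptions rather than the paper's computation:

```python
import math

# Placeholder feature values r_i (hypothetical; not from the paper's data).
r = [1.2, 0.8, 1.5, 1.1]

# h_k = sum_i r_i^(k+1), the substitution leading to equation (5).
h = [sum(ri ** (k + 1) for ri in r) for k in range(4)]

# Equation (5): layered coupling ratios.
delta1 = (h[0] + h[1] + h[2]) / (h[0] + h[1])
delta2 = (h[0] + h[1] + h[2] + h[3]) / (h[0] + h[1] + h[2])

# Equations (7)-(8), with the union term replaced by a scalar stand-in P
# and delta0 := delta1 for lack of a definition in the text.
P, n, delta0 = 10.0, 3, delta1
log7 = lambda x: math.log(x, 7)   # the text fixes the logarithm base to 7
E_min = P + (log7(delta0) + log7(delta1) + 1) / n
E_max = P + (log7(delta0) + log7(delta1) + 1 + log7(delta2) + 2) / n

print(f"delta1 = {delta1:.4f}, delta2 = {delta2:.4f}")
print(f"E_min = {E_min:.4f}, E_max = {E_max:.4f}")
```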
Therefore, the hierarchical classification mimicry and the dynamic effect of the servo cluster system not only improve efficiency but also improve universality, thereby meeting the corresponding update requirements.

### 3.4. Prototype-Level Big Data Construction of the Intelligent Iterative Pseudo-State Servo Cluster Algorithm

To complete the analysis task, the intelligent iterative mimicry servo cluster is combined with a corresponding fusion algorithm for hierarchical classification under non-static operation. If the level and aggregation of the clutch are simulated according to the level eigenvalues in the system process, the system performance can be studied quantitatively. The modular levels designed in this paper are the import layer, the analysis layer, and the export hybrid layer. The big-data-level processing of Arab immigrant employment data is shown in Figure 2.

Figure 2: The process of big data analysis of the employment data of Arab immigrants.

After analysis phase I ends, the hierarchical model loads the obtained levels into the database-level module and into the level-class analysis of Arab immigration and international trade. Foreign investment and trade play an important role in the modern economy of Arab countries. Owing to convenient transportation and the Arab business tradition, the Arab region has been an important place for world investment and trade since ancient times, and the increasingly developed commercial exchanges among Arab countries have also promoted internal commodity exchange and foreign trade. Arab countries carry out foreign investment activities according to the relevant law, whose main provisions are as follows: integrated trading companies can only be operated by Arab residents; other joint ventures established outside the Zishan trade zone must be majority-held (more than 51%) by Arab residents; a foreign company establishing a branch or representative office in an Arab country must be guaranteed by local residents and must pay the guarantor a guarantee fee every year; representative offices established by foreign companies are not allowed to engage directly in trade or other economic activities; and foreigners must conduct foreign trade in Arab countries in the name of local guarantors or agents.

In general, the level database of the level class displays the names and level classes in the large database, while the calculation module contains the original hierarchical data collection part. The layered mimicry servo cluster algorithm is based on the optimized servo cluster algorithm whose basic architecture was given above. The hierarchical classification set can be divided into class I, class II, and class III Arab immigrants.
The behavioral results of their financial operations are shown in Figure 3.

Figure 3: Simulation analysis of employment by Arab immigrants.

Figure 3 shows that, under the pseudo servo cluster algorithm, as the number of intelligent iterations increases, the selection rate of class I, class II, and class III Arab immigrants participating in pseudo international finance increases accordingly; the promotion rate of class II is relatively stable, and the unilateralization rates of class II and class III are similar, which matches recognized international financial and trade behavior. Within the classification of the pseudo servo cluster algorithm, the development of the IHF platform provides substantial innovative help, yielding the intelligent iterative UISG framework, which realizes non-static tracking of the pseudo data framework of Arab immigrants' finance and trade and produces simulation conclusions closer to actual conditions.

The hierarchical classification data structure of the system is based on the IOV big-data model, a commonly used framework whose layer classes include the GIO layer, the EGU layer, and the PRO layer. This structure is used to simulate the international trade of class I, II, and III Arab immigrants, as shown in Figure 4.

Figure 4: Simulation analysis of international trade by Arab immigrants.

The trend structure in Figure 4 shows that, in the pseudo servo cluster model, when the number of iterations and the intelligent parameter-level domestication are superimposed, the accurate correlation rate of the pseudo international trade selection of class I, class II, and class III Arab immigrants also improves; the correlation rates of class I and class II are similar in the early stage, and those of class II and class III are similar in the later stage. The reason is that the mimicry servo cluster model performs non-static analysis and integration of the original Arab immigration data, so the temporal influence is more evident for one class of immigrants while the other two lag to some extent.

For the hierarchically classified pseudo data, the comprehensive big-data pseudo servo cluster model is formalized, and the hierarchical classification of the conclusion data is suitable for hierarchical simulation prediction and verification. The raw data of the simulation test come from the real Arab immigration and international trade of a certain city, used as the basic training data, which yields the prediction results shown in Figure 5.

Figure 5: Big data processing prediction analysis results based on mimicry servo clusters.

Figure 5 shows that, under the pseudo servo cluster algorithm, as the number of iteration levels increases, the original data and the predicted data differ to a certain extent and the corresponding eigenvalues show certain mutations, but the differences are layered with the classification of iteration levels. The reason is that the level classes are nested within level classes, the real data contain unpredictable influencing factors, and the correlation is highlighted in the hierarchical process.
The unpredictable influencing factors of real data do not change with the number of iterations, so it can be concluded that the pseudo servo cluster model presented in this paper has good adaptability and accuracy for big-data application and prediction.
## 4. Result Analysis and Discussion

### 4.1. Retest Experiment and Data Analysis

To make the retest experiment orderly, the pseudo model proposed above is substituted into the big-data system, and the preparation of the experiment must reflect the various advance-notification parameters of the algorithm. For convenience, this paper selects the pseudo data of Arab immigration and international trade from 2015 to 2020 for review. Following the ideas elaborated above, more than 30 characteristics of Arab immigrants are selected, such as their studies, gender, grade, and years since immigration. Based on the Arab immigration and international trade data of 2015, the destinations of China's Arab immigrants in 2016 are predicted and compared with the actual situation to verify the accuracy of the algorithm; the pseudo data of 2015 are then fused with the base data to predict the destinations of Arab immigrants in 2016 and compute the accuracy, and so on.
In this study, the data are analyzed many times and the results recorded. To verify that the proposed pseudo servo cluster algorithm effectively generates new feature-attribute branches among the levels of the data structure, a retrieval line model system is built on the TensorFlow framework through the analysis module of this study, and three different servo cluster intelligent iterative algorithms are studied experimentally: the pseudo servo (MS) cluster algorithm, and the Q3 and G5.1 servo clusters (two mainstream servo cluster algorithm models from the latest research). The preliminary experimental results are shown in Figure 6.

Figure 6: Preliminary analysis results of the servo cluster intelligent iteration algorithms.

Figure 6 shows that the three algorithms differ in the reliability of their predictions of the pseudo employment and international trade of China's Arab immigrants from 2013 to 2019. When the sample contains fewer than 1000 people, the Q3 servo cluster algorithm has the highest prediction accuracy. Above 1000 people, the pseudo servo cluster algorithm has the highest data reliability and maintains a high level, while the Q3 algorithm has the lowest reliability and its accuracy declines gradually. Above 2000 people, the accuracy of all three groups decreases to varying degrees, because the pseudo servo cluster algorithm keeps producing new subbranches during the experiment: the maximum carrying capacity of its experimental data begins to decay from 1800 (i.e., the data are cleared and the pseudo analysis re-performed), and its collected samples couple well with the base samples. In effect, the pseudo servo cluster algorithm tests, judges, and classifies new pseudo samples under the "guidance" of the category training samples. The running times of the three servo clusters are shown in Figure 7.

Figure 7: Experimental time for all three types of decision trees.

Figure 7 shows the time required to process different amounts of data for the three intelligent iterative servo cluster algorithms in the big-data system. Compared with the other two mainstream big-data analysis algorithms, the pseudo servo cluster algorithm is clearly faster because its training data are constructed by adding to the base data: each time data mimicry is added, the algorithm does not rescan all the data but modifies the previously obtained tree-branch structure. This avoids the high cost and low efficiency of rescanning the full data set on every change and effectively supports the efficiency of the big-data system in analyzing the employment and international trade characteristics of Arab immigrants; a minimal sketch of this incremental-update idea follows.
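The contrast between full rescanning and incremental branch modification described above can be sketched as follows. This is an illustrative toy, with a frequency-count "branch" table standing in for the tree structure and hypothetical names throughout; it is not the paper's implementation:

```python
from collections import defaultdict

class IncrementalClassifier:
    """Toy stand-in for a tree whose branches are updated in place:
    counts[feature_value][label] plays the role of a branch statistic."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, batch):
        # Incremental path: touch only the branches the new batch hits,
        # instead of rescanning every previously seen record.
        for feature_value, label in batch:
            self.counts[feature_value][label] += 1

    def predict(self, feature_value):
        branch = self.counts.get(feature_value)
        if not branch:
            return None
        return max(branch, key=branch.get)

def full_rescan(all_batches):
    # Baseline path: rebuild the whole model from scratch on every change,
    # which is what the incremental variant is meant to avoid.
    model = IncrementalClassifier()
    for batch in all_batches:
        model.update(batch)
    return model

# Usage: stream two batches; the incremental model processes only the new one.
batch1 = [("trade", "class_I"), ("finance", "class_II")]
batch2 = [("trade", "class_I"), ("trade", "class_III")]
inc = IncrementalClassifier()
inc.update(batch1)
inc.update(batch2)                     # O(len(batch2)) work
base = full_rescan([batch1, batch2])   # O(total data) work each time
assert inc.predict("trade") == base.predict("trade") == "class_I"
```

Both paths produce the same predictions; only the cost of absorbing a new batch differs, which is the effect Figure 7 reports.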
### 4.2. Result Analysis

The experimental results show that when the hierarchical optimization conditions of the conceptual mimicry model are met, the existing servo cluster algorithm can be improved by variable-level optimization, and the structural analysis of the employment destinations of Arab immigrants can be completed in combination with hierarchical fitting. To reflect the real situation of Arab immigration and international trade, this study uses the actual details of Arab immigrants' participation in trade and international trade in a city; the calculation results are shown in Figure 8. Based on the employment and international trade data of Arab immigrants from 2018 to 2020, the study then uses the pseudo servo cluster algorithm of the big-data system to calculate the degree of Arab immigration and financial construction of the city in 2021; the comparison between predicted and real results is shown in Figure 9.

Figure 8: Results of the employment and international trade mimicry analysis of Arab migrants from 2018 to 2020.

Figure 9: Prediction results and accuracy of the employment and international trade mimicry for Arab immigrants in 2021.

The experimental conclusions in Figures 8 and 9 indicate that, with the continuous development of international trade and the continuous optimization of the international trade policies and related systems concerning Arab immigrants, the rate at which Arab immigrants in the city choose to participate in international trade has increased since 2019, the solution coefficient of international trade challenges has decreased, and the overall financial and trade situation of Arab immigrants has risen steadily, which is confirmed by the actual situation and results of the city. Moreover, the prediction accuracy of the algorithm for the financial participation rate, international trade rate, and other choice rates of Arab immigrants in the city in 2021 exceeds 90% and closely matches the actual whereabouts of Arab immigrants in the first half of 2021. Therefore, the simulation analysis model of Arab immigration and international trade proposed in this study has high accuracy and practical application value.

### 4.3. Challenges and Countermeasures of International Trade under the Background of Big Data

The pattern of trade globalization has kept developing since its formation. Under the new situation of big data, the degree of international trade liberalization is greater and more countries participate in trade. Through big data, businessmen from different regions collect information, seek partners, and communicate with each other to discuss product development; trade integration is enhanced, order processing becomes faster and more convenient, the industrial state after data and information optimization improves, and the connection of economic and trade processing becomes more natural.

Large multinational enterprises play a leading role in the development of the international trade market, although international trade can also cause harm in specific trading trends. China's international trade shows a clear improvement in materialized investment, and these phenomena illustrate that global multinational investment will further advance the international trade market. Secondly, the economies involved also have obvious conflicts of interest.
In fact, many conflicts arise in the specific development process of each economy, though only in the areas that are affected. However, as global international trade remains a growing market, more and more trade activities are carried out among the various economies, and countries' attitudes toward international market competition have become increasingly intense, a situation that can weigh heavily on developing countries. Finally, international trade gradually tends to diversify, a trend driven by the continuous development of network and information technology. In the international market, enterprises from different countries communicate with each other and seek partners that meet their own conditions, and these partnerships have become the decisive force of international trade. Among them, multinational enterprises still serve as the backbone of their own countries in exploring the international trade market, breaking through hidden restrictions in international trade, and carrying out close cooperation between the trade enterprises of the two countries, supported by the basic conditions of big data. Standing at the forefront of the international trade market, multinational enterprises can continually seek cooperation opportunities, which greatly improves their efficiency and creates more trade value.
## 5. Conclusion

In the current era of big data, the mining of data information, the improvement of production efficiency, and rapid economic growth have become the productivity growth points of the international economy and trade, and big-data information processing offers positive guidance and reference for them. To analyze and study the countermeasures for adjusting Arab migration and international trade on the basis of big data, this paper establishes a hierarchical mimicry integrated system based on big data and an intelligent iterative mimicry servo cluster. It first reviews the shortcomings of current research on the challenges of Arab migration and international trade and, in view of these difficulties, presents a pseudo-model-building method for the intelligent iterative pseudo servo cluster algorithm in the big-data context. It then introduces, in turn, the hierarchical classification method, the intelligent iteration technology, and the pseudo servo cluster algorithm model, and uses the resulting model to predict and analyze the financial problems of Arab immigrants. Finally, a retest analysis of the proposed pseudo model on an actual database shows that hierarchical classification based on big data and the pseudo servo cluster algorithm is highly adaptable: equipping a previously non-servo-cluster model with a servo cluster can effectively improve the prediction efficiency and accuracy of the system and provides theoretical support for actual operation. This study, however, only addresses the construction of the hierarchical model at the mimicry model layer. The following conclusions can be drawn: the international trade governance system is a comprehensive and complex systems engineering whose core is the multilateral trade mechanism of the World Trade Organization; its main stakeholders are economies at different levels of economic development, with both common interests and national demands; and the construction of the mechanism needs to take the interests of all parties into account, cooperate with existing international organizations in governance, and jointly respond to new global challenges.

---

*Source: 1025453-2022-07-16.xml*
2022
# Continuous Dependence on a Parameter of Exponential Attractors for Nonclassical Diffusion Equations

**Authors:** Gang Wang; Chaozhu Hu
**Journal:** Discrete Dynamics in Nature and Society (2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1025457

---

## Abstract

In this paper, a new abstract result is given to verify the continuity of exponential attractors with respect to a parameter for the underlying semigroup. We do not impose any compact embedding in the main assumptions of the abstract result, which differs from the corresponding result established by Efendiev et al. in 2004. Consequently, it can be used for equations whose solutions have no higher regularity. As an application, we prove the continuity of exponential attractors in $H_0^1$ for a class of nonclassical diffusion equations with initial data in $H_0^1$.

---

## Body

## 1. Introduction

In this paper, we study the existence and robustness of exponential attractors for the following nonclassical diffusion equation:

$$u_t^\epsilon - \epsilon\Delta u_t^\epsilon - \Delta u^\epsilon + f(u^\epsilon) = g, \quad x\in\Omega,\; t>0, \tag{1}$$

with the initial-boundary value conditions

$$u^\epsilon(x,0)=u_0(x),\; x\in\Omega, \qquad u^\epsilon=0 \;\text{ on } \partial\Omega, \tag{2}$$

where $\epsilon\in[0,1]$ and $\Omega\subset\mathbb{R}^N$ ($N\ge 3$) is a bounded open set with smooth boundary $\partial\Omega$. When $\epsilon=0$, the equation reduces to the classical reaction-diffusion equation. We assume that $g\in L^2(\Omega)$ and the nonlinearity $f\in C^1(\mathbb{R},\mathbb{R})$ satisfies the following conditions (see, e.g., [1]):

(F1) there exists $\nu>0$ such that $f'(s)\ge -\nu$ for all $s\in\mathbb{R}$;
(F2) there exists $\kappa_1>0$ such that $|f'(s)|\le \kappa_1\bigl(1+|s|^{2/(N-2)}\bigr)$ for all $s\in\mathbb{R}$;
(F3) $\liminf_{|s|\to\infty} F(s)/s^2 \ge 0$, where $F(s)=\int_0^s f(r)\,dr$;
(F4) there exists $\kappa_2>0$ such that $\liminf_{|s|\to\infty} \bigl(sf(s)-\kappa_2 F(s)\bigr)/s^2 \ge 0$.

Nonclassical diffusion equations appear in fluid mechanics, soil mechanics, and heat conduction theory (see, e.g., [2]). The long-time behavior of solutions to nonclassical diffusion equations has been extensively studied by many authors, for both the autonomous and nonautonomous cases [3–9].

The global attractor plays an important role in the study of the long-time behavior of infinite-dimensional systems arising from physics and mechanics. It is a compact invariant set that attracts the bounded sets of the phase space uniformly. However, the rate of attraction may be arbitrarily slow, and the attractor may be sensitive to perturbations. These drawbacks can be overcome through the notion of the exponential attractor [10], which is a compact, positively invariant set of finite fractal dimension that attracts each bounded set exponentially. The existence of exponential attractors has been studied extensively since 1994; see, e.g., [5, 10–17].

As discussed in [12], exponential attractors are more robust objects under perturbation than global attractors. In general, global attractors are only upper semicontinuous with respect to perturbations; the lower semicontinuity property is much more delicate and can be established only in some particular cases. The continuity of exponential attractors under perturbations, however, can be proved in many cases [5, 13]. In particular, for problems (1) and (2), the existence of a pullback attractor was shown by Anh and Bao in [3] for the subcritical case in $H_0^1(\Omega)$. They also proved the upper semicontinuity of the pullback attractors; however, this upper semicontinuity was established only in $L^2(\Omega)$, and the upper semicontinuity in $H_0^1(\Omega)$ remains an open problem. In this paper, we not only prove the upper and lower semicontinuity of the exponential attractor but do so in the stronger space $H_0^1(\Omega)$, when the initial value only belongs to $H_0^1(\Omega)$.
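For instance, for $N=3$ the standard cubic nonlinearity $f(s)=s^3-s$ satisfies (F1)–(F4) (a routine check, supplied here as orientation):

$$f'(s)=3s^2-1\ge -1 \;\;\text{(F1 with }\nu=1\text{)}, \qquad |f'(s)|\le 3\bigl(1+|s|^{2}\bigr) \;\;\text{(F2 with }\kappa_1=3,\text{ since }2/(N-2)=2\text{)},$$

$$F(s)=\frac{s^4}{4}-\frac{s^2}{2} \;\Rightarrow\; \frac{F(s)}{s^2}\to\infty \;\;\text{(F3)}, \qquad sf(s)-4F(s)=\bigl(s^4-s^2\bigr)-\bigl(s^4-2s^2\bigr)=s^2 \;\;\text{(F4 with }\kappa_2=4\text{)}.$$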
In [12] (see also [17]), Efendiev et al. gave an abstract result on the robustness of exponential attractors (Theorem 4.4 in [12]). A main assumption called the "compact Lipschitz condition" was proposed in that theorem. The main difficulty in applying this result to problems (1) and (2) is that the solution of (1) and (2) has no higher regularity when $\epsilon>0$ [3]. For example, if the initial datum $u_0$ belongs to $H_0^1(\Omega)$, the solution starting from $u_0$ always stays in $H_0^1(\Omega)$ and gains no higher regularity. Thus, it is impossible to verify the compact Lipschitz condition when proving the continuity of exponential attractors in $H_0^1(\Omega)$. Motivated by [18], we modify the result in [12] to adapt it to our case. Moreover, some of the coefficients are allowed to depend on the parameter $\epsilon$, which relaxes the conditions; see Theorem 1 below.

The rest of this paper is organized as follows. In Section 2, we formulate and prove the main abstract result, Theorem 1. In Section 3, we apply Theorem 1 to the dynamical system generated by (1) and (2) to prove the continuity of the exponential attractors, distinguishing two cases according to the constant $\nu$.

Throughout this paper, we denote by $\|\cdot\|_X$ the norm of a Banach space $X$. The inner product and norm of $L^2(\mathbb{R}^n)$ are written as $(\cdot,\cdot)$ and $\|\cdot\|$, respectively. We also use $\|u\|_r$ to denote the norm of $u\in L^r(\mathbb{R}^n)$ ($r\ge 1$, $r\ne 2$) and $|u|$ to denote the modulus of $u$. The letter $c$ is a generic positive constant independent of $\epsilon$ which may change its value from line to line, even within the same line (for special cases we also denote different positive constants by $c_i$, $i=1,2,\dots$).

## 2. The Abstract Result and Its Proof

In this section, we modify Theorem 4.4 in [12] to adapt it to problems (1) and (2). We start with the definition of exponential attractors.

Definition 1. Let $E$ be a metric space, let $B$ be a bounded set in $E$, and let $S\colon B\to B$ be a map. We define a discrete semigroup $\{S^n,\ n\in\mathbb{Z}^+\}$ by $S^n x := S\circ\cdots\circ S$ ($n$ times). A set $\mathcal{M}\subset B$ is an exponential attractor for the map if the following properties hold:

(1) the set $\mathcal{M}$ is compact in $E$ and has finite fractal dimension;
(2) the set $\mathcal{M}$ is positively invariant with respect to $S$, i.e., $S\mathcal{M}\subset\mathcal{M}$;
(3) there exist positive constants $\alpha_0$ and $\beta_0$ such that

$$\operatorname{dist}_E(S^n B,\mathcal{M}) \le \alpha_0 e^{-\beta_0 n}, \tag{3}$$

where $\operatorname{dist}_E(C_1,C_2)$ denotes the Hausdorff semidistance between $C_1$ and $C_2$ in $E$, given by

$$\operatorname{dist}_E(C_1,C_2) = \sup_{x\in C_1}\inf_{y\in C_2}\|x-y\|_E, \quad \text{for } C_1, C_2\subset E. \tag{4}$$

Definition 2. Let $X$ be a complete metric space endowed with the metric $d$ and let $M$ be a bounded closed set in $X$. Assume that $\varrho$ is a pseudometric [19] defined on $M$. Let $B\subset M$ and $\varepsilon>0$.

(i) A subset $U$ of $B$ is said to be $(\varepsilon,\varrho)$-distinguishable if $\varrho(x,x')>\varepsilon$ for any $x,x'\in U$, $x\ne x'$. We denote by $m_\varrho(B,\varepsilon)$ the maximal cardinality of an $(\varepsilon,\varrho)$-distinguishable subset of $B$.
(ii) The pseudometric $\varrho$ is said to be compact on $M$ iff $m_\varrho(M,\varepsilon)$ is finite for every $\varepsilon>0$.
(iii) For any $r>0$, we define a local $(r,\varepsilon,\varrho)$-capacity of the set $M$ by the formula

$$C_\varrho(M;r,\varepsilon) = \sup\bigl\{\ln m_\varrho(B,\varepsilon) : B\subset M,\ \operatorname{diam}(B)\le 2r\bigr\}. \tag{5}$$

We now state and prove the main abstract theorem. The proof is essentially a combination of that in [18] and that of [12]; a greedy construction of the distinguishable subsets of Definition 2 is sketched next.
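For intuition, a maximal $(\varepsilon,\varrho)$-distinguishable subset of a finite sample can be grown greedily. The following minimal sketch is our illustration only, with a hypothetical Euclidean pseudometric on sampled points; it plays no role in the proof:

```python
import math

def distinguishable_subset(points, eps, rho):
    """Greedily grow a subset U of `points` in which rho(x, x') > eps
    for all distinct x, x' (Definition 2(i)). The result is maximal in
    the sense that no remaining point can be added."""
    U = []
    for p in points:
        if all(rho(p, q) > eps for q in U):
            U.append(p)
    return U

# Hypothetical pseudometric for illustration: Euclidean distance in 2D.
rho = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
grid = [(0.1 * i, 0.1 * j) for i in range(10) for j in range(10)]
U = distinguishable_subset(grid, eps=0.25, rho=rho)
# len(U) lower-bounds m_rho(grid, eps), whose logarithm enters the
# capacity (5); the exact maximum would need a combinatorial search.
print(len(U), math.log(len(U)))
```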
Theorem 1. Let $X$ be a Banach space and let $B$ be a bounded set in $X$. We assume that there exists a family of operators $S_\epsilon\colon B\to B$, $\epsilon\in[0,\epsilon_0]$, which satisfy, for any $\epsilon\in[0,\epsilon_0]$:

(i) $S_\epsilon$ is Lipschitz on $B$, i.e., there exists $L_\epsilon>0$ such that

$$\|S_\epsilon x_1 - S_\epsilon x_2\|_X \le L_\epsilon \|x_1 - x_2\|_X, \quad \forall x_1,x_2\in B. \tag{6}$$

(ii) There exist constants $\eta$ and $K_\epsilon$ and compact seminorms $n_1^\epsilon(x)$ and $n_2^\epsilon(x)$ on $B$ such that

$$\|S_\epsilon x_1 - S_\epsilon x_2\|_X \le \eta\|x_1-x_2\|_X + K_\epsilon\bigl(n_1^\epsilon(x_1-x_2) + n_2^\epsilon(S_\epsilon x_1 - S_\epsilon x_2)\bigr) \tag{7}$$

for any $x_1,x_2\in B$, where $0<\eta<1$ is independent of $\epsilon$ and $K_\epsilon>0$ is a constant which may depend on $\epsilon$. (A seminorm $n(x)$ on $B$ is said to be compact iff for any subset $B'\subset B$ there exists a sequence $\{x_n\}\subset B'$ such that $n(x_m-x_n)\to 0$ as $n,m\to\infty$.)

(iii) For all $\epsilon\in[0,\epsilon_0]$, all $i\in\mathbb{N}$, and all $x\in B$,

$$\|S_\epsilon^i x - S_0^i x\|_X \le \kappa^i \epsilon^\alpha, \tag{8}$$

where $\kappa>0$ and $\alpha\in(0,1]$ are constants independent of $\epsilon$, and, for a mapping $V$, $V^i = V\circ\cdots\circ V$ ($i$ times).

Then, for every $\epsilon\in[0,\epsilon_0]$, the discrete dynamical system generated by the iterations of $S_\epsilon$ possesses an exponential attractor $\mathcal{M}^\epsilon$ on $B$ such that:

(1) the fractal dimension of $\mathcal{M}^\epsilon$ is bounded in $X$:

$$\dim_{f,X}\mathcal{M}^\epsilon \le c_\epsilon := \Bigl(\ln\frac{2}{1+\eta}\Bigr)^{-1} \ln m_\epsilon\Bigl(\frac{4K_\epsilon(1+L_\epsilon^2)^{1/2}}{1-\eta}\Bigr), \tag{9}$$

where $\dim_{f,X}A$ denotes the fractal dimension of $A$ in $X$ and $m_\epsilon(R)$ is the maximal number of pairs $(x_i,y_i)$ in $X\times X$ possessing the properties

$$\|x_i\|_X^2 + \|y_i\|_X^2 \le R^2, \qquad n_1^\epsilon(x_i-x_j) + n_2^\epsilon(y_i-y_j) > 1, \quad i\ne j; \tag{10}$$

(2) $\mathcal{M}^\epsilon$ attracts $B$ in $X$, uniformly with respect to $\epsilon$:

$$\operatorname{dist}_X(S_\epsilon^i B, \mathcal{M}^\epsilon) \le c_1 e^{-c_2 i}, \quad c_2>0,\ i\in\mathbb{N}, \tag{11}$$

where $c_1$ and $c_2$ are independent of $\epsilon$;

(3) the family $\{\mathcal{M}^\epsilon,\ \epsilon\in[0,\epsilon_0]\}$ is continuous at $0$:

$$\operatorname{dist}_{\mathrm{sym},X}(\mathcal{M}^\epsilon,\mathcal{M}^0) \le c_3 \epsilon^{c_4}, \tag{12}$$

where $c_3$ and $c_4\in(0,1)$ are independent of $\epsilon$ and $\operatorname{dist}_{\mathrm{sym},X}$ denotes the symmetric Hausdorff distance between sets, defined by

$$\operatorname{dist}_{\mathrm{sym},X}(A,B) := \max\bigl\{\operatorname{dist}_X(A,B),\ \operatorname{dist}_X(B,A)\bigr\}. \tag{13}$$

Proof. For any fixed $\epsilon\in[0,\epsilon_0]$, set $\varrho_\epsilon(x,y) = K_\epsilon\bigl(n_1^\epsilon(x-y) + n_2^\epsilon(S_\epsilon x - S_\epsilon y)\bigr)$; then $\varrho_\epsilon$ is compact on $B$ in the sense of Definition 2. From [18], the local $(r,\rho,\varrho_\epsilon)$-capacity of the set $B$ admits the estimate

$$C_{\varrho_\epsilon}(B;r,\rho) \le \ln m_\epsilon\Bigl(\frac{2K_\epsilon(1+L_\epsilon^2)^{1/2}\, r}{\rho}\Bigr), \tag{14}$$

where $m_\epsilon(R)$ is the maximal number of pairs $(x_i,y_i)$ in $X\times X$ possessing the properties

$$\|x_i\|_X^2 + \|y_i\|_X^2 \le R^2, \qquad n_1^\epsilon(x_i-x_j) + n_2^\epsilon(y_i-y_j) > 1, \quad i\ne j. \tag{15}$$

We assume $\operatorname{diam}(B) = 2R$. Let $\delta = (1-\eta)/2$ and let $\{x_{i_1} : i_1=1,\dots,n_1\}$ be a maximal $(\delta R,\varrho_\epsilon)$-distinguishable subset of $B$. Then, from (14), we have

$$n_1 = m_{\varrho_\epsilon}(B,\delta R) \le \exp C_{\varrho_\epsilon}(B; R, \delta R) \le m_\epsilon\Bigl(\frac{2K_\epsilon(1+L_\epsilon^2)^{1/2}}{\delta}\Bigr) := P_\epsilon, \qquad B = \bigcup_{i_1=1}^{n_1} B_{i_1}, \quad B_{i_1} = \{\upsilon\in B : \varrho_\epsilon(\upsilon, x_{i_1}) \le \delta R\}. \tag{16}$$

Therefore,

$$S_\epsilon B = \bigcup_{i_1=1}^{n_1} S_\epsilon B_{i_1}. \tag{17}$$

If $y_1,y_2\in B_{i_1}$, then from (7) we have

$$\|S_\epsilon y_1 - S_\epsilon y_2\|_X \le \eta\|y_1-y_2\|_X + \varrho_\epsilon(y_1,x_{i_1}) + \varrho_\epsilon(y_2,x_{i_1}) \le 2\eta R + 2\delta R := 2qR, \tag{18}$$

where $q = \eta + \delta = (1+\eta)/2 < 1$. We set $E_\epsilon^0 = \{x_{i_1}\}_{i_1=1}^{n_1}$ and $E_\epsilon^1 = S_\epsilon E_\epsilon^0$; then

$$E_\epsilon^1 \subset S_\epsilon B, \quad S_\epsilon E_\epsilon^0 \subset E_\epsilon^1, \quad \#E_\epsilon^1 \le P_\epsilon \le P_\epsilon^2, \quad \operatorname{dist}_X(S_\epsilon B, E_\epsilon^1) \le 2qR. \tag{19}$$

Next, for any fixed $i_1\in\{1,\dots,n_1\}$, let $\{x_{i_1,i_2} : i_2 = 1,\dots,n_{i_1,2}\}$ be a maximal $(\delta qR,\varrho_\epsilon)$-distinguishable subset of $S_\epsilon B_{i_1}$. Then

$$n_{i_1,2} = m_{\varrho_\epsilon}(S_\epsilon B_{i_1}, \delta qR) \le \exp C_{\varrho_\epsilon}(S_\epsilon B; qR, \delta qR) \le \exp C_{\varrho_\epsilon}(B; qR, \delta qR) \le P_\epsilon, \qquad S_\epsilon B_{i_1} = \bigcup_{i_2=1}^{n_{i_1,2}} B_{i_1,i_2}, \quad B_{i_1,i_2} = \{\upsilon\in S_\epsilon B_{i_1} : \varrho_\epsilon(\upsilon, x_{i_1,i_2}) \le \delta qR\}. \tag{20}$$

Therefore,

$$S_\epsilon^2 B = \bigcup_{i_1=1}^{n_1}\bigcup_{i_2=1}^{n_{i_1,2}} S_\epsilon B_{i_1,i_2}. \tag{21}$$

If $y_1,y_2\in B_{i_1,i_2}$, then from (7) we have

$$\|S_\epsilon y_1 - S_\epsilon y_2\|_X \le \eta\|y_1-y_2\|_X + \varrho_\epsilon(y_1,x_{i_1,i_2}) + \varrho_\epsilon(y_2,x_{i_1,i_2}) \le 2\eta qR + 2\delta qR = 2q^2 R. \tag{22}$$

We set $E_\epsilon^2 = S_\epsilon E_\epsilon^1 \cup S_\epsilon\{x_{i_1,i_2}\}$, and then we have

$$E_\epsilon^2 \subset S_\epsilon^2 B, \quad S_\epsilon E_\epsilon^1 \subset E_\epsilon^2, \quad \#E_\epsilon^2 \le P_\epsilon^2 + P_\epsilon^2 \le P_\epsilon^3, \quad \operatorname{dist}_X(S_\epsilon^2 B, E_\epsilon^2) \le 2q^2 R. \tag{23}$$

By the induction procedure, we can find sets $E_\epsilon^i$, $i\in\mathbb{N}$, enjoying the following properties:

$$E_\epsilon^i \subset S_\epsilon^i B, \tag{24}$$

$$S_\epsilon E_\epsilon^i \subset E_\epsilon^{i+1}, \tag{25}$$

$$\#E_\epsilon^i \le P_\epsilon^{i+1}, \tag{26}$$

$$\operatorname{dist}_X(S_\epsilon^i B, E_\epsilon^i) \le 2q^i R. \tag{27}$$

We now define the exponential attractor for the map $S_0\colon B\to B$ as follows:

$$\mathcal{M}_0' = \bigcup_{i=1}^{\infty} E_0^i, \qquad \mathcal{M}^0 = \overline{\mathcal{M}_0'}^{\,X}. \tag{28}$$

Then, from (24)–(27), we see that $\mathcal{M}^0$ is indeed an exponential attractor for the map $S_0\colon B\to B$ (see [12]). For $\epsilon\in(0,\epsilon_0]$, one can also construct exponential attractors for $S_\epsilon\colon B\to B$ as above; however, these are not the ones we need. Entirely similarly to [12], one can construct exponential attractors $\mathcal{M}^\epsilon$ based on $E_0^i$. We note that the only difference between our construction procedure and that of [12] is that the number $P_\epsilon$ here may depend on $\epsilon$; however, (26) contributes only to the fractal dimension of $\mathcal{M}^\epsilon$. Therefore, (11) and (12) in Theorem 1 hold true.
## 3. Application to the Nonclassical Diffusion Equation

### 3.1. Some Useful Estimates of the Solution

Since $(-\Delta)^{-1}$ is a continuous compact operator in $L^2(\Omega)$, by the classical spectral theorem there exist a sequence $\{\lambda_j\}_{j=1}^\infty$ with $0<\lambda_1\le\lambda_2\le\cdots\le\lambda_j\to\infty$ as $j\to\infty$ and a family $\{e_j\}_{j=1}^\infty$ of elements of $D(-\Delta)$, which forms an orthogonal basis in both $L^2(\Omega)$ and $H_0^1(\Omega)$, such that $-\Delta e_j=\lambda_je_j$ for all $j\in\mathbb{N}$. Given $m$, let $X_m=\operatorname{span}\{e_1,\dots,e_m\}$ and let $P_m:L^2(\Omega)\to X_m$ be the projection operator. For any $\upsilon\in H_0^1(\Omega)$, we write $\upsilon=P_m\upsilon+(I-P_m)\upsilon:=\upsilon_1+\upsilon_2$.
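To fix ideas with a purely illustrative example (in one space dimension, outside the standing assumption $N\ge 3$): for $\Omega=(0,\pi)$ the Dirichlet eigenpairs of $-\Delta$ are explicit,
$$e_j(x)=\sin(jx),\qquad\lambda_j=j^2,\qquad P_m\upsilon(x)=\sum_{j=1}^m\frac{2}{\pi}\left(\int_0^\pi\upsilon(y)\sin(jy)\,dy\right)\sin(jx),$$
so $\upsilon_1=P_m\upsilon$ is the $m$-mode Fourier-sine truncation and $\upsilon_2=(I-P_m)\upsilon$ carries the high frequencies, for which $\|\upsilon_2\|^2\le\lambda_{m+1}^{-1}\|\nabla\upsilon_2\|^2$; this is the form of the Poincaré inequality used repeatedly below.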
Lemma 1 (see [1]). Let $\epsilon>0$. Assume (F1)–(F4) hold and $g\in L^2(\Omega)$. Then, for each $u_0\in H_0^1(\Omega)$, problems (1) and (2) have a unique solution $u^\epsilon=u^\epsilon(t)=u^\epsilon(t;u_0)$ with $u^\epsilon\in C([0,T],H_0^1(\Omega))$ and $u_t^\epsilon\in L^2(0,T;H_0^1(\Omega))$ for any $T>0$. Moreover, for any fixed $t>0$, $u^\epsilon$ is continuous in $u_0$.

Lemma 2 (see [1]). Let $\epsilon=0$. Assume (F1)–(F4) hold and $g\in L^2(\Omega)$. Then, for each $u_0\in H_0^1(\Omega)$, problems (1) and (2) have a unique solution $u^0=u^0(t)=u^0(t;u_0)$ which satisfies
$$u^0\in C([0,T];H_0^1(\Omega))\cap L^2(0,T;H^2(\Omega)),\quad\forall T>0. \tag{29}$$
Also, for any fixed $t>0$, $u^0$ is continuous in $u_0$.

From Lemmas 1 and 2, we can define a semigroup $S_\epsilon(t):H_0^1(\Omega)\to H_0^1(\Omega)$ by
$$S_\epsilon(t)u_0=u^\epsilon(t),\quad t\ge 0, \tag{30}$$
where $u^\epsilon(t)$ is the solution of (1) and (2).

Lemma 3 (see [1]). Assume (F1)–(F4) hold and $g\in L^2(\Omega)$. Then, for any bounded set $D\subset H_0^1(\Omega)$, there exist positive constants $E_D$, $c_0$, and $T_1(D)$ such that, for any solution $u^\epsilon$ of problems (1) and (2) with $u_0\in D$,
$$\|\nabla u^\epsilon(t)\|\le E_D,\quad t\ge 0;\qquad\|\nabla u^\epsilon(t)\|\le c_0,\quad t\ge T_1(D), \tag{31}$$
where $E_D$, $c_0$, and $T_1(D)$ are independent of $\epsilon$.

Set $B=\{\upsilon\in H_0^1(\Omega):\|\upsilon\|_{H_0^1(\Omega)}\le c_0\}$, where $c_0$ is the constant in Lemma 3; then $B$ is a uniformly (with respect to $\epsilon$) bounded absorbing set for $S_\epsilon(t)$ in $H_0^1(\Omega)$. We note that the absorbing time is independent of $\epsilon$, and we can choose $T_1=T_1(B)$ large enough that $S_\epsilon(t)B\subset B$ for any $t\ge T_1$ and any $\epsilon\in[0,1]$.

The following lemma provides some a priori estimates for the derivative $u_t^\epsilon(t)$ of the solution to (1) and (2).

Lemma 4. Under assumptions (F1)–(F4), for any bounded set $D\subset H_0^1(\Omega)$ there exists $T_2(D)$, independent of $\epsilon$, such that, for all $\epsilon\in[0,1]$,
$$\|u_t^\epsilon(t)\|^2+\epsilon\|\nabla u_t^\epsilon(t)\|^2\le c, \tag{32}$$
$$\int_t^{t+1}\|\nabla u_t^\epsilon(s)\|^2\,ds\le c, \tag{33}$$
for any $u_0\in D$ and any $t\ge T_2(D)$, where $c$ is a constant independent of $\epsilon$.

Proof. We first take the inner product of (1) with $u^\epsilon(t)$ in $L^2(\Omega)$ to get
$$\frac12\frac{d}{dt}\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)+\|\nabla u^\epsilon\|^2+\left(f(u^\epsilon),u^\epsilon\right)=(g,u^\epsilon). \tag{34}$$

By condition (F4), for any $\theta>0$ there exists $c_\theta>0$ satisfying
$$\left(f(u),u\right)-\kappa_2\int_\Omega F(u)+\theta\|u\|^2+c_\theta\ge 0,\quad\forall u\in H_0^1(\Omega). \tag{35}$$

Putting (35) into (34) yields
$$\frac{d}{dt}\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)+2\|\nabla u^\epsilon\|^2+2\kappa_2\int_\Omega F(u^\epsilon)\le 2(g,u^\epsilon)+2\theta\|u^\epsilon\|^2+2c_\theta. \tag{36}$$

By condition (F3), for any $\gamma>0$ there exists $c_\gamma>0$ satisfying
$$\int_\Omega F(u)+\gamma\|u\|^2+c_\gamma\ge 0,\quad\forall u\in H_0^1(\Omega). \tag{37}$$

Combining (36) and (37), we get
$$\frac{d}{dt}\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)+2\|\nabla u^\epsilon\|^2\le 2(g,u^\epsilon)+2\theta\|u^\epsilon\|^2+2c_\theta+2\kappa_2\gamma\|u^\epsilon\|^2+2\kappa_2c_\gamma, \tag{38}$$
and hence, in particular,
$$\frac{d}{dt}\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)+\|\nabla u^\epsilon\|^2\le 2(g,u^\epsilon)+2\theta\|u^\epsilon\|^2+2c_\theta+2\kappa_2\gamma\|u^\epsilon\|^2+2\kappa_2c_\gamma. \tag{39}$$

We choose $\alpha_1>0$ small enough that $\alpha_1\lambda_1<1$ and $1-\alpha_1>1/2$, and apply the Poincaré inequality in (39) to get
$$\frac{d}{dt}\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)+\alpha_1\lambda_1\|u^\epsilon\|^2+(1-\alpha_1)\|\nabla u^\epsilon\|^2\le 2(g,u^\epsilon)+2\theta\|u^\epsilon\|^2+2c_\theta+2\kappa_2\gamma\|u^\epsilon\|^2+2\kappa_2c_\gamma. \tag{40}$$

Choosing $\theta,\gamma$ small enough and using the Hölder and Young inequalities, we get from (40) that
$$\frac{d}{dt}\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)+\frac{\alpha_1\lambda_1}{2}\|u^\epsilon\|^2+\frac12\|\nabla u^\epsilon\|^2\le c. \tag{41}$$

We set $\sigma=\alpha_1\lambda_1/2$; then $0<\sigma<1/2$. Since $\epsilon\le 1$, from (41) we obtain
$$\frac{d}{dt}\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)+\sigma\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)\le c, \tag{42}$$
that is,
$$\frac{d}{dt}\left(e^{\sigma t}\left(\|u^\epsilon(t)\|^2+\epsilon\|\nabla u^\epsilon(t)\|^2\right)\right)\le ce^{\sigma t}. \tag{43}$$

Integrating the above inequality from $0$ to $t$ yields
$$e^{\sigma t}\left(\|u^\epsilon(t)\|^2+\epsilon\|\nabla u^\epsilon(t)\|^2\right)\le\|u_0\|^2+\epsilon\|\nabla u_0\|^2+ce^{\sigma t},\quad t\ge 0. \tag{44}$$

Now we consider (36) again. If $0<\kappa_2\le 1$, we can get from (36) that
$$\frac{d}{dt}\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)+\|\nabla u^\epsilon\|^2+\kappa_2\left(\|\nabla u^\epsilon\|^2+2\int_\Omega F(u^\epsilon)\right)\le 2(g,u^\epsilon)+2\theta\|u^\epsilon\|^2+2c_\theta. \tag{45}$$

Using a process similar to (39)–(42), we deduce that, for $\theta$ small enough,
$$\frac{d}{dt}\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)+\sigma\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)+\kappa_2\left(\|\nabla u^\epsilon\|^2+2\int_\Omega F(u^\epsilon)\right)\le c. \tag{46}$$

Therefore,
$$\frac{d}{dt}\left(e^{\sigma t}\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)\right)+\kappa_2e^{\sigma t}\left(\|\nabla u^\epsilon\|^2+2\int_\Omega F(u^\epsilon)\right)\le ce^{\sigma t}. \tag{47}$$

If $\kappa_2>1$, we can get from (36) that
$$\frac{d}{dt}\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)+2\|\nabla u^\epsilon\|^2+2\int_\Omega F(u^\epsilon)+(2\kappa_2-2)\int_\Omega F(u^\epsilon)\le 2(g,u^\epsilon)+2\theta\|u^\epsilon\|^2+2c_\theta. \tag{48}$$

Applying (37) to the last term on the left-hand side, we obtain
$$\frac{d}{dt}\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)+\|\nabla u^\epsilon\|^2+\left(\|\nabla u^\epsilon\|^2+2\int_\Omega F(u^\epsilon)\right)\le 2(g,u^\epsilon)+2\theta\|u^\epsilon\|^2+2c_\theta+(2\kappa_2-2)\gamma\|u^\epsilon\|^2+(2\kappa_2-2)c_\gamma. \tag{49}$$

Following a process similar to (39)–(42), we deduce that, for $\theta,\gamma$ small enough,
$$\frac{d}{dt}\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)+\sigma\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)+\left(\|\nabla u^\epsilon\|^2+2\int_\Omega F(u^\epsilon)\right)\le c. \tag{50}$$

Therefore,
$$\frac{d}{dt}\left(e^{\sigma t}\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)\right)+e^{\sigma t}\left(\|\nabla u^\epsilon\|^2+2\int_\Omega F(u^\epsilon)\right)\le ce^{\sigma t}. \tag{51}$$

From (47) and (51), we see that, for any $\kappa_2>0$,
$$\frac{d}{dt}\left(e^{\sigma t}\left(\|u^\epsilon\|^2+\epsilon\|\nabla u^\epsilon\|^2\right)\right)+c_5e^{\sigma t}\left(\|\nabla u^\epsilon\|^2+2\int_\Omega F(u^\epsilon)\right)\le ce^{\sigma t}, \tag{52}$$
where $c_5=\kappa_2$ when $0<\kappa_2\le 1$ and $c_5=1$ when $\kappa_2>1$.

For any $t>0$, we integrate (52) over $[t,t+1]$ to get
$$\int_t^{t+1}e^{\sigma s}\left(\|\nabla u^\epsilon(s)\|^2+2\int_\Omega F(u^\epsilon(s))\right)ds\le ce^{\sigma t}\left(\|u^\epsilon(t)\|^2+\epsilon\|\nabla u^\epsilon(t)\|^2\right)+ce^{\sigma t}. \tag{53}$$

Putting (44) into (53), we have, for any $t>0$,
$$\int_t^{t+1}e^{\sigma s}\left(\|\nabla u^\epsilon(s)\|^2+2\int_\Omega F(u^\epsilon(s))\right)ds\le c\left(\|u_0\|^2+\epsilon\|\nabla u_0\|^2\right)+ce^{\sigma t}. \tag{54}$$

Next, we multiply (1) by $u_t^\epsilon$ and integrate over $\Omega$ to get
$$\|u_t^\epsilon\|^2+\epsilon\|\nabla u_t^\epsilon\|^2+\frac12\frac{d}{dt}\left(\|\nabla u^\epsilon\|^2+2\int_\Omega F(u^\epsilon)\right)=(g,u_t^\epsilon). \tag{55}$$

Applying the Hölder and Young inequalities to the above, we get
$$\|u_t^\epsilon\|^2+\epsilon\|\nabla u_t^\epsilon\|^2+\frac{d}{dt}\left(\|\nabla u^\epsilon\|^2+2\int_\Omega F(u^\epsilon)\right)\le c,\quad t>0. \tag{56}$$

Thus,
$$e^{\sigma t}\left(\|u_t^\epsilon\|^2+\epsilon\|\nabla u_t^\epsilon\|^2\right)+\frac{d}{dt}\left(e^{\sigma t}\left(\|\nabla u^\epsilon\|^2+2\int_\Omega F(u^\epsilon)\right)\right)\le ce^{\sigma t}+ce^{\sigma t}\left(\|\nabla u^\epsilon\|^2+2\int_\Omega F(u^\epsilon)\right),\quad t>0. \tag{57}$$

Dropping the first term on the left-hand side of (57) and using (54) together with the uniform Gronwall inequality, we conclude that
$$e^{\sigma t}\left(\|\nabla u^\epsilon(t)\|^2+2\int_\Omega F(u^\epsilon(t))\right)\le c\left(\|u_0\|^2+\epsilon\|\nabla u_0\|^2\right)+ce^{\sigma t},\quad t\ge 1. \tag{58}$$

We now differentiate equation (1) with respect to $t$ to get
$$u_{tt}^\epsilon-\epsilon\Delta u_{tt}^\epsilon-\Delta u_t^\epsilon+f'(u^\epsilon)u_t^\epsilon=0. \tag{59}$$

Taking the inner product of (59) with $u_t^\epsilon$ in $L^2(\Omega)$ and using (F1), we obtain
$$\frac{d}{dt}\left(\|u_t^\epsilon\|^2+\epsilon\|\nabla u_t^\epsilon\|^2\right)+2\|\nabla u_t^\epsilon\|^2\le c\|u_t^\epsilon\|^2. \tag{60}$$

Since $0<\sigma<1/2$, we have
$$\frac{d}{dt}\left(\|u_t^\epsilon\|^2+\epsilon\|\nabla u_t^\epsilon\|^2\right)+\sigma\left(\|u_t^\epsilon\|^2+\epsilon\|\nabla u_t^\epsilon\|^2\right)\le c\|u_t^\epsilon\|^2. \tag{61}$$

Hence,
$$\frac{d}{dt}\left(e^{\sigma t}\left(\|u_t^\epsilon\|^2+\epsilon\|\nabla u_t^\epsilon\|^2\right)\right)\le ce^{\sigma t}\left(\|u_t^\epsilon\|^2+\epsilon\|\nabla u_t^\epsilon\|^2\right). \tag{62}$$

Integrating (57) over $(t,t+1)$ and using (58), we deduce that
$$\int_t^{t+1}e^{\sigma s}\left(\|u_t^\epsilon(s)\|^2+\epsilon\|\nabla u_t^\epsilon(s)\|^2\right)ds\le c\left(\|u_0\|^2+\epsilon\|\nabla u_0\|^2\right)+ce^{\sigma t},\quad t\ge 1. \tag{63}$$

Combining (62) and (63) and using the uniform Gronwall inequality, we obtain
$$\|u_t^\epsilon(t)\|^2+\epsilon\|\nabla u_t^\epsilon(t)\|^2\le ce^{-\sigma t}\left(\|u_0\|^2+\epsilon\|\nabla u_0\|^2\right)+c,\quad t\ge 2. \tag{64}$$

Thus, for any bounded $D\subset H_0^1(\Omega)$, there exists $T_2(D)\ge 2$, independent of $\epsilon$, such that, for any $t\ge T_2(D)$ and any $u_0\in D$,
$$\|u_t^\epsilon(t)\|^2+\epsilon\|\nabla u_t^\epsilon(t)\|^2\le c, \tag{65}$$
where $c$ is independent of $\epsilon$. This proves assertion (32) of Lemma 4. Finally, integrating (60) over $(t,t+1)$, assertion (33) follows readily from (32). The proof is completed.
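For the reader's convenience, we recall the uniform Gronwall inequality used above (stated in its standard form): if $y,a,b$ are nonnegative, locally integrable functions on $(t_*,\infty)$ satisfying $y'\le ay+b$ and
$$\int_t^{t+1}a(s)\,ds\le a_1,\qquad\int_t^{t+1}b(s)\,ds\le a_2,\qquad\int_t^{t+1}y(s)\,ds\le a_3\qquad\text{for all }t\ge t_*,$$
then
$$y(t+1)\le(a_2+a_3)e^{a_1},\qquad t\ge t_*.$$
This is how (54) and (57) combine to give (58), and how (62) and (63) give (64).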
Lemma 5. Under conditions (F1)–(F4), for any two solutions $u^{\epsilon_1}(t)$, $u^{\epsilon_2}(t)$ of (1) and (2) (corresponding to $\epsilon_1,\epsilon_2\in[0,1]$, respectively) with initial data in $B$, and for any $\upsilon\in L^2(\Omega)$, we have
$$\left|\int_\Omega\left(f(u^{\epsilon_1}(t))-f(u^{\epsilon_2}(t))\right)\upsilon\,dx\right|\le c_6\|\nabla w_{\epsilon_1,\epsilon_2}(t)\|\,\|\upsilon\|,\quad t\ge 0, \tag{66}$$
where $w_{\epsilon_1,\epsilon_2}(t)=u^{\epsilon_1}(t)-u^{\epsilon_2}(t)$ and $c_6$ is a constant independent of $\epsilon$.

Proof. By (F2), we have
$$\begin{aligned}\left|\int_\Omega\left(f(u^{\epsilon_1}(t))-f(u^{\epsilon_2}(t))\right)\upsilon\,dx\right|&\le\int_\Omega\left|l(t,\epsilon_1,\epsilon_2)\right|\left|w_{\epsilon_1,\epsilon_2}(t)\right|\left|\upsilon\right|dx\le c\int_\Omega\left(1+|u^{\epsilon_1}(t)|^{2/(N-2)}+|u^{\epsilon_2}(t)|^{2/(N-2)}\right)\left|w_{\epsilon_1,\epsilon_2}(t)\right|\left|\upsilon\right|dx\\&\le c\int_\Omega\left|w_{\epsilon_1,\epsilon_2}(t)\right|\left|\upsilon\right|dx+c\int_\Omega|u^{\epsilon_1}(t)|^{2/(N-2)}\left|w_{\epsilon_1,\epsilon_2}(t)\right|\left|\upsilon\right|dx+c\int_\Omega|u^{\epsilon_2}(t)|^{2/(N-2)}\left|w_{\epsilon_1,\epsilon_2}(t)\right|\left|\upsilon\right|dx,\end{aligned} \tag{67}$$
where $l(t,\epsilon_1,\epsilon_2)=\int_0^1f'\left(su^{\epsilon_1}(t)+(1-s)u^{\epsilon_2}(t)\right)ds$. From (31) and the Sobolev embedding $H_0^1(\Omega)\subset L^{2N/(N-2)}(\Omega)$, we have, for $t\ge 0$,
$$\begin{aligned}\left|\int_\Omega\left(f(u^{\epsilon_1}(t))-f(u^{\epsilon_2}(t))\right)\upsilon\,dx\right|&\le c\|w_{\epsilon_1,\epsilon_2}\|\|\upsilon\|+c\|u^{\epsilon_1}\|_{2N/(N-2)}^{2/(N-2)}\|w_{\epsilon_1,\epsilon_2}\|_{2N/(N-2)}\|\upsilon\|+c\|u^{\epsilon_2}\|_{2N/(N-2)}^{2/(N-2)}\|w_{\epsilon_1,\epsilon_2}\|_{2N/(N-2)}\|\upsilon\|\\&\le c\|\nabla w_{\epsilon_1,\epsilon_2}\|\|\upsilon\|+c\|\nabla u^{\epsilon_1}\|^{2/(N-2)}\|\nabla w_{\epsilon_1,\epsilon_2}\|\|\upsilon\|+c\|\nabla u^{\epsilon_2}\|^{2/(N-2)}\|\nabla w_{\epsilon_1,\epsilon_2}\|\|\upsilon\|\le c_6\|\nabla w_{\epsilon_1,\epsilon_2}\|\|\upsilon\|.\end{aligned} \tag{68}$$
The proof is completed.

### 3.2. The Main Result for the General Case: $\nu\in\mathbb{R}^+$

We now verify the conditions of Theorem 1 for $S_\epsilon(t)$ in this case. We first verify condition (i), i.e., the Lipschitz continuity in $H_0^1$ (in fact, uniform Lipschitz continuity in $H_0^1$, since the coefficient is independent of $\epsilon$).

Lemma 6. Under assumptions (F1)–(F4), we have, for any $\epsilon\in[0,1]$ and any $x_1,x_2\in B$,
$$\|S_\epsilon(t)x_1-S_\epsilon(t)x_2\|_{H_0^1(\Omega)}\le e^{c_6^2t}\|x_1-x_2\|_{H_0^1(\Omega)},\quad t\ge 0, \tag{69}$$
where $c_6$ is the constant in Lemma 5.

Proof. Assume that $u_1^\epsilon(t)$ and $u_2^\epsilon(t)$ are two solutions of (1) and (2) starting from $x_1,x_2\in B$, respectively. The difference $w^\epsilon(t)=u_1^\epsilon(t)-u_2^\epsilon(t)$ satisfies
$$w_t^\epsilon(t)-\epsilon\Delta w_t^\epsilon(t)-\Delta w^\epsilon(t)+l(t,\epsilon)w^\epsilon(t)=0, \tag{70}$$
where $l(t,\epsilon)=\int_0^1f'\left(su_1^\epsilon(t)+(1-s)u_2^\epsilon(t)\right)ds$. Multiplying (70) by $w_t^\epsilon(t)$ and integrating over $\Omega$, we get
$$\|w_t^\epsilon(t)\|^2+\epsilon\|\nabla w_t^\epsilon(t)\|^2+\frac12\frac{d}{dt}\|\nabla w^\epsilon(t)\|^2\le\left|\int_\Omega l(t,\epsilon)w^\epsilon(t)w_t^\epsilon(t)\,dx\right|. \tag{71}$$

Applying Lemma 5, we obtain
$$\|w_t^\epsilon(t)\|^2+\epsilon\|\nabla w_t^\epsilon(t)\|^2+\frac12\frac{d}{dt}\|\nabla w^\epsilon(t)\|^2\le c_6\|\nabla w^\epsilon(t)\|\|w_t^\epsilon(t)\|,\quad t\ge 0. \tag{72}$$

Using the Young inequality $c_6\|\nabla w^\epsilon\|\|w_t^\epsilon\|\le\|w_t^\epsilon\|^2+\frac{c_6^2}{4}\|\nabla w^\epsilon\|^2$, we get
$$\frac{d}{dt}\|\nabla w^\epsilon(t)\|^2\le c_6^2\|\nabla w^\epsilon(t)\|^2. \tag{73}$$

The result then follows immediately from the Gronwall inequality. The proof is completed.

Lemma 7. Under assumptions (F1)–(F4), we have, for any $x_1,x_2\in B$ and any $\eta\in(0,1)$:

(i) For $\epsilon\in(0,\epsilon_0]$: there exist $T_3=T_3(\eta)$, $K>0$, and a compact seminorm $n_1^{\epsilon,t}$ on $B$ such that
$$\|S_\epsilon(t)x_1-S_\epsilon(t)x_2\|_{H_0^1(\Omega)}\le\eta\|x_1-x_2\|_{H_0^1(\Omega)}+Kn_1^{\epsilon,t}(x_1-x_2),\quad t\ge T_3. \tag{74}$$

(ii) For $\epsilon=0$: there exist a positive integer $M$, $T_3=T_3(\eta)$, and $K>0$ such that
$$\|S_0(t)x_1-S_0(t)x_2\|_{H_0^1(\Omega)}\le\eta\|x_1-x_2\|_{H_0^1(\Omega)}+Kn_2\left(S_0(t)x_1-S_0(t)x_2\right),\quad t\ge T_3, \tag{75}$$

for some $\epsilon_0>0$, where $n_2(x)=\|P_Mx\|_{H_0^1(\Omega)}$; here $M$ and $K$ are independent of $\epsilon$, $\eta$, and $t$, while $n_1^{\epsilon,t}$ depends on $\epsilon$ and $t$.

Proof. We first take the inner product of (70) with $kw^\epsilon$ in $L^2(\Omega)$ ($k$ is a constant to be fixed later) to get
$$k(w_t^\epsilon,w^\epsilon)+k\epsilon(\nabla w_t^\epsilon,\nabla w^\epsilon)+k\|\nabla w^\epsilon\|^2+k\int_\Omega l(t,\epsilon)|w^\epsilon|^2=0. \tag{76}$$

Using (F1), we get
$$k\|\nabla w^\epsilon\|^2\le k\|w_t^\epsilon\|\|w^\epsilon\|+k\epsilon\|\nabla w_t^\epsilon\|\|\nabla w^\epsilon\|+k\nu\|w^\epsilon\|^2. \tag{77}$$

Combining (72) and (77), we get
$$\|w_t^\epsilon\|^2+\epsilon\|\nabla w_t^\epsilon\|^2+\frac12\frac{d}{dt}\|\nabla w^\epsilon\|^2+k\|\nabla w^\epsilon\|^2\le k\|w_t^\epsilon\|\|w^\epsilon\|+k\epsilon\|\nabla w_t^\epsilon\|\|\nabla w^\epsilon\|+k\nu\|w^\epsilon\|^2+c_6\|\nabla w^\epsilon\|\|w_t^\epsilon\|. \tag{78}$$

For the right-hand side of (78), we use the Young inequality to get
$$k\|w_t^\epsilon\|\|w^\epsilon\|+k\epsilon\|\nabla w_t^\epsilon\|\|\nabla w^\epsilon\|+k\nu\|w^\epsilon\|^2+c_6\|\nabla w^\epsilon\|\|w_t^\epsilon\|\le\left(k\nu+\frac{k^2}{2}\right)\|w^\epsilon\|^2+\|w_t^\epsilon\|^2+\frac{\epsilon}{2}\|\nabla w_t^\epsilon\|^2+\left(\frac{k^2\epsilon}{2}+\frac{c_6^2}{2}\right)\|\nabla w^\epsilon\|^2. \tag{79}$$

Putting the above inequality into (78), we obtain
$$\frac{d}{dt}\|\nabla w^\epsilon\|^2+2k\|\nabla w^\epsilon\|^2\le\left(2k\nu+k^2\right)\|w^\epsilon\|^2+\left(k^2\epsilon+c_6^2\right)\|\nabla w^\epsilon\|^2. \tag{80}$$

Fix $k=c_6^2$, and then choose $\epsilon_0$ such that $\epsilon<1/c_6^2$ for all $\epsilon\le\epsilon_0$. Then we get from (80) that
$$\frac{d}{dt}\|\nabla w^\epsilon\|^2+c_7\|\nabla w^\epsilon\|^2\le c\|w^\epsilon\|^2,\quad\epsilon\in(0,\epsilon_0],\ c_7>0, \tag{81}$$
that is,
$$\frac{d}{dt}\left(e^{c_7t}\|\nabla w^\epsilon\|^2\right)\le ce^{c_7t}\|w^\epsilon\|^2,\quad\epsilon\in(0,\epsilon_0]. \tag{82}$$

We integrate the above inequality over $(0,t)$ to get
$$\|\nabla w^\epsilon(t)\|^2\le e^{-c_7t}\|\nabla w^\epsilon(0)\|^2+c\int_0^t\|w^\epsilon(s)\|^2ds,\quad\epsilon\in(0,\epsilon_0]. \tag{83}$$

For any $\eta\in(0,1)$, we can choose $T_3=T_3(\eta)>0$ such that $e^{-c_7T_3}\le\eta^2$. Therefore, from (83),
$$\|\nabla w^\epsilon(t)\|^2\le\eta^2\|\nabla w^\epsilon(0)\|^2+c\int_0^t\|w^\epsilon(s)\|^2ds,\quad t\ge T_3,\ \epsilon\in(0,\epsilon_0], \tag{84}$$
and thus
$$\|\nabla w^\epsilon(t)\|\le\eta\|\nabla w^\epsilon(0)\|+K\left(\int_0^t\|w^\epsilon(s)\|^2ds\right)^{1/2},\quad t\ge T_3,\ \epsilon\in(0,\epsilon_0]. \tag{85}$$

Now we set $n_1^{\epsilon,t}(x_1-x_2)=\left(\int_0^t\|w^\epsilon(s)\|^2ds\right)^{1/2}$. We need to show that $n_1^{\epsilon,t}$ is compact on $B$ for any $t>0$ and any fixed $\epsilon\in(0,1]$. From condition (F2), we deduce that, for any $s\in\mathbb{R}$,
$$|f(s)s|\le c\left(|s|+|s|^2+|s|^{2(N-1)/(N-2)}\right). \tag{86}$$

Hence,
$$\left|\left(f(u),u\right)\right|\le c\left(\|u\|^2+\|u\|_{2(N-1)/(N-2)}^{2(N-1)/(N-2)}\right)\le c\left(\|u\|^2+\|\nabla u\|^{2(N-1)/(N-2)}\right)\le c\left(\|\nabla u\|^2+\|\nabla u\|^{2(N-1)/(N-2)}\right),\quad\forall u\in H_0^1(\Omega), \tag{87}$$
where we have used the Sobolev embedding $H_0^1(\Omega)\subset L^{2N/(N-2)}(\Omega)$. From (35) and (87), we obtain
$$\kappa_2\int_\Omega F(u)\le\theta\|u\|^2+c_\theta+c\left(\|\nabla u\|^2+\|\nabla u\|^{2(N-1)/(N-2)}\right),\quad\forall u\in H_0^1(\Omega), \tag{88}$$
for any $\theta>0$. For any $t\ge 0$, we can see from (31) that
$$\int_0^t\|\nabla u^\epsilon(s)\|^2ds\le ct,\quad\epsilon\in[0,1], \tag{89}$$
for any $u_0\in B$, where $c$ is independent of $\epsilon$. Next, we integrate (56) from $0$ to $t$ and get
$$\epsilon\int_0^t\|\nabla u_t^\epsilon(s)\|^2ds\le ct+c\left(\|\nabla u_0\|^2+2\int_\Omega F(u_0)\right). \tag{90}$$

We first fix $\theta$ small enough and then put (88) into (90) to get, for all $u_0\in B$,
$$\epsilon\int_0^t\|\nabla u_t^\epsilon(s)\|^2ds\le ct+c,\quad\epsilon\in[0,1]. \tag{91}$$

From (89) and (91), we can easily see that, for any $t>0$ and any fixed $\epsilon\in(0,1]$, the set of trajectories $\mathcal{B}_{\epsilon,t}=\left\{u^\epsilon(\cdot)|_{[0,t]}:u^\epsilon(0)=u_0\in B\right\}$ is bounded in $W^{1,2}(0,t;H_0^1(\Omega))$. Therefore, from the compact embedding $W^{1,2}(0,t;H_0^1(\Omega))\subset L^2(0,t;L^2(\Omega))$, we deduce that $n_1^{\epsilon,t}$ is compact on $B$. Thus, assertion (i) is proved. Assertion (ii) can be proved by multiplying (70) (with $\epsilon=0$) by $-\Delta w_2^0$; this procedure is elementary and we omit it (see, e.g., [20] for details). The proof is completed.
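The compactness step above rests on the classical Aubin–Lions–Simon embedding, which we recall in the form used here: since $H_0^1(\Omega)$ embeds compactly into $L^2(\Omega)$,
$$W^{1,2}(0,t;H_0^1(\Omega))=\left\{u\in L^2(0,t;H_0^1(\Omega)):u_t\in L^2(0,t;H_0^1(\Omega))\right\}\hookrightarrow\hookrightarrow L^2(0,t;L^2(\Omega)).$$
Consequently, any sequence $\{x_n\}\subset B$ generates trajectories admitting a subsequence that converges in $L^2(0,t;L^2(\Omega))$, so $n_1^{\epsilon,t}(x_n-x_m)\to 0$ along that subsequence; this is precisely the compactness of the seminorm $n_1^{\epsilon,t}$.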
Lemma 8. Under assumptions (F1)–(F4), for any $\epsilon\in[0,1]$, any $i\in\mathbb{N}$, and any $x\in B$, there exists a positive function $\ell(t)$, independent of $\epsilon$, such that, for all $t\ge T_2+1$,
$$\|S_\epsilon^i(t)x-S_0^i(t)x\|_{H_0^1(\Omega)}\le\ell(t)^i\epsilon^{1/4}, \tag{92}$$
where $T_2=T_2(B)$ (see Lemma 4).

Proof. We assume that $u^\epsilon(t)$ ($\epsilon\in(0,1]$) and $u^0(t)$ are the solutions of the problems
$$\begin{cases}u_t^\epsilon-\epsilon\Delta u_t^\epsilon-\Delta u^\epsilon+f(u^\epsilon)=g,&t>0,\ x\in\Omega,\\u^\epsilon|_{\partial\Omega}=0,\quad u^\epsilon(x,0)=x_1\in B,\end{cases}\qquad\begin{cases}u_t^0-\Delta u^0+f(u^0)=g,&t>0,\ x\in\Omega,\\u^0|_{\partial\Omega}=0,\quad u^0(x,0)=x_2\in B,\end{cases} \tag{93}$$
respectively. Set $w=u^\epsilon-u^0$; then $w$ satisfies
$$w_t-\epsilon\Delta u_t^\epsilon-\Delta w+l(t)w=0, \tag{94}$$
where $l(t)=\int_0^1f'\left(su^\epsilon(t)+(1-s)u^0(t)\right)ds$. Multiplying (94) by $w_t$ and using Lemma 5 yields
$$\|w_t\|^2+\epsilon(\nabla u_t^\epsilon,\nabla w_t)+\frac12\frac{d}{dt}\|\nabla w\|^2\le c_6\|\nabla w\|\|w_t\|,\quad t\ge 0. \tag{95}$$

We choose $\sigma_1=\max\{c_6^2,2\nu\}$; then the above inequality implies
$$2\epsilon\|\nabla u_t^\epsilon\|^2+\frac{d}{dt}\|\nabla w\|^2\le\sigma_1\|\nabla w\|^2+2\epsilon\|\nabla u_t^\epsilon\|\|\nabla u_t^0\|. \tag{96}$$

Therefore,
$$\frac{d}{dt}\left(e^{-\sigma_1t}\|\nabla w\|^2\right)\le 2\epsilon e^{-\sigma_1t}\|\nabla u_t^\epsilon\|\|\nabla u_t^0\|. \tag{97}$$

By Lemma 4, we see that, for $t\ge T_2$,
$$\frac{d}{dt}\left(e^{-\sigma_1t}\|\nabla w\|^2\right)\le c\epsilon^{1/2}\|\nabla u_t^0\|\le c\epsilon^{1/2}+c\epsilon^{1/2}\|\nabla u_t^0\|^2. \tag{98}$$

Multiplying (94) by $w$ and integrating over $\Omega$, we obtain
$$\frac12\frac{d}{dt}\|w\|^2+\epsilon(\nabla u_t^\epsilon,\nabla w)+\|\nabla w\|^2\le\nu\|w\|^2, \tag{99}$$
that is,
$$\frac{d}{dt}\|w\|^2+2\|\nabla w\|^2\le\sigma_1\|w\|^2+2\epsilon\|\nabla u_t^\epsilon\|\|\nabla w\|. \tag{100}$$

The above inequality implies
$$\frac{d}{dt}\left(e^{-\sigma_1t}\|w\|^2\right)+2e^{-\sigma_1t}\|\nabla w\|^2\le 2\epsilon\|\nabla u_t^\epsilon\|\|\nabla w\|. \tag{101}$$

Dropping the second term on the left-hand side of (101) and integrating over $(0,t)$, we get
$$e^{-\sigma_1t}\|w(t)\|^2\le\|w(0)\|^2+c\epsilon\left(\int_0^t\|\nabla u_t^\epsilon(s)\|^2ds\right)^{1/2}\left(\int_0^t\|\nabla w(s)\|^2ds\right)^{1/2}. \tag{102}$$

From (91), we see that, for any $\epsilon\in(0,1]$,
$$\int_0^t\|\nabla u_t^\epsilon(s)\|^2ds\le\frac{c(t+1)}{\epsilon}. \tag{103}$$

Combining (31), (102), and (103), we deduce that
$$e^{-\sigma_1t}\|w(t)\|^2\le\|w(0)\|^2+c(t+1)\epsilon^{1/2},\quad t\ge 0. \tag{104}$$

Now we integrate (101) over $(t,t+1)$ to get
$$\int_t^{t+1}e^{-\sigma_1s}\|\nabla w(s)\|^2ds\le ce^{-\sigma_1t}\|w(t)\|^2+c\epsilon\left(\int_t^{t+1}\|\nabla u_t^\epsilon(s)\|^2ds\right)^{1/2}\left(\int_t^{t+1}\|\nabla w(s)\|^2ds\right)^{1/2}. \tag{105}$$

By (31) and (33), we get, for $t\ge T_2$,
$$\int_t^{t+1}\|\nabla u_t^\epsilon(s)\|^2ds\le c,\qquad\int_t^{t+1}\|\nabla w(s)\|^2ds\le c. \tag{106}$$

Therefore, from (104)–(106), we deduce that
$$\int_t^{t+1}e^{-\sigma_1s}\|\nabla w(s)\|^2ds\le c\|w(0)\|^2+c(t+1)\epsilon^{1/2}+c\epsilon\le c\|w(0)\|^2+c(t+1)\epsilon^{1/2},\quad t\ge T_2. \tag{107}$$

On the other hand, using (33) again, we obtain
$$\int_t^{t+1}\left(c\epsilon^{1/2}+c\epsilon^{1/2}\|\nabla u_t^0(s)\|^2\right)ds\le c\epsilon^{1/2},\quad t\ge T_2. \tag{108}$$

Combining (98), (107), and (108) and arguing as in the proof of the uniform Gronwall inequality, we obtain, for all $t\ge T_2$,
$$e^{-\sigma_1(t+1)}\|\nabla w(t+1)\|^2\le c\|w(0)\|^2+c(t+1)\epsilon^{1/2}+c\epsilon^{1/2}\le c\|w(0)\|^2+c(t+1)\epsilon^{1/2}. \tag{109}$$

This implies that there exists a positive function $\ell'(t)$, independent of $\epsilon$, such that, for $t\ge T_2+1$,
$$\|S_\epsilon(t)x_1-S_0(t)x_2\|_{H_0^1(\Omega)}\le\ell'(t)\epsilon^{1/4}+\ell'(t)\|x_1-x_2\|_{H_0^1(\Omega)}. \tag{110}$$

When $x_1=x_2=x$, we get
$$\|S_\epsilon(t)x-S_0(t)x\|_{H_0^1(\Omega)}\le\ell'(t)\epsilon^{1/4},\quad t\ge T_2+1. \tag{111}$$

From (110) and (111), assuming without loss of generality that $\ell'(t)\ge 1$ and setting $\ell(t):=2\ell'(t)$, we obtain, for $t\ge T_2+1$,
$$\begin{aligned}\|S_\epsilon^i(t)x-S_0^i(t)x\|_{H_0^1(\Omega)}&\le\ell'(t)\epsilon^{1/4}+\ell'(t)\|S_\epsilon^{i-1}(t)x-S_0^{i-1}(t)x\|_{H_0^1(\Omega)}\\&\le\left(\ell'(t)+\ell'(t)^2\right)\epsilon^{1/4}+\ell'(t)^2\|S_\epsilon^{i-2}(t)x-S_0^{i-2}(t)x\|_{H_0^1(\Omega)}\\&\le\cdots\le\left(\ell'(t)+\cdots+\ell'(t)^{i-1}\right)\epsilon^{1/4}+\ell'(t)^{i-1}\|S_\epsilon(t)x-S_0(t)x\|_{H_0^1(\Omega)}\\&\le i\,\ell'(t)^i\epsilon^{1/4}\le\left(2\ell'(t)\right)^i\epsilon^{1/4}=\ell(t)^i\epsilon^{1/4}.\end{aligned} \tag{112}$$
The proof is completed.
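Before proceeding, we summarize how Lemmas 6–8 instantiate the hypotheses of Theorem 1 for the time-$T_0$ map $S_\epsilon=S_\epsilon(T_0)$ introduced below (our bookkeeping; the seminorm missing in each case of Lemma 7 is taken to be identically zero, which is trivially compact):
$$L_\epsilon=e^{c_6^2T_0}\ \text{(Lemma 6)};\qquad K_\epsilon=K,\quad n_1^\epsilon=n_1^{\epsilon,T_0},\quad n_2^\epsilon=n_2\ \text{(Lemma 7)};\qquad\kappa=\ell(T_0),\quad\alpha=\tfrac14\ \text{(Lemma 8)}.$$
With $q=(1+\eta)/2$, Remark 1 then yields the exponent $c_{11}=-\frac{(1/4)\ln q}{\ln\ell(T_0)-\ln q}$ appearing in (116) and (124) below.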
For any fixed $\eta\in(0,1)$, we set $T_0(\eta)=\max\{T_1,T_2+1,T_3(\eta)\}$ and $S_\epsilon=S_\epsilon(T_0)$; then $S_\epsilon:B\to B$. From Lemmas 6–8, we see that the discrete system $\{S_\epsilon^n\}_n$ ($\epsilon\in[0,\epsilon_0]$) satisfies all the assumptions of Theorem 1. Therefore, we have the following.

Theorem 2. Let assumptions (F1)–(F4) hold. Then, for every $\epsilon\in[0,\epsilon_0]$, the discrete dynamical system $\{S_\epsilon^n\}_n$ defined above possesses an exponential attractor $\mathcal{M}_\epsilon^d$ on $B$ such that:

(1) The fractal dimension of $\mathcal{M}_\epsilon^d$ is bounded in $H_0^1(\Omega)$:
$$\dim_{f,H_0^1(\Omega)}\mathcal{M}_\epsilon^d\le c_\epsilon:=\left[\ln\frac{2}{1+\eta}\right]^{-1}\ln m_\epsilon\!\left(\frac{4K(1+L^2)^{1/2}}{1-\eta}\right), \tag{113}$$
where $L=e^{c_6^2T_0}$ (see Lemma 6) and $m_\epsilon(R)$ is the maximal number of pairs $(x_i,y_i)$ in $H_0^1(\Omega)\times H_0^1(\Omega)$ possessing the properties
$$\begin{aligned}&\epsilon\in(0,\epsilon_0]:\quad\|x_i\|_{H_0^1(\Omega)}^2+\|y_i\|_{H_0^1(\Omega)}^2\le R^2,\quad n_1^{\epsilon,T_0}(x_i-x_j)>1,\quad i\ne j,\\&\epsilon=0:\quad\|x_i\|_{H_0^1(\Omega)}^2+\|y_i\|_{H_0^1(\Omega)}^2\le R^2,\quad n_2(y_i-y_j)>1,\quad i\ne j.\end{aligned} \tag{114}$$

(2) $\mathcal{M}_\epsilon^d$ attracts $B$ in $H_0^1(\Omega)$, uniformly with respect to $\epsilon$:
$$\operatorname{dist}_{H_0^1(\Omega)}(S_\epsilon^iB,\mathcal{M}_\epsilon^d)\le c_8e^{-c_9i},\quad c_9>0,\ i\in\mathbb{N}, \tag{115}$$
where $c_8$ and $c_9$ are independent of $\epsilon$.

(3) The family $\{\mathcal{M}_\epsilon^d\}_{\epsilon\in[0,\epsilon_0]}$ is continuous at $0$ in $H_0^1(\Omega)$:
$$\operatorname{dist}_{\mathrm{sym},H_0^1(\Omega)}(\mathcal{M}_\epsilon^d,\mathcal{M}_0^d)\le c_{10}\epsilon^{c_{11}}, \tag{116}$$
where $c_{11}=-\frac{(1/4)\ln((1+\eta)/2)}{\ln\ell(T_0)-\ln((1+\eta)/2)}$ and $c_{10}$ are independent of $\epsilon$.

To obtain the corresponding result for the continuous system $S_\epsilon(t)$ defined in (30), we need to show Hölder continuity with respect to the time $t$ and the initial conditions. In general, it is difficult to verify Hölder continuity in $t$ that is uniform with respect to $\epsilon$ when $t$ is small. However, when $t$ is large enough, we have the following.

Lemma 9. Let assumptions (F1)–(F4) hold. Then, for any $T\ge T_0$, the semigroup $S_\epsilon(t)$ defined in (30) is uniformly Hölder continuous on $[T_0,T]\times B$, i.e.,
$$\|S_\epsilon(t_1)x_1-S_\epsilon(t_2)x_2\|_{H_0^1(\Omega)}\le c\left(\|x_1-x_2\|_{H_0^1(\Omega)}+|t_1-t_2|^{1/2}\right),\quad\epsilon\in[0,1], \tag{117}$$
for $x_1,x_2\in B$ and $T_0\le t_1,t_2\le T$, where $c$ is independent of $\epsilon$.

Proof. The Lipschitz continuity with respect to the initial conditions is an immediate corollary of Lemma 6. It remains to prove the continuity with respect to the time $t$. From Lemmas 1 and 2, we know that $u^\epsilon\in C([0,T],H_0^1(\Omega))$ whenever the initial value belongs to $B$. Therefore, for any $T_0\le t_1\le t_2\le T$,
$$u^\epsilon(t_2)-u^\epsilon(t_1)=\int_{t_1}^{t_2}u_t^\epsilon(s)\,ds, \tag{118}$$
which implies that
$$\|u^\epsilon(t_1)-u^\epsilon(t_2)\|_{H_0^1(\Omega)}=\left\|\int_{t_1}^{t_2}u_t^\epsilon(s)\,ds\right\|_{H_0^1(\Omega)}\le\int_{t_1}^{t_2}\|u_t^\epsilon(s)\|_{H_0^1(\Omega)}ds\le|t_1-t_2|^{1/2}\left(\int_{t_1}^{t_2}\|u_t^\epsilon(s)\|_{H_0^1(\Omega)}^2ds\right)^{1/2}. \tag{119}$$

To estimate (119), we integrate (60) from $t_1$ to $t_2$ to get
$$\int_{t_1}^{t_2}\|\nabla u_t^\epsilon(s)\|^2ds\le c\int_{t_1}^{t_2}\|u_t^\epsilon(s)\|^2ds+c\left(\|u_t^\epsilon(t_1)\|^2+\epsilon\|\nabla u_t^\epsilon(t_1)\|^2\right). \tag{120}$$

Since $t_1\ge T_0$, we apply (32) to (120) to get
$$\int_{t_1}^{t_2}\|\nabla u_t^\epsilon(s)\|^2ds\le c, \tag{121}$$
where $c$ is independent of $\epsilon$. Putting (121) into (119), we obtain the result. The proof is completed.

Our main result in this section reads as follows.

Theorem 3. Let assumptions (F1)–(F4) hold. Then, for every $\epsilon\in[0,\epsilon_0]$, the semigroup $S_\epsilon(t)$ generated by equations (1) and (2) possesses an exponential attractor $\mathcal{M}_\epsilon^c$ in $H_0^1(\Omega)$. Moreover, these exponential attractors can be constructed such that:

(1) The fractal dimension of $\mathcal{M}_\epsilon^c$ is bounded in $H_0^1(\Omega)$:
$$\dim_{f,H_0^1(\Omega)}\mathcal{M}_\epsilon^c\le c_\epsilon+2. \tag{122}$$

(2) $\mathcal{M}_\epsilon^c$ attracts $B$ in $H_0^1(\Omega)$, uniformly with respect to $\epsilon$:
$$\operatorname{dist}_{H_0^1(\Omega)}(S_\epsilon(t)B,\mathcal{M}_\epsilon^c)\le c_{12}e^{-c_{13}t},\quad c_{13}>0, \tag{123}$$
where $c_{12}$ and $c_{13}$ are independent of $\epsilon$.

(3) The family $\{\mathcal{M}_\epsilon^c\}_{\epsilon\in[0,\epsilon_0]}$ is continuous at $0$ in $H_0^1(\Omega)$:
$$\operatorname{dist}_{\mathrm{sym},H_0^1(\Omega)}(\mathcal{M}_\epsilon^c,\mathcal{M}_0^c)\le c_{14}\epsilon^{c_{11}}, \tag{124}$$
where $c_{11}=-\frac{(1/4)\ln((1+\eta)/2)}{\ln\ell(T_0)-\ln((1+\eta)/2)}$ and $c_{14}$ are independent of $\epsilon$.

Proof. Setting $\mathcal{M}_\epsilon^c:=\bigcup_{t\in[T_0,2T_0]}S_\epsilon(t)\mathcal{M}_\epsilon^d$, we have
$$\mathcal{M}_\epsilon^c=\bigcup_{t\in[T_0,2T_0]}S_\epsilon(t)\mathcal{M}_\epsilon^d=\bigcup_{t\in[T_0,2T_0]}S_\epsilon(t-T_0)S_\epsilon(T_0)\mathcal{M}_\epsilon^d=\bigcup_{t\in[0,T_0]}S_\epsilon(t)\left(S_\epsilon(T_0)\mathcal{M}_\epsilon^d\right). \tag{125}$$

It follows from its definition that the set $S_\epsilon(T_0)\mathcal{M}_\epsilon^d$ is also an exponential attractor for the discrete dynamical system $\{S_\epsilon^n\}_n$. Thus, $\mathcal{M}_\epsilon^c$ is the desired set (for details see, e.g., [12]). The proof is completed.
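The increment "+2" in (122) can be traced back to Lemma 9; we record the standard argument for the reader. By (117), the map $(t,x)\mapsto S_\epsilon(t)x$ is Lipschitz from $([T_0,2T_0],d_{1/2})\times(B,\|\cdot\|_{H_0^1(\Omega)})$ into $H_0^1(\Omega)$, where $d_{1/2}(t,s)=|t-s|^{1/2}$. The interval $[T_0,2T_0]$ has fractal dimension $2$ with respect to $d_{1/2}$ (covering it by $d_{1/2}$-balls of radius $\delta$ requires on the order of $\delta^{-2}$ balls), and Lipschitz maps do not increase the fractal dimension, so
$$\dim_{f,H_0^1(\Omega)}\mathcal{M}_\epsilon^c\le\dim_f\left([T_0,2T_0]\times\mathcal{M}_\epsilon^d\right)\le 2+\dim_{f,H_0^1(\Omega)}\mathcal{M}_\epsilon^d\le c_\epsilon+2.$$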
### 3.3. The Main Result for a Special Case: $0<\nu<\lambda_1$

If, in addition, $\nu<\lambda_1$, we can obtain a bound on the fractal dimension which is uniform with respect to $\epsilon$. To this end, we need to show that the constants $L_\epsilon$ and $K_\epsilon$ and the compact seminorms $n_1^\epsilon$ and $n_2^\epsilon$ in Theorem 1 can all be chosen independent of $\epsilon$ for the discrete system generated by problems (1) and (2). We have already proved that the constant $L_\epsilon$ is independent of $\epsilon$ (Lemma 6); thus, we only need the following lemma.

Lemma 10. Let (F1)–(F4) hold and assume in addition that $\nu<\lambda_1$. Then, for any $x_1,x_2\in B$ and any $\eta\in(0,1)$, there exist constants $\epsilon_0\in(0,1)$ and $K>0$, a time $T_0'(\eta)\ge T_0(\eta)$ ($T_0$ is defined above), and a compact seminorm $n(x)$ on $B$ such that
$$\|S_\epsilon(T_0')x_1-S_\epsilon(T_0')x_2\|_{H_0^1(\Omega)}\le\eta\|x_1-x_2\|_{H_0^1(\Omega)}+K\left(n(x_1-x_2)+n\left(S_\epsilon(T_0')x_1-S_\epsilon(T_0')x_2\right)\right),\quad\epsilon\in[0,\epsilon_0], \tag{126}$$
where $T_0'(\eta)$, $K$, and the compact seminorm $n(x)$ are all independent of $\epsilon$.

Proof. We assume that $u_1^\epsilon(t)$ and $u_2^\epsilon(t)$ are two solutions of (1) and (2) starting from $x_1,x_2\in B$, respectively. The difference $w^\epsilon(t)=u_1^\epsilon(t)-u_2^\epsilon(t)$ satisfies
$$w_t^\epsilon(t)-\epsilon\Delta w_t^\epsilon(t)-\Delta w^\epsilon(t)+l(t,\epsilon)w^\epsilon(t)=0. \tag{127}$$

We first take the inner product of (127) with $kw_2^\epsilon$ in $L^2(\Omega)$ ($k$ is a constant to be fixed later) to get
$$k(w_{2,t}^\epsilon,w_2^\epsilon)+k\epsilon(\nabla w_{2,t}^\epsilon,\nabla w_2^\epsilon)+k\|\nabla w_2^\epsilon\|^2+k\int_\Omega l(t,\epsilon)w^\epsilon w_2^\epsilon=0. \tag{128}$$

Using Lemma 5, we get
$$k\|\nabla w_2^\epsilon\|^2\le k\|w_{2,t}^\epsilon\|\|w_2^\epsilon\|+k\epsilon\|\nabla w_{2,t}^\epsilon\|\|\nabla w_2^\epsilon\|+kc\|\nabla w^\epsilon\|\|w_2^\epsilon\|. \tag{129}$$

Next, we take the inner product of (127) with $w_{2,t}^\epsilon$ in $L^2(\Omega)$ to get
$$\|w_{2,t}^\epsilon\|^2+\epsilon\|\nabla w_{2,t}^\epsilon\|^2+\frac12\frac{d}{dt}\|\nabla w_2^\epsilon\|^2+\int_\Omega l(t,\epsilon)w^\epsilon w_{2,t}^\epsilon=0. \tag{130}$$

We apply Lemma 5 again to get
$$\|w_{2,t}^\epsilon\|^2+\epsilon\|\nabla w_{2,t}^\epsilon\|^2+\frac12\frac{d}{dt}\|\nabla w_2^\epsilon\|^2\le c_6\|\nabla w^\epsilon\|\|w_{2,t}^\epsilon\|. \tag{131}$$

Combining (129) and (131), we get
$$\|w_{2,t}^\epsilon\|^2+\epsilon\|\nabla w_{2,t}^\epsilon\|^2+\frac12\frac{d}{dt}\|\nabla w_2^\epsilon\|^2+k\|\nabla w_2^\epsilon\|^2\le k\|w_{2,t}^\epsilon\|\|w_2^\epsilon\|+k\epsilon\|\nabla w_{2,t}^\epsilon\|\|\nabla w_2^\epsilon\|+kc\|\nabla w^\epsilon\|\|w_2^\epsilon\|+c_6\|\nabla w^\epsilon\|\|w_{2,t}^\epsilon\|. \tag{132}$$

For the right-hand side of (132), we use the Young inequality together with $\|\nabla w^\epsilon\|^2=\|\nabla w_1^\epsilon\|^2+\|\nabla w_2^\epsilon\|^2$ to get
$$\begin{aligned}&k\|w_{2,t}^\epsilon\|\|w_2^\epsilon\|+k\epsilon\|\nabla w_{2,t}^\epsilon\|\|\nabla w_2^\epsilon\|+kc\|\nabla w^\epsilon\|\|w_2^\epsilon\|+c_6\|\nabla w^\epsilon\|\|w_{2,t}^\epsilon\|\\&\quad\le ck^2\|w_2^\epsilon\|^2+\frac12\|w_{2,t}^\epsilon\|^2+\frac{\epsilon}{2}\|\nabla w_{2,t}^\epsilon\|^2+\frac{k^2\epsilon}{2}\|\nabla w_2^\epsilon\|^2+\frac{k}{2}\|\nabla w^\epsilon\|^2+ck\|w_2^\epsilon\|^2+\frac{c_6^2}{2}\|\nabla w^\epsilon\|^2+\frac12\|w_{2,t}^\epsilon\|^2\\&\quad\le ck^2\|w_2^\epsilon\|^2+\frac12\|w_{2,t}^\epsilon\|^2+\frac{\epsilon}{2}\|\nabla w_{2,t}^\epsilon\|^2+\frac{k^2\epsilon}{2}\|\nabla w_2^\epsilon\|^2+\frac{k}{2}\left(\|\nabla w_1^\epsilon\|^2+\|\nabla w_2^\epsilon\|^2\right)+ck\|w_2^\epsilon\|^2+\frac{c_6^2}{2}\left(\|\nabla w_1^\epsilon\|^2+\|\nabla w_2^\epsilon\|^2\right)+\frac12\|w_{2,t}^\epsilon\|^2.\end{aligned} \tag{133}$$

Putting the above inequality into (132), we obtain
$$\frac{d}{dt}\|\nabla w_2^\epsilon\|^2+k\|\nabla w_2^\epsilon\|^2\le c(k^2+k)\|w_2^\epsilon\|^2+(c+k)\|\nabla w_1^\epsilon\|^2+(c_6^2+k^2\epsilon)\|\nabla w_2^\epsilon\|^2. \tag{134}$$

Fix $k=2c_6^2$, and then choose $\epsilon_0$ such that $\epsilon<1/(4c_6^2)$ for all $\epsilon\le\epsilon_0$; in the following we assume $\epsilon\in[0,\epsilon_0]$. Then we get from (134) that
$$\frac{d}{dt}\|\nabla w_2^\epsilon\|^2+c_{15}\|\nabla w_2^\epsilon\|^2\le c\|w_2^\epsilon\|^2+c\|\nabla w_1^\epsilon\|^2,\quad c_{15}>0. \tag{135}$$

Using the Poincaré inequality (in the form $\|w_2^\epsilon\|^2\le\lambda_{m+1}^{-1}\|\nabla w_2^\epsilon\|^2$), we obtain
$$\frac{d}{dt}\|\nabla w_2^\epsilon\|^2+c_{15}\|\nabla w_2^\epsilon\|^2\le c\lambda_{m+1}^{-1}\|\nabla w_2^\epsilon\|^2+c\|\nabla w_1^\epsilon\|^2\le c\lambda_{m+1}^{-1}\|\nabla w^\epsilon\|^2+c\|\nabla w_1^\epsilon\|^2. \tag{136}$$

From Lemma 6 (more precisely, from (73) and the Gronwall inequality, which give $\|\nabla w^\epsilon(t)\|^2\le e^{c_6^2t}\|\nabla w^\epsilon(0)\|^2$), it follows that
$$\frac{d}{dt}\left(e^{c_{15}t}\|\nabla w_2^\epsilon\|^2\right)\le c\lambda_{m+1}^{-1}e^{c_{15}t}e^{c_6^2t}\|\nabla w^\epsilon(0)\|^2+ce^{c_{15}t}\|\nabla w_1^\epsilon\|^2. \tag{137}$$

We integrate the above inequality over $(0,t)$ to get
$$\|\nabla w_2^\epsilon(t)\|^2\le e^{-c_{15}t}\|\nabla w_2^\epsilon(0)\|^2+c\lambda_{m+1}^{-1}e^{c_6^2t}\|\nabla w^\epsilon(0)\|^2+c\int_0^t\|\nabla w_1^\epsilon(s)\|^2ds\le\left(e^{-c_{15}t}+c\lambda_{m+1}^{-1}e^{c_6^2t}\right)\|\nabla w^\epsilon(0)\|^2+c\int_0^t\|\nabla w_1^\epsilon(s)\|^2ds. \tag{138}$$

For any $\eta\in(0,1)$, we first choose $T_0'=T_0'(\eta)>T_0(\eta)$ such that $e^{-c_{15}T_0'}\le\eta^2/2$, and then fix a positive integer $M=M(\eta)$ such that $c\lambda_{M+1}^{-1}e^{c_6^2T_0'}\le\eta^2/2$. Therefore, from (138) (with $m=M$),
$$\|\nabla w_2^\epsilon(T_0')\|^2\le\eta^2\|\nabla w^\epsilon(0)\|^2+c\int_0^{T_0'}\|\nabla w_1^\epsilon(s)\|^2ds. \tag{139}$$

Therefore,
$$\|\nabla w^\epsilon(T_0')\|^2=\|\nabla w_1^\epsilon(T_0')\|^2+\|\nabla w_2^\epsilon(T_0')\|^2\le\eta^2\|\nabla w^\epsilon(0)\|^2+c\int_0^{T_0'}\|\nabla w_1^\epsilon(s)\|^2ds+\|\nabla w_1^\epsilon(T_0')\|^2, \tag{140}$$
that is,
$$\|\nabla w^\epsilon(T_0')\|\le\eta\|\nabla w^\epsilon(0)\|+K\left(\left(\int_0^{T_0'}\|\nabla w_1^\epsilon(s)\|^2ds\right)^{1/2}+\|\nabla w_1^\epsilon(T_0')\|\right). \tag{141}$$

We set $n(x)=\|P_Mx\|_{H_0^1}$. Then $\|\nabla w_1^\epsilon(T_0')\|=n\left(S_\epsilon(T_0')x_1-S_\epsilon(T_0')x_2\right)$, and $n$ is obviously compact on $B$.

To estimate the term $\left(\int_0^{T_0'}\|\nabla w_1^\epsilon(s)\|^2ds\right)^{1/2}$, we multiply (127) by $w_1^\epsilon(t)$ and integrate over $\Omega$:
$$\frac{d}{dt}\left(\|w_1^\epsilon(t)\|^2+\epsilon\|\nabla w_1^\epsilon(t)\|^2\right)+2\|\nabla w_1^\epsilon(t)\|^2+2\left(l(t,\epsilon)w^\epsilon(t),w_1^\epsilon(t)\right)=0. \tag{142}$$

By using (F1) and the Poincaré inequality, we get from the above
$$\frac{d}{dt}\left(\|w_1^\epsilon(t)\|^2+\epsilon\|\nabla w_1^\epsilon(t)\|^2\right)+2\|\nabla w_1^\epsilon(t)\|^2\le 2\nu\|w_1^\epsilon(t)\|^2\le 2\nu\lambda_1^{-1}\|\nabla w_1^\epsilon(t)\|^2. \tag{143}$$

Therefore,
$$\frac{d}{dt}\left(\|w_1^\epsilon(t)\|^2+\epsilon\|\nabla w_1^\epsilon(t)\|^2\right)+\beta\|\nabla w_1^\epsilon(t)\|^2\le 0, \tag{144}$$
where $\beta=2(1-\nu\lambda_1^{-1})>0$. Integrating (144) from $0$ to $T_0'$, we get
$$\int_0^{T_0'}\|\nabla w_1^\epsilon(s)\|^2ds\le c\left(\|w_1^\epsilon(0)\|^2+\epsilon\|\nabla w_1^\epsilon(0)\|^2\right)\le c\|\nabla w_1^\epsilon(0)\|^2. \tag{145}$$

Thus, for any $\epsilon\in[0,\epsilon_0]$,
$$\left(\int_0^{T_0'}\|\nabla w_1^\epsilon(s)\|^2ds\right)^{1/2}\le cn(x_1-x_2). \tag{146}$$

Putting (146) into (141), one obtains the result. The proof is completed.

Theorem 5. Let assumptions (F1)–(F4) hold and $\nu<\lambda_1$. Then, for every $\epsilon\in[0,\epsilon_0]$, the semigroup $S_\epsilon(t)$ generated by equations (1) and (2) possesses an exponential attractor $\mathcal{M}_\epsilon^c$ in $H_0^1(\Omega)$. Moreover, these exponential attractors can be constructed such that assertions (2) and (3) of Theorem 3 hold and the fractal dimension of $\mathcal{M}_\epsilon^c$ is uniformly (with respect to $\epsilon$) bounded in $H_0^1(\Omega)$, i.e.,
$$\dim_{f,H_0^1(\Omega)}\mathcal{M}_\epsilon^c\le c, \tag{147}$$
where $c$ is a constant independent of $\epsilon$.
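Theorem 5 follows by assembling the previous results; we record the short argument (a sketch). By Lemmas 6 and 10, the discrete system generated by $S_\epsilon(T_0')$ satisfies hypotheses (i) and (ii) of Theorem 1 with the $\epsilon$-independent data
$$L=e^{c_6^2T_0'},\qquad K_\epsilon=K,\qquad n_1^\epsilon(x)=n_2^\epsilon(x)=n(x)=\|P_Mx\|_{H_0^1},$$
while Lemma 8 supplies hypothesis (iii) exactly as before. By Remark 1, the fractal-dimension bound in (9) is then uniform in $\epsilon$, which gives (147), and assertions (2) and (3) follow as in Theorems 2 and 3.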
We first take the inner product of (1) with uϵt in L2Ω to get(34)12ddt∇uϵ2+ϵ∇uϵ2+∇uϵ2+fuϵ,uϵ=g,uϵ. By condition (F4), for anyθ>0, there exists cθ>0 satisfying(35)fu,u−κ2∫ΩFu+θu2+cθ≥0,∀u∈H01Ω. Putting (35) into (34), it yields that(36)ddtuϵ2+ϵ∇uϵ2+2∇uϵ2+2κ2∫ΩFuϵ≤2g,uϵ+2θuϵ2+2cθ. By condition (F3), for anyγ>0, there exists cγ>0 satisfying(37)∫ΩFu+γu2+cγ≥0,∀u∈H01Ω. Combining (36) and (37), we get(38)ddtuϵ2+ϵ∇uϵ2+2∇uϵ2≤2g,uϵ+2θuϵ2+2cθ+2κ2γuϵ2+2κ2cγ,that is,(39)ddtuϵ2+ϵ∇uϵ2+uϵ2≤2g,uϵ+2θuϵ2+2cθ+2κ2γuϵ2+2κ2cγ. We chooseα1>0 small enough such that α1λ1<1 and 1−α1>1/2, and apply Poincare inequality in (39) to get(40)ddtuϵ2+ϵ∇uϵ2+α1λ1uϵ2+1−α1uϵ2≤2g,uϵ+2θuϵ2+2cθ+2κ2γuϵ2+2κ2cγ. Choosingθ,γ small enough and using Ho¨lder inequality, we can get from (40) that(41)ddtuϵ2+ϵ∇uϵ2+α1λ12uϵ2+12∇uϵ2≤c. We setσ=α1λ1/2; then, 0<σ<1/2. From (41), we obtain(42)ddtuϵ2+ϵ∇uϵ2+σuϵ2+ϵ∇uϵ2≤c,that is,(43)ddteσt∇uϵt2+ϵ∇uϵt2≤ceσt. Integrating the above inequality from 0 tot, it yields that(44)eσt∇uϵt2+ϵ∇uϵt2≤u02+ϵ∇u02+ceσt,t≥0. Now, we consider (36) again. If0<κ2≤1, we can get from (36) that(45)ddtuϵ2+ϵ∇uϵ2+∇uϵ2+κ2∇uϵ2+2∫ΩFuϵ≤2g,uϵ+2θuϵ2+2cθ. Using a similar process as (39)–(42), we deduce that, for θ small enough,(46)ddtuϵ2+ϵ∇uϵ2+σuϵ2+ϵ∇uϵ2+κ2∇uϵ2+2∫ΩFuϵ≤c. Therefore,(47)ddteσtuϵ2+ϵ∇uϵ2+κ2eσt∇uϵ2+2∫ΩFuϵ≤ceσt. Ifκ2>1, we can get from (36) that(48)ddtuϵ2+ϵ∇uϵ2+2∇uϵ2+2∫ΩFuϵ+2κ2−2∫ΩFuϵ≤2g,uϵ+2θuϵ2+2cθ. Applying (37) to the above, we obtain(49)ddtuϵ2+ϵ∇uϵ2+∇uϵ2+∇uϵ2+2∫ΩFuϵ≤2g,uϵ+2θuϵ2+2cθ+2κ2−2γuϵ2+2κ2−2cγ. Following a similar process as (39)–(42), we deduce that, for θ, γ small enough,(50)ddtuϵ2+ϵ∇uϵ2+σuϵ2+ϵ∇uϵ2+∇uϵ2+2∫ΩFuϵ≤c. Therefore,(51)ddteσtuϵ2+ϵ∇uϵ2+eσt∇uϵ2+2∫ΩFuϵ≤ceσt. From (47) and (51), we see that, for any κ2>0,(52)ddteσtuϵ2+ϵ∇uϵ2+c5eσt∇uϵ2+2∫ΩFuϵ≤ceσt.where c5=κ2 when 0<κ2≤1 and c5=1 when κ2>1. For anyt>0, we integrate (52) over [t, t + 1] to get(53)∫tt+1eσs∇uϵs2+2∫ΩFuϵsds≤ceσtuϵt2+ϵ∇uϵt2+ceσt. Putting (44) into (53), we have, for any t>0,(54)∫tt+1eσs∇uϵs2+2∫ΩFuϵsds≤cu02+ϵ∇u02+ceσt. Next, we multiply (1) with ut and integrate it in Ω to get(55)utϵ2+ϵ∇utϵ2+12ddt∇uϵ2+2∫ΩFuϵ=g,utϵ. Applying Ho¨lder inequality to the above we get(56)utϵ2+ϵ∇utϵ2+ddt∇uϵ2+2∫ΩFuϵ≤c,t>0. Thus,(57)eσtutϵ2+ϵ∇utϵ2+ddteσt∇uϵ2+2∫ΩFuϵ≤ceσt+ceσt∇uϵ2+2∫ΩFuϵ,t>0. Dropping the first term in the left-hand side of (57), from (54) and using uniform Gronwall inequality, we conclude that(58)eσt∇uϵt2+2∫ΩFuϵt≤cu02+ϵ∇u02+ceσt,t≥1. We now differentiate equation (1) with respect to t to get(59)uttϵ−ϵΔuttϵ−Δutϵ+f′uϵutϵ=0. Taking the inner product of (59) with utϵ in L2Ω, we obtain(60)ddtutϵ2+ϵ∇utϵ2+2∇utϵ2≤cutϵ2. Since0<σ<1/2, we have(61)ddtutϵ2+ϵ∇utϵ2+σutϵ2+ϵ∇utϵ2≤cutϵ2. Hence,(62)ddteσtutϵ2+ϵ∇utϵ2≤ceσtutϵ2+ϵ∇utϵ2. Integrating (57) over (t, t + 1) and using (58), we deduce that(63)∫tt+1eσsutϵs2+ϵ∇utϵs2ds≤cu02+ϵ∇u02+ceσt,t≥1. Combining (62) and (63) and using the uniform Gronwall inequality, we obtain that(64)utϵt2+ϵ∇utϵt2≤ce−σtu02+ϵ∇u02+c,t≥2. Thus, for any boundedD⊂H01Ω, there exists T2D≥2 which is independent of ϵ such that, for any t≥T2D and any u0∈D,(65)utϵt2+ϵ∇utϵt2≤c,where c is independent of ϵ. This proves assertion (32) in Lemma 4. Finally, integrating (60) over t,t+1, then assertion (33) can be easily verified by using (32). The proof is completed.Lemma 5. Under conditions (F1)–(F4), for any two solutions of (1) and (2) uϵ1t, uϵ2t (corresponding to ϵ1, ϵ2∈0,1, respectively), with initials starting from B and for any υ∈L2Ω, we have(66)∫Ωfuϵ1t−fuϵ2tυdx≤c6∇wϵ1,ϵ2tυ,t≥0,where wϵ1,ϵ2t=uϵ1t−uϵ2t and c6 is a constant independent of ϵ.Proof. 
By (F2), we have(67)∫Ωfuϵ1t−fuϵ2tυdx≤∫Ωlt,ϵ1,ϵ2wϵ1,ϵ2tυdx≤c∫Ω1+uϵ1t2/N−2+uϵ2t2/N−2wϵ1,ϵ2tυdx≤c∫Ωwϵ1,ϵ2tυdx+c∫Ωuϵ1t2/N−2wϵ1,ϵ2tυdx+c∫Ωuϵ2t2/N−2wϵ1,ϵ2tυdx,where lt,ϵ1,ϵ2=∫01f′suϵ1t+1−suϵ2tds. From (31) and the Sobolev embedding H01Ω⊂L2N/N−2Ω, we have that, for t≥0,(68)∫Ωfuϵ1t−fuϵ2tυdx≤cwϵ1,ϵ2υ+cuϵ12N/N−22/N−2wϵ1,ϵ22NN−2υ+cuϵ22N/N−22/N−2wϵ1,ϵ22N/N−2υ≤c∇wϵ1,ϵ2υ+c∇uϵ12/N−2∇wϵ1,ϵ2υ+c∇uϵ22/N−2∇wϵ1,ϵ2υ≤c6∇wϵ1,ϵ2υ. The proof is completed. ## 3.2. The Main Result for a General Case:ν∈ℝ+ Now, we verify the conditions in Theorem1 for Sϵt in this case. We first verify condition (i), i.e., the Lipschitz continuity in H01 (actually uniform Lipschtz continuity in H01 since the coefficient is independent of ϵ).Lemma 6. Under assumptions (F1)–(F4), we have, for anyϵ∈0,1 and any x1,x2∈B,(69)Sϵtx1−Sϵtx2H01Ω≤ec62tx1−x2H01Ω,t≥0,where c6 is the constant in Lemma 5.Proof. Assume thatu1ϵt and u2ϵt are two solutions of (1) and (2) starting from x1, x2∈B, respectively. We consider the difference wϵt=u1ϵt−u2ϵt, and then wϵt satisfies(70)wtϵt−ϵΔwtϵt−Δwϵt+lt,ϵwϵt=0. Multiplying (70) with wtϵt and integrating it in Ω, we get(71)wtϵt2+ϵ∇wtϵt2+12ddt∇wϵt2≤∫Ωlt,ϵwϵtwtϵtdx. Applying Lemma5, we obtain(72)wtϵt2+ϵ∇wtϵt2+12ddt∇wϵt2≤c6∇wϵtwtϵt,t≥0. We use Ho¨lder inequality to get(73)ddt∇wϵt2≤c62∇wϵt2. Then, the result follows immediately from Gronwall inequality. The proof is completed.Lemma 7. Under assumptions (F1)–(F4), we have, for anyx1,x2∈B and any η∈0,1,(i) Forϵ∈0,ϵ0: there exist T3=T3η, K>0, and compact seminorm n1ϵ,t on B such that(74)Sϵtx1−Sϵtx2H01Ω≤ηx1−x2H01Ω+Kn1ϵ,tx1−x2,t≥T3.(ii) Forϵ=0: there exist a positive integer M, T3=T3η, and K>0 such that(75)S0tx1−S0tx2H01Ω≤ηx1−x2H01Ω+Kn2S0tx1−S0tx2,t≥T3,for some ϵ0>0, where n2x=PMxH01Ω, M and K are independent of ϵ, η, and t and n1ϵ,t depend on ϵ and t.Proof. We first take the inner product of (70) with kwϵ in L2Ω (k is a constant which will be fixed later) to get(76)kwtϵ,wϵ+kϵ∇wtϵ,∇wϵ+k∇wϵ2+k∫Ωlt,ϵwϵ2=0. Using (F1), we get(77)k∇wϵ2≤kwtϵwϵ+kϵ∇wtϵ∇wϵ+kνwϵ2. Combining (72) and (77), we get(78)wtϵ2+ϵ∇wtϵ2+12ddt∇wϵ2+k∇wϵ2≤kwtϵwϵ+kϵ∇wtϵ∇wϵ+kνwϵ2+c6∇wϵwtϵ. For the right-hand side of (78), we use Ho¨lder inequality to get(79)kwtϵwϵ+kϵ∇wtϵ∇wϵ+kνwϵ2+c6∇wϵwtϵ≤kν+k22wϵ2+wtϵ2+ϵ2∇wtϵ2+k2ϵ2+c622∇wϵ2. Putting the above inequality into (78), we obtain(80)ddt∇wϵ2+2k∇wϵ2≤2kν+k2wϵ2+k2ϵ+c62∇wϵ2. Fixk=c62, and then choose ϵ0 such that ϵ<1/c62, ∀ϵ≤ϵ0. Then, we get from (80) that(81)ddt∇wϵ2+c7∇wϵ2≤cwϵ2,ϵ∈0,ϵ0,c7>0,that is,(82)ddtec7t∇wϵ2≤cec7twϵ2,ϵ∈0,ϵ0. We integrate the above inequality over0,t to get(83)∇wϵt2≤e−c7t∇wϵ02+c∫0twϵs2ds,ϵ∈0,ϵ0. For anyη∈0,1, we can choose T3=T3η>0 such that e−c7T3≤η2. Therefore, from (83),(84)∇wϵt2≤η2∇wϵ02+c∫0twϵs2ds,t≥T3,ϵ∈0,ϵ0. Thus,(85)∇wϵt≤η∇wϵ0+K∫0twϵs2ds1/2,t≥T3,ϵ∈0,ϵ0. Now, we setn1ϵ,tx1−x2=∫0twϵs2ds1/2. We need to show that n1ϵ,t is compact on B for any t>0 and any ϵ∈0,1. From condition (F2), we deduce that, for any s∈ℝ,(86)fss≤cs+s2+s2N−2/N−2. Hence,(87)fu,u≤cu2+u2N−2/N−22N−2/N−2≤cu2+u2N−2/N−22N−2/N−2≤cu2+∇u2N−2/N−2≤c∇u2+∇u2N−2/N−2,∀u∈H01Ω,where we have used the Sobolev embedding H01Ω⊂L2N/N−2Ω in the above inequality. From (35) and (87), we obtain(88)κ2∫ΩFu≤θu2+cθ+c∇u2+∇u2N−2/N−2,∀u∈H01Ω,for any θ>0. For any t≥0, we can see from (32) that(89)∫0t∇uϵs2ds≤ct,ϵ∈0,1,for any u0∈B and c is independent of ϵ. Next, we integrate (56) from 0 to t, and we get(90)ϵ∫0t∇utϵs2ds≤ct+c∇u02+2∫ΩFu0. We first fixθ small enough, and then put (88) into (90) to get, for all u0∈B,(91)ϵ∫0t∇utϵs2ds≤ct+c,ϵ∈0,1. 
From (89) and (91), we can easily see that, for any t>0 and any fixed ϵ∈0,1, ℬε,t=uεs:s∈0,t,uε0=u0∈B is bounded in W1,20,t;H01Ω. Therefore, from the compact embedding W1,20,t;H01Ω⊂L20,t;L2Ω, we deduce that n1ϵ,t is compact in B. Thus, assertion (i) is proved. Assertion (ii) can be easily proved by multiplying (70) (with ϵ=0) with −Δw20, and this procedure is elementary; here, we omit it (see, e.g., [20] for details). The proof is completed.Lemma 8. Under assumptions (F1)–(F4), we have, for anyϵ∈0,1, any i∈ℕ, and any x∈B, there exists a positive function ℓt independent of ϵ such that, for all t≥T2+1,(92)Sϵitx−S0itxH01Ω≤ℓitϵ1/4.where T2=T2B (see Lemma 4).Proof. We assume thatuϵt ϵ∈0,1 and u0t are the solutions for the following equations:(93)utϵ−ϵΔutϵ−Δuϵ+fuϵ=g,t>0,x∈Ω,uϵ∂Ω=0,uϵx,0=x1x∈B,x∈Ω,ut0−Δu0+fu0=g,t>0,x∈Ω,u0∂Ω=0,u0x,0=x2x∈B,x∈Ω,respectively. Set w=uϵ−u0, and then w satisfies(94)wt−ϵΔutϵ−Δw+ltw=0,where lt=∫01f′suϵt+1−su0tds. Multiplying (94) with wt and using Lemma 5, it yields(95)wt2+ϵ∇utϵ,∇wt+12ddt∇w2≤c6∇wwt,t≥0. We chooseσ1=maxc62,2ν; then, the above inequality implies(96)2ϵ∇utϵ2+ddt∇w2≤σ1∇w2+2ϵ∇utϵ∇ut0. Therefore,(97)ddte−σ1t∇w2≤2ϵe−σ1t∇utϵ∇ut0. By Lemma4, we see that, for t≥T2,(98)ddte−σ1t∇w2≤cϵ1/2∇ut0≤cϵ1/2+cϵ1/2∇ut02. Multiplying (94) with w and integrating it in Ω, we obtain(99)12ddtw2+ϵ∇utϵ,∇w+∇w2≤νw2,that is,(100)ddtw2+2∇w2≤σ1w2+2ϵ∇utϵ∇w. The above inequality implies(101)ddte−σ1tw2+2e−σ1t∇w2≤2ϵ∇utϵ∇w. Dropping the second term in the left-hand side of (101) and integrating it over 0,t, we get(102)e−σ1twt2≤w02+cϵ∫0t∇utϵs2ds1/2∫0t∇ws2ds1/2. From (91), we see that, for any ϵ∈0,1,(103)∫0t∇utϵs2ds≤1ϵct+1. Combining (31), (102), and (103), we deduce that(104)e−σ1twt2≤w02+ct+1ϵ1/2,t≥0. Now, we integrate (101) over (t, t + 1) to get(105)∫tt+1e−σ1s∇ws2ds≤ce−σ1twt2+cϵ∫tt+1∇utϵs2ds1/2∫tt+1∇ws2ds1/2. By using (31) and (33), we get that, for t≥T2,(106)∫tt+1∇utϵs2ds≤c,∫tt+1∇ws2ds≤c. Therefore, from (104)–(106), we deduce that(107)∫tt+1e−σ1s∇ws2ds≤cw02+ct+1ϵ1/2+cϵ≤cw02+ct+1ϵ1/2,t≥T2. On the contrary, using (33) again, we obtain(108)∫tt+1cϵ1/2+cϵ1/2∇ut0s2ds≤cϵ1/2,t≥T2. Combining (98), (107), and (108) and using a similar process as the proof of uniform Gronwall inequality, it yields that, for all t≥T2,(109)e−σ1t+1∇wt+12≤cw02+ct+1ϵ1/2+cϵ1/2≤cw02+ct+1ϵ1/2. This implies that there exists a positive functionℓ′t independent of ϵ such that, for t≥T2+1,(110)Sϵtx1−S0tx2H01Ω≤ℓ′tϵ1/4+ℓ′tx1−x2H01Ω. Whenx1=x2=x, we get(111)Sϵtx−S0txH01Ω≤ℓ′tϵ1/4,t≥T2+1. From (110) and (111), we obtain for t≥T2+1,(112)Sϵitx−S0itxH01Ω≤ℓ′tϵ1/4+ℓ′tSϵi−1tx−S0i−1txH01Ω≤ℓ′tϵ1/4+ℓ′tℓ′tϵ1/4+ℓ′tSϵi−2tx−S0i−2txH01Ω=ℓ′tϵ1/4+ℓ′2tϵ1/4+ℓ′2tSϵi−2tx−S0i−2txH01Ω≤ℓ′t+ℓ′2t+⋯+ℓ′itϵ1/4+ℓ′i−1tSϵtx−S0txH01Ω≤2ℓ′tiϵ1/4+ℓ′itϵ1/4≤ℓitϵ1/4. The proof is completed. For any fixedη∈0,1, we set T0η=maxT1,T2+1,T3η and Sϵ=SϵT0, and then Sϵ: B⟶B. From Lemmas 6–8, we see that the discrete system Sϵnn (ϵ∈0,ϵ0) satisfies all the assumptions in Theorem 1. Therefore, we have the following.Theorem 2. 
Let assumptions (F1)–(F4) hold, and then∀ϵ∈0,ϵ0, and the discrete dynamical system Sϵnn defined above possesses an exponential attractor ℳϵd on B such that(1) The fractal dimension ofℳϵd is bounded in H01Ω:(113)dimf,H01Ωℳϵd≤cϵ:=ln21+η−1lnmϵ4K1+L21/21−η,whereL=ec62T0 (see Lemma 6) and mϵR is the maximal number of pairs xi,yi in H01Ω×H01Ω possessing the properties(114)ϵ∈0,ϵ0:xiH01Ω2+yiH01Ω2≤R2,n1ϵxi−xj>1,i≠j,ϵ=0:xiH01Ω2+yiH01Ω2≤R2,n2yi−yj>1,i≠j.(2) ℳϵd attracts B in H01Ω, uniformly with respect to ϵ,(115)distH01ΩSϵiB,ℳϵd≤c8e−c9i,c9>0,i∈ℕ,wherec8 and c9 are independent of ϵ.(3) The familyℳϵd,ϵ∈0,ϵ0 is continuous at 0 in H01Ω:(116)distsym,H01Ωℳϵd,ℳ0d≤c10ϵc11,where c11=−1/4ln1+η/2/lnℓT0−ln1+η/2 and c10 are independent of ϵ. To obtain the corresponding result for continuous systemSϵt defined in (30), we need to show the Ho¨lder continuity with respect to time t and the initial conditions. In general, it is difficult to verify the uniform (with respect to ϵ) Ho¨lder continuity with respect to time t when t is small. However, when t is large enough, we have the following.Lemma 9. Let assumptions (F1)–(F4) hold. Then, for anyT≥T0, the semigroup Sϵt defined in (30) is uniformly Ho¨lder continuous on T0,T×B, i.e.,(117)Sϵt1x1−Sϵt2x2H01Ω≤cx1−x2H01Ω+t1−t21/2,ϵ∈0,1,for x1, x2∈B and T0≤t1, t2≤T and c is independent of ϵ.Proof. The Lipschitz continuity with respect to the initial conditions is an immediate corollary of Lemma6. It remains to prove the continuity with respect to time t. From Lemmas 1 and 2, we know that the solution uϵt∈C0,T,H01Ω for the initial value belongs to B. Therefore, for any T0≤t1≤t2≤T,(118)uϵt2−uϵt1=∫t1t2utϵsds,which implies that(119)uϵt1−uϵt2H01Ω=∫t1t2utϵsdsH01Ω≤∫t1t2utϵsH01Ωds≤t1−t21/2∫t1t2utϵsH01Ω2ds1/2. To estimate (119), we integrate (60) from t1 to t2 to get(120)∫t1t2∇utϵs2ds≤c∫t1t2utϵs2ds+cutϵt12+ϵ∇utϵt12. Sincet1≥T0, we apply (32) to (120) to get(121)∫t1t2∇utϵs2ds≤c,where c is independent of ϵ. Putting (121) into (119), we obtain the result. The proof is completed. Our main result in this section reads as follows:Theorem 3. Let assumptions (F1)–(F4) hold. Then, for everyϵ∈0,ϵ0, the semigroup Sϵt generated by equations (1) and (2) possesses an exponential attractor ℳϵc in H01Ω. Moreover, these exponential attractors can be constructed such that(1) The fractal dimension ofℳϵc is bounded in H01Ω:(122)dimf,H01Ωℳϵc≤cϵ+2.(2) ℳϵc attracts B in H01Ω, uniformly with respect to ϵ:(123)distH01ΩSϵtB,ℳϵc≤c12e−c13t,c13>0,where c12 and c13 are independent of ϵ.(3) The familyℳϵc,ϵ∈0,ϵ0 is continuous at 0 in H01Ω:(124)distsym,H01Ωℳϵc,ℳ0c≤c14ϵc11,where c11=−1/4ln1+η/2/lnℓT0−ln1+η/2 and c14 are independent of ϵ.Proof. Settingℳϵc:=∪t∈T0,2T0Sϵtℳϵd, then we have(125)ℳϵc:=∪t∈T0,2T0Sϵtℳϵd=∪t∈T0,2T0Sϵt−T0SϵT0ℳϵd=∪t∈0,T0SϵtSϵT0ℳϵd. It follows from its definition that the setSϵT0ℳϵd is also an exponential attractor for the discrete dynamical system Sϵnn. Thus, ℳϵc is the set we needed (for details, see, e.g., [12]). The proof is completed. ## 3.3. The Main Result for a Special Case:0<ν<λ1 If, in addition,ν<λ1, we can get the uniform bound (with respect to ϵ) for the fractal dimension. To this end, we need show that the constants Lϵ and Kϵ and the compact seminorms n1ϵ and n2ϵ in Theorem 1 are all independent of ϵ for the discrete system generated by problems (1) and (2). From the above, we have proved the constant Lϵ is independent of ϵ; thus, we only need the following lemma:Lemma 10. 
Let (F1)–(F4) hold; in addition, we assume thatν<λ1, and then we have for any x1,x2∈B and any η∈0,1, there exist constants ϵ0∈0,1, K>0, and T0′η≥T0η (T0 is defined above) and a compact seminorm nx on B such that(126)SϵT0′x1−SϵT0′x2H01Ω≤ηx1−x2H01Ω+Knx1−x2+nSϵT0′x1−SϵT0′x2,ϵ∈0,ϵ0,where T0′η, K, and the compact seminorm nx are all independent of ϵ.Proof. We assume thatu1ϵt and u2ϵt are two solutions of (1) starting from x1,x2∈B, respectively. We consider the difference wϵt=u1ϵt−u2ϵt; then, wϵt satisfies(127)wtϵt−ϵΔwtϵt−Δwϵt+lt,ϵwϵt=0. We first take the inner product of (127) with kw2ϵ in L2Ω (k is a constant which will be fixed later) to get(128)kw2,tϵ,w2ϵ+kϵ∇w2,tϵ,∇w2ϵ+k∇w2ϵ2+k∫Ωlt,ϵwϵw2ϵ=0. Using Lemma5, we get(129)k∇w2ϵ2≤kw2,tϵw2ϵ+kϵ∇w2,tϵ∇w2ϵ+kc∇wϵw2ϵ. Next, we take the inner product of (127) with w2,tϵ in L2Ω to get(130)w2,tϵ2+ϵ∇w2,tϵ2+12ddt∇w2ϵ2+∫Ωlt,ϵwϵw2,tϵ=0. We apply Lemma5 again to get(131)w2,tϵ2+ϵ∇w2,tϵ2+12ddt∇w2ϵ2≤c6∇wϵw2,tϵ. Combining (129) and (131), we get(132)w2,tϵ2+ϵ∇w2,tϵ2+12ddt∇w2ϵ2+k∇w2ϵ2≤kw2,tϵw2ϵ+kϵ∇w2,tϵ∇w2ϵ+kc∇wϵw2ϵ+c6∇wϵw2,tϵ. For the right-hand side of (132), we use Ho¨lder inequality to get(133)kw2,tϵw2ϵ+kϵ∇w2,tϵ∇w2ϵ+kc∇wϵw2ϵ+c6∇wϵw2,tϵ≤ck2w2ϵ2+12w2,tϵ2+ϵ2∇w2,tϵ2+k2ϵ2∇w2ϵ2+k2∇wϵ2+ckw2ϵ2+c622∇wϵ2+12w2,tϵ2≤ck2w2ϵ2+12w2,tϵ2+ϵ2∇w2,tϵ2+k2ϵ2∇w2ϵ2+k2∇w2ϵ2+k2∇w1ϵ2+ckw2ϵ2+c622∇w2ϵ2+c622∇w1ϵ2+12w2,tϵ2. Putting the above inequality into (132), we obtain(134)ddt∇w2ϵ2+k∇w2ϵ2≤ck2+kw2ϵ2+c+k∇w1ϵ2+c62+k2ϵ∇w2ϵ2. Fixk=2c62, and then choose ϵ0 such that ϵ<1/4c62, ∀ϵ≤ϵ0. In the following, we assume ϵ∈0,ϵ0. Then, we get from (134) that(135)ddt∇w2ϵ2+c15∇w2ϵ2≤cw2ϵ2+c∇w1ϵ2,c15>0. Using Poincare inequality, we obtain(136)ddt∇w2ϵ2+c15∇w2ϵ2≤cλm+1−1∇w2ϵ2+c∇w1ϵ2≤cλm+1−1∇wϵ2+c∇w1ϵ2. From Lemma6, it yields that(137)ddtec15t∇w2ϵ2≤cλm+1−1ec15tec62t∇wϵ02+cec15t∇w1ϵ2. We integrate the above inequality over0,t to get(138)∇w2ϵt2≤e−c15t∇w2ϵ02+cλm+1−1ec62t∇wϵ02+c∫0t∇w1ϵs2ds≤e−c15t+cλm+1−1ec62t∇wϵ02+c∫0t∇w1ϵs2ds. For anyη∈0,1, we first choose T0′=T0′η>T0η such that e−c15T0′≤η2/2, and then fix a positive integer M=Mη such that cλM+1−1ec62T0′≤η2/2. Therefore, from (138),,(139)∇w2ϵT0′2≤η2∇wϵ02+c∫0T0′∇w1ϵs2ds. Therefore,(140)∇wϵT0′2=∇w1ϵT0′2+∇w2ϵT0′2≤η2∇wϵ02+c∫0T0′∇w1ϵs2ds+∇w1ϵT0′2,that is,(141)∇wϵT0′≤η∇wϵ0+K∫0T0′∇w1ϵs2ds1/2+∇w1ϵT0′. We setnx=PMxH01. Then, ∇w1ϵT0′=nSϵT0′x1−SϵT0′x2, and n is obviously compact on B. To estimate the term∫0T0′∇w1ϵs2ds1/2, we multiply (127) with w1ϵt and integrate it in Ω:(142)ddtw1ϵt2+ϵ∇w1ϵt2+2∇w1ϵt2+2lt,ϵwϵt,w1ϵt=0. By using (F1) and Poincare inequality, we can get from the above inequality(143)ddtw1ϵt2+ϵ∇w1ϵt2+2∇w1ϵt2≤2νw1ϵt2≤2νλ1−1∇w1ϵt2. Therefore,(144)ddtw1ϵt2+ϵ∇w1ϵt2+β∇w1ϵt2≤0,where β=21−νλ1−1>0. Integrating (144) from 0 to T0′, we get(145)∫0T0′∇w1ϵs2ds≤cw1ϵ02+ϵ∇w1ϵ02≤c∇w1ϵ02. Thus, for anyϵ∈0,ϵ0,(146)∫0T0′∇w1εs2ds1/2≤cnx1−x2. Putting (146) into (141), one can obtain the result. The proof is completed.Theorem 5. Let assumptions (F1)–(F4) hold andν<λ1. Then, for every ϵ∈0,ϵ0, the semigroup Sϵt generated by equations (1) and (2) possesses an exponential attractor ℳϵc in H01Ω. Moreover, these exponential attractors can be constructed such that (2) and (3) in Theorem 3 hold and the fractal dimension of ℳϵ is uniformly (with respect to ϵ) bounded in H01Ω, i.e.,(147)dimf,H01Ωℳϵc≤c,where c is a constant independent of ϵ. --- *Source: 1025457-2020-04-07.xml*
1025457-2020-04-07_1025457-2020-04-07.md
54,319
Continuous Dependence on a Parameter of Exponential Attractors for Nonclassical Diffusion Equations
Gang Wang; Chaozhu Hu
Discrete Dynamics in Nature and Society (2020)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2020/1025457
1025457-2020-04-07.xml
--- ## Abstract In this paper, a new abstract result is given to verify the continuity of exponential attractors with respect to a parameter for the underlying semigroup. We do not impose any compact embedding on the main assumptions in the abstract result which is different from the corresponding result established by Efendiev et al. in 2004. Consequently, it can be used for equations whose solutions have no higher regularity. As an application, we prove the continuity of exponential attractors inH01 for a class of nonclassical diffusion equations with initial datum in H01. --- ## Body ## 1. Introduction In this paper, we study the existence and robustness of exponential attractors for the following nonclassical diffusion equation:(1)utϵ−ϵΔutϵ−Δuϵ+fuϵ=g,x∈Ω,t>0,with the initial-boundary value conditions(2)uϵx,0=u0x,x∈Ω,uϵ=0,on∂Ω,where ϵ∈0,1 and Ω⊂ℝN N≥3 is a bounded open set with smooth boundary ∂Ω. When ϵ=0, it turns out to be the classical reaction-diffusion equation. We assume that g∈L2Ω and the nonlinearity f∈C1ℝ,ℝ satisfies the following (see, e.g., [1]):(F1): there existsν>0 such that f′s≥−ν, ∀s∈ℝ(F2): there existsκ1>0 such that f′s≤κ11+s2/N−2, ∀s∈ℝ(F3):liminfs⟶∞Fs/s2≥0, where Fs=∫0sfrdr(F4): there existsκ2>0 such that liminfs⟶∞sfs−κ2Fs/s2≥0Nonclassical diffusion equations appear in fluid mechanics, soil mechanics, and heat conduction theory (see, e.g., [2]). The long-time behavior of solutions to nonclassical diffusion equations has been extensively studied by many authors for both autonomous and nonautonomous cases [3–9].The global attractor plays an important role in the study of long-time behavior of infinite dimension systems arising from physics and mechanics. It is a compact invariant set and attracts uniformly the bounded sets of the phase space. However, the rate of attraction may be arbitrary, and it may be sensible to perturbations. These drawbacks can be overcome by creating the notion of the exponential attractor [10], which is a compact, positively invariant set of finite dimension and exponentially attracts each bounded set. The existence of the exponential attractor has been extensively studied since 1994, see e.g., [5, 10–17].As discussed in [12], exponential attractors are to be more robust objects under perturbations than global attractors. In general, global attractors are only upper semicontinuous with respect to perturbations, and the lower semicontinuity property is much more delicate and can be established only for some particular cases. However, one can prove the continuity of exponential attractors under perturbations in many cases [5, 13]. In particular, for problems (1) and (2), the existence of a pullback attractor was shown by Anh and Bao in [3] for the subcritical case in H01Ω. They also proved the upper semicontinuity of the pullback attractors. However, this upper semicontinuity of the pullback attractor was established only in L2Ω, and the upper semicontinuity in H01Ω remains an open problem. In this paper, we not only prove the upper and lower semicontinuity of the exponential attractor but also show these continuities in a stronger space, i.e., H01Ω when the initial value only belongs to H01Ω.In [12], see also [17], Efendiev et al. gave an abstract result about the robustness of exponential attractors (Theorem 4.4 in [12]). A main assumption called “compact Lipschitz condition” was proposed in that theorem. 
The main difficulty when we apply this result to problems (1) and (2) is that the solution to problems (1) and (2) has no regularity as ϵ>0 [3]. For example, if the initial datum u0 belongs to H01Ω, the solution with initial u0=u0 is always in H01Ω and has no higher regularity. Thus, it is impossible to verify the “compact Lipschitz condition” when we want to prove the continuity of exponential attractors in H01Ω. Motivated by [18], we modify the result in [12] to adapt it to our case. Moreover, some of the coefficients are allowed to be dependent on the parameter ϵ which relax the conditions, see Theorem 1 in the following.The rest of this paper is organized as follows. In Section2, we formulate and prove the main abstract result, i.e., Theorem 1. In Section 3, we apply Theorem 1 to the dynamical system generated by (1) and (2) to prove the continuity of the exponential attractors, and we consider two cases according to the constant ν in this section.Throughout this paper, we denote by⋅X the norm of a Banach space X. The inner product and norm of L2ℝn are written as ⋅,⋅ and ⋅, respectively. We also use ur to denote the norm of u∈Lrℝn r≥1,r≠2 and u to denote the modular of u. Letter c is a generic positive constant independent of ϵ which may change its values from line to line even in the same line (sometimes for the special case, we also denote different positive constants by cii=1,2,…). ## 2. The Abstract Result and Its Proof In this section, we modify Theorem 4.4 in [12] to adapt to problems (1) and (2). We start with the definition of exponential attractors.Definition 1. LetE be a metric space, B is a bounded set in E, and let S:B⟶B be a map. We define a discrete semigroup Sn,n∈ℤ+ by Snx:=S∘⋯∘S (n times). A set ℳ⊂B is an exponential attractor for the map if the following properties hold:(1) The setℳ is compact in E and has finite fractal dimension.(2) The setℳ is positively invariant with respect to S, i.e., Sℳ⊂ℳ.(3) There exist positive constantsα0 and β0 such that(3)distESnB,ℳ≤α0e−β0n,where distEC1,C2 denotes the Hausdorff semidistance between C1 and C2 in E given by(4)distEC1,C2=supx∈C1infy∈C2x−yE,forC1,C2⊂E.Definition 2. LetX be a complete metric space endowed with the metric d and M be a bounded closed set in X. Assume that ϱ is a pseudometric [19] defined on M. Let B⊂M and ε>0.(i) A subsetU in B is said to be ε,ϱ-distinguishable if ϱx,x′>ε for any x,x′∈U,x≠x′. We denote by mϱB,ε the maximal cardinality of an ε,ϱ-distinguishable subset of B.(ii) Pseudometricϱ is said to be compact on M iff mϱM,ε is finite for every ε>0.(iii) For anyr>0, we define a local r,ε,ϱ-capacity of set M by the formula(5)CϱM;r,ε=suplnmϱB,ε:B⊂M,diamB≤2r. We now state and prove the main abstract theorem. The proof is essentially a combination of that in [18] and that of [12].Theorem 1. LetX be a Banach space and B be a bounded set in X. We assume that there exists a family of operators Sϵ:B⟶B, ϵ∈0,ϵ0, which satisfy, for any ϵ∈0,ϵ0,(i) Sϵ is Lipschitz on B, i.e., there exists Lϵ>0 such that(6)Sϵx1−Sϵx2X≤Lϵx1−x2X,∀x1,x2∈B.(ii) There exist constantsη and Kϵ and compact seminorms n1ϵx and n2ϵx on B such that(7)Sϵx1−Sϵx2X≤ηx1−x2X+Kϵn1ϵx1−x2+n2ϵSϵx1−Sϵx2,for anyx1,x2∈B, where 0<η<1 is independent of ϵ and Kϵ>0 is a constant which may be dependent on ϵ(seminorm nx on B is said to be compact iff for any subset B′⊂B if there exists a sequence xn⊂B′ such that nxm−xn⟶0 as n,m⟶∞).(iii) ∀ϵ∈0,ϵ0, ∀i∈ℕ, ∀x∈B,(8)Sϵix−S0ixX≤κiϵα,where κ>0 and α∈0,1 are constants independent of ϵ and for mapping V, Vi=V∘⋯∘V (i times). 
Then,∀ϵ∈0,ϵ0, and the discrete dynamical system generated by the iterations of Sϵ possesses an exponential attractor ℳϵ on B such that(1) The fractal dimension ofℳϵ is bounded in X:(9)dimf,Xℳϵ≤cϵ:=ln21+η−1lnmϵ4Kϵ1+Lϵ21/21−η,wheredimf,XA denotes the fractal dimension of A in X and mϵR is the maximal number of pairs xi,yi in X×X possessing the properties(10)xiX2+yiX2≤R2,n1εxi−xj+n2εyi−yj>1,i≠j.(2) ℳϵ attracts B in X, uniformly with respect to ϵ,(11)distXSϵiB,ℳϵ≤c1e−c2i,c2>0,i∈ℕ,wherec1 and c2 are independent of ϵ.(3) The familyℳϵ,ϵ∈0,ϵ0 is continuous at 0:(12)distsym,Xℳϵ,ℳ0≤c3ϵc4,where c3 and c4∈0,1 are independent of ϵ and distsym,X denotes the symmetric Hausdorff distance in X between sets defined by(13)distsym,XA,B:=maxdistXA,B,distXB,A.Proof. For any fixedϵ∈0,ϵ0, we set ϱϵx,y=Kϵn1ϵx−y+n2ϵSϵx−Sϵy; then, ϱϵ is compact on B in the sense of Definition 2. From [18], we see that the local r,ρ,ϱϵ-capacity of the set B admits the estimate(14)CϱϵB;r,ρ≤lnmϵ2Kϵ1+Lϵ21/2rρ,where mϵR is the maximal number of pairs xi,yi in X×X possessing the properties(15)xiX2+yiX2≤R2,n1ϵxi−xj+n2ϵyi−yj>1,i≠j. We assume diamB=2R. Let δ=1−η/2 and xi1:i1=1,…,n1 be a maximal δR,ϱϵ-distinguishable subset of B. Then, from (14), we have(16)n1=mϱϵB,δR≤expCϱϵB;R,δR≤mϵ2Kϵ1+Lϵ21/2δ:=Pϵ,B=∪i1=1n1Bi1,Bi1=υ∈B:ϱϵυ,xi1≤δR. Therefore,(17)SϵB=∪i1=1n1SϵBi1. Ify1,y2∈Bi1, then from (7), we have(18)Sϵy1−Sϵy2X≤ηy1−y2X+ϱϵy1,xi1+ϱϵy2,xi1≤2ηR+2δR:=2qR,where q=η+δ=1+η/2<1. We set Eϵ0=xi1i1=1n1 and Eϵ1=SϵEϵ0; then,(19)Eϵ1⊂SϵB,SϵEϵ0⊂Eϵ1,♯Eϵ1≤Pϵ≤Pϵ2,distXSϵB,Eϵ1≤2qR. Next, for any fixedi1∈1,…,ni1, we assume that xi1,i2:i2=1,…,ni1,2 be a maximal δqR,ϱϵ-distinguishable subset of SϵBi1. Then,(20)ni1,2=mϱϵSϵBi1,δqR≤expCϱϵSϵB;qR,δqR≤expCϱϵB;qR,δqR≤Pϵ,SϵB=∪i1=1n1∪i2=1ni1,2Bi1,i2,Bi1,i2=υ∈SϵBi1:ϱϵυ,xi1,i2≤δqR. Therefore,(21)Sϵ2B=∪i1=1n1∪i2=1ni1,2SϵBi1,i2. Ify1,y2∈Bi1,i2, then from (7), we have(22)Sϵy1−Sϵy2X≤ηy1−y2X+ϱϵy1,xi1,i2+ϱϵy2,xi1,i2≤2ηqR+2δqR=2q2R. We setEϵ2=SϵEϵ1∪Sϵxi1,i2, and then we have(23)Eϵ2⊂Sϵ2B,SϵEϵ1⊂Eϵ2,♯Eϵ2≤Pϵ2+Pϵ2≤Pϵ3,distXSϵ2B,Eϵ2≤2q2R. By the induction procedure, we can find setsEϵi, i∈ℕ, enjoy the following properties:(24)Eϵi⊂SϵiB,(25)SϵEϵi⊂Eϵi+1,(26)♯Eϵi≤Pϵi+1,(27)distXSϵiB,Eϵi≤2qiR. We now define the exponential attractor for the mapS0:B⟶B as follows:(28)ℳ0′=∪i=1∞E0i,ℳ0=ℳ0′¯X. Then, from (24)–(27), we see that ℳ0 is indeed an exponential attractor for the map S0:B⟶B (see [12]). For ϵ∈0,ϵ0, one can also construct exponential attractors for Sϵ:B⟶B as above. However, they are not the ones we needed. Totally similar to [12], one can construct exponential attractors ℳϵ based on E0i. We note that the only difference between our construction procedure and that of in [12] is that the number Pϵ here may be dependent on ϵ. However, (26) only contributes to the fractal dimension of ℳϵ. Therefore, (11) and (12) in Theorem 1 hold true. Finally, assertion (9) is a direct result of [18]. The proof is completed.Remark 1. Similar to [12], we can give an explicit expression for c4, i.e., c4=−αlnq/lnκ−lnq. If Lϵ, Kϵ, n1ϵx, and n2ϵx are all independent ϵ, we can obtain the uniform bound with respect to ϵ of the fractal dimension, and we will apply this abstract result in the last section. ## 3. Application to the Nonclassical Diffusion Equation ### 3.1. Some Useful Estimates of the Solution Since−Δ−1 is a continuous compact operator in L2Ω, by the classical spectral theorem, there exist a sequence λjj=1∞, 0<λ1≤λ2≤⋯≤λj⟶∞ as j⟶∞ and a family of elements ejj=1∞ of D−Δ, which forms an orthogonal basis in both L2D and H01D such that −Δej=λjej,∀j∈N. 
Given m, let Xm=spane1,…,em and Pm:L2D⟶Xm be the projection operator. For any υ∈H01Ω, we write υ=Pmυ+I−Pmυ:=υ1+υ2.Lemma 1. (see [1]). Let ϵ>0. Assume (F1)–(F4) hold and g∈L2Ω. Then, for each u0∈H01Ω, problems (1) and (2) have a unique solution uϵ=uϵt=uϵt;u0 with uϵ∈C0,T,H01Ω and utϵ∈L20,T;H01Ω for any T>0. Moreover, for any fixed t>0, uϵ is continuous in u0.Lemma 2. (see [1]). Let ϵ=0. Assume (F1)–(F4) hold and g∈L2Ω. Then, for each u0∈H01Ω, problems (1) and (2) have a unique solution u0=u0t=u0t;u0 which satisfies(29)u0∈C0,T;H01Ω∩L20,T;H2Ω,∀T>0. Also for any fixedt>0, u0 is continuous in u0. From Lemmas1 and 2, we can define a semigroup Sϵt:H01Ω⟶H01Ω by the expression(30)Sϵtu0=uϵt,t≥0,where uϵt is the solution of (1) and (2).Lemma 3. (see [1]). Assume (F1)–(F4) hold and g∈L2Ω. Then, for any bounded set D⊂H01Ω, there exist positive constants ED, c0, and T1D such that, for any solution uϵ of problems (1) and (2),(31)∇uϵt≤ED,t≥0,∇uϵt≤c0,t≥T1D,provided u0∈D, where ED, c0, and T1D are independent of ϵ. SetB=υ∈H01Ω:υH01Ω≤c0, where c0 is the constant in Lemma 3; then, B is a uniformly (with respect to ϵ) bounded absorbing set for Sϵt in H01Ω. We note that the absorbing time is independent of ϵ, and we can choose T1=T1B large enough such that SϵtB⊂B for any t≥T0 and any ϵ∈0,1. The following lemma gives several priori estimates for the derivativeutϵt of the solution to (1) and (2).Lemma 4. Under the assumptions (F1)–(F4), for anyD∈H01Ω, D is bounded, and there exists T2D, which is independent of ϵ such that, for all ϵ∈0,1,(32)utϵt2+ϵ∇utϵt2≤c,(33)∫tt+1∇utϵs2ds≤c,for any u0∈D and any t≥T2D, where c is a constant independent of ϵ.Proof. We first take the inner product of (1) with uϵt in L2Ω to get(34)12ddt∇uϵ2+ϵ∇uϵ2+∇uϵ2+fuϵ,uϵ=g,uϵ. By condition (F4), for anyθ>0, there exists cθ>0 satisfying(35)fu,u−κ2∫ΩFu+θu2+cθ≥0,∀u∈H01Ω. Putting (35) into (34), it yields that(36)ddtuϵ2+ϵ∇uϵ2+2∇uϵ2+2κ2∫ΩFuϵ≤2g,uϵ+2θuϵ2+2cθ. By condition (F3), for anyγ>0, there exists cγ>0 satisfying(37)∫ΩFu+γu2+cγ≥0,∀u∈H01Ω. Combining (36) and (37), we get(38)ddtuϵ2+ϵ∇uϵ2+2∇uϵ2≤2g,uϵ+2θuϵ2+2cθ+2κ2γuϵ2+2κ2cγ,that is,(39)ddtuϵ2+ϵ∇uϵ2+uϵ2≤2g,uϵ+2θuϵ2+2cθ+2κ2γuϵ2+2κ2cγ. We chooseα1>0 small enough such that α1λ1<1 and 1−α1>1/2, and apply Poincare inequality in (39) to get(40)ddtuϵ2+ϵ∇uϵ2+α1λ1uϵ2+1−α1uϵ2≤2g,uϵ+2θuϵ2+2cθ+2κ2γuϵ2+2κ2cγ. Choosingθ,γ small enough and using Ho¨lder inequality, we can get from (40) that(41)ddtuϵ2+ϵ∇uϵ2+α1λ12uϵ2+12∇uϵ2≤c. We setσ=α1λ1/2; then, 0<σ<1/2. From (41), we obtain(42)ddtuϵ2+ϵ∇uϵ2+σuϵ2+ϵ∇uϵ2≤c,that is,(43)ddteσt∇uϵt2+ϵ∇uϵt2≤ceσt. Integrating the above inequality from 0 tot, it yields that(44)eσt∇uϵt2+ϵ∇uϵt2≤u02+ϵ∇u02+ceσt,t≥0. Now, we consider (36) again. If0<κ2≤1, we can get from (36) that(45)ddtuϵ2+ϵ∇uϵ2+∇uϵ2+κ2∇uϵ2+2∫ΩFuϵ≤2g,uϵ+2θuϵ2+2cθ. Using a similar process as (39)–(42), we deduce that, for θ small enough,(46)ddtuϵ2+ϵ∇uϵ2+σuϵ2+ϵ∇uϵ2+κ2∇uϵ2+2∫ΩFuϵ≤c. Therefore,(47)ddteσtuϵ2+ϵ∇uϵ2+κ2eσt∇uϵ2+2∫ΩFuϵ≤ceσt. Ifκ2>1, we can get from (36) that(48)ddtuϵ2+ϵ∇uϵ2+2∇uϵ2+2∫ΩFuϵ+2κ2−2∫ΩFuϵ≤2g,uϵ+2θuϵ2+2cθ. Applying (37) to the above, we obtain(49)ddtuϵ2+ϵ∇uϵ2+∇uϵ2+∇uϵ2+2∫ΩFuϵ≤2g,uϵ+2θuϵ2+2cθ+2κ2−2γuϵ2+2κ2−2cγ. Following a similar process as (39)–(42), we deduce that, for θ, γ small enough,(50)ddtuϵ2+ϵ∇uϵ2+σuϵ2+ϵ∇uϵ2+∇uϵ2+2∫ΩFuϵ≤c. Therefore,(51)ddteσtuϵ2+ϵ∇uϵ2+eσt∇uϵ2+2∫ΩFuϵ≤ceσt. From (47) and (51), we see that, for any κ2>0,(52)ddteσtuϵ2+ϵ∇uϵ2+c5eσt∇uϵ2+2∫ΩFuϵ≤ceσt.where c5=κ2 when 0<κ2≤1 and c5=1 when κ2>1. For anyt>0, we integrate (52) over [t, t + 1] to get(53)∫tt+1eσs∇uϵs2+2∫ΩFuϵsds≤ceσtuϵt2+ϵ∇uϵt2+ceσt. 
Putting (44) into (53), we have, for any t>0,(54)∫tt+1eσs∇uϵs2+2∫ΩFuϵsds≤cu02+ϵ∇u02+ceσt. Next, we multiply (1) with ut and integrate it in Ω to get(55)utϵ2+ϵ∇utϵ2+12ddt∇uϵ2+2∫ΩFuϵ=g,utϵ. Applying Ho¨lder inequality to the above we get(56)utϵ2+ϵ∇utϵ2+ddt∇uϵ2+2∫ΩFuϵ≤c,t>0. Thus,(57)eσtutϵ2+ϵ∇utϵ2+ddteσt∇uϵ2+2∫ΩFuϵ≤ceσt+ceσt∇uϵ2+2∫ΩFuϵ,t>0. Dropping the first term in the left-hand side of (57), from (54) and using uniform Gronwall inequality, we conclude that(58)eσt∇uϵt2+2∫ΩFuϵt≤cu02+ϵ∇u02+ceσt,t≥1. We now differentiate equation (1) with respect to t to get(59)uttϵ−ϵΔuttϵ−Δutϵ+f′uϵutϵ=0. Taking the inner product of (59) with utϵ in L2Ω, we obtain(60)ddtutϵ2+ϵ∇utϵ2+2∇utϵ2≤cutϵ2. Since0<σ<1/2, we have(61)ddtutϵ2+ϵ∇utϵ2+σutϵ2+ϵ∇utϵ2≤cutϵ2. Hence,(62)ddteσtutϵ2+ϵ∇utϵ2≤ceσtutϵ2+ϵ∇utϵ2. Integrating (57) over (t, t + 1) and using (58), we deduce that(63)∫tt+1eσsutϵs2+ϵ∇utϵs2ds≤cu02+ϵ∇u02+ceσt,t≥1. Combining (62) and (63) and using the uniform Gronwall inequality, we obtain that(64)utϵt2+ϵ∇utϵt2≤ce−σtu02+ϵ∇u02+c,t≥2. Thus, for any boundedD⊂H01Ω, there exists T2D≥2 which is independent of ϵ such that, for any t≥T2D and any u0∈D,(65)utϵt2+ϵ∇utϵt2≤c,where c is independent of ϵ. This proves assertion (32) in Lemma 4. Finally, integrating (60) over t,t+1, then assertion (33) can be easily verified by using (32). The proof is completed.Lemma 5. Under conditions (F1)–(F4), for any two solutions of (1) and (2) uϵ1t, uϵ2t (corresponding to ϵ1, ϵ2∈0,1, respectively), with initials starting from B and for any υ∈L2Ω, we have(66)∫Ωfuϵ1t−fuϵ2tυdx≤c6∇wϵ1,ϵ2tυ,t≥0,where wϵ1,ϵ2t=uϵ1t−uϵ2t and c6 is a constant independent of ϵ.Proof. By (F2), we have(67)∫Ωfuϵ1t−fuϵ2tυdx≤∫Ωlt,ϵ1,ϵ2wϵ1,ϵ2tυdx≤c∫Ω1+uϵ1t2/N−2+uϵ2t2/N−2wϵ1,ϵ2tυdx≤c∫Ωwϵ1,ϵ2tυdx+c∫Ωuϵ1t2/N−2wϵ1,ϵ2tυdx+c∫Ωuϵ2t2/N−2wϵ1,ϵ2tυdx,where lt,ϵ1,ϵ2=∫01f′suϵ1t+1−suϵ2tds. From (31) and the Sobolev embedding H01Ω⊂L2N/N−2Ω, we have that, for t≥0,(68)∫Ωfuϵ1t−fuϵ2tυdx≤cwϵ1,ϵ2υ+cuϵ12N/N−22/N−2wϵ1,ϵ22NN−2υ+cuϵ22N/N−22/N−2wϵ1,ϵ22N/N−2υ≤c∇wϵ1,ϵ2υ+c∇uϵ12/N−2∇wϵ1,ϵ2υ+c∇uϵ22/N−2∇wϵ1,ϵ2υ≤c6∇wϵ1,ϵ2υ. The proof is completed. ### 3.2. The Main Result for a General Case:ν∈ℝ+ Now, we verify the conditions in Theorem1 for Sϵt in this case. We first verify condition (i), i.e., the Lipschitz continuity in H01 (actually uniform Lipschtz continuity in H01 since the coefficient is independent of ϵ).Lemma 6. Under assumptions (F1)–(F4), we have, for anyϵ∈0,1 and any x1,x2∈B,(69)Sϵtx1−Sϵtx2H01Ω≤ec62tx1−x2H01Ω,t≥0,where c6 is the constant in Lemma 5.Proof. Assume thatu1ϵt and u2ϵt are two solutions of (1) and (2) starting from x1, x2∈B, respectively. We consider the difference wϵt=u1ϵt−u2ϵt, and then wϵt satisfies(70)wtϵt−ϵΔwtϵt−Δwϵt+lt,ϵwϵt=0. Multiplying (70) with wtϵt and integrating it in Ω, we get(71)wtϵt2+ϵ∇wtϵt2+12ddt∇wϵt2≤∫Ωlt,ϵwϵtwtϵtdx. Applying Lemma5, we obtain(72)wtϵt2+ϵ∇wtϵt2+12ddt∇wϵt2≤c6∇wϵtwtϵt,t≥0. We use Ho¨lder inequality to get(73)ddt∇wϵt2≤c62∇wϵt2. Then, the result follows immediately from Gronwall inequality. The proof is completed.Lemma 7. Under assumptions (F1)–(F4), we have, for anyx1,x2∈B and any η∈0,1,(i) Forϵ∈0,ϵ0: there exist T3=T3η, K>0, and compact seminorm n1ϵ,t on B such that(74)Sϵtx1−Sϵtx2H01Ω≤ηx1−x2H01Ω+Kn1ϵ,tx1−x2,t≥T3.(ii) Forϵ=0: there exist a positive integer M, T3=T3η, and K>0 such that(75)S0tx1−S0tx2H01Ω≤ηx1−x2H01Ω+Kn2S0tx1−S0tx2,t≥T3,for some ϵ0>0, where n2x=PMxH01Ω, M and K are independent of ϵ, η, and t and n1ϵ,t depend on ϵ and t.Proof. 
We first take the inner product of (70) with kwϵ in L2Ω (k is a constant which will be fixed later) to get(76)kwtϵ,wϵ+kϵ∇wtϵ,∇wϵ+k∇wϵ2+k∫Ωlt,ϵwϵ2=0. Using (F1), we get(77)k∇wϵ2≤kwtϵwϵ+kϵ∇wtϵ∇wϵ+kνwϵ2. Combining (72) and (77), we get(78)wtϵ2+ϵ∇wtϵ2+12ddt∇wϵ2+k∇wϵ2≤kwtϵwϵ+kϵ∇wtϵ∇wϵ+kνwϵ2+c6∇wϵwtϵ. For the right-hand side of (78), we use Ho¨lder inequality to get(79)kwtϵwϵ+kϵ∇wtϵ∇wϵ+kνwϵ2+c6∇wϵwtϵ≤kν+k22wϵ2+wtϵ2+ϵ2∇wtϵ2+k2ϵ2+c622∇wϵ2. Putting the above inequality into (78), we obtain(80)ddt∇wϵ2+2k∇wϵ2≤2kν+k2wϵ2+k2ϵ+c62∇wϵ2. Fixk=c62, and then choose ϵ0 such that ϵ<1/c62, ∀ϵ≤ϵ0. Then, we get from (80) that(81)ddt∇wϵ2+c7∇wϵ2≤cwϵ2,ϵ∈0,ϵ0,c7>0,that is,(82)ddtec7t∇wϵ2≤cec7twϵ2,ϵ∈0,ϵ0. We integrate the above inequality over0,t to get(83)∇wϵt2≤e−c7t∇wϵ02+c∫0twϵs2ds,ϵ∈0,ϵ0. For anyη∈0,1, we can choose T3=T3η>0 such that e−c7T3≤η2. Therefore, from (83),(84)∇wϵt2≤η2∇wϵ02+c∫0twϵs2ds,t≥T3,ϵ∈0,ϵ0. Thus,(85)∇wϵt≤η∇wϵ0+K∫0twϵs2ds1/2,t≥T3,ϵ∈0,ϵ0. Now, we setn1ϵ,tx1−x2=∫0twϵs2ds1/2. We need to show that n1ϵ,t is compact on B for any t>0 and any ϵ∈0,1. From condition (F2), we deduce that, for any s∈ℝ,(86)fss≤cs+s2+s2N−2/N−2. Hence,(87)fu,u≤cu2+u2N−2/N−22N−2/N−2≤cu2+u2N−2/N−22N−2/N−2≤cu2+∇u2N−2/N−2≤c∇u2+∇u2N−2/N−2,∀u∈H01Ω,where we have used the Sobolev embedding H01Ω⊂L2N/N−2Ω in the above inequality. From (35) and (87), we obtain(88)κ2∫ΩFu≤θu2+cθ+c∇u2+∇u2N−2/N−2,∀u∈H01Ω,for any θ>0. For any t≥0, we can see from (32) that(89)∫0t∇uϵs2ds≤ct,ϵ∈0,1,for any u0∈B and c is independent of ϵ. Next, we integrate (56) from 0 to t, and we get(90)ϵ∫0t∇utϵs2ds≤ct+c∇u02+2∫ΩFu0. We first fixθ small enough, and then put (88) into (90) to get, for all u0∈B,(91)ϵ∫0t∇utϵs2ds≤ct+c,ϵ∈0,1. From (89) and (91), we can easily see that, for any t>0 and any fixed ϵ∈0,1, ℬε,t=uεs:s∈0,t,uε0=u0∈B is bounded in W1,20,t;H01Ω. Therefore, from the compact embedding W1,20,t;H01Ω⊂L20,t;L2Ω, we deduce that n1ϵ,t is compact in B. Thus, assertion (i) is proved. Assertion (ii) can be easily proved by multiplying (70) (with ϵ=0) with −Δw20, and this procedure is elementary; here, we omit it (see, e.g., [20] for details). The proof is completed.Lemma 8. Under assumptions (F1)–(F4), we have, for anyϵ∈0,1, any i∈ℕ, and any x∈B, there exists a positive function ℓt independent of ϵ such that, for all t≥T2+1,(92)Sϵitx−S0itxH01Ω≤ℓitϵ1/4.where T2=T2B (see Lemma 4).Proof. We assume thatuϵt ϵ∈0,1 and u0t are the solutions for the following equations:(93)utϵ−ϵΔutϵ−Δuϵ+fuϵ=g,t>0,x∈Ω,uϵ∂Ω=0,uϵx,0=x1x∈B,x∈Ω,ut0−Δu0+fu0=g,t>0,x∈Ω,u0∂Ω=0,u0x,0=x2x∈B,x∈Ω,respectively. Set w=uϵ−u0, and then w satisfies(94)wt−ϵΔutϵ−Δw+ltw=0,where lt=∫01f′suϵt+1−su0tds. Multiplying (94) with wt and using Lemma 5, it yields(95)wt2+ϵ∇utϵ,∇wt+12ddt∇w2≤c6∇wwt,t≥0. We chooseσ1=maxc62,2ν; then, the above inequality implies(96)2ϵ∇utϵ2+ddt∇w2≤σ1∇w2+2ϵ∇utϵ∇ut0. Therefore,(97)ddte−σ1t∇w2≤2ϵe−σ1t∇utϵ∇ut0. By Lemma4, we see that, for t≥T2,(98)ddte−σ1t∇w2≤cϵ1/2∇ut0≤cϵ1/2+cϵ1/2∇ut02. Multiplying (94) with w and integrating it in Ω, we obtain(99)12ddtw2+ϵ∇utϵ,∇w+∇w2≤νw2,that is,(100)ddtw2+2∇w2≤σ1w2+2ϵ∇utϵ∇w. The above inequality implies(101)ddte−σ1tw2+2e−σ1t∇w2≤2ϵ∇utϵ∇w. Dropping the second term in the left-hand side of (101) and integrating it over 0,t, we get(102)e−σ1twt2≤w02+cϵ∫0t∇utϵs2ds1/2∫0t∇ws2ds1/2. From (91), we see that, for any ϵ∈0,1,(103)∫0t∇utϵs2ds≤1ϵct+1. Combining (31), (102), and (103), we deduce that(104)e−σ1twt2≤w02+ct+1ϵ1/2,t≥0. Now, we integrate (101) over (t, t + 1) to get(105)∫tt+1e−σ1s∇ws2ds≤ce−σ1twt2+cϵ∫tt+1∇utϵs2ds1/2∫tt+1∇ws2ds1/2. 
**Lemma 8.** *Under assumptions (F1)–(F4), for any $\epsilon\in(0,1]$, any $i\in\mathbb{N}$, and any $x\in B$, there exists a positive function $\ell(t)$, independent of $\epsilon$, such that for all $t\ge T_2+1$,*
$$\|S_\epsilon^i(t)x-S_0^i(t)x\|_{H_0^1(\Omega)}\le\ell^i(t)\,\epsilon^{1/4}, \tag{92}$$
*where $T_2=T_2(B)$ (see Lemma 4).*

**Proof.** We assume that $u^\epsilon(t)$ ($\epsilon\in(0,1]$) and $u^0(t)$ are the solutions of the following problems:
$$u_t^\epsilon-\epsilon\Delta u_t^\epsilon-\Delta u^\epsilon+f(u^\epsilon)=g,\quad t>0,\ x\in\Omega;\qquad u^\epsilon|_{\partial\Omega}=0,\quad u^\epsilon(x,0)=x_1\in B, \tag{93}$$
$$u_t^0-\Delta u^0+f(u^0)=g,\quad t>0,\ x\in\Omega;\qquad u^0|_{\partial\Omega}=0,\quad u^0(x,0)=x_2\in B,$$
respectively. Set $w=u^\epsilon-u^0$; then $w$ satisfies
$$w_t-\epsilon\Delta u_t^\epsilon-\Delta w+l(t)w=0, \tag{94}$$
where $l(t)=\int_0^1 f'\big(su^\epsilon(t)+(1-s)u^0(t)\big)\,ds$. Multiplying (94) by $w_t$ and using Lemma 5 yields
$$\|w_t\|^2+\epsilon(\nabla u_t^\epsilon,\nabla w_t)+\frac12\frac{d}{dt}\|\nabla w\|^2\le c_6\|\nabla w\|\|w_t\|,\quad t\ge 0. \tag{95}$$
We choose $\sigma_1=\max\{c_6^2,2\nu\}$; then the above inequality implies
$$2\epsilon\|\nabla u_t^\epsilon\|^2+\frac{d}{dt}\|\nabla w\|^2\le\sigma_1\|\nabla w\|^2+2\epsilon\|\nabla u_t^\epsilon\|\|\nabla u_t^0\|. \tag{96}$$
Therefore,
$$\frac{d}{dt}\big(e^{-\sigma_1 t}\|\nabla w\|^2\big)\le 2\epsilon e^{-\sigma_1 t}\|\nabla u_t^\epsilon\|\|\nabla u_t^0\|. \tag{97}$$
By Lemma 4, we see that, for $t\ge T_2$,
$$\frac{d}{dt}\big(e^{-\sigma_1 t}\|\nabla w\|^2\big)\le c\epsilon^{1/2}\|\nabla u_t^0\|\le c\epsilon^{1/2}+c\epsilon^{1/2}\|\nabla u_t^0\|^2. \tag{98}$$
Multiplying (94) by $w$ and integrating over $\Omega$, we obtain
$$\frac12\frac{d}{dt}\|w\|^2+\epsilon(\nabla u_t^\epsilon,\nabla w)+\|\nabla w\|^2\le\nu\|w\|^2, \tag{99}$$
that is,
$$\frac{d}{dt}\|w\|^2+2\|\nabla w\|^2\le\sigma_1\|w\|^2+2\epsilon\|\nabla u_t^\epsilon\|\|\nabla w\|. \tag{100}$$
The above inequality implies
$$\frac{d}{dt}\big(e^{-\sigma_1 t}\|w\|^2\big)+2e^{-\sigma_1 t}\|\nabla w\|^2\le 2\epsilon\|\nabla u_t^\epsilon\|\|\nabla w\|. \tag{101}$$
Dropping the second term on the left-hand side of (101) and integrating over $(0,t)$, we get
$$e^{-\sigma_1 t}\|w(t)\|^2\le\|w(0)\|^2+c\epsilon\Big(\int_0^t\|\nabla u_t^\epsilon(s)\|^2ds\Big)^{1/2}\Big(\int_0^t\|\nabla w(s)\|^2ds\Big)^{1/2}. \tag{102}$$
From (91) we see that, for any $\epsilon\in(0,1]$,
$$\int_0^t\|\nabla u_t^\epsilon(s)\|^2\,ds\le\frac{c(t+1)}{\epsilon}. \tag{103}$$
Combining (31), (102), and (103), we deduce that
$$e^{-\sigma_1 t}\|w(t)\|^2\le\|w(0)\|^2+c(t+1)\epsilon^{1/2},\quad t\ge 0. \tag{104}$$
Now we integrate (101) over $(t,t+1)$ to get
$$\int_t^{t+1}e^{-\sigma_1 s}\|\nabla w(s)\|^2ds\le ce^{-\sigma_1 t}\|w(t)\|^2+c\epsilon\Big(\int_t^{t+1}\|\nabla u_t^\epsilon(s)\|^2ds\Big)^{1/2}\Big(\int_t^{t+1}\|\nabla w(s)\|^2ds\Big)^{1/2}. \tag{105}$$
By using (31) and (33), we get, for $t\ge T_2$,
$$\int_t^{t+1}\|\nabla u_t^\epsilon(s)\|^2ds\le c,\qquad\int_t^{t+1}\|\nabla w(s)\|^2ds\le c. \tag{106}$$
Therefore, from (104)–(106), we deduce that
$$\int_t^{t+1}e^{-\sigma_1 s}\|\nabla w(s)\|^2ds\le c\|w(0)\|^2+c(t+1)\epsilon^{1/2}+c\epsilon\le c\|w(0)\|^2+c(t+1)\epsilon^{1/2},\quad t\ge T_2. \tag{107}$$
On the other hand, using (33) again, we obtain
$$\int_t^{t+1}\big(c\epsilon^{1/2}+c\epsilon^{1/2}\|\nabla u_t^0(s)\|^2\big)ds\le c\epsilon^{1/2},\quad t\ge T_2. \tag{108}$$
Combining (98), (107), and (108) and arguing as in the proof of the uniform Gronwall inequality, we obtain, for all $t\ge T_2$,
$$e^{-\sigma_1(t+1)}\|\nabla w(t+1)\|^2\le c\|w(0)\|^2+c(t+1)\epsilon^{1/2}+c\epsilon^{1/2}\le c\|w(0)\|^2+c(t+1)\epsilon^{1/2}. \tag{109}$$
This implies that there exists a positive function $\ell'(t)$, independent of $\epsilon$, such that, for $t\ge T_2+1$,
$$\|S_\epsilon(t)x_1-S_0(t)x_2\|_{H_0^1(\Omega)}\le\ell'(t)\epsilon^{1/4}+\ell'(t)\|x_1-x_2\|_{H_0^1(\Omega)}. \tag{110}$$
When $x_1=x_2=x$, we get
$$\|S_\epsilon(t)x-S_0(t)x\|_{H_0^1(\Omega)}\le\ell'(t)\epsilon^{1/4},\quad t\ge T_2+1. \tag{111}$$
From (110) and (111) we obtain, for $t\ge T_2+1$,
$$\begin{aligned}
\|S_\epsilon^i(t)x-S_0^i(t)x\|_{H_0^1(\Omega)}&\le\ell'(t)\epsilon^{1/4}+\ell'(t)\|S_\epsilon^{i-1}(t)x-S_0^{i-1}(t)x\|_{H_0^1(\Omega)}\\
&\le\big(\ell'(t)+\ell'^2(t)+\cdots+\ell'^i(t)\big)\epsilon^{1/4}+\ell'^{i-1}(t)\|S_\epsilon(t)x-S_0(t)x\|_{H_0^1(\Omega)}\\
&\le 2\ell'^i(t)\epsilon^{1/4}+\ell'^i(t)\epsilon^{1/4}\le\ell^i(t)\epsilon^{1/4}
\end{aligned} \tag{112}$$
(taking, e.g., $\ell(t)=3\ell'(t)$ and assuming without loss of generality $\ell'(t)\ge 2$). The proof is completed.

For any fixed $\eta\in(0,1)$, we set $T_0(\eta)=\max\{T_1,T_2+1,T_3(\eta)\}$ and $S_\epsilon=S_\epsilon(T_0)$; then $S_\epsilon:B\to B$. From Lemmas 6–8, the discrete system $\{S_\epsilon^n\}_n$ ($\epsilon\in[0,\epsilon_0]$) satisfies all the assumptions of Theorem 1. Therefore, we have the following.

**Theorem 2.** *Let assumptions (F1)–(F4) hold. Then, for every $\epsilon\in[0,\epsilon_0]$, the discrete dynamical system $\{S_\epsilon^n\}_n$ defined above possesses an exponential attractor $\mathcal{M}_\epsilon^d$ on $B$ such that:*

*(1) The fractal dimension of $\mathcal{M}_\epsilon^d$ is bounded in $H_0^1(\Omega)$:*
$$\dim_{f,H_0^1(\Omega)}\mathcal{M}_\epsilon^d\le c_\epsilon:=\Big[\ln\frac{2}{1+\eta}\Big]^{-1}\ln m_\epsilon\Big(\frac{4K(1+L^2)^{1/2}}{1-\eta}\Big), \tag{113}$$
*where $L=e^{c_6^2 T_0}$ (see Lemma 6) and $m_\epsilon(R)$ is the maximal number of pairs $(x_i,y_i)$ in $H_0^1(\Omega)\times H_0^1(\Omega)$ possessing the properties*
$$\begin{aligned}
&\epsilon\in(0,\epsilon_0]:\quad\|x_i\|_{H_0^1(\Omega)}^2+\|y_i\|_{H_0^1(\Omega)}^2\le R^2,\quad n_1(\epsilon)(x_i-x_j)>1,\ i\ne j,\\
&\epsilon=0:\quad\ \ \quad\|x_i\|_{H_0^1(\Omega)}^2+\|y_i\|_{H_0^1(\Omega)}^2\le R^2,\quad n_2(y_i-y_j)>1,\ i\ne j.
\end{aligned} \tag{114}$$

*(2) $\mathcal{M}_\epsilon^d$ attracts $B$ in $H_0^1(\Omega)$, uniformly with respect to $\epsilon$:*
$$\mathrm{dist}_{H_0^1(\Omega)}\big(S_\epsilon^i B,\mathcal{M}_\epsilon^d\big)\le c_8 e^{-c_9 i},\quad c_9>0,\ i\in\mathbb{N}, \tag{115}$$
*where $c_8$ and $c_9$ are independent of $\epsilon$.*

*(3) The family $\{\mathcal{M}_\epsilon^d\}$, $\epsilon\in[0,\epsilon_0]$, is continuous at $0$ in $H_0^1(\Omega)$:*
$$\mathrm{dist}_{\mathrm{sym},H_0^1(\Omega)}\big(\mathcal{M}_\epsilon^d,\mathcal{M}_0^d\big)\le c_{10}\,\epsilon^{c_{11}}, \tag{116}$$
*where $c_{11}=-\dfrac{\ln((1+\eta)/2)}{4\big(\ln\ell(T_0)-\ln((1+\eta)/2)\big)}$ and $c_{10}$ are independent of $\epsilon$.*

To obtain the corresponding result for the continuous system $S_\epsilon(t)$ defined in (30), we need to show Hölder continuity with respect to the time $t$ and the initial conditions. In general, it is difficult to verify the uniform (with respect to $\epsilon$) Hölder continuity in $t$ when $t$ is small.
However, when $t$ is large enough, we have the following.

**Lemma 9.** *Let assumptions (F1)–(F4) hold. Then, for any $T\ge T_0$, the semigroup $S_\epsilon(t)$ defined in (30) is uniformly Hölder continuous on $[T_0,T]\times B$; i.e.,*
$$\|S_\epsilon(t_1)x_1-S_\epsilon(t_2)x_2\|_{H_0^1(\Omega)}\le c\big(\|x_1-x_2\|_{H_0^1(\Omega)}+|t_1-t_2|^{1/2}\big),\quad\epsilon\in[0,1], \tag{117}$$
*for $x_1,x_2\in B$ and $T_0\le t_1,t_2\le T$, where $c$ is independent of $\epsilon$.*

**Proof.** The Lipschitz continuity with respect to the initial conditions is an immediate corollary of Lemma 6. It remains to prove the continuity with respect to the time $t$. From Lemmas 1 and 2 we know that $u^\epsilon(t)\in C([0,T],H_0^1(\Omega))$ for initial values in $B$. Therefore, for any $T_0\le t_1\le t_2\le T$,
$$u^\epsilon(t_2)-u^\epsilon(t_1)=\int_{t_1}^{t_2}u_t^\epsilon(s)\,ds, \tag{118}$$
which implies that
$$\|u^\epsilon(t_1)-u^\epsilon(t_2)\|_{H_0^1(\Omega)}=\Big\|\int_{t_1}^{t_2}u_t^\epsilon(s)\,ds\Big\|_{H_0^1(\Omega)}\le\int_{t_1}^{t_2}\|u_t^\epsilon(s)\|_{H_0^1(\Omega)}\,ds\le|t_1-t_2|^{1/2}\Big(\int_{t_1}^{t_2}\|u_t^\epsilon(s)\|_{H_0^1(\Omega)}^2\,ds\Big)^{1/2}. \tag{119}$$
To estimate (119), we integrate (60) from $t_1$ to $t_2$ to get
$$\int_{t_1}^{t_2}\|\nabla u_t^\epsilon(s)\|^2\,ds\le c\int_{t_1}^{t_2}\|u_t^\epsilon(s)\|^2\,ds+c\big(\|u_t^\epsilon(t_1)\|^2+\epsilon\|\nabla u_t^\epsilon(t_1)\|^2\big). \tag{120}$$
Since $t_1\ge T_0$, we apply (32) to (120) to get
$$\int_{t_1}^{t_2}\|\nabla u_t^\epsilon(s)\|^2\,ds\le c, \tag{121}$$
where $c$ is independent of $\epsilon$. Putting (121) into (119), we obtain the result. The proof is completed.

Our main result in this section reads as follows.

**Theorem 3.** *Let assumptions (F1)–(F4) hold. Then, for every $\epsilon\in[0,\epsilon_0]$, the semigroup $S_\epsilon(t)$ generated by equations (1) and (2) possesses an exponential attractor $\mathcal{M}_\epsilon^c$ in $H_0^1(\Omega)$. Moreover, these exponential attractors can be constructed such that:*

*(1) The fractal dimension of $\mathcal{M}_\epsilon^c$ is bounded in $H_0^1(\Omega)$:*
$$\dim_{f,H_0^1(\Omega)}\mathcal{M}_\epsilon^c\le c_\epsilon+2. \tag{122}$$

*(2) $\mathcal{M}_\epsilon^c$ attracts $B$ in $H_0^1(\Omega)$, uniformly with respect to $\epsilon$:*
$$\mathrm{dist}_{H_0^1(\Omega)}\big(S_\epsilon(t)B,\mathcal{M}_\epsilon^c\big)\le c_{12}e^{-c_{13}t},\quad c_{13}>0, \tag{123}$$
*where $c_{12}$ and $c_{13}$ are independent of $\epsilon$.*

*(3) The family $\{\mathcal{M}_\epsilon^c\}$, $\epsilon\in[0,\epsilon_0]$, is continuous at $0$ in $H_0^1(\Omega)$:*
$$\mathrm{dist}_{\mathrm{sym},H_0^1(\Omega)}\big(\mathcal{M}_\epsilon^c,\mathcal{M}_0^c\big)\le c_{14}\,\epsilon^{c_{11}}, \tag{124}$$
*where $c_{11}=-\dfrac{\ln((1+\eta)/2)}{4\big(\ln\ell(T_0)-\ln((1+\eta)/2)\big)}$ and $c_{14}$ are independent of $\epsilon$.*

**Proof.** Setting $\mathcal{M}_\epsilon^c:=\bigcup_{t\in[T_0,2T_0]}S_\epsilon(t)\mathcal{M}_\epsilon^d$, we have
$$\mathcal{M}_\epsilon^c=\bigcup_{t\in[T_0,2T_0]}S_\epsilon(t)\mathcal{M}_\epsilon^d=\bigcup_{t\in[T_0,2T_0]}S_\epsilon(t-T_0)S_\epsilon(T_0)\mathcal{M}_\epsilon^d=\bigcup_{t\in[0,T_0]}S_\epsilon(t)S_\epsilon(T_0)\mathcal{M}_\epsilon^d. \tag{125}$$
It follows from its definition that the set $S_\epsilon(T_0)\mathcal{M}_\epsilon^d$ is also an exponential attractor for the discrete dynamical system $\{S_\epsilon^n\}_n$. Thus, $\mathcal{M}_\epsilon^c$ is the required set (for details see, e.g., [12]). The proof is completed.

### 3.3. The Main Result for a Special Case: $0<\nu<\lambda_1$

If, in addition, $\nu<\lambda_1$, we can obtain a uniform bound (with respect to $\epsilon$) for the fractal dimension. To this end, we need to show that the constants $L_\epsilon$ and $K_\epsilon$ and the compact seminorms $n_1^\epsilon$ and $n_2^\epsilon$ in Theorem 1 are all independent of $\epsilon$ for the discrete system generated by problems (1) and (2). We have already proved that the constant $L_\epsilon$ is independent of $\epsilon$; thus, we only need the following lemma.

**Lemma 10.** *Let (F1)–(F4) hold and assume in addition that $\nu<\lambda_1$. Then, for any $x_1,x_2\in B$ and any $\eta\in(0,1)$, there exist constants $\epsilon_0\in(0,1)$, $K>0$, and $T_0'(\eta)\ge T_0(\eta)$ ($T_0$ is defined above) and a compact seminorm $n(x)$ on $B$ such that*
$$\|S_\epsilon(T_0')x_1-S_\epsilon(T_0')x_2\|_{H_0^1(\Omega)}\le\eta\|x_1-x_2\|_{H_0^1(\Omega)}+K\Big(n(x_1-x_2)+n\big(S_\epsilon(T_0')x_1-S_\epsilon(T_0')x_2\big)\Big),\quad\epsilon\in(0,\epsilon_0], \tag{126}$$
*where $T_0'(\eta)$, $K$, and the compact seminorm $n(x)$ are all independent of $\epsilon$.*

**Proof.** We assume that $u_1^\epsilon(t)$ and $u_2^\epsilon(t)$ are two solutions of (1) starting from $x_1,x_2\in B$, respectively. We consider the difference $w^\epsilon(t)=u_1^\epsilon(t)-u_2^\epsilon(t)$; then $w^\epsilon(t)$ satisfies
$$w_t^\epsilon(t)-\epsilon\Delta w_t^\epsilon(t)-\Delta w^\epsilon(t)+l(t,\epsilon)w^\epsilon(t)=0. \tag{127}$$
We first take the inner product of (127) with $kw_2^\epsilon$ in $L^2(\Omega)$ ($k$ is a constant to be fixed later) to get
$$k(w_{2,t}^\epsilon,w_2^\epsilon)+k\epsilon(\nabla w_{2,t}^\epsilon,\nabla w_2^\epsilon)+k\|\nabla w_2^\epsilon\|^2+k\int_\Omega l(t,\epsilon)\,w^\epsilon w_2^\epsilon=0. \tag{128}$$
Using Lemma 5, we get
$$k\|\nabla w_2^\epsilon\|^2\le k\|w_{2,t}^\epsilon\|\|w_2^\epsilon\|+k\epsilon\|\nabla w_{2,t}^\epsilon\|\|\nabla w_2^\epsilon\|+kc\|\nabla w^\epsilon\|\|w_2^\epsilon\|. \tag{129}$$
Next, we take the inner product of (127) with $w_{2,t}^\epsilon$ in $L^2(\Omega)$ to get
$$\|w_{2,t}^\epsilon\|^2+\epsilon\|\nabla w_{2,t}^\epsilon\|^2+\frac12\frac{d}{dt}\|\nabla w_2^\epsilon\|^2+\int_\Omega l(t,\epsilon)\,w^\epsilon w_{2,t}^\epsilon=0. \tag{130}$$
We apply Lemma 5 again to get
$$\|w_{2,t}^\epsilon\|^2+\epsilon\|\nabla w_{2,t}^\epsilon\|^2+\frac12\frac{d}{dt}\|\nabla w_2^\epsilon\|^2\le c_6\|\nabla w^\epsilon\|\|w_{2,t}^\epsilon\|. \tag{131}$$
Combining (129) and (131), we get
$$\|w_{2,t}^\epsilon\|^2+\epsilon\|\nabla w_{2,t}^\epsilon\|^2+\frac12\frac{d}{dt}\|\nabla w_2^\epsilon\|^2+k\|\nabla w_2^\epsilon\|^2\le k\|w_{2,t}^\epsilon\|\|w_2^\epsilon\|+k\epsilon\|\nabla w_{2,t}^\epsilon\|\|\nabla w_2^\epsilon\|+kc\|\nabla w^\epsilon\|\|w_2^\epsilon\|+c_6\|\nabla w^\epsilon\|\|w_{2,t}^\epsilon\|. \tag{132}$$
For the right-hand side of (132), we use the Hölder (Young) inequality to get
$$\begin{aligned}
&k\|w_{2,t}^\epsilon\|\|w_2^\epsilon\|+k\epsilon\|\nabla w_{2,t}^\epsilon\|\|\nabla w_2^\epsilon\|+kc\|\nabla w^\epsilon\|\|w_2^\epsilon\|+c_6\|\nabla w^\epsilon\|\|w_{2,t}^\epsilon\|\\
&\quad\le ck^2\|w_2^\epsilon\|^2+\frac12\|w_{2,t}^\epsilon\|^2+\frac{\epsilon}{2}\|\nabla w_{2,t}^\epsilon\|^2+\frac{k^2\epsilon}{2}\|\nabla w_2^\epsilon\|^2+\frac{k}{2}\|\nabla w^\epsilon\|^2+ck\|w_2^\epsilon\|^2+\frac{c_6^2}{2}\|\nabla w^\epsilon\|^2+\frac12\|w_{2,t}^\epsilon\|^2\\
&\quad\le ck^2\|w_2^\epsilon\|^2+\frac12\|w_{2,t}^\epsilon\|^2+\frac{\epsilon}{2}\|\nabla w_{2,t}^\epsilon\|^2+\frac{k^2\epsilon}{2}\|\nabla w_2^\epsilon\|^2+\frac{k}{2}\big(\|\nabla w_1^\epsilon\|^2+\|\nabla w_2^\epsilon\|^2\big)+ck\|w_2^\epsilon\|^2+\frac{c_6^2}{2}\big(\|\nabla w_1^\epsilon\|^2+\|\nabla w_2^\epsilon\|^2\big)+\frac12\|w_{2,t}^\epsilon\|^2.
\end{aligned} \tag{133}$$
Putting the above inequality into (132), we obtain
$$\frac{d}{dt}\|\nabla w_2^\epsilon\|^2+k\|\nabla w_2^\epsilon\|^2\le c(k^2+k)\|w_2^\epsilon\|^2+(c+k)\|\nabla w_1^\epsilon\|^2+\big(c_6^2+k^2\epsilon\big)\|\nabla w_2^\epsilon\|^2. \tag{134}$$
Fix $k=2c_6^2$ and then choose $\epsilon_0$ such that $\epsilon<1/(4c_6^2)$ for all $\epsilon\le\epsilon_0$. In the following we assume $\epsilon\in(0,\epsilon_0]$. Then we get from (134) that
$$\frac{d}{dt}\|\nabla w_2^\epsilon\|^2+c_{15}\|\nabla w_2^\epsilon\|^2\le c\|w_2^\epsilon\|^2+c\|\nabla w_1^\epsilon\|^2,\quad c_{15}>0. \tag{135}$$
Using the Poincaré inequality, we obtain
$$\frac{d}{dt}\|\nabla w_2^\epsilon\|^2+c_{15}\|\nabla w_2^\epsilon\|^2\le c\lambda_{m+1}^{-1}\|\nabla w_2^\epsilon\|^2+c\|\nabla w_1^\epsilon\|^2\le c\lambda_{m+1}^{-1}\|\nabla w^\epsilon\|^2+c\|\nabla w_1^\epsilon\|^2. \tag{136}$$
From Lemma 6 it follows that
$$\frac{d}{dt}\big(e^{c_{15}t}\|\nabla w_2^\epsilon\|^2\big)\le c\lambda_{m+1}^{-1}e^{c_{15}t}e^{c_6^2 t}\|\nabla w^\epsilon(0)\|^2+ce^{c_{15}t}\|\nabla w_1^\epsilon\|^2. \tag{137}$$
We integrate the above inequality over $(0,t)$ to get
$$\|\nabla w_2^\epsilon(t)\|^2\le e^{-c_{15}t}\|\nabla w_2^\epsilon(0)\|^2+c\lambda_{m+1}^{-1}e^{c_6^2 t}\|\nabla w^\epsilon(0)\|^2+c\int_0^t\|\nabla w_1^\epsilon(s)\|^2ds\le\big(e^{-c_{15}t}+c\lambda_{m+1}^{-1}e^{c_6^2 t}\big)\|\nabla w^\epsilon(0)\|^2+c\int_0^t\|\nabla w_1^\epsilon(s)\|^2ds. \tag{138}$$
For any $\eta\in(0,1)$, we first choose $T_0'=T_0'(\eta)>T_0(\eta)$ such that $e^{-c_{15}T_0'}\le\eta^2/2$ and then fix a positive integer $M=M(\eta)$ such that $c\lambda_{M+1}^{-1}e^{c_6^2 T_0'}\le\eta^2/2$. Therefore, from (138),
$$\|\nabla w_2^\epsilon(T_0')\|^2\le\eta^2\|\nabla w^\epsilon(0)\|^2+c\int_0^{T_0'}\|\nabla w_1^\epsilon(s)\|^2ds. \tag{139}$$
Therefore,
$$\|\nabla w^\epsilon(T_0')\|^2=\|\nabla w_1^\epsilon(T_0')\|^2+\|\nabla w_2^\epsilon(T_0')\|^2\le\eta^2\|\nabla w^\epsilon(0)\|^2+c\int_0^{T_0'}\|\nabla w_1^\epsilon(s)\|^2ds+\|\nabla w_1^\epsilon(T_0')\|^2, \tag{140}$$
that is,
$$\|\nabla w^\epsilon(T_0')\|\le\eta\|\nabla w^\epsilon(0)\|+K\Big[\Big(\int_0^{T_0'}\|\nabla w_1^\epsilon(s)\|^2ds\Big)^{1/2}+\|\nabla w_1^\epsilon(T_0')\|\Big]. \tag{141}$$
We set $n(x)=\|P_M x\|_{H_0^1}$. Then $\|\nabla w_1^\epsilon(T_0')\|=n\big(S_\epsilon(T_0')x_1-S_\epsilon(T_0')x_2\big)$, and $n$ is obviously compact on $B$.

To estimate the term $\big(\int_0^{T_0'}\|\nabla w_1^\epsilon(s)\|^2ds\big)^{1/2}$, we multiply (127) by $w_1^\epsilon(t)$ and integrate over $\Omega$:
$$\frac{d}{dt}\big(\|w_1^\epsilon(t)\|^2+\epsilon\|\nabla w_1^\epsilon(t)\|^2\big)+2\|\nabla w_1^\epsilon(t)\|^2+2\big(l(t,\epsilon)w^\epsilon(t),w_1^\epsilon(t)\big)=0. \tag{142}$$
By using (F1) and the Poincaré inequality, we can get from the above
$$\frac{d}{dt}\big(\|w_1^\epsilon(t)\|^2+\epsilon\|\nabla w_1^\epsilon(t)\|^2\big)+2\|\nabla w_1^\epsilon(t)\|^2\le 2\nu\|w_1^\epsilon(t)\|^2\le 2\nu\lambda_1^{-1}\|\nabla w_1^\epsilon(t)\|^2. \tag{143}$$
Therefore,
$$\frac{d}{dt}\big(\|w_1^\epsilon(t)\|^2+\epsilon\|\nabla w_1^\epsilon(t)\|^2\big)+\beta\|\nabla w_1^\epsilon(t)\|^2\le 0, \tag{144}$$
where $\beta=2(1-\nu\lambda_1^{-1})>0$. Integrating (144) from $0$ to $T_0'$, we get
$$\int_0^{T_0'}\|\nabla w_1^\epsilon(s)\|^2ds\le c\big(\|w_1^\epsilon(0)\|^2+\epsilon\|\nabla w_1^\epsilon(0)\|^2\big)\le c\|\nabla w_1^\epsilon(0)\|^2. \tag{145}$$
Thus, for any $\epsilon\in(0,\epsilon_0]$,
$$\Big(\int_0^{T_0'}\|\nabla w_1^\epsilon(s)\|^2ds\Big)^{1/2}\le c\,n(x_1-x_2). \tag{146}$$
Putting (146) into (141), one obtains the result. The proof is completed.

**Theorem 5.** *Let assumptions (F1)–(F4) hold and $\nu<\lambda_1$. Then, for every $\epsilon\in[0,\epsilon_0]$, the semigroup $S_\epsilon(t)$ generated by equations (1) and (2) possesses an exponential attractor $\mathcal{M}_\epsilon^c$ in $H_0^1(\Omega)$. Moreover, these exponential attractors can be constructed such that (2) and (3) in Theorem 3 hold and the fractal dimension of $\mathcal{M}_\epsilon^c$ is uniformly (with respect to $\epsilon$) bounded in $H_0^1(\Omega)$; i.e.,*
$$\dim_{f,H_0^1(\Omega)}\mathcal{M}_\epsilon^c\le c, \tag{147}$$
*where $c$ is a constant independent of $\epsilon$.*
---

*Source: 1025457-2020-04-07.xml*
2020
# IBIEM Analysis of Dynamic Response of a Shallowly Buried Lined Tunnel Based on Viscous-Slip Interface Model

**Authors:** Xiaojie Zhou; Qinghua Liang; Zhongxian Liu; Ying He
**Journal:** Advances in Civil Engineering (2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1025483

---

## Abstract

A viscous-slip interface model is proposed to simulate the contact state between a tunnel lining structure and the surrounding rock. The boundary integral equation method is adopted to solve the scattering of the plane SV wave by a tunnel lining in an elastic half-space. We place special emphasis on the dynamic stress concentration of the lining and the amplification effect on the surface displacement near the tunnel. Scattered waves in the lining and the half-space are constructed from fictitious wave sources placed close to the lining surfaces, based on Green's functions for cylindrical expansion and shear wave sources. The magnitudes of the fictitious wave sources are determined by the viscous-slip boundary conditions, and the total response is then obtained by superposition of the free and scattered fields. The slip stiffness and viscosity coefficients at the lining-surrounding rock interface have a significant influence on the dynamic stress distribution in the tunnel lining and on the nearby surface displacement response, and their influence is controlled by the incident wave frequency and angle. Under a low-frequency incident wave, the hoop stress on the inner wall of the lining increases gradually as the sliding stiffness increases. In the high-frequency resonance band, where the incident wave frequency coincides with the natural frequency of the soil column above the tunnel, the dynamic stress concentration effect is more significant when the interface stiffness is smaller. The dynamic stress concentration factor inside the lining decreases gradually as the viscosity coefficient increases. The spatial distribution and amplitude of the surface displacement near the tunnel vary with the incident wave frequency and angle. An effective dynamic analysis of an underground structure under actual strong dynamic loads should therefore consider the slip effect at the lining-surrounding rock interface.

---

## Body

## 1. Introduction

Analyses of the seismic damage incurred in disasters such as the Kobe, Chi-Chi, and Wenchuan earthquakes have shown that underground structures can be severely damaged during strong earthquakes, resulting in massive economic and societal losses [1–5]. As the scope and scale of modern underground structures continually increase, their seismic design grows increasingly complex. In theory, wave scattering and dynamic stress concentration effects should be considered for large underground structures during the seismic wave propagation process. It is therefore of great significance to study the seismic response and hazard characteristics of underground structures.

In general, the available calculation methods include the analytical method [6–9], the finite element method [10–12], the finite difference method [13–15], the boundary element method [16–19], and the boundary integral equation method [20–23]. It is worth mentioning that all of these studies assumed that the lining and the surrounding rock are completely bonded. In actuality, however, there are varying degrees of slip between the tunnel and the surrounding rock, especially under intense dynamic loading.
Yi et al. [24] studied the dynamic response of a tunnel lining with a sliding interface under an incident P wave based on an interface contact model. Fang and Jin [25, 26] proposed a viscoelastic interface model to solve the dynamic response of a tunnel lining under different interface stiffness and viscosity coefficients for incident P and SV waves. Ping et al. [27] calculated the maximum moment and axial force of a circular shield tunnel under interface slip and no-slip states based on an equivalent stiffness circle.

It should be noted that the abovementioned studies based on interface contact models were mainly limited to deep-buried tunnels, while the response of shallow-buried tunnels differs significantly from that of deep-buried ones [28]. Yi et al. [29] presented an analytical solution for the out-of-plane dynamic response of a shallow tunnel lining under the action of a plane SH wave, analyzing how interface contact stiffness, incident angle, wave frequency, and tunnel depth affect the dynamic stress concentration of the lining. Fang et al. [30] investigated a lined tunnel in a semi-infinite alluvial valley with an elastic-slip interface and analyzed the dynamic stress distribution around the circular tunnel subjected to SH waves.

However, until now, few studies have explored the seismic response of shallow tunnels under incident SV waves with an interface-slipping model, owing to the complexity of multimode coupling and the hybrid boundary conditions. We therefore use an indirect boundary integral equation method (IBIEM) to solve the scattering of the plane SV wave by a tunnel lining in a half-space based on the viscous-slip interface model. This method has been used effectively to solve the dynamic response of tunnel structures [31, 32].

This study aims to investigate the dynamic stress concentration effect of the tunnel lining and the surface displacement amplification near the tunnel with a viscous-slip interface. We assess the influence of parameters such as incident wave frequency and angle, viscous-slip interface stiffness, and viscosity coefficient on the overall dynamic response of the lining and the surrounding rock. The study can provide a theoretical basis for the seismic design of actual underground engineering structures under intense dynamic loads.

## 2. Calculation Model

As shown in Figure 1, the elastic half-space contains an infinitely long tunnel lining. Both the lining and the half-space are linearly elastic, isotropic, homogeneous media. The viscous-slip contact between the tunnel and the surrounding rock is modeled as a series of linear springs and dampers. The parameters of the half-space and the tunnel are listed in Table 1.

Figure 1 Seismic response model of tunnel lining with the viscous-slip interface.

Table 1 Half-space and tunnel parameters.

| Region | Domain | Shear modulus | Poisson's ratio | Density | Longitudinal wave speed | Transverse wave speed | Virtual source surface |
|---|---|---|---|---|---|---|---|
| Half-space | $D_1$ | $\mu_1$ | $\nu_1$ | $\rho_1$ | $c_{\alpha 1}$ | $c_{\beta 1}$ | $S_1$ |
| Tunnel | $D_2$ | $\mu_2$ | $\nu_2$ | $\rho_2$ | $c_{\alpha 2}$ | $c_{\beta 2}$ | $S_2$ and $S_3$ |

Let the buried depth of the tunnel be $d$, the inner and outer radii of the lining be $a_1$ and $a_2$, and the inner and outer boundary surfaces of the lining be $S_0$ and $S$, respectively. Assume that the plane SV wave is incident from the half-space at an angle $\theta_\alpha$.
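As a concrete illustration of how the quantities in Table 1 are related, the following is a minimal Python sketch (not code from the paper). The wave speeds follow from the standard elastodynamic relations $c_\beta=\sqrt{\mu/\rho}$ and $c_\alpha=c_\beta\sqrt{2(1-\nu)/(1-2\nu)}$; the absolute moduli and densities below are illustrative assumptions chosen only to reproduce the paper's ratios $\rho_1/\rho_2=0.8$ and $c_{\beta 1}/c_{\beta 2}=0.2$ used later in Section 4.

```python
from dataclasses import dataclass
import math

@dataclass
class ElasticMedium:
    """Homogeneous isotropic elastic medium (one row of Table 1)."""
    shear_modulus: float  # mu  [Pa]
    poisson_ratio: float  # nu  [-]
    density: float        # rho [kg/m^3]

    @property
    def c_beta(self) -> float:
        """Transverse (S) wave speed: c_beta = sqrt(mu / rho)."""
        return math.sqrt(self.shear_modulus / self.density)

    @property
    def c_alpha(self) -> float:
        """Longitudinal (P) wave speed: c_alpha = c_beta * sqrt(2(1-nu)/(1-2nu))."""
        nu = self.poisson_ratio
        return self.c_beta * math.sqrt(2.0 * (1.0 - nu) / (1.0 - 2.0 * nu))

# Sample values (assumptions): soft half-space D1, stiff concrete-like lining D2.
half_space = ElasticMedium(shear_modulus=80e6,  poisson_ratio=0.25, density=2000.0)
lining     = ElasticMedium(shear_modulus=2.5e9, poisson_ratio=0.25, density=2500.0)

print(round(half_space.c_beta / lining.c_beta, 2))    # 0.2 (= c_beta1/c_beta2)
print(round(half_space.density / lining.density, 2))  # 0.8 (= rho1/rho2)
```

At $\nu=0.25$ the factor $\sqrt{2(1-\nu)/(1-2\nu)}=\sqrt{3}$, so each medium's P-wave speed is about 1.73 times its S-wave speed.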
## 3. Calculation Method

In this study, we consider cylindrical wave sources in the half-space as the fundamental solutions. The indirect boundary integral equation method, together with the viscous-slip boundary condition, is used to solve the scattering of plane waves by the tunnel lining and the dynamic response around it [20].

### 3.1. Wave Field Analysis

The total wave field in the half-space can be viewed as the superposition of the half-space free field (without the tunnel lining) and a scattered field. We first carry out a free-field analysis. The shear wave potential function in the half-space is denoted by $\psi$ (plane strain state), and the plane SV wave with circular frequency $\omega$ is incident at angle $\theta_\alpha$. In the Cartesian coordinate system, the incident SV wave potential function can be expressed as
$$\psi_i(x,y)=\exp\big[-ik_{\alpha 1}\big(x\sin\theta_\alpha-y\cos\theta_\alpha\big)\big]. \tag{1}$$
For the sake of simplicity, the time factor $\exp(i\omega t)$ is omitted here. The incident plane SV wave generates reflected P and SV waves at the surface of the half-space; their specific expressions are given in [16].

Scattered fields are generated in the half-space containing the lined tunnel and in the interior of the lining. These fields can be constructed by superimposing expansion (P) and shear (SV) wave sources distributed on virtual wave source surfaces inside and outside the lining. Assuming that the scattered field in the half-space is generated by the virtual source surface $S_1$, the displacement and stress in the half-space are
$$u_i(x)=\int_{S_1}\big[b(x_1)G_{i,1}^s(x,x_1)+c(x_1)G_{i,2}^s(x,x_1)\big]\,dS_1,\qquad\sigma_{ij}(x)=\int_{S_1}\big[b(x_1)T_{ij,1}^s(x,x_1)+c(x_1)T_{ij,2}^s(x,x_1)\big]\,dS_1, \tag{2}$$
where $x\in D_1$ and $x_1\in S_1$; $b(x_1)$ and $c(x_1)$ are the densities of the P and SV wave sources at position $x_1$ on the virtual source surface $S_1$, respectively; and $G_{i,l}^s(x,x_1)$ and $T_{ij,l}^s(x,x_1)$ denote the displacement and stress Green's functions of the elastic half-space (the lower index $l=1$ or $2$ corresponds to P and SV wave sources; $i,j=x,y$). These Green's functions automatically satisfy the wave equation and the free-surface boundary condition.

The scattered field inside the lining is obtained by superimposing the contributions of all expansion and shear wave sources on the virtual source surfaces $S_2$ and $S_3$. The displacement and stress inside the lining are
$$u_i(x)=\int_{S_2}\big[d(x_2)G_{i,1}^t(x,x_2)+e(x_2)G_{i,2}^t(x,x_2)\big]\,dS_2+\int_{S_3}\big[f(x_3)G_{i,1}^t(x,x_3)+g(x_3)G_{i,2}^t(x,x_3)\big]\,dS_3, \tag{3}$$
$$\sigma_{ij}(x)=\int_{S_2}\big[d(x_2)T_{ij,1}^t(x,x_2)+e(x_2)T_{ij,2}^t(x,x_2)\big]\,dS_2+\int_{S_3}\big[f(x_3)T_{ij,1}^t(x,x_3)+g(x_3)T_{ij,2}^t(x,x_3)\big]\,dS_3,$$
where $x\in D_2$, $x_2\in S_2$, and $x_3\in S_3$; $d(x_2)$ and $e(x_2)$ are the densities of the P and SV wave sources at position $x_2$ on $S_2$; $f(x_3)$ and $g(x_3)$ are the corresponding densities at position $x_3$ on $S_3$; and $G_{i,l}^t$ and $T_{ij,l}^t$ denote the displacement and stress Green's functions of the lining.

The total displacement and stress fields in the half-space are obtained by superposition of the scattered field and the free field. The response inside the lining is generated entirely by the scattered fields within it.
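The free-field input of equation (1) is straightforward to evaluate numerically. The following is a minimal sketch (our illustration, not the paper's code), with the time factor suppressed as in the text and with an illustrative wavenumber value:

```python
import numpy as np

def incident_sv_potential(x, y, k1, theta_alpha):
    """Incident plane-SV potential of eq. (1):
    psi_i(x, y) = exp(-i * k1 * (x*sin(theta) - y*cos(theta))),
    with the harmonic time factor exp(i*omega*t) suppressed."""
    return np.exp(-1j * k1 * (x * np.sin(theta_alpha) - y * np.cos(theta_alpha)))

# Evaluate on a small grid near the ground surface (illustrative values).
x = np.linspace(-5.0, 5.0, 201)
y = np.linspace(0.0, 10.0, 201)
X, Y = np.meshgrid(x, y)
psi = incident_sv_potential(X, Y, k1=2.0, theta_alpha=np.deg2rad(30.0))
print(psi.shape, abs(psi).max())  # (201, 201) 1.0 -- unit-amplitude plane wave
```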
### 3.2. Boundary Conditions and Solutions

We built a viscous-slip interface model to determine the influence of the interface effect on the dynamic response. The lining and the half-space are connected by linear springs and dampers (Figure 1), whose parameters are the stiffness and viscosity coefficients. In this model, the boundary conditions at the interface $S$ between the lining and the half-space are
$$\begin{aligned}
&u_x^s-u_x^t=\frac{\sigma_{nn}^s}{k_n}+\delta_n\frac{\partial\big(u_x^s-u_x^t\big)}{\partial t},&&r=a_2,\\
&u_y^s-u_y^t=\frac{\sigma_{nt}^s}{k_t}+\delta_t\frac{\partial\big(u_y^s-u_y^t\big)}{\partial t},&&r=a_2,\\
&\sigma_{nn}^s=\sigma_{nn}^t,\quad\sigma_{nt}^s=\sigma_{nt}^t,&&r=a_2,\\
&\sigma_{nn}^t=0,\quad\sigma_{nt}^t=0,&&r=a_1,
\end{aligned} \tag{4}$$
where the superscripts $s$ and $t$ refer to the half-space and the lining, respectively; $k_n$ and $k_t$ are the normal and tangential stiffness coefficients of the viscous-slip boundary; and $\delta_n$ and $\delta_t$ are the corresponding normal and tangential viscosity coefficients.

To obtain a numerical solution, we first discretize the inner and outer surfaces of the lining and the virtual wave source surfaces $S_1$, $S_2$, and $S_3$. The number of discrete points on each of the inner and outer surfaces of the lining is $N$, and the number of discrete points on each virtual source surface is $N_1$. The scattered displacement and stress fields in the half-space can then be expressed as
$$u_i(x_n)=\sum_{n_1=1}^{N_1}\big[b_{n_1}G_{i,1}^s(x_n,x_{n_1})+c_{n_1}G_{i,2}^s(x_n,x_{n_1})\big],\qquad\sigma_{ij}(x_n)=\sum_{n_1=1}^{N_1}\big[b_{n_1}T_{ij,1}^s(x_n,x_{n_1})+c_{n_1}T_{ij,2}^s(x_n,x_{n_1})\big], \tag{5}$$
where $x_n\in S$ and $x_{n_1}\in S_1$ ($n=1,\dots,N$; $n_1=1,\dots,N_1$), and $b_{n_1}$ and $c_{n_1}$ are the densities of the P and SV wave sources at the $n_1$th discrete point on $S_1$. Similarly, the scattered field inside the lining is constructed from the discrete wave sources on $S_2$ and $S_3$:
$$u_i(x_n)=\sum_{n_2=1}^{N_1}\big[d_{n_2}G_{i,1}^t(x_n,x_{n_2})+e_{n_2}G_{i,2}^t(x_n,x_{n_2})\big]+\sum_{n_3=1}^{N_1}\big[f_{n_3}G_{i,1}^t(x_n,x_{n_3})+g_{n_3}G_{i,2}^t(x_n,x_{n_3})\big], \tag{6}$$
with the analogous expression (with $T_{ij,l}^t$ in place of $G_{i,l}^t$) for $\sigma_{ij}(x_n)$, where $x_n\in S$, $x_{n_2}\in S_2$, and $x_{n_3}\in S_3$ ($n=1,\dots,N$; $n_2,n_3=1,\dots,N_1$).

A linear system of equations is obtained by assembling the above expressions into the boundary conditions (4):
$$\big(W_1+B_1\big)Y_1+F=\big(W_2+B_2\big)Y_2+\big(W_3+B_3\big)Y_3,\qquad H_1Y_1+F=H_2Y_2+H_3Y_3,\qquad T_2Y_2+T_3Y_3=0, \tag{7}$$
where $W_1$, $W_2$, and $W_3$ are the displacement Green's influence matrices, at the discrete points on the outer surface of the lining, of the discrete wave sources on $S_1$, $S_2$, and $S_3$, respectively; $B_1$, $B_2$, and $B_3$ are the boundary displacement matrices obtained from the boundary conditions; $H_1$, $H_2$, and $H_3$ are the corresponding stress Green's influence matrices at the discrete points on the outer surface of the lining; $T_2$ and $T_3$ are the stress Green's influence matrices, at the discrete points on the inner surface of the lining, of the source points on $S_2$ and $S_3$; $Y_1$, $Y_2$, and $Y_3$ are the virtual wave source density vectors (to be determined) on $S_1$, $S_2$, and $S_3$; and $F$ is the free-field vector. System (7) can be solved by the least-squares method. Once the virtual wave source densities are obtained, the scattered field follows, and the total wave field is obtained by superimposing the scattered field and the free field; the displacement and stress at any point of the half-space and the lining can then be calculated.
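To make the least-squares step concrete, the following Python sketch stacks the three matrix equations of (7) into a single overdetermined system and solves it. This is our illustration only: the influence matrices are assumed to be precomputed Green's-function matrices, and all names and shapes are hypothetical, not the paper's code.

```python
import numpy as np

def solve_source_densities(W1, B1, W2, B2, W3, B3,
                           H1, H2, H3, T2, T3, F_u, F_t):
    """Stack eq. (7) as A @ [Y1; Y2; Y3] = b and solve by least squares.

    Row blocks: (i) displacement boundary condition on the outer wall,
    (ii) traction continuity on the outer wall, (iii) traction-free inner wall.
    F_u / F_t are the free-field displacement / traction vectors."""
    m1, m2, m3 = W1.shape[1], W2.shape[1], W3.shape[1]
    z_rows = T2.shape[0]

    A = np.block([
        [W1 + B1, -(W2 + B2), -(W3 + B3)],
        [H1,      -H2,        -H3       ],
        [np.zeros((z_rows, m1)), T2, T3 ],
    ])
    b = np.concatenate([-F_u, -F_t, np.zeros(z_rows)])

    Y, *_ = np.linalg.lstsq(A, b, rcond=None)  # handles complex matrices
    return Y[:m1], Y[m1:m1 + m2], Y[m1 + m2:]  # Y1, Y2, Y3
```

Moving all unknowns to the left-hand side gives $(W_1+B_1)Y_1-(W_2+B_2)Y_2-(W_3+B_3)Y_3=-F$, which is why the free-field vectors enter the right-hand side with a minus sign.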
## 4. Numerical Example and Validation

In this section, the ratio of the buried depth of the tunnel to the inner radius of the lining is $d/a_1=1.0$, and the ratio of the inner to the outer radius of the lining is $a_1/a_2=0.9$. Poisson's ratio $\nu$ is 0.25 for both the half-space and the lining material. The hysteretic damping ratio of the materials is 0.001, the density ratio of the half-space to the lining material is $\rho_1/\rho_2=0.8$, and the ratio of the shear wave speed in the half-space to that in the lining is $c_{\beta 1}/c_{\beta 2}=0.2$. We define the dimensionless incident frequency as $\eta=2a_1/\lambda_1=\omega a_1/(\pi c_{\beta 1})$; $\eta$ represents the ratio of the inner diameter of the tunnel to the shear wavelength in the half-space.

We define the dimensionless dynamic stress concentration factor (DSCF) as $\mathrm{DSCF}=|\sigma_{\theta\theta}/\sigma_0|$ with $\sigma_0=\mu_1 k_{\beta 1}^2$; it represents the absolute value of the ratio of the hoop stress $\sigma_{\theta\theta}$ of the lining to the incident-wave stress amplitude $\sigma_0$ in the half-space. The dimensionless stiffness factor of the viscous-slip interface is $k^*=k_n a_1/\mu_1=k_t a_1/\mu_1$, and the dimensionless viscosity factor is $\delta^*=\delta_t/a_1=\delta_n/a_1$. In this paper, the stiffness factor $k^*$ takes the values 1, 5, 10, and 20, and the viscosity factor $\delta^*$ is 0 unless stated otherwise. We found that the interface is essentially in a no-slip state once $k^*$ exceeds 20, so stiffness factors above 20 are treated as the no-slip state.
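The dimensionless groups above are simple to evaluate; the short sketch below works through the arithmetic (the physical input values are illustrative assumptions, not the paper's data):

```python
import numpy as np

def dimensionless_params(omega, a1, mu1, c_beta1, k_n, delta_n):
    """Dimensionless groups of Section 4 (definitions from the text)."""
    eta    = omega * a1 / (np.pi * c_beta1)  # eta = 2*a1/lambda1
    k_star = k_n * a1 / mu1                  # interface stiffness factor
    d_star = delta_n / a1                    # interface viscosity factor
    return eta, k_star, d_star

# Example: a tunnel with 3 m inner radius in a 200 m/s half-space.
eta, k_star, d_star = dimensionless_params(
    omega=2 * np.pi * 16.7,   # ~16.7 Hz excitation
    a1=3.0, mu1=80e6, c_beta1=200.0,
    k_n=80e6 / 3.0 * 5.0,     # chosen so that k* = 5
    delta_n=0.0)
print(round(eta, 2), k_star, d_star)  # 0.5  5.0  0.0
```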
To verify the correctness of the IBIEM, we first set $k^*=100$ and $\delta^*=0$, which is equivalent to the fully bonded (no-slip) state, and compare the results with the no-slip solution in [20]. To demonstrate the accuracy more fully, we compute the DSCF spectra at several positions on the lining and compare them with [20]. The calculation positions are $\theta$ = 90° (the top of the lining), 45°, 0°, −45°, and −90° (the bottom of the lining). The DSCF at $\theta$ = 90° (top) and −90° (bottom) is 0 and is therefore not shown in Figure 2. As shown in Figure 2, the results calculated by the IBIEM agree closely with those in [20], which confirms the correctness of our calculation method.

Figure 2 DSCF spectrum distribution on the inner lining wall surface.

## 5. Numerical Analysis

### 5.1. DSCF of Inner and Outer Lining Wall Surfaces under Single-Frequency Incident Waves

#### 5.1.1. Influence of Stiffness Factor $k^*$

Figures 3–6 show the distribution of the DSCF on the inner and outer walls of the lining under an incident SV wave, for dimensionless incident frequencies $\eta$ = 0.25, 0.5, 1, and 2 and incidence angles $\theta_\alpha$ = 0° and 30°.

Figure 3 DSCF distribution on the lining inner wall under the incident SV wave ($\theta_\alpha=0°$): (a) $\eta=0.25$; (b) $\eta=0.5$; (c) $\eta=1$; (d) $\eta=2$.

Figure 4 DSCF distribution on the lining outer wall under the incident SV wave ($\theta_\alpha=0°$): (a) $\eta=0.25$; (b) $\eta=0.5$; (c) $\eta=1$; (d) $\eta=2$.

Figure 5 DSCF distribution on the lining inner wall under the incident SV wave ($\theta_\alpha=30°$): (a) $\eta=0.25$; (b) $\eta=0.5$; (c) $\eta=1$; (d) $\eta=2$.

Figure 6 DSCF distribution on the lining outer wall under the incident SV wave ($\theta_\alpha=30°$): (a) $\eta=0.25$; (b) $\eta=0.5$; (c) $\eta=1$; (d) $\eta=2$.

As shown in Figures 3–6, under an incident plane SV wave the DSCF distribution curves of the inner and outer wall surfaces of the lining are similar, but the inner wall stress amplitude is considerably larger than that of the outer wall. When the SV wave is incident at low frequency ($\eta$ = 0.25, 0.5), the DSCF of the inner lining wall increases with the stiffness factor $k^*$, while the DSCF of the outer wall decreases as $k^*$ increases. For $\eta=0.5$ and $\theta_\alpha=0°$, with $k^*$ = 1, 5, 10, and 20, the DSCF of the outer wall surface is 23.79, 23.04, 21.45, and 19.91, respectively.

When the SV wave is obliquely incident, the increases and decreases in the DSCF of the inner and outer wall surfaces are smaller than at normal incidence. For $\eta=0.25$, $\theta_\alpha=0°$, and $k^*=1$, the DSCF of the inner wall surface is 51.71; it rises to 58.39 at $k^*$ = 20, an increase of 13%. At $k^*$ = 1 the DSCF of the outer wall surface is 37.21, falling to 16.26 at $k^*$ = 20, a decrease of 56%. For $\eta=0.25$, $\theta_\alpha=30°$, and $k^*=1$, the DSCF of the inner wall surface is 46.74, rising to 50.34 at $k^*$ = 20, an increase of 8%; the DSCF of the outer wall surface is 28.64 at $k^*$ = 1, falling to 20.15 at $k^*$ = 20, a decrease of 30%.

When the SV wave is incident at frequency $\eta=1$, the DSCF of both the inner and outer walls of the lining decreases as $k^*$ increases, and the dynamic stress concentration on both wall surfaces is very significant at $k^*=1$. At normal incidence, the DSCF of the inner wall surface reaches 69.60 at $k^*=1$, which is 2.1 times the peak of 32.93 at $k^*=20$; the peak DSCF of the outer wall surface is 60.36, which is 2.8 times the corresponding peak of 21.92 at $k^*=20$. When the SV wave is inclined at 30°, the peak DSCF on the inner wall of the lining is 67.78 at $k^*=1$, which is 2.3 times the peak of 29.62 at $k^*=20$.
Section 5.2 discusses this phenomenon in detail.

When the SV wave is incident at high frequency ($\eta=2$), the DSCF curves on the inner and outer wall surfaces of the lining oscillate sharply along the circumference of the hole, and there is no obvious relationship between the oscillation pattern and $k^*$. When the SV wave is inclined at 30° and $k^*=1$, the DSCF of the inner and outer wall surfaces is minimal: the peak value on the inner wall falls to 12.64. At $k^*=20$, the peak hoop stresses on the inner and outer wall surfaces are 27.50 and 23.26, respectively.

#### 5.1.2. Influence of Viscosity Factor $\delta^*$

Figure 7 shows the distribution of the DSCF on the outer lining wall surface under an incident SV wave, for $\eta$ = 0.25, 0.5, and 1 and incidence angles $\theta_\alpha$ = 0° and 30°. The dimensionless stiffness factor is $k^*=5$, and the viscosity factor $\delta^*$ is 1, 10, or 100.

Figure 7 DSCF distribution on outer lining wall surfaces under incident SV waves: (a) $\eta=0.25$, $\theta_\alpha=0°$; (b) $\eta=0.25$, $\theta_\alpha=30°$; (c) $\eta=0.5$, $\theta_\alpha=0°$; (d) $\eta=0.5$, $\theta_\alpha=30°$; (e) $\eta=1$, $\theta_\alpha=0°$; (f) $\eta=1$, $\theta_\alpha=30°$.

As shown in Figure 7, the DSCF of the outer wall of the lining decreases gradually as the interfacial viscosity factor $\delta^*$ increases; however, the influence of $\delta^*$ on the circumferential stress distribution of the lining gradually weakens as the incident wave frequency increases. For example, at low frequency ($\eta=0.25$) and normal incidence, the DSCF of the outer wall is 27.32 at $\delta^*=1$, which is 2.3 times the value of 11.90 at $\delta^*=10$ and 2.6 times the value of 10.43 at $\delta^*=100$. At normal incidence with $\eta=1$, the DSCF of the outer wall is 25.56 at $\delta^*=1$, which is 1.35 times the value of 18.93 at $\delta^*=10$ and 1.39 times the value of 18.43 at $\delta^*=100$.

Compared with normal incidence, the influence of the viscosity factor on the circumferential stress amplitude of the lining weakens at oblique incidence. When the SV wave is incident at 30°, the spatial distribution of the circumferential stress of the lining is relatively gentle along the circumference of the hole. In the low-frequency region ($\eta=0.25$), the DSCF of the outer wall at $\delta^*=1$ is 23.99, which is 1.3 times the value of 18.23 at $\delta^*=100$. For $\eta=1$, the DSCF of the outer wall at $\delta^*=1$ is 27.12, which is 1.1 times the value of 23.60 at $\delta^*=100$.

### 5.2. Lining Internal DSCF Spectrum Analysis

Figures 8 and 9 show the influence of the dimensionless stiffness factor $k^*$ on the DSCF at different positions of the inner and outer walls of the tunnel lining for a vertically incident SV wave swept over frequency, with viscosity factor $\delta^*$ = 0. The calculation positions are the same as in Section 4, and the DSCF at $\theta$ = 90° (top) and −90° (bottom) is not shown in Figures 8 and 9. As in the no-slip model, the lining stress is sensitive to changes in frequency, and the spectrum curves show distinct peaks and troughs; the peak frequencies correspond to the natural frequencies of the soil column above the lining [33].

Figure 8 DSCF spectrum distribution on the inner lining wall surface under the vertically incident SV wave: (a) $\theta=45°$; (b) $\theta=0°$; (c) $\theta=−45°$.
Figure 9 DSCF spectrum distribution on the outer lining wall surface under the vertically incident SV wave: (a) $\theta=45°$; (b) $\theta=0°$; (c) $\theta=−45°$.

As shown in Figure 8, under a vertically incident SV wave the interface slip stiffness factor $k^*$ significantly affects the internal stress spectrum of the lining. When $k^*=1$, the stress amplification in the resonance band is particularly pronounced: at the first-order resonance frequency ($\eta=0.16$) the stress peak at $\theta=45°$ reaches 70.0, and at the $\eta=0.96$ resonance frequency the stress at $\theta=0°$ peaks at 77.2. The stress peaks in the resonance band decrease gradually as $k^*$ increases; for example, at $k^*=20$ (near the no-slip state) the stress peaks at the first two resonance frequencies at $\theta=45°$ are approximately 63.4 and 34.0. When the stiffness factor is small, the restraint of the lining on the soil column above it weakens, so the response amplitude of the lining-soil-column system at resonance is large, which increases the corresponding stress amplitudes.

The resonance frequencies also shift to some extent as the sliding stiffness factor increases. When $k^*$ is 1, 5, and 10, the first-order resonance frequency is 0.16, 0.18, and 0.20, respectively, because the overall stiffness of the soil column above the lining increases with the slip stiffness; the stress spectra for $k^*=10$ and $k^*=20$ are similar. Spatially, the stress spectrum curves at the typical observation points differ markedly: the stress peak at $\theta=45°$ occurs at the first-order natural frequency, whereas the stress peak at $\theta=0°$ occurs at the second-order natural frequency.

As shown in Figure 9, the frequency dependence of the DSCF spectrum of the outer wall is similar to that of the inner wall, but the outer-wall DSCF is generally smaller. When $k^*$ is 1, 5, 10, and 20, the peaks of the inner-wall DSCF reach 77.24, 65.65, 64.05, and 63.44, while the peaks of the outer-wall DSCF are 64.14, 29.14, 24.87, and 22.33, respectively.

Figure 10 shows the influence of the interfacial viscosity factor $\delta^*$ on the DSCF spectrum of the outer wall surface of the lining: the peak of the outer-wall DSCF spectrum decreases as $\delta^*$ increases. When $\delta^*$ is 1, 10, and 100, the DSCF peaks are 30.43, 21.10, and 20.57, respectively. Increasing the viscosity factor $\delta^*$ causes greater energy loss during sliding, which in turn attenuates the DSCF on the lining surface.

Figure 10 DSCF spectrum distribution on the outer lining wall surface under the vertically incident SV wave ($\delta^*$ = 0, 1, 10, and 100): (a) $\theta=45°$; (b) $\theta=0°$; (c) $\theta=−45°$.

### 5.3. Displacement Amplitude of Ground Surface

Figures 11 and 12 show the surface displacement amplitude distribution above the tunnel lining as affected by the interface slip stiffness factor $k^*$ under an incident SV wave. The surface displacement amplitudes are normalized by the displacement amplitude of the incident wave. The dimensionless incident frequency $\eta$ is 0.25 and 0.5, the incident angle $\theta_\alpha$ is 0° and 30°, and the dimensionless slip stiffness factor $k^*$ is 1, 5, 10, and 20.
### 5.3. Displacement Amplitude of Ground Surface

Figures 11 and 12 show the surface displacement amplitude distribution above the tunnel lining as affected by the interface slip stiffness factor k∗ under an incident SV wave. The surface displacement amplitudes are normalized by the displacement amplitude of the incident wave. The dimensionless incident frequency η is 0.25 or 0.5; the incident angle θα is 0° or 30°; the dimensionless slip stiffness factor k∗ is 1, 5, 10, or 20; and the interface viscosity factor δ∗ is 0.

Figure 11 Surface displacement amplitudes near the tunnel lining under the incident SV wave (η=0.25): (a) surface horizontal displacement amplitude; (b) surface vertical displacement amplitude; (c) surface horizontal displacement amplitude; (d) surface vertical displacement amplitude.

Figure 12 Surface displacement amplitudes near the tunnel lining under the incident SV wave (η=0.5): (a) surface horizontal displacement amplitude; (b) surface vertical displacement amplitude; (c) surface horizontal displacement amplitude; (d) surface vertical displacement amplitude.

Figures 11 and 12 show that when the SV wave has low frequency (η=0.25) at normal incidence, the spatial distribution of surface displacement is essentially the same for the different slip stiffnesses, but variations in the stiffness factor k∗ considerably influence the horizontal and vertical displacement amplitudes of the ground surface near the lining. The normalized horizontal amplitude Ux/Asv above the tunnel lining increases with k∗, while the normalized vertical amplitude Uy/Asv decreases. The surface horizontal displacement amplitudes at k∗ = 1, 5, 10, and 20 are 2.04, 2.23, 2.29, and 2.32, respectively, while the vertical displacement amplitudes are 1.71, 1.41, 1.27, and 1.16, respectively.

When the SV wave is incident at an angle of 30° at low frequency (η=0.25), changes in the slip stiffness factor k∗ have little effect on the horizontal displacement above the tunnel lining but do influence the spatial distribution and amplitude of the vertical displacement. The influence of k∗ grows as the incident frequency of the SV wave increases (η=0.5), and the spatial distribution and amplitude of the surface displacement near the lining also change markedly, with the 30° oblique incidence showing more pronounced effects than normal incidence. Taking the vertically incident SV wave at η=0.5 (Figure 12(a)) as an example: at the surface point directly above the lining (i.e., x=0), the horizontal displacement amplitude decreases as k∗ increases; the amplitude is 1.47 when k∗ = 1, and the corresponding values for k∗ = 5, 10, and 20 are 1.16, 1.05, and 0.98, respectively. Directly above the two sides of the lining, by contrast, the horizontal displacement amplitude increases with k∗.
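The opposite low-frequency trends of the two normalized amplitudes are worth stating explicitly. The sketch below (a minimal illustration using only the values quoted above, with Asv denoting the incident-wave displacement amplitude) expresses the change from the softest (k∗ = 1) to the stiffest (k∗ = 20) interface:

```python
# Normalized surface displacement amplitudes above the lining
# (eta = 0.25, normal incidence), copied from the values quoted above.
k_star = [1, 5, 10, 20]
ux = [2.04, 2.23, 2.29, 2.32]  # horizontal, Ux / Asv
uy = [1.71, 1.41, 1.27, 1.16]  # vertical,   Uy / Asv

print(f"Ux/Asv: {ux[0]} -> {ux[-1]} ({(ux[-1] - ux[0]) / ux[0]:+.0%})")  # +14%
print(f"Uy/Asv: {uy[0]} -> {uy[-1]} ({(uy[-1] - uy[0]) / uy[0]:+.0%})")  # -32%
```

At low frequency, stiffening the interface thus trades vertical surface motion for horizontal motion; at η = 0.5, by contrast, the horizontal amplitude directly above the tunnel decreases with k∗ while increasing to either side, as described above.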
## 6. Conclusion

The boundary integral equation method was applied to solve the seismic response of a tunnel lining in an elastic half-space under incident plane SV waves based on a viscous-slip interface model. The effects of key factors such as incident wave frequency and angle, interface slip stiffness, and interfacial viscosity coefficient on the dynamic stress response of the tunnel lining and on the surface displacement near the tunnel lining were analyzed. The main conclusions can be summarized as follows:

(1) The interface slip stiffness factor significantly affects the dynamic stress distribution of the tunnel lining, and the response characteristics are controlled by the incident wave frequency. When the slip stiffness is small, the internal stress of the lining varies sharply along the circumference of the hole, and the spatial distribution of the dynamic stress is highly complex.
When the slip stiffness is large (k∗≥20), the dynamic response is close to that of the no-slip model. Under a low-frequency incident wave, increasing the interface slip stiffness causes a gradual increase in the circumferential stress of the inner wall, and the dynamic stress concentration is more significant in the no-slip state than in the slip state. When the SV wave is incident with η=1 (close to the high-frequency resonance band), the dynamic stress concentration inside the lining is very significant when the interface slip stiffness coefficient is small (k∗=1). Under high-frequency incidence (η=2), the influence of the slip stiffness coefficient on the dynamic stress of the lining is more complex, and the spatial oscillation of the dynamic stress is more severe.

(2) The viscosity factor of the viscous-slip interface also significantly influences the dynamic stress distribution of the tunnel lining. As the viscosity factor increases, the DSCF of the lining outer wall decreases gradually; however, this effect weakens as the incident wave frequency increases. The influence of the viscosity coefficient under a normally incident plane SV wave is greater than that under an obliquely incident wave.

(3) The interface slip stiffness factor has a significant effect on the DSCF spectrum characteristics of the tunnel lining surface. When the slip stiffness is small (k∗=1), the dynamic stress amplification in the high-frequency resonance band is more pronounced. The stress peak in the resonance band gradually decreases, and the resonance frequency shifts to a certain extent, as the slip stiffness increases.

(4) When the SV wave is incident at low frequency, the spatial distribution of surface displacements above the lining is essentially the same for different slip stiffnesses, but the displacement amplitudes differ considerably. Increases in the incident frequency and the incidence angle of the SV wave significantly affect both the spatial distribution and the amplitude of the surface displacement near the lining.

In this study, we analyzed only the 2D seismic response of a shallowly buried lined tunnel based on the viscous-slip interface model in a homogeneous elastic half-space. Similar seismic response analyses for uneven sites and 3D tunnels merit further research.

--- *Source: 1025483-2019-03-06.xml*
**Title:** IBIEM Analysis of Dynamic Response of a Shallowly Buried Lined Tunnel Based on Viscous-Slip Interface Model
**Authors:** Xiaojie Zhou; Qinghua Liang; Zhongxian Liu; Ying He
**Journal:** Advances in Civil Engineering (2019)
**Category:** Engineering & Technology
**Publisher:** Hindawi
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2019/1025483
When the SV wave is inclined at an incidence of 30°s and k∗=1, the DSCF of the inner and outer wall surfaces is minimal: the peak value of the inner wall falls to 12.64. When k∗=20, the peak values of the hoop stress of the inner and outer wall surfaces are 27.50 and 23.26, respectively. #### 5.1.2. Influence of Viscosity Factorδ∗ Figure7 shows the distribution of DSCF of the outer lining wall surface under an incident SV wave. The dimensionless incident frequency η is 0.25, 0.5, and 1, respectively; the incident angle θα is 0° and 30°, respectively. The dimensionless stiffness factor k∗=5, and the viscosity factor δ∗ is 1, 10, or 100.Figure 7 DSCF distribution on outer lining wall surfaces under incident SV waves: (a)η=0.25, θα=0°; (b) η=0.25, θα=30°; (c) η=0.5, θα=0°; (d) η=0.5, θα=30°; (e) η=1, θα=0°; (f) η=1, θα=30°. (a) (b) (c) (d) (e) (f)As shown in Figure7, the DSCF of the outer wall of the lining decreases gradually as interfacial viscosity factor δ∗ increases. However, the influence of the viscosity factor δ∗ on the circumferential stress distribution of the lining gradually weakens as incident wave frequency increases. For example, when the SV wave is of low frequency (η=0.25) at normal incidence, the DSCF of the outer wall is 27.32 when δ∗=1, that is, 2.3 times the corresponding value of 11.90 when δ∗=10 and 2.6 times the corresponding value of 10.43 when δ∗=100. The SV wave is under η=1 at normal incidence. The DSCF of the outer wall is 25.56 when δ∗=1, 1.35 times the corresponding value of 18.93 when δ∗=10, and 1.39 times the corresponding value of 18.43 when δ∗=100.Compared to the normal incidence, the influence of the viscosity factor on the amplitude distribution of the circumferential stress of the lining weakens at oblique incidence. When the SV wave is incident at an angle of 30°, the spatial distribution of the circumferential stress of the lining is relatively gentle along the circumference of the hole. In the low-frequency region (η=0.25), when δ∗=1, the DSCF of the outer wall is 23.99, which is equal to 1.3 times the value of 18.23 when δ∗=100. When η=1 and δ∗=1, the DSCF of the outer wall of the lining is 27.12, which is 1.1 times the corresponding value of 23.60 when δ∗=100. ### 5.2. Lining Internal DSCF Spectrum Analysis Figures8 and 9 show the influence of the dimensionless stiffness factor k∗ on the DSCF at different positions of the inner and outer walls of the tunnel lining when the SV wave is perpendicularly incident in the spectrum state, and the viscosity factor δ∗ is 0. Calculation positions were same as in Section 3, and the DSCF of θ = 90° (the top of the lining) and −90° (the bottom of the lining) is not shown in Figures 8 and 9. Similar to our observations in the model without slipping, the stress of the lining is sensitive to changes in frequency. The spectrum curve also has obvious peaks and troughs. The peak frequency here corresponds to the natural frequency in the soil column above the lining [33].Figure 8 DSCF spectrum distribution on the inner lining wall surface under the vertically incident SV wave: (a)θ=45°; (b) θ=0°; (c) θ=−45°. (a) (b) (c)Figure 9 DSCF spectrum distribution on the outer lining wall surface under the vertically incident SV wave: (a)θ=45°; (b) θ=0°; (c) θ=−45°. (a) (b) (c)As shown in Figure8, under an SV wave with normal incidence, the interface slip stiffness factor k∗ significantly affects the internal stress spectrum of the lining. 
When k∗=1, the stress amplification effect in the resonance reaction section is particularly obvious. At the first-order resonance frequency (η=0.16), the stress peak at θ=45° reaches 70.0; and at the η=0.96 resonance frequency, the stress at θ=0° has a peak of 77.2. The stress peak in the resonance band gradually decreases as the stiffness factor k∗ gradually increases. For example, when k∗=20 (near the no-slip state), the stress peaks at the first two resonance frequencies at θ=45° are approximately 63.4 and 34.0. When the stiffness factor is small, the restraining effect of the lining on the upper soil column weakens and the response amplitude of the lining-upper soil column system in the resonant state is rather large, causing an increase in the corresponding stress amplitude.The resonance frequency point also is offset to a certain extent as the sliding stiffness factor increases. Whenk∗ is 1, 5, and 10, the first-order resonance frequency is 0.16, 0.18, and 0.20, respectively. This is due to the fact that the overall stiffness of the soil column above the lining increases as slip stiffness factor increases, and the stress spectrums of k∗=10 and k∗=20 are similar. According to the spatial distribution, the stress spectrum curves at several typical observation points markedly differ. If the stress peak at θ=45° occurs at the first-order natural frequency, the stress peak at θ=0° occurs at the second-order natural frequency.As shown in Figure9, the frequency variation rule of the DSCF spectrum curve of the outer wall is similar to that of the inner wall, but the outer wall DSCF spectrum is generally smaller than the inner wall. When k∗ is 1, 5, 10, and 20, the peaks of the inner wall DSCF reach 77.24, 65.65, 64.05, and 63.44 while the peaks of the outer wall DSCF are 64.14, 29.14, 24.87, and 22.33, respectively.Figure10 shows the DSCF spectrum of the outer wall surface of the lining based on the influence of the interfacial viscosity factor δ∗, where the peak of the DSCF spectrum of the outer wall decreases as the viscosity factor δ∗ increases. When δ∗ is 1, 10, and 100, the DSCF peaks are 30.43, 21.10, and 20.57, respectively. Increase in the viscosity factor δ∗ causes greater energy loss during the sliding process, which in turn causes the DSCF of the lining surface to attenuate.Figure 10 DSCF spectrum distribution on the outer lining wall surface under the vertically incident SV wave (δ∗ = 0, 1, 10, and 100): (a) θ=45°; (b) θ=0°; (c) θ=−45°. (a) (b) (c) ### 5.3. Displacement Amplitude of Ground Surface Figures11 and 12 show the surface displacement amplitude distribution above the tunnel lining as-affected by interface slip stiffness factor k∗ under an incident SV wave. The surface displacement amplitude in the figure was standardized according to the displacement amplitude of the incident wave. The dimensionless incident frequency η is 0.25 and 0.5; the incident angle θα is 0° and 30°, and the dimensionless slip stiffness factor k∗ is 1, 2, 10, and 20. The interface viscosity factor δ∗ is 0.Figure 11 Surface displacement amplitudes near tunnel lining under the incident SV wave (η=0.25): (a) surface horizontal displacement amplitude; (b) surface vertical displacement amplitude; (c) surface horizontal displacement amplitude; (d) surface vertical displacement amplitude. 
(a) (b) (c) (d)Figure 12 Surface displacement amplitude near tunnel lining under the incident SV wave (η=0.5): (a) surface horizontal displacement amplitude; (b) surface vertical displacement amplitude; (c) surface horizontal displacement amplitude; (d) surface vertical displacement amplitude. (a) (b) (c) (d)Figures11 and 12 show that when the SV wave has low frequency (η=0.25) at normal incidence, the spatial distribution of surface displacement is basically consistent under different slip stiffness. The variations in the stiffness factor k∗ have a considerable influence on the horizontal and vertical displacement amplitudes of the ground surface near the lining. The standard amplitude Ux/Asv of the horizontal displacement above the tunnel lining increases, while the standard amplitude Uy/Asv as k∗ increases. The surface horizontal displacement amplitudes at k∗ = 1, 5, 10, and 20 were 2.04, 2.23, 2.29, and 2.32, respectively, while the vertical displacement amplitudes were 1.71, 1.41, 1.27, and 1.16, respectively.When the SV wave is incident at an angle of 30° and at a low frequency (η=0.25), any change in slip stiffness factor k∗ has little effect on the horizontal displacement above the tunnel lining surface but does influence the spatial distribution and amplitude of the vertical displacement. The influence of k∗ gradually increases as the incident frequency of the SV wave increases (η=0.5) while the spatial distribution and amplitude of the lining’s surface displacement also markedly change. The 30° oblique incidence has more significant effects than the normal incidence. Take the vertical incidence of the SV wave (Figure 12(a)) as an example: at the position of the lining directly above the surface (i.e., x=0), the horizontal displacement amplitude of the surface decreases as k∗ increases. The amplitude is 1.47 when k∗ = 1, and the corresponding values for k∗ = 5, 10, and 20 are 1.16, 1.05, and 0.98, respectively. Just above the two sides, the horizontal displacement amplitude increases as k∗ increases. ## 5.1. DSCF of Inner and Outer Lining Wall Surfaces under Single-Frequency Incident Wave ### 5.1.1. Influence of Stiffness Factork∗ Figures3–6 show the distribution of DSCF in the inner and outer wall of a rigid lining under an incident SV wave. Among them, the dimensionless incident frequency η is 0.25, 0.5, 1, and 2, respectively. The incidence angle θα is 0° and 30°, respectively;Figure 3 DSCF distribution on lining the inner wall under the incident SV wave (θα=0°): (a) η=0.25; (b) η=0.5; (c) η=1; (d) η=2. (a) (b) (c) (d)Figure 4 DSCF distribution on lining the outer wall under the incident SV wave (θα=0°): (a) η=0.25; (b) η=0.5; (c) η=1; (d) η=2. (a) (b) (c) (d)Figure 5 DSCF distribution on lining the inner wall under the incident SV wave (θα=30°): (a) η=0.25; (b) η=0.5; (c) η=1; (d) η=2. (a) (b) (c) (d)Figure 6 DSCF distribution on lining the outer wall under the incident SV wave (θα=30°): (a) η=0.25; (b) η=0.5; (c) η=1; (d) η=2. (a) (b) (c) (d)As shown in Figures3–6, under an incident plane SV wave, the DSCF distribution curves of inner and outer wall surfaces of the lining are similar. The inner wall stress amplitude is considerably larger than that of the outer wall. When the SV wave is incident at low frequency (η = 0.25, 0.5), the DSCF of the inner lining wall surface increases as the stiffness factor k∗ increases while the DSCF of the outer wall decreases as k∗ increases. 
When the SV wave is obliquely incident, the changes in the DSCF of the inner and outer wall surfaces are smaller than under normal incidence. When η=0.25, θα=0°, and k∗=1, the DSCF of the inner wall surface is 51.71; it is 58.39 when k∗=20, an increase of 13%. For the outer wall surface, the DSCF is 37.21 when k∗=1 and 16.26 when k∗=20, a decrease of 56%. When η=0.25, θα=30°, and k∗=1, the DSCF of the inner wall surface is 46.74; it is 50.34 when k∗=20, an increase of 8%. For the outer wall surface, the DSCF is 28.64 when k∗=1 and 20.15 when k∗=20, a decrease of 30%. (These relative changes are recomputed in the sketch at the end of this subsection.)

When the SV wave is incident with frequency η=1, the DSCF of the inner and outer walls of the lining decreases as k∗ increases, and the dynamic stress concentration on the inner and outer wall surfaces is very significant when k∗=1. Under normal incidence, the peak DSCF of the inner wall surface reaches 69.60 when k∗=1, which is 2.1 times the peak of 32.93 under k∗=20; the peak DSCF of the outer wall surface is 60.36, which is 2.8 times the corresponding peak of 21.92 under k∗=20. When the SV wave is inclined at an angle of 30°, the peak DSCF on the inner wall of the lining is 67.78 when k∗=1, which is 2.3 times the peak of 29.62 under k∗=20. Section 3.2 discusses this phenomenon in detail.

When the SV wave is incident at high frequency (η=2), the DSCF curves on the inner and outer wall surfaces of the lining oscillate very sharply along the circumference of the hole, and there is no obvious relationship between the oscillation pattern and k∗. When the SV wave is inclined at an incidence of 30° and k∗=1, the DSCF of the inner and outer wall surfaces is minimal: the peak value on the inner wall falls to 12.64. When k∗=20, the peak hoop stresses on the inner and outer wall surfaces are 27.50 and 23.26, respectively.
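As a quick check on the percentages quoted above, the following minimal Python sketch recomputes the relative changes from the reported peak DSCF values (the numbers are taken from the text; the helper name `rel_change` is ours, for illustration only).

```python
# Recompute the relative DSCF changes quoted in Section 5.1.1.
# Values are the peak DSCF figures reported in the text for k* = 1 and k* = 20.

def rel_change(v1: float, v20: float) -> float:
    """Signed relative change (percent) going from k* = 1 to k* = 20."""
    return 100.0 * (v20 - v1) / v1

cases = {
    "inner wall, η=0.25, θα=0°":  (51.71, 58.39),   # expected ≈ +13%
    "outer wall, η=0.25, θα=0°":  (37.21, 16.26),   # expected ≈ -56%
    "inner wall, η=0.25, θα=30°": (46.74, 50.34),   # expected ≈ +8%
    "outer wall, η=0.25, θα=30°": (28.64, 20.15),   # expected ≈ -30%
}

for label, (v1, v20) in cases.items():
    print(f"{label}: {rel_change(v1, v20):+.1f}%")
```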
### 5.1.2. Influence of Viscosity Factor δ∗

Figure 7 shows the DSCF distribution on the outer lining wall surface under an incident SV wave. The dimensionless incident frequency η is 0.25, 0.5, and 1, and the incident angle θα is 0° and 30°. The dimensionless stiffness factor is k∗=5, and the viscosity factor δ∗ is 1, 10, or 100.

Figure 7 DSCF distribution on outer lining wall surfaces under incident SV waves: (a) η=0.25, θα=0°; (b) η=0.25, θα=30°; (c) η=0.5, θα=0°; (d) η=0.5, θα=30°; (e) η=1, θα=0°; (f) η=1, θα=30°.

As shown in Figure 7, the DSCF of the outer wall of the lining decreases gradually as the interfacial viscosity factor δ∗ increases. However, the influence of the viscosity factor δ∗ on the circumferential stress distribution of the lining weakens as the incident wave frequency increases. For example, when the SV wave is normally incident at low frequency (η=0.25), the DSCF of the outer wall is 27.32 when δ∗=1, that is, 2.3 times the corresponding value of 11.90 when δ∗=10 and 2.6 times the corresponding value of 10.43 when δ∗=100. When the SV wave is normally incident at η=1, the DSCF of the outer wall is 25.56 when δ∗=1, which is 1.35 times the corresponding value of 18.93 when δ∗=10 and 1.39 times the corresponding value of 18.43 when δ∗=100.

Compared to normal incidence, the influence of the viscosity factor on the circumferential stress amplitude of the lining weakens at oblique incidence. When the SV wave is incident at an angle of 30°, the circumferential stress of the lining is distributed relatively smoothly along the circumference of the hole. In the low-frequency region (η=0.25), when δ∗=1 the DSCF of the outer wall is 23.99, which is 1.3 times the value of 18.23 when δ∗=100. When η=1 and δ∗=1, the DSCF of the outer wall of the lining is 27.12, which is 1.1 times the corresponding value of 23.60 when δ∗=100.
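The stiffness and viscosity factors studied above enter the problem through the interface law. The paper's exact nondimensionalization of k∗ and δ∗ is defined in its earlier sections (not reproduced here); the block below is a minimal sketch of the standard spring-dashpot (viscous-slip) interface condition that such models typically assume, with k and δ the dimensional slip stiffness and viscosity.

```latex
% Sketch of a standard viscous-slip (spring-dashpot) interface condition
% (assumed generic form; the paper's own nondimensionalization of k* and
% δ* is defined in its earlier sections).
\[
  \tau \;=\; k\,\Delta u_t \;+\; \delta\,\frac{\partial\,\Delta u_t}{\partial t},
\]
% \tau: shear traction transmitted across the soil-lining interface;
% \Delta u_t: tangential displacement discontinuity (slip).
```

For time-harmonic motion, Δu_t ∝ e^{iωt}, this reduces to τ = (k + iωδ)Δu_t: the limit k → ∞ recovers welded (no-slip) contact, while a larger δ dissipates more energy during sliding, consistent with the trends reported above.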
## 5.2. Lining Internal DSCF Spectrum Analysis

Figures 8 and 9 show the influence of the dimensionless stiffness factor k∗ on the DSCF at different positions on the inner and outer walls of the tunnel lining for a vertically incident SV wave in the frequency domain; the viscosity factor δ∗ is 0. The calculation positions are the same as in Section 3, and the DSCF at θ = 90° (the top of the lining) and θ = −90° (the bottom of the lining) is not shown in Figures 8 and 9. As in the model without slipping, the stress in the lining is sensitive to changes in frequency, and the spectrum curves show pronounced peaks and troughs. The peak frequencies correspond to the natural frequencies of the soil column above the lining [33].
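For orientation, a one-dimensional shear soil column of height h and shear wave speed c_β has the familiar natural frequencies below. This is a textbook estimate added here for context (the paper's own resonance analysis is developed in reference [33]); it explains why the spectral peaks shift when the effective stiffness of the lining-soil column system changes.

```latex
% Textbook natural frequencies of a 1D shear soil column (fixed base,
% free surface, height h, shear wave speed c_beta); added for context,
% not quoted from this paper.
\[
  f_n \;=\; \frac{(2n-1)\,c_\beta}{4h}, \qquad n = 1, 2, 3, \ldots
\]
```

Any stiffening of the coupled lining-soil column system raises its effective shear wave speed and therefore shifts the resonance peaks toward higher frequencies, consistent with the first-order peak moving from η = 0.16 to 0.20 as k∗ increases from 1 to 10 (discussed below).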
Figure 8 DSCF spectrum distribution on the inner lining wall surface under the vertically incident SV wave: (a) θ=45°; (b) θ=0°; (c) θ=−45°.

Figure 9 DSCF spectrum distribution on the outer lining wall surface under the vertically incident SV wave: (a) θ=45°; (b) θ=0°; (c) θ=−45°.

As shown in Figure 8, under a normally incident SV wave the interface slip stiffness factor k∗ significantly affects the internal stress spectrum of the lining. When k∗=1, the stress amplification in the resonance bands is particularly pronounced. At the first-order resonance frequency (η=0.16), the stress peak at θ=45° reaches 70.0, and at the η=0.96 resonance frequency the stress at θ=0° peaks at 77.2. The stress peaks in the resonance bands gradually decrease as the stiffness factor k∗ increases. For example, when k∗=20 (near the no-slip state), the stress peaks at the first two resonance frequencies at θ=45° are approximately 63.4 and 34.0. When the stiffness factor is small, the restraining effect of the lining on the upper soil column weakens; the response amplitude of the lining-upper soil column system at resonance is then rather large, which increases the corresponding stress amplitude.

The resonance frequency points also shift to some extent as the slip stiffness factor increases. When k∗ is 1, 5, and 10, the first-order resonance frequency is 0.16, 0.18, and 0.20, respectively. This is because the overall stiffness of the soil column above the lining increases as the slip stiffness factor increases; accordingly, the stress spectra for k∗=10 and k∗=20 are similar. Spatially, the stress spectrum curves at the typical observation points differ markedly: the stress peak at θ=45° occurs at the first-order natural frequency, whereas the stress peak at θ=0° occurs at the second-order natural frequency.

As shown in Figure 9, the frequency variation of the DSCF spectrum curve of the outer wall is similar to that of the inner wall, but the outer wall DSCF spectrum is generally smaller than that of the inner wall. When k∗ is 1, 5, 10, and 20, the peaks of the inner wall DSCF reach 77.24, 65.65, 64.05, and 63.44, while the peaks of the outer wall DSCF are 64.14, 29.14, 24.87, and 22.33, respectively.

Figure 10 shows the DSCF spectrum of the outer wall surface of the lining under the influence of the interfacial viscosity factor δ∗; the peak of the DSCF spectrum of the outer wall decreases as δ∗ increases. When δ∗ is 1, 10, and 100, the DSCF peaks are 30.43, 21.10, and 20.57, respectively. An increase in the viscosity factor δ∗ causes greater energy loss during the sliding process, which in turn attenuates the DSCF on the lining surface.

Figure 10 DSCF spectrum distribution on the outer lining wall surface under the vertically incident SV wave (δ∗ = 0, 1, 10, and 100): (a) θ=45°; (b) θ=0°; (c) θ=−45°.
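The attenuation with δ∗ has a simple energy interpretation. Under the spring-dashpot interface law sketched in Section 5.1.2 above, the dashpot term dissipates energy on every slip cycle; the standard per-cycle result below (a textbook identity, added for context rather than taken from the paper) shows why a larger δ means greater loss and hence lower stress peaks.

```latex
% Energy dissipated per cycle by the dashpot term of the interface law for
% harmonic slip Δu_t = |Δu| sin(ωt) (standard result, added for context).
\[
  E_{\mathrm{cycle}}
  \;=\; \oint \delta\,\dot{\Delta u}_t \,\mathrm{d}(\Delta u_t)
  \;=\; \pi\,\omega\,\delta\,\lvert \Delta u \rvert^{2}.
\]
```

The loss grows linearly with the viscosity δ (and with frequency), which is consistent with the monotonic drop of the outer-wall DSCF peaks from 30.43 to 20.57 as δ∗ increases from 1 to 100.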
## 5.3. Displacement Amplitude of Ground Surface

Figures 11 and 12 show the distribution of surface displacement amplitudes above the tunnel lining as affected by the interface slip stiffness factor k∗ under an incident SV wave. The surface displacement amplitudes in the figures are normalized by the displacement amplitude of the incident wave. The dimensionless incident frequency η is 0.25 and 0.5; the incident angle θα is 0° and 30°; and the dimensionless slip stiffness factor k∗ is 1, 5, 10, and 20. The interface viscosity factor δ∗ is 0.

Figure 11 Surface displacement amplitudes near the tunnel lining under the incident SV wave (η=0.25): (a) surface horizontal displacement amplitude; (b) surface vertical displacement amplitude; (c) surface horizontal displacement amplitude; (d) surface vertical displacement amplitude.

Figure 12 Surface displacement amplitudes near the tunnel lining under the incident SV wave (η=0.5): (a) surface horizontal displacement amplitude; (b) surface vertical displacement amplitude; (c) surface horizontal displacement amplitude; (d) surface vertical displacement amplitude.

Figures 11 and 12 show that when the SV wave has low frequency (η=0.25) at normal incidence, the spatial distribution of surface displacement is essentially the same for different slip stiffnesses, but variations in the stiffness factor k∗ considerably influence the horizontal and vertical displacement amplitudes of the ground surface near the lining. The normalized horizontal amplitude Ux/Asv above the tunnel lining increases, while the normalized vertical amplitude Uy/Asv decreases, as k∗ increases. The surface horizontal displacement amplitudes at k∗ = 1, 5, 10, and 20 are 2.04, 2.23, 2.29, and 2.32, respectively, while the vertical displacement amplitudes are 1.71, 1.41, 1.27, and 1.16, respectively.

When the SV wave is incident at an angle of 30° at low frequency (η=0.25), changes in the slip stiffness factor k∗ have little effect on the horizontal displacement above the tunnel lining but do influence the spatial distribution and amplitude of the vertical displacement. The influence of k∗ gradually increases as the incident frequency of the SV wave increases (η=0.5), and the spatial distribution and amplitude of the surface displacement near the lining then also change markedly, with more significant effects at 30° oblique incidence than at normal incidence. Taking vertical incidence of the SV wave (Figure 12(a)) as an example: at the point on the surface directly above the lining (i.e., x=0), the horizontal displacement amplitude of the surface decreases as k∗ increases. The amplitude is 1.47 when k∗ = 1, and the corresponding values for k∗ = 5, 10, and 20 are 1.16, 1.05, and 0.98, respectively. On the surface just above the two sides of the lining, by contrast, the horizontal displacement amplitude increases as k∗ increases.
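To make the displacement trends above easy to inspect, the short sketch below tabulates the quoted normalized amplitudes against k∗ and checks their monotonicity (the data are the values reported in the text; the helper name `trend` is ours, for illustration only).

```python
# Tabulate the normalized surface displacement amplitudes reported in
# Section 5.3 and check the monotonic trends versus the stiffness factor k*.

K_STAR = (1, 5, 10, 20)

# Normalized amplitudes quoted in the text (η = 0.25, normal incidence,
# above the lining; and η = 0.5, normal incidence, at x = 0).
series = {
    "Ux/Asv above lining (η=0.25)": (2.04, 2.23, 2.29, 2.32),
    "Uy/Asv above lining (η=0.25)": (1.71, 1.41, 1.27, 1.16),
    "Ux/Asv at x=0 (η=0.5)": (1.47, 1.16, 1.05, 0.98),
}

def trend(values: tuple) -> str:
    """Classify a sequence as increasing, decreasing, or mixed."""
    inc = all(a < b for a, b in zip(values, values[1:]))
    dec = all(a > b for a, b in zip(values, values[1:]))
    return "increasing" if inc else "decreasing" if dec else "mixed"

for label, vals in series.items():
    pairs = ", ".join(f"k*={k}: {v:.2f}" for k, v in zip(K_STAR, vals))
    print(f"{label} -> {trend(vals)} ({pairs})")
```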
## 6. Conclusion

The boundary integral equation method was applied to solve the seismic response of a tunnel lining in an elastic half-space under incident plane SV waves, based on a viscous-slip interface model. The effects of key factors such as incident wave frequency and angle, interface slip stiffness, and interface viscosity coefficient on the dynamic stress response of the tunnel lining and on the surface displacement near the tunnel lining were analyzed. The main conclusions can be summarized as follows:

(1) The interface slip stiffness factor significantly affects the dynamic stress distribution of the tunnel lining, and the response characteristics are controlled by the incident wave frequency. When the slip stiffness is small, the internal stress of the lining varies sharply around the circumference of the hole and its spatial distribution is highly complex. When the slip stiffness is large (k∗≥20), the dynamic response is close to that of the no-slip model. Under low-frequency incidence, an increase in interface slip stiffness causes a gradual increase in the circumferential stress of the inner wall, and the dynamic stress concentration is more significant in the no-slip state than in the slip state. When the SV wave is incident with η=1 (close to the high-frequency resonance band), the dynamic stress concentration inside the lining is very significant when the interface slip stiffness coefficient is small (k∗=1). Under high-frequency incidence (η=2), the influence of the slip stiffness coefficient on the dynamic stress of the lining is more complex and the spatial oscillation of the dynamic stress is more severe.

(2) The viscosity factor of the viscous-slip interface also significantly influences the dynamic stress distribution of the tunnel lining. As the viscosity factor increases, the DSCF of the lining outer wall decreases gradually; however, this effect weakens as the incident wave frequency increases. The influence of the viscosity coefficient under a normally incident plane SV wave is greater than under oblique incidence.

(3) The interface slip stiffness factor has a significant effect on the DSCF spectrum of the tunnel lining surface. When the slip stiffness is small (k∗=1), the dynamic stress amplification in the resonance bands is more pronounced. The stress peaks in the resonance bands gradually decrease, and the resonance frequency points shift to some extent, as the slip stiffness increases.

(4) When the SV wave is incident at low frequency, the spatial distribution of surface displacement above the lining is essentially the same for different slip stiffnesses, but the displacement amplitudes differ considerably. Increases in the incident frequency and incidence angle of the SV wave significantly affect both the spatial distribution and the amplitude of the surface displacement near the lining.

This study analyzed only the 2D seismic response of a shallow lined tunnel based on a viscous-slip interface model in a homogeneous elastic half-space. Similar seismic response analyses for nonuniform sites and 3D tunnels merit further research.